Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.
Ganusov, Vitaly V.
2016-01-01
While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to follow logically from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, can it reveal the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, as when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows the inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why the rejected models failed to explain the data; and (4) to suggest experiments that would discriminate between the remaining alternative models. The use of strong inference is likely to provide better robustness of the predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the Twenty-First Century.
PMID:27499750
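The four steps of strong inference in modeling can be illustrated with a minimal, self-contained sketch (the data, the two model forms, and the AIC-based rejection rule here are hypothetical choices for illustration, not the author's own implementation): fit alternative models to the same data, then reject the model that is inconsistent with it.

```python
import math, random

random.seed(0)

# Hypothetical data: growth of a cell population (arbitrary units),
# generated from an exponential process with 2% multiplicative noise.
t = [float(i) for i in range(10)]
y = [2.0 * math.exp(0.3 * ti) * (1 + random.gauss(0, 0.02)) for ti in t]

def fit_linear(t, y):
    """Ordinary least squares for y = a + b*t; returns the fitted model."""
    n = len(t)
    tbar, ybar = sum(t) / n, sum(y) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
        sum((ti - tbar) ** 2 for ti in t)
    a = ybar - b * tbar
    return lambda ti: a + b * ti

def fit_exponential(t, y):
    """Log-linear least squares for y = a*exp(b*t)."""
    line = fit_linear(t, [math.log(yi) for yi in y])
    return lambda ti: math.exp(line(ti))

def aic(model, t, y, k):
    """AIC from the residual sum of squares (Gaussian errors, k parameters)."""
    n = len(y)
    sse = sum((yi - model(ti)) ** 2 for ti, yi in zip(t, y))
    return n * math.log(sse / n) + 2 * k

# Step (1): multiple alternative models; steps (2)-(3): compare with
# the data and reject the model that fails to explain it.
models = {"linear": fit_linear(t, y), "exponential": fit_exponential(t, y)}
scores = {name: aic(m, t, y, k=2) for name, m in models.items()}
best = min(scores, key=scores.get)
print(best)
```

Step (4), suggesting discriminating experiments, would here amount to sampling time points where the surviving models diverge most.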
Generic Patch Inference
DEFF Research Database (Denmark)
Andersen, Jesper; Lawall, Julia Laetitia
2008-01-01
A key issue in maintaining Linux device drivers is the need to update drivers in response to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spfind, that identifies common changes made […] developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect […]
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
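The discrete Wright-Fisher model with selection, which the abstract contrasts with its diffusion approximations, can be simulated directly; the sketch below is a generic illustration (the population size, selection coefficient, and starting frequency are invented), not the authors' method.

```python
import random

def wright_fisher(n, p0, s, generations, rng):
    """Discrete Wright-Fisher model: n haploid individuals, a mutant with
    selective advantage s, no mutation. Each generation the mutant count
    is a binomial draw with success probability given by the
    selection-weighted mutant frequency."""
    count = round(p0 * n)
    for _ in range(generations):
        p = count / n
        # Expected frequency after selection acts.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Binomial resampling of the next generation = genetic drift.
        count = sum(1 for _ in range(n) if rng.random() < p_sel)
    return count / n

rng = random.Random(42)
# Strong selection (s = 0.5): a mutant starting at 10% frequency should
# usually sweep to high frequency within 30 generations when n = 1000.
freqs = [wright_fisher(1000, 0.10, 0.5, 30, rng) for _ in range(20)]
mean_freq = sum(freqs) / len(freqs)
print(round(mean_freq, 2))
```

With s this large, the weak-selection assumption behind the standard diffusion approximation is exactly the regime the abstract warns about.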
Probing the Small-scale Structure in Strongly Lensed Systems via Transdimensional Inference
Daylan, Tansu; Cyr-Racine, Francis-Yan; Diaz Rivero, Ana; Dvorkin, Cora; Finkbeiner, Douglas P.
2018-02-01
Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a pipeline to model strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps—for instance, when constraining the subhalo mass function. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent in a traditional catalog. The implemented software, Probabilistic Cataloger (PCAT), is made publicly available at https://github.com/tdaylan/pcat.
DEFF Research Database (Denmark)
Møller, Jesper
2010-01-01
Chapter 9: This contribution concerns statistical inference for parametric models used in stochastic geometry, based on quick and simple simulation-free procedures as well as more comprehensive methods based on a maximum likelihood or Bayesian approach combined with Markov chain Monte Carlo (MCMC) techniques. Due to space limitations the focus is on spatial point processes.
DEFF Research Database (Denmark)
Møller, Jesper
(This text written by Jesper Møller, Aalborg University, is submitted for the collection ‘Stochastic Geometry: Highlights, Interactions and New Perspectives', edited by Wilfrid S. Kendall and Ilya Molchanov, to be published by Clarendon Press, Oxford, and planned to appear as Section 4.1 with the title ‘Inference'.) This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation-free procedures as well as more comprehensive methods using Markov chain Monte Carlo (MCMC) simulations. Due to space limitations the focus […]
DEFF Research Database (Denmark)
Lenoir, Jonathan; Graae, Bente; Aarrestad, Per
2013-01-01
[…]-change impacts. Is this local spatial buffering restricted to topographically complex terrains? To answer this, we here study fine-grained thermal variability across a 2500-km wide latitudinal gradient in Northern Europe encompassing a large array of topographic complexities. We first combined plant community data, Ellenberg temperature indicator values, locally measured temperatures (LmT) and globally interpolated temperatures (GiT) in a modelling framework to infer biologically relevant temperature conditions from plant assemblages (community-inferred temperatures: CiT). We […] temperature indicator values in combination with plant assemblages explained 46-72% of variation in LmT and 92-96% of variation in GiT during the growing season (June, July, August). Growing-season CiT range within 1-km² units peaked at 60-65°N and increased with terrain roughness, averaging 1.97 °C (SD = 0[…]
International Nuclear Information System (INIS)
Kosugi, Motoko
2006-01-01
Our previous study showed a big difference between experts' own risk perception and experts' inference of the public's risk perception of technologies. This study therefore tried to clarify the effect of the perceived distance in risk perception between the public and the experts themselves on their forwardness in communicating science to the public. The questionnaire survey results reaffirmed that experts were inclined to perceive a larger difference in risk perception between the public and themselves on subjects within their own specialty than outside it. The results also suggested that the larger the difference in risk perception from the public that experts recognized, the less experience they actually had of science communication, including communication with the public. Moreover, the results showed that experience of science communication had positive effects on belief in the public's scientific literacy. (author)
Johnson, Leigh A; Chan, Lauren M; Weese, Terri L; Busby, Lisa D; McMurry, Samuel
2008-09-01
Members of the phlox family (Polemoniaceae) serve as useful models for studying various evolutionary and biological processes. Despite its biological importance, no family-wide phylogenetic estimate based on multiple DNA regions with complete generic sampling is available. Here, we analyze one nuclear and five chloroplast DNA sequence regions (nuclear ITS, chloroplast matK, the trnL intron plus trnL-trnF intergenic spacer, and the trnS-trnG, trnD-trnT, and psbM-trnD intergenic spacers) using parsimony and Bayesian methods, as well as assessments of congruence and long-branch attraction, to explore phylogenetic relationships among 84 ingroup species representing all currently recognized Polemoniaceae genera. Relationships inferred from the ITS and concatenated chloroplast regions are similar overall. A combined analysis provides strong support for the monophyly of Polemoniaceae and subfamilies Acanthogilioideae, Cobaeoideae, and Polemonioideae. Relationships among subfamilies, and thus the precise root of Polemoniaceae, remain poorly supported. Within the largest subfamily, Polemonioideae, four clades corresponding to tribes Polemonieae, Phlocideae, Gilieae, and Loeselieae receive strong support. The monogeneric Polemonieae appears sister to Phlocideae. Relationships within Polemonieae, Phlocideae, and Gilieae are mostly consistent between analyses and data permutations. Many relationships within Loeselieae remain uncertain. Overall, inferred phylogenetic relationships support a higher-level classification for Polemoniaceae proposed in 2000.
Pechenick, Eitan Adam; Danforth, Christopher M; Dodds, Peter Sheridan
2015-01-01
It is tempting to treat frequency trends from the Google Books data sets as indicators of the "true" popularity of various words and phrases. Doing so allows us to draw quantitatively strong conclusions about the evolution of cultural perception of a given topic, such as time or gender. However, the Google Books corpus suffers from a number of limitations which make it an obscure mask of cultural popularity. A primary issue is that the corpus is in effect a library, containing one of each book. A single, prolific author is thereby able to noticeably insert new phrases into the Google Books lexicon, whether the author is widely read or not. With this understood, the Google Books corpus remains an important data set to be considered more lexicon-like than text-like. Here, we show that a distinct problematic feature arises from the inclusion of scientific texts, which have become an increasingly substantive portion of the corpus throughout the 1900s. The result is a surge of phrases typical to academic articles but less common in general, such as references to time in the form of citations. We use information-theoretic methods to highlight these dynamics by examining and comparing major contributions via a divergence measure of English data sets between decades in the period 1800-2000. We find that only the English Fiction data set from the second version of the corpus is not heavily affected by professional texts. Overall, our findings call into question the vast majority of existing claims drawn from the Google Books corpus, and point to the need to fully characterize the dynamics of the corpus before using these data sets to draw broad conclusions about cultural and linguistic evolution.
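A divergence measure between word-frequency distributions, of the general kind used to compare decades of the corpus, can be sketched as follows; the Jensen-Shannon divergence and the toy "corpora" below are illustrative assumptions, not the authors' exact measure or data.

```python
import math
from collections import Counter

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1]) between two
    discrete distributions given as {word: probability} dicts."""
    support = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in support}
    def kl(a, b):
        # Kullback-Leibler divergence restricted to a's support.
        return sum(pa * math.log2(pa / b[w]) for w, pa in a.items() if pa > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def distribution(tokens):
    """Normalize token counts into a probability distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Invented mini-"corpora" standing in for two decades of text:
# one fiction-flavored, one academic-flavored.
decade_a = distribution("the whale sailed the white sea".split())
decade_b = distribution("the cell culture showed the measured growth".split())

d_ab = jensen_shannon(decade_a, decade_b)
d_aa = jensen_shannon(decade_a, decade_a)
print(round(d_ab, 3), round(d_aa, 3))
```

Applied to real decade-level unigram distributions, spikes in such a divergence flag the influx of phrases typical of professional texts.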
Multimodel inference and adaptive management
Rehme, S.E.; Powell, L.A.; Allen, Craig R.
2011-01-01
Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
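Multimodel inference of the kind surveyed here commonly ranks competing hypotheses by Akaike weights; the sketch below (with invented AIC values for hypothetical habitat models) is a generic illustration, not the review's own analysis, of how closely matched weights signal weak inference while a dominant weight signals strong inference.

```python
import math

def akaike_weights(aic_scores):
    """Convert AIC scores for competing models into Akaike weights:
    the relative likelihood of each model, given the data and the
    candidate set, normalized to sum to 1."""
    best = min(aic_scores.values())
    rel = {m: math.exp(-0.5 * (a - best)) for m, a in aic_scores.items()}
    total = sum(rel.values())
    return {m: r / total for m, r in rel.items()}

# Invented AIC values for three competing habitat models.
strong = akaike_weights({"A": 100.0, "B": 112.0, "C": 115.0})  # clear winner
weak = akaike_weights({"A": 100.0, "B": 100.7, "C": 101.2})    # equivocal

# A top weight near 1 supports strong inference; closely matched weights
# leave substantial model-selection uncertainty that management
# recommendations should acknowledge.
print(round(strong["A"], 3), round(weak["A"], 3))
```

In the equivocal case, model-averaging predictions across the weights, rather than reporting only the top model, is one way to carry the uncertainty into management recommendations.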
Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction
Imbens, Guido W.; Rubin, Donald B.
2015-01-01
Most questions in social and biomedical sciences are causal in nature: what would happen to individuals, or to groups, if part of their environment were changed? In this groundbreaking text, two world-renowned experts present statistical methods for studying such questions. This book starts with the notion of potential outcomes, each corresponding…
Gentile, Natacha; Siegwolf, Rolf T W; Esseiva, Pierre; Doyle, Sean; Zollinger, Kurt; Delémont, Olivier
2015-06-01
Isotope ratio mass spectrometry (IRMS) has been used in numerous fields of forensic science from a source-inference perspective. This review compiles the studies published so far on the application of IRMS to the traditional fields of forensic science. It complements the review of Benson et al. [1] and synthesises the extent of knowledge already gathered in the following fields: illicit drugs, flammable liquids, human provenancing, microtraces, explosives and other specific materials (packaging tapes, safety matches, plastics, etc.). For each field, a discussion assesses the state of the science and highlights the relevance of the information in a forensic context. Through the different discussions which mark out the review, the potential and limitations of IRMS, as well as the needs and challenges of future studies, are emphasized. The paper elicits the various dimensions of the source which can be obtained from the isotope information and demonstrates the transversal nature of IRMS as a tool for source inference. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Observation, Inference, and Imagination: Elements of Edgar Allan Poe's Philosophy of Science
Gelfert, Axel
2014-03-01
Edgar Allan Poe's standing as a literary figure, who drew on (and sometimes dabbled in) the scientific debates of his time, makes him an intriguing character for any exploration of the historical interrelationship between science, literature and philosophy. His sprawling `prose-poem' Eureka (1848), in particular, has sometimes been scrutinized for anticipations of later scientific developments. By contrast, the present paper argues that it should be understood as a contribution to the raging debates about scientific methodology at the time. This methodological interest, which is echoed in Poe's `tales of ratiocination', gives rise to a proposed new mode of—broadly abductive—inference, which Poe attributes to the hybrid figure of the `poet-mathematician'. Without creative imagination and intuition, Science would necessarily remain incomplete, even by its own standards. This concern with imaginative (abductive) inference ties in nicely with his coherentism, which grants pride of place to the twin virtues of Simplicity and Consistency, which must constrain imagination lest it degenerate into mere fancy.
Lane-Getaz, Sharon
2017-01-01
In reaction to misuses and misinterpretations of p-values and confidence intervals, a social science journal editor banned p-values from its pages. This study aimed to show that education could address misuse and abuse. This study examines inference-related learning outcomes for social science students in an introductory course supplemented with…
Targeted learning in data science causal inference for complex longitudinal studies
van der Laan, Mark J
2018-01-01
This textbook for graduate students in statistics, data science, and public health deals with the practical challenges that come with big, complex, and dynamic data. It presents a scientific roadmap to translate real-world data science applications into formal statistical estimation problems by using the general template of targeted maximum likelihood estimators. These targeted machine learning algorithms estimate quantities of interest while still providing valid inference. Targeted learning methods within data science are a critical component for solving scientific problems in the modern age. The techniques can answer complex questions, including optimal rules for assigning treatment based on longitudinal data with time-dependent confounding, as well as other estimands in dependent data structures, such as networks. Included in Targeted Learning in Data Science are demonstrations with software packages and real data sets that present a case that targeted learning is crucial for the next generation…
Ajhar, Edward A.; Blackwell, E.; Quesada, D.
2010-05-01
In South Florida, science teacher preparation is often weak, as a shortage of science teachers often prompts administrators to assign teachers to science classes just to cover the classroom needs. This results in poor preparation of students for college science course work, which, in turn, causes the next generation of science teachers to be even weaker than the first. This cycle must be broken in order to prepare better students in the sciences. At St. Thomas University in Miami Gardens, Florida, our School of Science has teamed with our Institute for Education to create a program to alleviate this problem: A Master of Science in Education with a Concentration in Earth/Space Science. The Master's program consists of 36 total credits. Half the curriculum consists of traditional educational foundation and instructional leadership courses, while the other half is focused on Earth and Space Science content courses. The content area of 18 credits also provides a separate certificate program. Although traditional high school science education places a heavy emphasis on Earth Science, this program expands that emphasis to include the broader context of astronomy, astrophysics, astrobiology, planetary science, and the practice and philosophy of science. From this contextual basis the teacher is better prepared to educate and motivate middle and high school students in all areas of the physical sciences. Because hands-on experience is especially valuable to educators, our program uses materials and equipment including small optical telescopes (Galileoscopes), several 8-in and 14-in Celestron and Meade reflectors, and a Small Radio Telescope installed on site. (Partial funding provided by the US Department of Education through Minority Science and Engineering Improvement Program grant P120A050062.)
Schrago, Carlos G; Menezes, Albert N; Furtado, Carolina; Bonvicino, Cibele R; Seuanez, Hector N
2014-11-05
Neotropical primates (NP) are presently distributed in the New World from Mexico to northern Argentina, comprising three large families, Cebidae, Atelidae, and Pitheciidae, a consequence of their diversification following their separation from Old World anthropoids near the Eocene/Oligocene boundary, some 40 Ma. The evolution of NP has been intensively investigated in the last decade by studies focusing on their phylogeny and timescale. However, despite major efforts, the phylogenetic relationship between these three major clades and the age of their last common ancestor are still controversial, because these inferences were based on limited numbers of loci and on dating analyses that did not consider the evolutionary variation associated with the distribution of gene trees within the proposed phylogenies. We show, by multispecies coalescent analyses of selected genome segments spanning 92,496,904 bp, that the early diversification of extant NP was marked by a 2-fold increase of their effective population size and that Atelids and Cebids are more closely related to each other than to Pitheciids. The molecular phylogeny of NP has been difficult to solve because of population-level phenomena at the early evolution of the lineage. Associating evolutionary variation with the distribution of gene trees within proposed phylogenies is crucial for distinguishing the mean genetic divergence between species (the mean coalescent time between loci) from the speciation time. This approach, based on extensive genomic data provided by new-generation DNA sequencing, provides more accurate reconstructions of phylogenies and timescales for all organisms. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Directory of Open Access Journals (Sweden)
Z. T. Guo
2009-02-01
Full Text Available We correlate the China loess and Antarctic ice records to address the inter-hemispheric climate link over the past 800 ka. The results show a broad coupling between Asian and Antarctic climates at the glacial-interglacial scale. However, a number of decoupled aspects are revealed, among which marine isotope stage (MIS) 13 exhibits a strong anomaly compared with the other interglacials. It is characterized by unusually positive benthic oxygen (δ¹⁸O) and carbon isotope (δ¹³C) values in the world oceans, cooler Antarctic temperature, lower summer sea surface temperature in the South Atlantic, and lower CO₂ and CH₄ concentrations, but by extremely strong Asian, Indian and African summer monsoons, the weakest Asian winter monsoon, and the lowest Asian dust and iron fluxes. Pervasive warm conditions were also evidenced by records from northern high-latitude regions. These consistently indicate a warmer Northern Hemisphere and a cooler Southern Hemisphere, and hence a strong asymmetry of hemispheric climates during MIS-13. Similar anomalies of lesser extent also occurred during MIS-11 and MIS-5e. Thus, MIS-13 provides a case in which the Northern Hemisphere experienced substantial warming under relatively low concentrations of greenhouse gases. It suggests that the global climate system possesses a natural variability that is not predictable from the simple response of northern summer insolation and atmospheric CO₂ changes. During MIS-13, both hemispheres responded in different ways, leading to anomalous continental, marine and atmospheric conditions at the global scale. The correlations also suggest that the marine δ¹⁸O record is not always a reliable indicator of northern ice-volume changes, and that the asymmetry of hemispheric climates is one of the prominent factors controlling the strength of the Asian, Indian and African monsoon circulations, most likely through modulating the position of…
Claveau, François
2012-12-01
This article examines two theses formulated by Russo and Williamson (2007) in their study of causal inference in the health sciences. The two theses are assessed against evidence from a specific case in the social sciences, i.e., research on the institutional determinants of the aggregate unemployment rate. The first Russo-Williamson Thesis is that a causal claim can only be established when it is jointly supported by difference-making and mechanistic evidence. This thesis is shown not to hold. While researchers in my case study draw extensively on both types of evidence, one causal claim out of the three analyzed is established even though it is exclusively supported by mechanistic evidence. The second Russo-Williamson Thesis is that standard accounts of causality fail to handle the dualist epistemology highlighted in the first Thesis. I argue that a counterfactual-manipulationist account of causality--which is endorsed by many philosophers as well as many social scientists--can perfectly make sense of the typical strategy in my case study to draw on both difference-making and mechanistic evidence; it is just an instance of the common strategy of increasing evidential variety. Copyright © 2012 Elsevier Ltd. All rights reserved.
Nagao, Makoto
1990-01-01
Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer, and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in the artificial intelligence…
Patterns of rationality recurring inferences in science, social cognition and religious thinking
Bertolotti, Tommaso
2015-01-01
This book proposes an applied epistemological framework for investigating science, social cognition and religious thinking based on inferential patterns that recur in the different domains. It presents human rationality as a tool that allows us to make sense of our (physical or social) surroundings. It shows that the resulting cognitive activity produces a broad spectrum of outputs, such as scientific models and experimentation, gossip and social networks, but also ancient and contemporary deities. The book consists of three parts, the first of which addresses scientific modeling and experimentation, and their application to the analysis of scientific rationality. Thus, this part continues the tradition of eco-cognitive epistemology and abduction studies. The second part deals with the relationship between social cognition and cognitive niche construction, i.e. the evolutionarily relevant externalization of knowledge onto the environment, while the third part focuses on what is commonly defined as "irrational...
O'Shaughnessy, Richard; Gerosa, Davide; Wysocki, Daniel
2017-07-07
The inferred parameters of the binary black hole GW151226 are consistent with nonzero spin for the most massive black hole, misaligned from the binary's orbital angular momentum. If the black holes formed through isolated binary evolution from an initially aligned binary star, this misalignment would then arise from a natal kick imparted to the first-born black hole at its birth during stellar collapse. We use simple kinematic arguments to constrain the characteristic magnitude of this kick, and find that a natal kick v_{k}≳50 km/s must be imparted to the black hole at birth to produce misalignments consistent with GW151226. Such large natal kicks exceed those adopted by default in most of the current supernova and binary evolution models.
Fighting A Strong Headwind: Challenges in Communicating The Science of Climate Change
Mann, M. E.
2008-12-01
Communicating science to the public is an intrinsic challenge to begin with. An effective communicator must find ways to translate often technical and complex scientific findings for consumption by an audience unfamiliar with the basic tools and lexicon that scientists themselves take for granted. The challenge is made all the more difficult when the science has implications for public policy, and the scientists face attack from institutions that perceive themselves to be threatened by the implications of scientific findings. Such areas of science include (but certainly are not limited to) evolution, stem cell research, environmental health, and the subject of this talk, climate change. In each of these areas, a highly organized, well-funded effort has been mounted to attack the science and the scientists themselves. These attacks are rarely fought in legitimate scientific circles such as the peer-reviewed scientific literature or other scholarly venues, but rather through rhetorically aimed efforts delivered by media outlets aligned with the views of the attackers, and by politicians and groups closely aligned with special interests. I will discuss various approaches to combating such attacks, drawing upon my own experiences in the public arena with regard to the scientific discourse on climate change.
Energy Technology Data Exchange (ETDEWEB)
Diehl, H. T.; Buckley-Geer, E. J.; Lindgren, K. A.; Nord, B.; Gaitsch, H.; Gaitsch, S.; Lin, H.; Allam, S.; Collett, T. E.; Furlanetto, C.; Gill, M. S. S.; More, A.; Nightingale, J.; Odden, C.; Pellico, A.; Tucker, D. L.; Costa, L. N. da; Neto, A. Fausti; Kuropatkin, N.; Soares-Santos, M.; Welch, B.; Zhang, Y.; Frieman, J. A.; Abdalla, F. B.; Annis, J.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Cunha, C. E.; D’Andrea, C. B.; Desai, S.; Dietrich, J. P.; Drlica-Wagner, A.; Evrard, A. E.; Finley, D. A.; Flaugher, B.; García-Bellido, J.; Gerdes, D. W.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; James, D. J.; Kuehn, K.; Kuhlmann, S.; Lahav, O.; Li, T. S.; Lima, M.; Maia, M. A. G.; Marshall, J. L.; Menanteau, F.; Miquel, R.; Nichol, R. C.; Nugent, P.; Ogando, R. L. C.; Plazas, A. A.; Reil, K.; Romer, A. K.; Sako, M.; Sanchez, E.; Santiago, B.; Scarpine, V.; Schindler, R.; Schubnell, M.; Sevilla-Noarbe, I.; Sheldon, E.; Smith, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.
2017-09-01
We report the results of our searches for strong gravitational lens systems in the Dark Energy Survey (DES) Science Verification and Year 1 observations. The Science Verification data span approximately 250 sq. deg. with median i
Papayannakos, Dimitris P.
2008-06-01
The structure of David Bloor’s argument for the Strong Programme (SP) in Science Studies is criticized from the philosophical perspective of anti-skeptical, scientific realism. The paper transforms the common criticism of SP—that the symmetry principle of SP implies an untenable form of cognitive relativism—into the clear philosophical issue of naturalism versus Platonism. It is also argued that the concrete patterns of SP’s interest-explanations and its sociological definition of knowledge involve philosophical skepticism. It is claimed, then, that the most problematic elements of SP reside primarily in philosophical skepticism. It is also claimed that this sort of criticism can be directed against other, more radical versions of constructivism in science and science education studies.
Kumagai, Hiroyuki; Pulido, Nelson; Fukuyama, Eiichi; Aoi, Shin
2013-01-01
To investigate source processes of the 2011 Tohoku-Oki earthquake, we utilized a source location method using high-frequency (5-10 Hz) seismic amplitudes. In this method, we assumed far-field isotropic radiation of S waves, and conducted a spatial grid search to find the best fitting source locations along the subducted slab in each successive time window. Our application of the method to the Tohoku-Oki earthquake resulted in artifact source locations at shallow depths near the trench caused by limited station coverage and noise effects. We then assumed various source node distributions along the plate, and found that the observed seismograms were most reasonably explained when assuming deep source nodes. This result suggests that the high-frequency seismic waves were radiated at deeper depths during the earthquake, a feature which is consistent with results obtained from teleseismic back-projection and strong-motion source model studies. We identified three high-frequency subevents, and compared them with the moment-rate function estimated from low-frequency seismograms. Our comparison indicated that no significant moment release occurred during the first high-frequency subevent and the largest moment-release pulse occurred almost simultaneously with the second high-frequency subevent. We speculated that the initial slow rupture propagated bilaterally from the hypocenter toward the land and trench. The landward subshear rupture propagation consisted of three successive high-frequency subevents. The trenchward propagation ruptured the strong asperity and released the largest moment near the trench.
Caticha, Ariel
2011-03-01
In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
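The ME updating rule the tutorial describes can be sketched numerically: tilting the prior by exp(λf) and solving for λ so that a new expectation constraint holds yields the relative-entropy-minimizing posterior. The die distribution and constraint value below are invented for illustration (the classic MaxEnt "dice" problem recovered as a special case of ME); this is a sketch, not code from the tutorial.

```python
import numpy as np
from scipy.optimize import brentq

def me_update(q, f, F):
    """Maximum relative Entropy update of a discrete prior q under the
    moment constraint E_p[f] = F. The posterior is the exponential
    tilt p_i ∝ q_i * exp(lam * f_i), with lam fixed by the constraint."""
    def tilted(lam):
        w = q * np.exp(lam * f)
        return w / w.sum()
    # Solve E_p[f] - F = 0 for lam (bracket assumed wide enough).
    lam = brentq(lambda lam: tilted(lam) @ f - F, -50.0, 50.0)
    return tilted(lam)

# Invented example: uniform prior over a six-sided die, mean constrained to 4.5.
q = np.ones(6) / 6
f = np.arange(1, 7, dtype=float)
p = me_update(q, f, 4.5)
print(p, p @ f)  # weights increase with face value; constrained mean is 4.5
```

With F equal to the prior mean (3.5) the update returns the prior unchanged, which is the expected consistency property of entropic updating.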
Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.
1995-01-01
The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is
Caticha, Ariel
2010-01-01
In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEn...
International Nuclear Information System (INIS)
Nord, B.; Buckley-Geer, E.; Lin, H.; Diehl, H. T.; Kuropatkin, N.; Allam, S.; Finley, D. A.; Flaugher, B.; Gaitsch, H.; Merritt, K. W.; Helsby, J.; Amara, A.; Collett, T.; Caminha, G. B.; De Bom, C.; Da Pereira, M. Elidaiana S.; Desai, S.; Dúmet-Montoya, H.; Furlanetto, C.; Gill, M.
2016-01-01
We report the observation and confirmation of the first group- and cluster-scale strong gravitational lensing systems found in Dark Energy Survey data. Through visual inspection of data from the Science Verification season, we identified 53 candidate systems. We then obtained spectroscopic follow-up of 21 candidates using the Gemini Multi-object Spectrograph at the Gemini South telescope and the Inamori-Magellan Areal Camera and Spectrograph at the Magellan/Baade telescope. With this follow-up, we confirmed six candidates as gravitational lenses: three of the systems are newly discovered, and the remaining three were previously known. Of the 21 observed candidates, the remaining 15 either were not detected in spectroscopic observations, were observed and did not exhibit continuum emission (or spectral features), or were ruled out as lensing systems. The confirmed sample consists of one group-scale and five galaxy-cluster-scale lenses. The lensed sources range in redshift z ∼ 0.80–3.2 and in i-band surface brightness i_SB ∼ 23–25 mag arcsec⁻² (2″ aperture). For each of the six systems, we estimate the Einstein radius θ_E and the enclosed mass M_enc, which have ranges θ_E ∼ 5″–9″ and M_enc ∼ 8 × 10¹² to 6 × 10¹³ M_⊙, respectively.
Energy Technology Data Exchange (ETDEWEB)
Nord, B.; Buckley-Geer, E.; Lin, H.; Diehl, H. T.; Kuropatkin, N.; Allam, S.; Finley, D. A.; Flaugher, B.; Gaitsch, H.; Merritt, K. W. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Helsby, J. [Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL 60637 (United States); Amara, A. [Department of Physics, ETH Zurich, Wolfgang-Pauli-Strasse 16, CH-8093 Zurich (Switzerland); Collett, T. [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Caminha, G. B.; De Bom, C.; Da Pereira, M. Elidaiana S. [ICRA, Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, CEP 22290-180, Rio de Janeiro, RJ (Brazil); Desai, S. [Excellence Cluster Universe, Boltzmannstrasse 2, D-85748 Garching (Germany); Dúmet-Montoya, H. [Universidade Federal do Rio de Janeiro—Campus Macaé, Rua Aloísio Gomes da Silva, 50—Granja dos Cavaleiros, Cep: 27930-560, Macaé, RJ (Brazil); Furlanetto, C. [University of Nottingham, School of Physics and Astronomy, Nottingham NG7 2RD (United Kingdom); Gill, M., E-mail: nord@fnal.gov [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Collaboration: DES Collaboration; and others
2016-08-10
We report the observation and confirmation of the first group- and cluster-scale strong gravitational lensing systems found in Dark Energy Survey data. Through visual inspection of data from the Science Verification season, we identified 53 candidate systems. We then obtained spectroscopic follow-up of 21 candidates using the Gemini Multi-object Spectrograph at the Gemini South telescope and the Inamori-Magellan Areal Camera and Spectrograph at the Magellan/Baade telescope. With this follow-up, we confirmed six candidates as gravitational lenses: three of the systems are newly discovered, and the remaining three were previously known. Of the 21 observed candidates, the remaining 15 either were not detected in spectroscopic observations, were observed and did not exhibit continuum emission (or spectral features), or were ruled out as lensing systems. The confirmed sample consists of one group-scale and five galaxy-cluster-scale lenses. The lensed sources range in redshift z ∼ 0.80–3.2 and in i-band surface brightness i_SB ∼ 23–25 mag arcsec⁻² (2″ aperture). For each of the six systems, we estimate the Einstein radius θ_E and the enclosed mass M_enc, which have ranges θ_E ∼ 5″–9″ and M_enc ∼ 8 × 10¹² to 6 × 10¹³ M_⊙, respectively.
Aggelopoulos, Nikolaos C
2015-08-01
Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Elements of Causal Inference: Foundations and Learning Algorithms
DEFF Research Database (Denmark)
Peters, Jonas Martin; Janzing, Dominik; Schölkopf, Bernhard
A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning.
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams (some with solutions), plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Energy Technology Data Exchange (ETDEWEB)
Diehl, H. T.; Buckley-Geer, E. J.; Lindgren, K. A.; Nord, B.; Gaitsch, H.; Gaitsch, S.; Lin, H.; Allam, S.; Odden, C.; Pellico, A.; Tucker, D. L.; Kuropatkin, N.; Soares-Santos, M. [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Collett, T. E. [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Furlanetto, C.; Nightingale, J. [University of Nottingham, School of Physics and Astronomy, Nottingham NG7 2RD (United Kingdom); Gill, M. S. S. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); More, A. [Kavli IPMU (WPI), UTIAS, The University of Tokyo, Kashiwa, Chiba 277-8583 (Japan); Costa, L. N. da; Neto, A. Fausti, E-mail: diehl@fnal.gov [Laboratório Interinstitucional de e-Astronomia—LIneA, Rua Gal. José Cristino 77, Rio de Janeiro, RJ—20921-400 (Brazil); Collaboration: DES Collaboration; and others
2017-09-01
We report the results of searches for strong gravitational lens systems in the Dark Energy Survey (DES) Science Verification and Year 1 observations. The Science Verification data span approximately 250 sq. deg. with a median i-band limiting magnitude for extended objects (10σ) of 23.0. The Year 1 data span approximately 2000 sq. deg. and have an i-band limiting magnitude for extended objects (10σ) of 22.9. As these data sets are both wide and deep, they are particularly useful for identifying strong gravitational lens candidates. Potential strong gravitational lens candidate systems were initially identified based on a color and magnitude selection in the DES object catalogs or because the system is at the location of a previously identified galaxy cluster. Cutout images of potential candidates were then visually scanned using an object viewer and numerically ranked according to whether or not we judged them to be likely strong gravitational lens systems. Having scanned nearly 400,000 cutouts, we present 374 candidate strong lens systems, of which 348 are identified for the first time. We provide the R.A. and decl., the magnitudes and photometric properties of the lens and source objects, and the distance (radius) of the source(s) from the lens center for each system.
Emotional inferences by pragmatics
Iza-Miqueleiz, Mauricio
2017-01-01
It has long been taken for granted that, along the course of reading a text, world knowledge is often required in order to establish coherent links between sentences (McKoon & Ratcliff 1992, Iza & Ezquerro 2000). The content grasped from a text turns out to be strongly dependent upon the reader’s additional knowledge that allows a coherent interpretation of the text as a whole. The world knowledge directing the inference may be of distinctive nature. Gygax et al. (2007) showed that m...
International Nuclear Information System (INIS)
2003-04-01
This book introduces natural sciences and engineering by profiling Seoul National University, KAIST, and Pohang University of Science and Technology. It covers the revival of research institutes as a key to national competitiveness, including the Daedeok research complex, ETRI, KIST, and the institutes of private companies. It goes on to describe a world in which science and engineering are well regarded, and explains the nurturing of professional engineers, CEOs, CTOs, and the nation's brains in political circles. It wraps up with investment for success, more advanced science education, benefits for students in engineering, and the making of star scientists.
DEFF Research Database (Denmark)
Katajainen, Jyrki
2008-01-01
In this project the goal is to develop the safe * family of containers for the CPH STL. The containers to be developed should be safer and more reliable than any of the existing implementations. A special focus should be put on strong exception safety since none of the existing prototypes available...
Morse, P. E.; Reading, A. M.; Lueg, C.
2014-12-01
Pattern recognition in scientific data is not only a computational problem but a human-observer problem as well. Human observation of, and interaction with, data visualisation software can augment, select, interrupt and modify computational routines and facilitate processes of pattern and significant feature recognition for subsequent human analysis, machine learning, expert and artificial intelligence systems. 'Tagger' is a Mac OS X interactive data visualisation tool that facilitates human-computer interaction for the recognition of patterns and significant structures. It is a graphical application developed using the Quartz Composer framework. 'Tagger' follows a Model-View-Controller (MVC) software architecture: the application problem domain (the Model) is to facilitate novel ways of abstractly representing data to a human interlocutor, presenting these via different viewer modalities (e.g., chart representations, particle systems, parametric geometry) to the user (View) and enabling interaction with the data (Controller) via a variety of Human Interface Devices (HIDs). The software enables the user to create an arbitrary array of tags that may be appended to the visualised data and are then saved into output files as a form of semantic metadata. Three fundamental problems that are not strongly supported by conventional scientific visualisation software are addressed: (1) how to visually animate data over time; (2) how to rapidly deploy unconventional parametrically driven data visualisations; and (3) how to construct and explore novel interaction models that capture the activity of the end user as semantic metadata that can be used to computationally enhance subsequent interrogation. Saved tagged data files may be loaded into Tagger, so that tags may be tagged, if desired. Recursion opens up the possibility of refining or overlapping different types of tags, tagging a variety of different POIs or types of events, and of capturing different types of specialist
International Nuclear Information System (INIS)
Froissart, Marcel
1976-01-01
Strong interactions are introduced through their most obvious aspect: nuclear forces. In the hadron family, the nucleon octet, the Ω⁻ decuplet, and the quark triplet are successively considered. The pion field having been placed at the origin of nuclear forces, low-energy phenomena are described, the force being explained as an exchange of structure corresponding to a Regge trajectory in a variable rotating state rather than the exchange of a well-defined particle. At high energies the concepts of pomeron, parton and stratons are introduced, and pionization and fragmentation are briefly differentiated.
EI: A Program for Ecological Inference
Directory of Open Access Journals (Sweden)
Gary King
2004-09-01
The program EI provides a method of inferring individual behavior from aggregate data. It implements the statistical procedures, diagnostics, and graphics from the book A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data (King 1997). Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Ecological inferences are required in political science research when individual-level surveys are unavailable (e.g., local or comparative electoral politics), unreliable (racial politics), insufficient (political geography), or infeasible (political history). They are also required in numerous areas of major significance in public policy (e.g., for applying the Voting Rights Act) and in other academic disciplines ranging from epidemiology and marketing to sociology and quantitative history.
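The deterministic bounds that anchor King's approach to ecological inference can be sketched in a few lines: for each aggregate unit, the accounting identity alone brackets the unknown group-specific rate (the classic method-of-bounds). The function and numbers below are an illustrative sketch, not code from the EI program itself.

```python
def ei_bounds(T, X):
    """Deterministic bounds on beta_b, the unknown rate for group b,
    given the aggregate rate T and group b's population share X.
    From the identity T = X*beta_b + (1-X)*beta_w with 0 <= beta <= 1."""
    lo = max(0.0, (T - (1.0 - X)) / X)
    hi = min(1.0, T / X)
    return lo, hi

# Invented precinct: 60% overall turnout, group b is 80% of residents.
print(ei_bounds(0.60, 0.80))  # beta_b must lie in roughly [0.5, 0.75]
```

When the group share is small the bounds can be vacuous (e.g., `ei_bounds(0.60, 0.30)` gives `(0.0, 1.0)`), which is precisely why EI supplements the bounds with a statistical model.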
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km² [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km² and fecal DNA, 6.65 ± 2.37 tigers/100 km²). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
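For intuition about why combining data sources tightens the estimate, a naive inverse-variance pooling of the two single-source summaries quoted in the abstract lands close to the reported joint estimate. The authors' actual analysis is a joint spatial capture–recapture model, not this simple pooling; the snippet is only a back-of-the-envelope check.

```python
import math

def pool(estimates):
    """Precision-weighted pooling of independent (mean, sd) estimates."""
    weights = [1.0 / sd ** 2 for _, sd in estimates]
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / sum(weights)
    sd = math.sqrt(1.0 / sum(weights))
    return mean, sd

# Posterior summaries from the abstract (tigers per 100 km^2):
photo = (12.02, 3.02)
scat = (6.65, 2.37)
print(pool([photo, scat]))  # ≈ (8.7, 1.9), close to the joint model's 8.5 ± 1.95
```

The pooled standard deviation is necessarily smaller than either input, which is the sense in which "combining information" buys precision.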
Causal inference based on counterfactuals
Directory of Open Access Journals (Sweden)
Höfler M
2005-09-01
Background: The counterfactual or potential outcome model has become increasingly standard for causal inference in epidemiological and medical studies. Discussion: This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues when estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in the health sciences and relates to many statistical procedures. Summary: Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept.
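The potential-outcome model described above can be made concrete with a toy simulation (all numbers invented): each unit carries two potential outcomes, the average causal effect is the mean of their differences, and confounded treatment assignment makes the naive observed contrast overstate it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Confounder U raises both the chance of treatment and the outcome.
u = rng.normal(size=n)
y0 = u + rng.normal(size=n)          # potential outcome without treatment
y1 = y0 + 2.0                        # potential outcome with treatment
t = (u + rng.normal(size=n)) > 0     # confounded (non-randomized) assignment

ace = np.mean(y1 - y0)               # true average causal effect: 2.0 by construction
y_obs = np.where(t, y1, y0)          # only one potential outcome is ever observed
naive = y_obs[t].mean() - y_obs[~t].mean()
# naive exceeds ace because treated units have systematically higher U.
```

This is exactly the "fundamental barrier" of observational studies the abstract mentions: the counterfactual difference is well defined per unit, but never jointly observed.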
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference, from use in proteomics to clinical trials. As chapters in this book demonstrate, BNP has important uses in clinical sciences and in inference for issues like unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
DEFF Research Database (Denmark)
Andersen, Jesper
2009-01-01
Collateral evolution is the problem of updating several library-using programs in response to API changes in the used library. In this dissertation we address the issue of understanding collateral evolutions by automatically inferring a high-level specification of the changes evident in a given set ... specifications inferred by spdiff in Linux are shown. We find that the inferred specifications concisely capture the actual collateral evolution performed in the examples.
System Support for Forensic Inference
Gehani, Ashish; Kirchner, Florent; Shankar, Natarajan
Digital evidence is playing an increasingly important role in prosecuting crimes. The reasons are manifold: financially lucrative targets are now connected online, systems are so complex that vulnerabilities abound and strong digital identities are being adopted, making audit trails more useful. If the discoveries of forensic analysts are to hold up to scrutiny in court, they must meet the standard for scientific evidence. Software systems are currently developed without consideration of this fact. This paper argues for the development of a formal framework for constructing “digital artifacts” that can serve as proxies for physical evidence; a system so imbued would facilitate sound digital forensic inference. A case study involving a filesystem augmentation that provides transparent support for forensic inference is described.
Cultural effects on the association between election outcomes and face-based trait inferences.
Directory of Open Access Journals (Sweden)
Chujun Lin
How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants' inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the
Cultural effects on the association between election outcomes and face-based trait inferences.
Lin, Chujun; Adolphs, Ralph; Alvarez, R Michael
2017-01-01
How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants' inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the difficult question of
Cultural effects on the association between election outcomes and face-based trait inferences
Adolphs, Ralph; Alvarez, R. Michael
2017-01-01
How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants’ inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the difficult question of
Energy Technology Data Exchange (ETDEWEB)
Petrov, S.
1996-10-01
Languages with a solvable implication problem but without complete and consistent systems of inference rules ("poor" languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a "poor" language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
Bayesian statistical inference
Directory of Open Access Journals (Sweden)
Bruno De Finetti
2017-04-01
Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian statistical inference is one of the last fundamental philosophical papers in which we can find the essence of De Finetti's approach to statistical inference.
Geometric statistical inference
International Nuclear Information System (INIS)
Periwal, Vipul
1999-01-01
A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined.
Bailer-Jones, Coryn A. L.
2017-04-01
Preface; 1. Probability basics; 2. Estimation and uncertainty; 3. Statistical models and inference; 4. Linear models, least squares, and maximum likelihood; 5. Parameter estimation: single parameter; 6. Parameter estimation: multiple parameters; 7. Approximating distributions; 8. Monte Carlo methods for inference; 9. Parameter estimation: Markov chain Monte Carlo; 10. Frequentist hypothesis testing; 11. Model comparison; 12. Dealing with more complicated problems; References; Index.
Logical inference and evaluation
International Nuclear Information System (INIS)
Perey, F.G.
1981-01-01
Most methodologies of evaluation currently used are based upon the theory of statistical inference. It is generally perceived that this theory is not capable of dealing satisfactorily with what are called systematic errors. Theories of logical inference should be capable of treating all of the information available, including that not involving frequency data. A theory of logical inference is presented as an extension of deductive logic via the concept of plausibility and the application of group theory. Some conclusions, based upon the application of this theory to evaluation of data, are also given.
Probability and Statistical Inference
Prosper, Harrison B.
2006-01-01
These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.
On quantum statistical inference
Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have
2018-02-15
expressed a variety of inference techniques on discrete and continuous distributions: exact inference, importance sampling, Metropolis-Hastings (MH) … without redoing any math or rewriting any code. And although our main goal is composable reuse, our performance is also good because we can use … control paths. The Hakaru language can express mixtures of discrete and continuous distributions, but the current disintegration transformation
Introductory statistical inference
Mukhopadhyay, Nitis
2014-01-01
This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques. Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist
Statistical Inference and Patterns of Inequality in the Global North
Moran, Timothy Patrick
2006-01-01
Cross-national inequality trends have historically been a crucial field of inquiry across the social sciences, and new methodological techniques of statistical inference have recently improved the ability to analyze these trends over time. This paper applies Monte Carlo, bootstrap inference methods to the income surveys of the Luxembourg Income…
Type Inference with Inequalities
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff
1991-01-01
Type inference can be phrased as constraint-solving over types. We consider an implicitly typed language equipped with recursive types, multiple inheritance, 1st order parametric polymorphism, and assignments. Type correctness is expressed as satisfiability of a possibly infinite collection of (monotonic) inequalities on the types of variables and expressions. A general result about systems of inequalities over semilattices yields a solvable form. We distinguish between deciding typability (the existence of solutions) and type inference (the computation of a minimal solution). In our case, both…
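The "computation of a minimal solution" mentioned in the abstract can be sketched as a least-fixed-point iteration. The toy below is illustrative only, not the paper's type system: it solves monotone inequalities x ⊇ f(assignment) over the lattice of finite sets ordered by inclusion, where iteration from the bottom element converges to the least solution because every constraint function is monotone.

```python
def least_solution(variables, constraints):
    """Least solution of monotone inequalities over the lattice of finite sets.
    constraints: list of (target, f) where f maps the current assignment to a
    set that sol[target] must include; each f is assumed monotone."""
    sol = {v: frozenset() for v in variables}   # start at the bottom element
    changed = True
    while changed:
        changed = False
        for target, f in constraints:
            need = frozenset(f(sol))
            if not need <= sol[target]:
                sol[target] |= need             # join (set union) with the requirement
                changed = True
    return sol

# Hypothetical system:  x ⊇ {1},  y ⊇ x,  z ⊇ y ∪ {2}
sol = least_solution("xyz", [
    ("x", lambda s: {1}),
    ("y", lambda s: s["x"]),
    ("z", lambda s: s["y"] | {2}),
])
```

Deciding typability in this sketch amounts to asking whether the iteration ever demands an impossible value; computing the minimal solution is the fixed point itself.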
Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda
2010-01-01
Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.
Hofmann, B
2008-06-01
Are there similarities between scientific and moral inference? This is the key question in this article. It takes as its point of departure an instance of one person's story in the media changing both Norwegian public opinion and a brand-new Norwegian law prohibiting the use of saviour siblings. The case appears to falsify existing norms and to establish new ones. The analysis of this case reveals similarities in the modes of inference in science and morals, inasmuch as (a) a single case functions as a counter-example to an existing rule; (b) there is a common presupposition of stability, similarity and order, which makes it possible to reason from a few cases to a general rule; and (c) this makes it possible to hold things together and retain order. In science, these modes of inference are referred to as falsification, induction and consistency. In morals, they have a variety of other names. Hence, even without abandoning the fact-value divide, there appear to be similarities between inference in science and inference in morals, which may encourage communication across the boundaries between "the two cultures" and which are relevant to medical humanities.
Gyssens, I C
2008-10-01
Despite many European Union (EU) conferences on fighting microbial resistance, rates of resistance in Europe continue to increase. Although research is catching up with discovery, the development of new antimicrobials is threatened by economic factors, in particular the need for a return of investment via high-volume sales. The EU should invest in independent research into the economic and business aspects of antibiotic development. Multidisciplinary input from the fields of finance, law, marketing, sociology and psychology will inform a broad agenda for change at the regulatory, academic and commercial levels and identify new options for novel anti-infective research and development, as recently recommended by the Science Academies of Europe (EASAC).
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
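The inference described above can be caricatured in a few lines. The sketch below is a hypothetical toy, not the authors' model: it scores a binary string by how much more likely a fair random process is to have produced it than a simple "repeat the previous symbol" regular process, a crude stand-in for low algorithmic complexity (the repeat probability 0.8 is an arbitrary illustrative choice).

```python
import math

def randomness(seq, p_repeat=0.8):
    """Log-likelihood ratio (in bits) for seq having been produced by a fair
    random process rather than a 'repeat the previous symbol' regular process."""
    # P(seq | random): every symbol is 0 or 1 with probability 1/2
    ll_random = len(seq) * math.log2(0.5)
    # P(seq | regular): first symbol free, then repeat with probability p_repeat
    ll_regular = math.log2(0.5)
    for prev, cur in zip(seq, seq[1:]):
        ll_regular += math.log2(p_repeat if cur == prev else 1 - p_repeat)
    return ll_random - ll_regular

# A run of identical symbols scores as evidence AGAINST the random process,
# while a mixed sequence scores in favour of it.
```

Eight heads in a row thus gets a strongly negative score, matching the intuition in the abstract that such a sequence "does not seem very random".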
Watson, Jane
2007-01-01
Inference, or decision making, is seen in curriculum documents as the final step in a statistical investigation. For a formal statistical enquiry this may be associated with sophisticated tests involving probability distributions. For young students without the mathematical background to perform such tests, it is still possible to draw informal…
Hybrid Optical Inference Machines
1991-09-27
with labels. Now, a set of facts can be generated in the dyadic form "u, R 1,2". Eichmann and Caulfield [19] consider the same type of encoding schemes. These architectures are based primarily on optical inner
Hrdlicka, Patrick J; Karmakar, Saswata
2017-11-29
Oligonucleotides (ONs) modified with 2'-O-(pyren-1-yl)methylribonucleotides have been explored for a range of applications in molecular biology, nucleic acid diagnostics, and materials science for more than 25 years. The first part of this review provides an overview of synthetic strategies toward 2'-O-(pyren-1-yl)methylribonucleotides and is followed by a summary of biophysical properties of nucleic acid duplexes modified with these building blocks. Insights from structural studies are then presented to rationalize the reported properties. In the second part, applications of ONs modified with 2'-O-(pyren-1-yl)methyl-RNA monomers are reviewed, which include detection of RNA targets, discrimination of single nucleotide polymorphisms, formation of self-assembled pyrene arrays on nucleic acid scaffolds, the study of charge transfer phenomena in nucleic acid duplexes, and sequence-unrestricted recognition of double-stranded DNA. The predictable binding mode of the pyrene moiety, coupled with the microenvironment-dependent properties and synthetic feasibility, render 2'-O-(pyren-1-yl)methyl-RNA monomers as a promising class of pyrene-functionalized nucleotide building blocks for new applications in molecular biology, nucleic acid diagnostics, and materials science.
An introduction to the philosophy of science
Staley, Kent W
2014-01-01
This book guides readers by gradual steps through the central concepts and debates in the philosophy of science. Using concrete examples from the history of science, Kent W. Staley shows how seemingly abstract philosophical issues are relevant to important aspects of scientific practice. Structured in two parts, the book first tackles the central concepts of the philosophy of science, such as the problem of induction, falsificationism, and underdetermination, and important figures and movements, such as the logical empiricists, Thomas Kuhn, and Paul Feyerabend. The second part turns to contemporary debates in the philosophy of science, such as scientific realism, explanation, the role of values in science, the different views of scientific inference, and probability. This broad yet detailed overview will give readers a strong grounding whilst also providing opportunities for further exploration. It will be of particular interest to students of philosophy, the philosophy of science, and science.
Explanatory Preferences Shape Learning and Inference.
Lombrozo, Tania
2016-10-01
Explanations play an important role in learning and inference. People often learn by seeking explanations, and they assess the viability of hypotheses by considering how well they explain the data. An emerging body of work reveals that both children and adults have strong and systematic intuitions about what constitutes a good explanation, and that these explanatory preferences have a systematic impact on explanation-based processes. In particular, people favor explanations that are simple and broad, with the consequence that engaging in explanation can shape learning and inference by leading people to seek patterns and favor hypotheses that support broad and simple explanations. Given the prevalence of explanation in everyday cognition, understanding explanation is therefore crucial to understanding learning and inference. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inference rule and problem solving
Energy Technology Data Exchange (ETDEWEB)
Goto, S
1982-04-01
Intelligent information processing signifies an opportunity of having man's intellectual activity executed on the computer, in which inference, in place of ordinary calculation, is used as the basic operational mechanism for such information processing. Many inference rules are derived from syllogisms in formal logic. The problem of programming this inference function is referred to as problem solving. Although inference and problem solving are logically in close relation, the calculation ability of current computers is at a low level for inference. For clarifying the relation between inference and computers, nonmonotonic logic has been considered. The paper deals with the above topics. 16 references.
Stochastic processes inference theory
Rao, Malempati M
2014-01-01
This is the revised and enlarged 2nd edition of the authors’ original text, which was intended to be a modest complement to Grenander's fundamental memoir on stochastic processes and related inference theory. The present volume gives a substantial account of regression analysis, both for stochastic processes and measures, and includes recent material on Ridge regression with some unexpected applications, for example in econometrics. The first three chapters can be used for a quarter or semester graduate course on inference on stochastic processes. The remaining chapters provide more advanced material on stochastic analysis suitable for graduate seminars and discussions, leading to dissertation or research work. In general, the book will be of interest to researchers in probability theory, mathematical statistics and electrical and information theory.
Making Type Inference Practical
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Oxhøj, Nicholas; Palsberg, Jens
1992-01-01
We present the implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding. The algorithm significantly improves our previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way. Also, the complexity has been dramatically improved, from exponential time to low polynomial time. The implementation uses the techniques of incremental graph construction and constraint template instantiation to avoid representing intermediate results, doing superfluous work, and recomputing type information. Experiments indicate that the implementation type checks as much as 100 lines per second. This results in a mature product, on which a number of tools can be based, for example a safety tool, an image compression tool, a code optimization tool, and an annotation tool. This may make type inference for object…
Directory of Open Access Journals (Sweden)
João Paulo Monteiro
2001-12-01
Full Text Available Russell's The Problems of Philosophy tries to establish a new theory of induction, at the same time that Hume is there accused of an "irrational scepticism about induction". But a careful analysis of the theory of knowledge explicitly acknowledged by Hume reveals that, contrary to the standard interpretation in the XXth century, possibly influenced by Russell, Hume deals exclusively with causal inference (which he never classifies as "causal induction", although now we are entitled to do so), never with inductive inference in general, mainly generalizations about sensible qualities of objects (whether, e.g., "all crows are black" or not is not among Hume's concerns). Russell's theories are thus only false alternatives to Hume's, in (1912) or in his (1948).
Causal inference in econometrics
Kreinovich, Vladik; Sriboonchitta, Songsak
2016-01-01
This book is devoted to the analysis of causal inference which is one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other one, or whether these two phenomena have a common cause. This analysis is the main focus of this volume. To get a good understanding of the causal inference, it is important to have models of economic phenomena which are as accurate as possible. Because of this need, this volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained by using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.
Active inference and learning.
Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O Doherty, John; Pezzulo, Giovanni
2016-09-01
This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Learning Convex Inference of Marginals
Domke, Justin
2012-01-01
Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach suffers difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined to be the minimization of a convex function, inspired by free energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main ...
Probabilistic inductive inference: a survey
Ambainis, Andris
2001-01-01
Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.
Nonparametric statistical inference
Gibbons, Jean Dickinson
2010-01-01
Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente
DEFF Research Database (Denmark)
Andersen, Jesper; Lawall, Julia
2010-01-01
A key issue in maintaining Linux device drivers is the need to keep them up to date with respect to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spdiff, that identifies common changes … developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect…
Efficient Bayesian inference for ARFIMA processes
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-03-01
Many geophysical quantities, like atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long-range dependence (LRD). LRD means that these quantities experience non-trivial temporal memory, which potentially enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LRD. In this paper we present a modern and systematic approach to the inference of LRD. Rather than Mandelbrot's fractional Gaussian noise, we use the more flexible Autoregressive Fractional Integrated Moving Average (ARFIMA) model which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LRD, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g. short memory effects) can be integrated over in order to focus on long memory parameters, and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data, with favorable comparison to the standard estimators.
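As a small illustration of the model class, not the authors' Bayesian inference code: the long-range dependence in ARFIMA enters through fractional differencing, whose filter weights follow a simple recursion. For d = 1 the recursion reduces to ordinary first differencing, while fractional 0 < d < 1/2 gives slowly decaying weights, which is the signature of long memory.

```python
def frac_diff_weights(d, n):
    """First n coefficients of the fractional difference operator (1 - B)^d,
    via the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

# d = 1 recovers ordinary first differencing: weights 1, -1, 0, 0, ...
# fractional d gives an infinite, slowly decaying weight sequence (long memory)
```

Applying these weights as a convolution filter to a series removes (or, with negative d, injects) the long-memory component; likelihood-based inference for d then works on the filtered residuals.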
Contingency inferences driven by base rates: Valid by sampling
Directory of Open Access Journals (Sweden)
Florian Kutzner
2011-04-01
Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
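A toy version of such a simulation (illustrative, not the authors' code): draw small 2x2 samples from a population whose base rates are skewed in the same direction and whose true contingency is positive, then check that the PC strategy, guessing a positive contingency whenever both sample base rates are skewed alike, succeeds above the 0.5 chance level. The cell probabilities and sample size below are arbitrary choices for the sketch.

```python
import random

def draw_sample(n, cells, rng):
    """Draw n observations from a 2x2 population; cells maps (x, y) -> probability."""
    outcomes = list(cells)
    weights = [cells[c] for c in outcomes]
    counts = dict.fromkeys(outcomes, 0)
    for c in rng.choices(outcomes, weights=weights, k=n):
        counts[c] += 1
    return counts

def pc_guess(counts):
    """Pseudocontingency strategy: infer a positive contingency iff the
    base rates of both variables are skewed in the same direction."""
    n = sum(counts.values())
    x1 = counts[(1, 1)] + counts[(1, 0)]   # frequency of X = 1
    y1 = counts[(1, 1)] + counts[(0, 1)]   # frequency of Y = 1
    return 1 if (x1 - n / 2) * (y1 - n / 2) > 0 else -1

rng = random.Random(0)
# population with matching skews (both base rates 0.7) and a positive contingency
cells = {(1, 1): 0.59, (1, 0): 0.11, (0, 1): 0.11, (0, 0): 0.19}
accuracy = sum(pc_guess(draw_sample(8, cells, rng)) == 1 for _ in range(2000)) / 2000
# accuracy comes out well above the 0.5 chance level for this population
```

This mirrors the abstract's first point: even though the PC strategy never looks at cell frequencies, sampling from a skew-aligned population makes it valid above chance.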
Bayesian methods for hackers probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
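The book builds its examples on PyMC; as a dependency-free sketch of the same computational idea, a random-walk Metropolis sampler for a coin-bias posterior fits in a few lines. This is a hypothetical illustration, not the book's code: the Beta-binomial setup, the proposal scale 0.1, and the step count are arbitrary choices.

```python
import math, random

def metropolis_coin(heads, tosses, steps=20000, seed=1):
    """Random-walk Metropolis sampling of the posterior over a coin's bias p,
    given `heads` successes in `tosses` trials and a uniform prior on p."""
    rng = random.Random(seed)

    def log_post(p):
        if not 0 < p < 1:
            return -math.inf               # outside the prior's support
        return heads * math.log(p) + (tosses - heads) * math.log(1 - p)

    p, lp = 0.5, log_post(0.5)
    samples = []
    for _ in range(steps):
        prop = p + rng.gauss(0, 0.1)       # symmetric random-walk proposal
        lprop = log_post(prop)
        if math.log(rng.random()) < lprop - lp:   # Metropolis accept/reject
            p, lp = prop, lprop
        samples.append(p)
    return samples

samples = metropolis_coin(heads=7, tosses=10)
posterior_mean = sum(samples) / len(samples)
# close to the exact Beta(8, 4) posterior mean, 8/12
```

The same accept/reject loop, with the log-posterior swapped out, is what probabilistic-programming systems automate and scale up.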
Bayesianism and inference to the best explanation
Directory of Open Access Journals (Sweden)
Valeriano IRANZO
2008-01-01
Full Text Available Bayesianism and Inference to the Best Explanation (IBE) are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly I explore several alternatives to include explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” (IBE-Bay) and “frequentist-Bayesianism” (Freq-Bay). After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay: (i) endorses a role for explanatory value in the assessment of scientific hypotheses; (ii) avoids a purely subjectivist reading of prior probabilities; and (iii) fits better than IBE-Bayesianism with two basic facts about science, i.e., the prominent role played by empirical testing and the existence of many scientific theories in the past that failed to fulfil their promises and were subsequently abandoned.
International Development Research Centre (IDRC) Digital Library (Canada)
David Spurgeon
Give us the tools: science and technology for development. Ottawa, … altered technical relationships among the factors used in the process of production, and the en- … to ourselves only the rights of audit and periodic substantive review." If a … and destroying scarce water reserves, recreational areas and a generally…
Dopamine, reward learning, and active inference
Directory of Open Access Journals (Sweden)
Thomas eFitzgerald
2015-11-01
Full Text Available Temporal difference learning models propose phasic dopamine signalling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on an hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behaviour. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
Dopamine, reward learning, and active inference.
FitzGerald, Thomas H B; Dolan, Raymond J; Friston, Karl
2015-01-01
Temporal difference learning models propose phasic dopamine signaling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on an hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behavior. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
Feature Inference Learning and Eyetracking
Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.
2009-01-01
Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…
An Inference Language for Imaging
DEFF Research Database (Denmark)
Pedemonte, Stefano; Catana, Ciprian; Van Leemput, Koen
2014-01-01
We introduce iLang, a language and software framework for probabilistic inference. The iLang framework enables the definition of directed and undirected probabilistic graphical models and the automated synthesis of high performance inference algorithms for imaging applications. The iLang framewor...
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahn, Sungsoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)
2017-05-25
Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used to resolve the issue in practice, where meanfield (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation which modifies factors of GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables) confirm that the newly proposed algorithms outperform and generalize MF and BP.
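The mean-field bound that G-MF improves on can be illustrated on a toy model. This is a generic sketch, not the paper's algorithm: the chain couplings and fields are invented for the example, and the lower bound follows from the Gibbs inequality for any factorized distribution.

```python
import itertools
import math

# Naive mean-field lower bound on the log-partition function of a tiny
# Ising chain, compared against the exact value by enumeration.
# Couplings J and fields h are illustrative, not from the paper.
n = 4
J = {(0, 1): 0.5, (1, 2): 0.5, (2, 3): 0.5}   # pairwise couplings
h = [0.2, -0.1, 0.3, 0.0]                      # local fields

def score(s):
    """Unnormalized log-weight of a spin configuration s in {-1,+1}^n."""
    val = sum(h[i] * s[i] for i in range(n))
    val += sum(Jab * s[a] * s[b] for (a, b), Jab in J.items())
    return val

# Exact log Z by brute-force enumeration (feasible only for tiny models)
logZ = math.log(sum(math.exp(score(s))
                    for s in itertools.product([-1, 1], repeat=n)))

# Mean-field fixed point: m_i = tanh(h_i + sum_j J_ij m_j)
def local_field(i, m):
    f = h[i]
    for (a, b), Jab in J.items():
        if a == i:
            f += Jab * m[b]
        elif b == i:
            f += Jab * m[a]
    return f

m = [0.1] * n
for _ in range(500):
    m = [math.tanh(local_field(i, m)) for i in range(n)]

def spin_entropy(mi):
    p = (1 + mi) / 2
    return -sum(q * math.log(q) for q in (p, 1 - p) if q > 0)

# Gibbs inequality: E_q[score] + H(q) <= log Z for any product distribution q
logZ_mf = (sum(h[i] * m[i] for i in range(n))
           + sum(Jab * m[a] * m[b] for (a, b), Jab in J.items())
           + sum(spin_entropy(mi) for mi in m))

print(logZ_mf <= logZ + 1e-9)  # → True (MF never exceeds the true log Z)
```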
Social Inference Through Technology
Oulasvirta, Antti
Awareness cues are computer-mediated, real-time indicators of people’s undertakings, whereabouts, and intentions. Already in the mid-1970s, UNIX users could use commands such as “finger” and “talk” to find out who was online and to chat. The small icons in instant messaging (IM) applications that indicate coconversants’ presence in the discussion space are the successors of “finger” output. Similar indicators can be found in online communities, media-sharing services, Internet relay chat (IRC), and location-based messaging applications. But presence and availability indicators are only the tip of the iceberg. Technological progress has enabled richer, more accurate, and more intimate indicators. For example, there are mobile services that allow friends to query and follow each other’s locations. Remote monitoring systems developed for health care allow relatives and doctors to assess the well-being of homebound patients (see, e.g., Tang and Venables 2000). But users also utilize cues that have not been deliberately designed for this purpose. For example, online gamers pay attention to other characters’ behavior to infer what the other players are like “in real life.” There is a common denominator underlying these examples: shared activities rely on the technology’s representation of the remote person. The other human being is not physically present but present only through a narrow technological channel.
Testing strong interaction theories
International Nuclear Information System (INIS)
Ellis, J.
1979-01-01
The author discusses possible tests of the current theories of the strong interaction, in particular, quantum chromodynamics. High-energy e⁺e⁻ interactions should provide an excellent means of studying the strong force. (W.D.L.)
Inverse Ising Inference Using All the Data
Aurell, Erik; Ekeberg, Magnus
2012-03-01
We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while still computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy and that the use of l1 regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes.
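The node-wise logistic-regression idea can be sketched in pure Python on a toy model: regress one spin on the others, and the fitted coefficients recover that spin's couplings. The model size, couplings, sample count, and optimizer settings below are illustrative, not from the paper, and no l1 regularization is applied in this minimal version.

```python
import itertools
import math
import random

# Inverse Ising via node-wise logistic regression (pseudolikelihood idea).
# Couplings, sample size, and learning rate are illustrative assumptions.
random.seed(0)

n = 3
J_true = {(0, 1): 0.8, (0, 2): -0.5, (1, 2): 0.3}

def score(s):
    return sum(Jab * s[a] * s[b] for (a, b), Jab in J_true.items())

# Exact sampling from the tiny model by enumerating all 2^n states
states = list(itertools.product([-1, 1], repeat=n))
weights = [math.exp(score(s)) for s in states]
samples = random.choices(states, weights=weights, k=8000)

# Logistic regression of spin 0 on spins 1 and 2 via gradient ascent.
# For an Ising model, P(s0 = +1 | s1, s2) = sigmoid(2*(J01*s1 + J02*s2)),
# so the fitted coefficients estimate twice the couplings.
w = [0.0, 0.0]
for _ in range(150):
    grad = [0.0, 0.0]
    for s in samples:
        p = 1.0 / (1.0 + math.exp(-(w[0] * s[1] + w[1] * s[2])))
        resid = (s[0] + 1) / 2 - p      # observed {0,1} label minus prediction
        grad[0] += resid * s[1]
        grad[1] += resid * s[2]
    w = [w[k] + 0.5 * grad[k] / len(samples) for k in range(2)]

J01_hat, J02_hat = w[0] / 2, w[1] / 2
print(round(J01_hat, 2), round(J02_hat, 2))   # close to 0.8 and -0.5
```

Note this reconstructs only spin 0's couplings; repeating the regression for every node, with l1 penalties as the abstract describes, gives the full interaction topology.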
Nonparametric statistical inference
Gibbons, Jean Dickinson
2014-01-01
Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.
Optimization methods for logical inference
Chandru, Vijay
2011-01-01
Merging logic and mathematics in deductive inference--an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks... it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in
78 FR 15710 - Strong Sensitizer Guidance
2013-03-12
... the supplemental definition of ``strong sensitizer'' found at 16 CFR 1500.3(c)(5). The Commission is proposing to revise the supplemental definition of ``strong sensitizer'' due to advancements in the science...'' definition, assist manufacturers in understanding how CPSC staff would assess whether a substance and/or...
Explanation in causal inference methods for mediation and interaction
VanderWeele, Tyler
2015-01-01
A comprehensive examination of methods for mediation and interaction, VanderWeele's book is the first to approach this topic from the perspective of causal inference. Numerous software tools are provided, and the text is both accessible and easy to read, with examples drawn from diverse fields. The result is an essential reference for anyone conducting empirical research in the biomedical or social sciences.
On quantum statistical inference
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Gill, Richard D.; Jupp, Peter E.
Recent developments in the mathematical foundations of quantum mechanics have brought the theory closer to that of classical probability and statistics. On the other hand, the unique character of quantum physics sets many of the questions addressed apart from those met classically in stochastics. Furthermore, concurrent advances in experimental techniques and in the theory of quantum computation have led to a strong interest in questions of quantum information, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various...
Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.
2011-01-01
The logistic regression model has been widely used in the social and natural sciences and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...
On principles of inductive inference
Kostecki, Ryszard Paweł
2011-01-01
We propose an intersubjective epistemic approach to the foundations of probability theory and statistical inference, based on relative entropy and category theory, aimed at bypassing the mathematical and conceptual problems of existing foundational approaches.
Statistical inference via fiducial methods
Salomé, Diemer
1998-01-01
In this thesis the attention is restricted to inductive reasoning using a mathematical probability model. A statistical procedure prescribes, for every theoretically possible set of data, the inference about the unknown of interest. ... See: Summary
Statistical inference for stochastic processes
National Research Council Canada - National Science Library
Basawa, Ishwar V; Prakasa Rao, B. L. S
1980-01-01
The aim of this monograph is to attempt to reduce the gap between theory and applications in the area of stochastic modelling, by directing the interest of future researchers to the inference aspects...
Active inference, communication and hermeneutics.
Friston, Karl J; Frith, Christopher D
2015-07-01
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Abortion: Strong's counterexamples fail
DEFF Research Database (Denmark)
Di Nucci, Ezio
2009-01-01
This paper shows that the counterexamples proposed by Strong in 2008 in the Journal of Medical Ethics to Marquis's argument against abortion fail. Strong's basic idea is that there are cases--for example, terminally ill patients--where killing an adult human being is prima facie seriously morally...
International Nuclear Information System (INIS)
Goldman, M.V.
1984-01-01
After a brief discussion of beam-excited Langmuir turbulence in the solar wind, we explain the criteria for wave-particle, three-wave and strong turbulence interactions. We then present the results of a numerical integration of the Zakharov equations, which describe the strong turbulence saturation of a weak (low-density) high energy, bump-on-tail beam instability. (author)
Strongly Correlated Systems Theoretical Methods
Avella, Adolfo
2012-01-01
The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Any technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique showed to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in as painless as po...
Strongly correlated systems numerical methods
Mancini, Ferdinando
2013-01-01
This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and material science, belong to this class of systems. Any technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique showed to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in as painless as possi...
Strongly correlated systems experimental techniques
Mancini, Ferdinando
2015-01-01
The continuous evolution and development of experimental techniques is at the basis of any fundamental achievement in modern physics. Strongly correlated systems (SCS), more than any other, need to be investigated through the greatest variety of experimental techniques in order to unveil and crosscheck the numerous and puzzling anomalous behaviors characterizing them. The study of SCS fostered the improvement of many old experimental techniques, but also the advent of many new ones just invented in order to analyze the complex behaviors of these systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. The volume presents a representative collection of the modern experimental techniques specifically tailored for the analysis of strongly correlated systems. Any technique is presented in great detail by its own inventor or by one of the world-wide recognize...
Direct Evidence for a Dual Process Model of Deductive Inference
Markovits, Henry; Brunet, Marie-Laurence; Thompson, Valerie; Brisson, Janie
2013-01-01
In 2 experiments, we tested a strong version of a dual process theory of conditional inference (cf. Verschueren et al., 2005a, 2005b) that assumes that most reasoners have 2 strategies available, the choice of which is determined by situational variables, cognitive capacity, and metacognitive control. The statistical strategy evaluates inferences…
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
2008-01-01
Since the invention of the laser in the 1960s, people have strived to reach higher intensities and shorter pulse durations. High intensities and ultrashort pulse durations are intimately related. Recent developments have shown that high intensity lasers also open the way to realize pulses with the shortest durations to date, giving birth to the field of attosecond science (1 asec = 10⁻¹⁸ s). This book is about high-intensity lasers and their applications. The goal is to give an up-to-date introduction to the technology behind these laser systems and to the broad range of intense laser applications. These applications include AMO (atomic, molecular and optical) physics, x-ray science, attosecond science, plasma physics and particle acceleration, condensed matter science and laser micromachining, and finally even high-energy physics.
Dessi, Roberta; Rustichini, Aldo
2015-01-01
A large literature in psychology, and more recently in economics, has argued that monetary rewards can reduce intrinsic motivation. We investigate whether the negative impact persists when intrinsic motivation is strong, and test this hypothesis experimentally, focusing on the motivation to undertake interesting and challenging tasks, informative about individual ability. We find that this type of task can generate strong intrinsic motivation that is impervious to the effect of monetary incen...
Bitcoin Meets Strong Consistency
Decker, Christian; Seidel, Jochen; Wattenhofer, Roger
2014-01-01
The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...
Strong gravity and supersymmetry
International Nuclear Information System (INIS)
Chamseddine, Ali H.; Salam, A.; Strathdee, J.
1977-11-01
A supersymmetric theory is constructed for a strong f plus a weak g graviton, together with their accompanying massive gravitinos, by gauging the graded OSp(2,2,1)×OSp(2,2,1) structure. The mixing term between f and g fields, which makes the strong graviton massive, can be introduced through a spontaneous symmetry-breaking mechanism implemented in this note by constructing a non-linear realization of the symmetry group.
Interactive Instruction in Bayesian Inference
DEFF Research Database (Denmark)
Khan, Azam; Breslav, Simon; Hornbæk, Kasper
2018-01-01
An instructional approach is presented to improve human performance in solving Bayesian inference problems. Starting from the original text of the classic Mammography Problem, the textual expression is modified and visualizations are added according to Mayer’s principles of instruction. These principles concern coherence, personalization, signaling, segmenting, multimedia, spatial contiguity, and pretraining. Principles of self-explanation and interactivity are also applied. Four experiments on the Mammography Problem showed that these principles help participants answer the questions... that an instructional approach to improving human performance in Bayesian inference is a promising direction.
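For reference, the Mammography Problem itself is a one-line Bayesian update. The specific numbers below (1% prevalence, 80% sensitivity, 9.6% false-positive rate) are the figures commonly used for this problem and are assumed here, not taken from the record above.

```python
# The classic Mammography Problem solved directly with Bayes' theorem.
# Figures are the commonly quoted ones, assumed for illustration.
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test)."""
    true_pos = prior * sensitivity
    false_pos = (1 - prior) * false_positive_rate
    return true_pos / (true_pos + false_pos)

p = posterior(0.01, 0.80, 0.096)
print(round(p, 3))  # → 0.078
```

The counterintuitively low posterior (under 8% despite a "positive" test) is exactly what makes the problem a standard benchmark for instruction in Bayesian reasoning.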
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method showing its ability to recover the correct model in a few prototype cases and discuss its application on a real dataset.
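The role of the Lagrange multiplier in maximum-entropy estimation can be seen in the smallest possible case. This sketch is far simpler than the arbitrary-order spin models studied in the paper: a single ±1 spin constrained to a target mean, for which the multiplier has the closed form lam = artanh(mu). The target mean is illustrative.

```python
import math

# Max-entropy distribution over s in {-1,+1} with prescribed mean mu:
# p(s) ∝ exp(lam*s), with Lagrange multiplier lam = artanh(mu).
mu = 0.4
lam = math.atanh(mu)

Z = math.exp(lam) + math.exp(-lam)   # partition function
p_plus = math.exp(lam) / Z           # P(s = +1)
mean = p_plus - (1 - p_plus)         # E[s] = tanh(lam)

print(round(mean, 6))  # → 0.4 (the moment constraint is met exactly)
```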
Eight challenges in phylodynamic inference
Directory of Open Access Journals (Sweden)
Simon D.W. Frost
2015-03-01
The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever-increasing corpus of sequence data.
Problem solving and inference mechanisms
Energy Technology Data Exchange (ETDEWEB)
Furukawa, K; Nakajima, R; Yonezawa, A; Goto, S; Aoyama, A
1982-01-01
The heart of the fifth generation computer will be powerful mechanisms for problem solving and inference. A deduction-oriented language is to be designed, which will form the core of the whole computing system. The language is based on predicate logic with the extended features of structuring facilities, meta structures and relational data base interfaces. Parallel computation mechanisms and specialized hardware architectures are being investigated to make possible efficient realization of the language features. The project includes research into an intelligent programming system, a knowledge representation language and system, and a meta inference system to be built on the core. 30 references.
Inferring Human Mobility from Sparse Low Accuracy Mobile Sensing Data
DEFF Research Database (Denmark)
Cuttone, Andrea; Jørgensen, Sune Lehmann; Larsen, Jakob Eg
2014-01-01
Understanding both collective and personal human mobility is a central topic in Computational Social Science. Smartphone sensing data is emerging as a promising source for studying human mobility. However, most literature focuses on high-precision GPS positioning and high-frequency sampling, which is not always feasible in a longitudinal study or for everyday applications because location sensing has a high battery cost. In this paper we study the feasibility of inferring human mobility from sparse, low accuracy mobile sensing data. We validate our results using participants' location diaries, and analyze the inferred geographical networks, the time spent at different places, and the number of unique places over time. Our results suggest that low resolution data allows accurate inference of human mobility patterns.
Grouping preprocess for haplotype inference from SNP and CNV data
International Nuclear Information System (INIS)
Shindo, Hiroyuki; Chigira, Hiroshi; Nagaoka, Tomoyo; Inoue, Masato; Kamatani, Naoyuki
2009-01-01
The method of statistical haplotype inference is an indispensable technique in the field of medical science. The authors previously reported Hardy-Weinberg equilibrium-based haplotype inference that could manage single nucleotide polymorphism (SNP) data. We recently extended the method to cover copy number variation (CNV) data. Haplotype inference from mixed data is important because SNPs and CNVs are occasionally in linkage disequilibrium. The idea underlying the proposed method is simple, but the algorithm for it needs to be quite elaborate to reduce the calculation cost. Consequently, we have focused on the details of the algorithm in this study. Although the main advantage of the method is accuracy, in that it does not use any approximation, its main disadvantage is still the calculation cost, which is sometimes intractable for large data sets with missing values.
Grouping preprocess for haplotype inference from SNP and CNV data
Energy Technology Data Exchange (ETDEWEB)
Shindo, Hiroyuki; Chigira, Hiroshi; Nagaoka, Tomoyo; Inoue, Masato [Department of Electrical Engineering and Bioscience, School of Advanced Science and Engineering, Waseda University, 3-4-1, Okubo, Shinjuku-ku, Tokyo 169-8555 (Japan); Kamatani, Naoyuki, E-mail: masato.inoue@eb.waseda.ac.j [Institute of Rheumatology, Tokyo Women' s Medical University, 10-22, Kawada-cho, Shinjuku-ku, Tokyo 162-0054 (Japan)
2009-12-01
The method of statistical haplotype inference is an indispensable technique in the field of medical science. The authors previously reported Hardy-Weinberg equilibrium-based haplotype inference that could manage single nucleotide polymorphism (SNP) data. We recently extended the method to cover copy number variation (CNV) data. Haplotype inference from mixed data is important because SNPs and CNVs are occasionally in linkage disequilibrium. The idea underlying the proposed method is simple, but the algorithm for it needs to be quite elaborate to reduce the calculation cost. Consequently, we have focused on the details of the algorithm in this study. Although the main advantage of the method is accuracy, in that it does not use any approximation, its main disadvantage is still the calculation cost, which is sometimes intractable for large data sets with missing values.
Object-Oriented Type Inference
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Palsberg, Jens
1991-01-01
We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an op...
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees) ... decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Mixed normal inference on multicointegration
Boswijk, H.P.
2009-01-01
Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ² null distribution. The asymptotic distribution of the
Statistical inference and Aristotle's Rhetoric.
Macdonald, Ranald R
2004-11-01
Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.
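The Aristotelian argument described here, that too many cases going in one direction are implausible under chance, is exactly the logic of a one-sided sign test. A minimal sketch, with illustrative counts:

```python
from math import comb

# "Too many cases in one direction": the probability, under pure chance
# (p = 1/2), of seeing at least k of n outcomes go the same way.
# The counts n = 10, k = 9 are illustrative.
def tail_prob(n, k):
    """P(at least k successes in n fair coin flips)."""
    return sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n

p = tail_prob(10, 9)
print(round(p, 4))  # → 0.0107
```

Since 9 of 10 outcomes in one direction would occur by chance with probability about 1%, the chance hypothesis is deemed implausible; this is the enthymeme-like reasoning the paper attributes to statistical tests.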
Strongly interacting Fermi gases
Directory of Open Access Journals (Sweden)
Bakr W.
2013-08-01
Strongly interacting gases of ultracold fermions have become an amazingly rich test-bed for many-body theories of fermionic matter. Here we present our recent experiments on these systems. Firstly, we discuss high-precision measurements on the thermodynamics of a strongly interacting Fermi gas across the superfluid transition. The onset of superfluidity is directly observed in the compressibility, the chemical potential, the entropy, and the heat capacity. Our measurements provide benchmarks for current many-body theories on strongly interacting fermions. Secondly, we have studied the evolution of fermion pairing from three to two dimensions in these gases, relating to the physics of layered superconductors. In the presence of p-wave interactions, Fermi gases are predicted to display topological superfluidity carrying Majorana edge states. Two possible avenues in this direction are discussed, our creation and direct observation of spin-orbit coupling in Fermi gases and the creation of fermionic molecules of 23Na40K that will feature strong dipolar interactions in their absolute ground state.
International Nuclear Information System (INIS)
Marier, D.
1992-01-01
This article presents the results of a financial rankings survey which show strong economic activity in the independent energy industry. The topics of the article include advisor turnover, overseas banks, and the increase in public offerings. The article identifies the top project finance investors for new projects and restructurings and rankings for lenders.
Universal Darwinism As a Process of Bayesian Inference.
Campbell, John O
2016-01-01
Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus, natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an "experiment" in the external world environment, and the results of that "experiment" or the "surprise" entailed by predicted and actual outcomes of the "experiment." Minimization of free energy implies that the implicit measure of "surprise" experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
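The claimed equivalence can be checked in miniature: a discrete replicator update (type frequencies reweighted by relative fitness) has exactly the algebraic form of a Bayesian update (prior reweighted by likelihood and normalized by the evidence). The numbers below are illustrative.

```python
# Discrete replicator dynamics as a Bayesian update: frequencies play the
# role of prior beliefs, fitness the role of likelihoods, and mean fitness
# the role of the evidence. Values are illustrative.
frequencies = [0.5, 0.3, 0.2]   # type frequencies, read as prior beliefs
fitness = [1.0, 2.0, 4.0]       # per-type fitness, read as likelihoods

mean_fitness = sum(p * f for p, f in zip(frequencies, fitness))  # evidence
posterior = [p * f / mean_fitness for p, f in zip(frequencies, fitness)]

print([round(x, 3) for x in posterior])  # → [0.263, 0.316, 0.421]
```

The fittest type gains frequency in exactly the proportion Bayes' theorem would assign it posterior probability, which is the formal identity the abstract builds on.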
Universal Darwinism as a process of Bayesian inference
Directory of Open Access Journals (Sweden)
John Oberon Campbell
2016-06-01
Full Text Available Many of the mathematical frameworks describing natural selection are equivalent to Bayes' Theorem, also known as Bayesian updating. By definition, a process of Bayesian Inference is one which involves a Bayesian update, so we may conclude that these frameworks describe natural selection as a process of Bayesian inference. Thus natural selection serves as a counter example to a widely-held interpretation that restricts Bayesian Inference to human mental processes (including the endeavors of statisticians). As Bayesian inference can always be cast in terms of (variational) free energy minimization, natural selection can be viewed as comprising two components: a generative model of an 'experiment' in the external world environment, and the results of that 'experiment' or the 'surprise' entailed by predicted and actual outcomes of the 'experiment'. Minimization of free energy implies that the implicit measure of 'surprise' experienced serves to update the generative model in a Bayesian manner. This description closely accords with the mechanisms of generalized Darwinian process proposed both by Dawkins, in terms of replicators and vehicles, and Campbell, in terms of inferential systems. Bayesian inference is an algorithm for the accumulation of evidence-based knowledge. This algorithm is now seen to operate over a wide range of evolutionary processes, including natural selection, the evolution of mental models and cultural evolutionary processes, notably including science itself. The variational principle of free energy minimization may thus serve as a unifying mathematical framework for universal Darwinism, the study of evolutionary processes operating throughout nature.
Strong Electroweak Symmetry Breaking
Grinstein, Benjamin
2011-01-01
Models of spontaneous breaking of electroweak symmetry by a strong interaction do not have a fine-tuning/hierarchy problem. They are conceptually elegant and use the only mechanism of spontaneous breaking of a gauge symmetry that is known to occur in nature. The simplest model, minimal technicolor with extended technicolor interactions, is appealing because one can calculate by scaling up from QCD. But it is ruled out on many counts: inappropriately low quark and lepton masses (or excessive FCNC), bad electroweak data fits, light scalar and vector states, etc. However, nature may not choose the minimal model and then we are stuck: except possibly through lattice simulations, we are unable to compute and test the models. In the LHC era it therefore makes sense to abandon specific models (of strong EW breaking) and concentrate on generic features that may indicate discovery. The Technicolor Straw Man is not a model but a parametrized search strategy inspired by a remarkable generic feature of walking technicolor,...
Plasmons in strong superconductors
International Nuclear Information System (INIS)
Baldo, M.; Ducoin, C.
2011-01-01
We present a study of the possible plasmon excitations that can occur in systems where strong superconductivity is present. In these systems the plasmon energy is comparable to or smaller than the pairing gap. As a prototype of these systems we consider the proton component of Neutron Star matter just below the crust when electron screening is not taken into account. For the realistic case we consider in detail the different aspects of the elementary excitations when the proton and electron components are considered within the Random-Phase Approximation generalized to the superfluid case, while the influence of the neutron component is considered only at a qualitative level. Electron screening plays a major role in modifying the proton spectrum and spectral function. At the same time the electron plasmon is strongly modified and damped by the indirect coupling with the superfluid proton component, even at moderately low values of the gap. The excitation spectrum shows the interplay of the different components and their relevance for each excitation mode. The results are relevant for neutrino physics and thermodynamical processes in neutron stars. If electron screening is neglected, the spectral properties of the proton component show some resemblance with the physical situation in high-T_c superconductors, and we briefly discuss similarities and differences in this connection. In a general prospect, the results of the study emphasize the role of Coulomb interaction in strong superconductors.
PREFACE: Strongly correlated electron systems Strongly correlated electron systems
Saxena, Siddharth S.; Littlewood, P. B.
2012-07-01
This special section is dedicated to the Strongly Correlated Electron Systems Conference (SCES) 2011, which was held from 29 August-3 September 2011, in Cambridge, UK. SCES'2011 is dedicated to 100 years of superconductivity and covers a range of topics in the area of strongly correlated systems. The correlated electronic and magnetic materials featured include f-electron based heavy fermion intermetallics and d-electron based transition metal compounds. The selected papers derived from invited presentations seek to deepen our understanding of the rich physical phenomena that arise from correlation effects. The focus is on quantum phase transitions, non-Fermi liquid phenomena, quantum magnetism, unconventional superconductivity and metal-insulator transitions. Both experimental and theoretical work is presented. Based on fundamental advances in the understanding of electronic materials, much of 20th century materials physics was driven by miniaturisation and integration in the electronics industry to the current generation of nanometre scale devices. The achievements of this industry have brought unprecedented advances to society and well-being, and no doubt there is much further to go—note that this progress is founded on investments and studies in the fundamentals of condensed matter physics from more than 50 years ago. Nevertheless, the defining challenges for the 21st century will lie in the discovery in science, and deployment through engineering, of technologies that can deliver the scale needed to have an impact on the sustainability agenda. Thus the big developments in nanotechnology may lie not in the pursuit of yet smaller transistors, but in the design of new structures that can revolutionise the performance of solar cells, batteries, fuel cells, light-weight structural materials, refrigeration, water purification, etc. The science presented in the papers of this special section also highlights the underlying interest in energy-dense materials, which
Cognitive Inference Device for Activity Supervision in the Elderly
Mishra, Nilamadhab; Lin, Chung-Chih; Chang, Hsien-Tsung
2014-01-01
Human activity, life span, and quality of life are enhanced by innovations in science and technology. Aging individuals need to take advantage of these developments to lead a self-regulated life. However, maintaining a self-regulated life at old age involves a high degree of risk, and the elderly often fail at this goal. Thus, the objective of our study is to investigate the feasibility of implementing a cognitive inference device (CI-device) for effective activity supervision in the elderly....
Statistical learning and selective inference.
Taylor, Jonathan; Tibshirani, Robert J
2015-06-23
We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
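The cost of cherry-picking is easy to demonstrate: scanning many null features for the strongest association produces a correlation far larger than any single null feature would typically show, which is exactly why naive significance tests mislead after selection. A small illustrative simulation (sample sizes and seed are arbitrary):

```python
import random

random.seed(0)
n, n_features = 50, 200

# The response is pure noise: no feature is truly associated with it.
y = [random.gauss(0, 1) for _ in range(n)]
X = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n_features)]

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    da = sum((u - ma) ** 2 for u in a) ** 0.5
    db = sum((v - mb) ** 2 for v in b) ** 0.5
    return num / (da * db)

# Under the null, a single |correlation| is typically about 1/sqrt(n) ~ 0.14,
# but the maximum over 200 null features is much larger.
best = max(abs(corr(x, y)) for x in X)
print(f"largest |correlation| among {n_features} null features: {best:.2f}")
```

A valid post-selection p-value must account for this maximization; that is the "higher bar" the abstract refers to.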
Bayesian inference with ecological applications
Link, William A
2009-01-01
This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject Examples drawn from ecology and wildlife research An essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference Companion website with analyt...
Statistical inference an integrated approach
Migon, Helio S; Louzada, Francisco
2014-01-01
Introduction Information The concept of probability Assessing subjective probabilities An example Linear algebra and probability Notation Outline of the book Elements of Inference Common statistical models Likelihood-based functions Bayes theorem Exchangeability Sufficiency and exponential family Parameter elimination Prior Distribution Entirely subjective specification Specification through functional forms Conjugacy with the exponential family Non-informative priors Hierarchical priors Estimation Introduction to decision theory Bayesian point estimation Classical point estimation Empirical Bayes estimation Comparison of estimators Interval estimation Estimation in the Normal model Approximating Methods The general problem of inference Optimization techniques Asymptotic theory Other analytical approximations Numerical integration methods Simulation methods Hypothesis Testing Introduction Classical hypothesis testing Bayesian hypothesis testing Hypothesis testing and confidence intervals Asymptotic tests Prediction...
Bayesian inference on proportional elections.
Directory of Open Access Journals (Sweden)
Gabriel Hideki Vatanabe Brunello
Full Text Available Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee the candidate with the most percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied on proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology to answer the probability that a given party will have representation on the chamber of deputies was developed. Inferences were made on a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied on data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
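The Brazilian seat-distribution step rests on a highest-averages rule; a simplified D'Hondt-style sketch follows (party names and vote totals are hypothetical, and real allocations also involve electoral quotients and coalitions):

```python
def dhondt(votes, seats):
    """Allocate `seats` among parties by the D'Hondt highest-averages rule."""
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the largest quotient v / (s + 1).
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

print(dhondt({"A": 100_000, "B": 80_000, "C": 30_000}, 8))
```

Wrapping a rule like this around simulated vote counts is what lets Monte Carlo draws be turned into probabilities of chamber representation.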
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
Full Text Available In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as it has been observed.
Statistical inference on residual life
Jeong, Jong-Hyeon
2014-01-01
This is a monograph on the concept of residual life, which is an alternative summary measure of time-to-event data, or survival data. The mean residual life has been used for many years under the name of life expectancy, so it is a natural concept for summarizing survival or reliability data. It is also more interpretable than the popular hazard function, especially for communications between patients and physicians regarding the efficacy of a new drug in the medical field. This book reviews existing statistical methods to infer the residual life distribution. The review and comparison includes existing inference methods for mean and median, or quantile, residual life analysis through medical data examples. The concept of the residual life is also extended to competing risks analysis. The targeted audience includes biostatisticians, graduate students, and PhD (bio)statisticians. Knowledge in survival analysis at an introductory graduate level is advisable prior to reading this book.
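The mean residual life the book builds on has a simple empirical analogue for uncensored data (the survival times below are hypothetical; handling censoring and competing risks requires the book's methods):

```python
def mean_residual_life(times, t):
    """Empirical mean residual life m(t) = average of (T - t) over subjects
    still at risk, i.e. with observed time T > t (no censoring assumed)."""
    residuals = [x - t for x in times if x > t]
    if not residuals:
        raise ValueError("no subjects survive past t")
    return sum(residuals) / len(residuals)

times = [2.0, 3.0, 5.0, 8.0, 13.0]  # hypothetical uncensored survival times
print(mean_residual_life(times, 4.0))
```

At t = 0 this reduces to the ordinary mean survival time, i.e. the life expectancy mentioned in the abstract.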
Statistical inference a short course
Panik, Michael J
2012-01-01
A concise, easily accessible introduction to descriptive and inferential techniques Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumptions of randomness and normality, and provides nonparametric methods for cases where parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal
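The population-median confidence interval mentioned above is typically built from order statistics: the interval (X_(k), X_(n+1-k)) covers the median with a probability given by the Binomial(n, 1/2) distribution, independent of the underlying distribution. A sketch of the rank calculation (the 0.95 target is illustrative):

```python
from math import comb

def median_ci_ranks(n, conf=0.95):
    """Largest k such that the interval (X_(k), X_(n+1-k)) between order
    statistics covers the population median with probability >= conf.
    Coverage = sum_{i=k}^{n-k} C(n, i) / 2**n  (Binomial(n, 1/2))."""
    for k in range(n // 2, 0, -1):
        coverage = sum(comb(n, i) for i in range(k, n - k + 1)) / 2 ** n
        if coverage >= conf:
            return k, n + 1 - k, coverage
    raise ValueError("n too small for the requested confidence level")

k, j, cov = median_ci_ranks(20)
print(k, j, round(cov, 4))  # ranks of the lower and upper order statistics
```

For n = 20 the interval runs from the 6th to the 15th ordered observation; the discreteness of the binomial means the achieved coverage slightly exceeds the nominal level.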
On Quantum Statistical Inference, II
Barndorff-Nielsen, O. E.; Gill, R. D.; Jupp, P. E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, theoretical developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and inte...
Nonparametric predictive inference in reliability
International Nuclear Information System (INIS)
Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.
2002-01-01
We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere.
International Nuclear Information System (INIS)
Gorenstein, M. I.; Gazdzicki, M.
2011-01-01
Analysis of fluctuations of hadron production properties in collisions of relativistic particles profits from use of measurable intensive quantities which are independent of system size variations. The first family of such quantities was proposed in 1992; another is introduced in this paper. Furthermore we present a proof of independence of volume fluctuations for quantities from both families within the framework of the grand canonical ensemble. These quantities are referred to as strongly intensive ones. Influence of conservation laws and resonance decays is also discussed.
Strong-coupling approximations
International Nuclear Information System (INIS)
Abbott, R.B.
1984-03-01
Standard path-integral techniques such as instanton calculations give good answers for weak-coupling problems, but become unreliable for strong-coupling. Here we consider a method of replacing the original potential by a suitably chosen harmonic oscillator potential. Physically this is motivated by the fact that potential barriers below the level of the ground-state energy of a quantum-mechanical system have little effect. Numerically, results are good, both for quantum-mechanical problems and for massive phi 4 field theory in 1 + 1 dimensions. 9 references, 6 figures
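The replace-by-an-oscillator idea can be illustrated on a textbook example (this is a generic Gaussian variational calculation, not necessarily the paper's specific scheme): for H = p^2/2 + lam*x^4 in quantum mechanics, a Gaussian trial state of frequency omega gives <H> = omega/4 + 3*lam/(4*omega^2), minimized at omega = (6*lam)^(1/3):

```python
def energy(omega, lam):
    """Variational energy <H> for H = p^2/2 + lam*x^4 with a Gaussian trial
    state of frequency omega: <p^2/2> = omega/4 and <x^4> = 3/(4*omega**2)."""
    return omega / 4 + 3 * lam / (4 * omega ** 2)

def best_omega(lam, lo=1e-3, hi=10.0, iters=200):
    """Golden-section search for the frequency minimizing the trial energy."""
    phi = (5 ** 0.5 - 1) / 2
    for _ in range(iters):
        m1 = hi - phi * (hi - lo)
        m2 = lo + phi * (hi - lo)
        if energy(m1, lam) < energy(m2, lam):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

lam = 1.0
print(best_omega(lam), (6 * lam) ** (1 / 3))  # numerical vs analytic minimum
```

The numerical minimum reproduces the analytic value, illustrating how a well-chosen harmonic oscillator can stand in for a strongly anharmonic potential.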
Variational inference & deep learning: A new synthesis
Kingma, D.P.
2017-01-01
In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.
Variational inference & deep learning: A new synthesis
Kingma, D.P.
2017-01-01
In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.
Continuous Integrated Invariant Inference, Phase I
National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...
Rotating compressible fluids under strong stratification
Czech Academy of Sciences Publication Activity Database
Feireisl, Eduard; Lu, Y.; Novotný, A.
2014-01-01
Vol. 19, October (2014), pp. 11-18 ISSN 1468-1218 Keywords: rotating fluid * compressible Navier-Stokes * strong stratification Subject RIV: BA - General Mathematics Impact factor: 2.519, year: 2014 http://www.sciencedirect.com/science/article/pii/S1468121814000212#
Strongly disordered superconductors
International Nuclear Information System (INIS)
Muttalib, K.A.
1982-01-01
We examine some universal effects of strong non-magnetic disorder on the electron-phonon and electron-electron interactions in a superconductor. In particular we explicitly take into account the effect of slow diffusion of electrons in a disordered medium by working in an exact impurity eigenstate representation. We find that the normal diffusion of electrons characterized by a constant diffusion coefficient does not lead to any significant correction to the electron-phonon or the effective electron-electron interactions in a superconductor. We then consider sufficiently strong disorder where Anderson localization of electrons becomes important and determine the effect of localization on the electron-electron interactions. We find that due to localization, the diffusion of electrons becomes anomalous in the sense that the diffusion coefficient becomes scale dependent. This results in an increase in the effective electron-electron interaction with increasing disorder. We propose that this provides a natural explanation for the unusual sensitivity of the transition temperature T_c of the high-T_c superconductors (T_c > 10 K) to damage effects
Dvali, Gia
2009-01-01
We show that whenever a 4-dimensional theory with N particle species emerges as a consistent low energy description of a 3-brane embedded in an asymptotically-flat (4+d)-dimensional space, the holographic scale of high-dimensional gravity sets the strong coupling scale of the 4D theory. This connection persists in the limit in which gravity can be consistently decoupled. We demonstrate this effect for orbifold planes, as well as for the solitonic branes and string theoretic D-branes. In all cases the emergence of a 4D strong coupling scale from bulk holography is a persistent phenomenon. The effect turns out to be insensitive even to such extreme deformations of the brane action that seemingly shield 4D theory from the bulk gravity effects. A well understood example of such deformation is given by large 4D Einstein term in the 3-brane action, which is known to suppress the strength of 5D gravity at short distances and change the 5D Newton's law into the four-dimensional one. Nevertheless, we observe that the ...
Variations on Bayesian Prediction and Inference
2016-05-09
There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.
Adaptive Inference on General Graphical Models
Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur
2012-01-01
Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...
Antonella Del Rosso
2016-01-01
Twenty years of designing, building and testing a number of innovative technologies, with the strong belief that the endeavour would lead to a historic breakthrough. The Bulletin publishes an abstract of the Courier’s interview with Barry Barish, one of the founding fathers of LIGO. The plots show the signals of gravitational waves detected by the twin LIGO observatories at Livingston, Louisiana, and Hanford, Washington. (Image: Caltech/MIT/LIGO Lab) On 11 February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations published a historic paper in which they showed a gravitational signal emitted by the merger of two black holes. These results come after 20 years of hard work by a large collaboration of scientists operating the two LIGO observatories in the US. Barry Barish, Linde Professor of Physics, Emeritus at the California Institute of Technology and former Director of the Global Design Effort for the Internat...
Strongly interacting Higgs bosons
International Nuclear Information System (INIS)
Appelquist, T.; Bernard, C.
1980-01-01
The sensitivity of present-energy weak interactions to a strongly interacting heavy-Higgs-boson sector is discussed. The gauged nonlinear sigma model, which is the limit of the linear model as the Higgs-boson mass goes to infinity, is used to organize and catalogue all possible heavy-Higgs-boson effects. As long as the SU(2)/sub L/ x SU(2)/sub R/ symmetry of the Higgs sector is preserved, these effects are found to be small, of the order of the square of the gauge coupling times logarithms (but not powers) of the Higgs-boson mass divided by the W mass. We work in the context of a simplified model with gauge group SU(2)/sub L/; the extension to SU(2)/sub L/ x U(1) is briefly discussed
Sweller, Naomi; Hayes, Brett K
2010-08-01
Three studies examined how task demands that impact on attention to typical or atypical category features shape the category representations formed through classification learning and inference learning. During training categories were learned via exemplar classification or by inferring missing exemplar features. In the latter condition inferences were made about missing typical features alone (typical feature inference) or about both missing typical and atypical features (mixed feature inference). Classification and mixed feature inference led to the incorporation of typical and atypical features into category representations, with both kinds of features influencing inferences about familiar (Experiments 1 and 2) and novel (Experiment 3) test items. Those in the typical inference condition focused primarily on typical features. Together with formal modelling, these results challenge previous accounts that have characterized inference learning as producing a focus on typical category features. The results show that two different kinds of inference learning are possible and that these are subserved by different kinds of category representations.
Generative inference for cultural evolution.
Kandler, Anne; Powell, Adam
2018-04-05
One of the major challenges in cultural evolution is to understand why and how various forms of social learning are used in human populations, both now and in the past. To date, much of the theoretical work on social learning has been done in isolation of data, and consequently many insights focus on revealing the learning processes or the distributions of cultural variants that are expected to have evolved in human populations. In population genetics, recent methodological advances have allowed a greater understanding of the explicit demographic and/or selection mechanisms that underlie observed allele frequency distributions across the globe, and their change through time. In particular, generative frameworks-often using coalescent-based simulation coupled with approximate Bayesian computation (ABC)-have provided robust inferences on the human past, with no reliance on a priori assumptions of equilibrium. Here, we demonstrate the applicability and utility of generative inference approaches to the field of cultural evolution. The framework advocated here uses observed population-level frequency data directly to establish the likely presence or absence of particular hypothesized learning strategies. In this context, we discuss the problem of equifinality and argue that, in the light of sparse cultural data and the multiplicity of possible social learning processes, the exclusion of those processes inconsistent with the observed data might be the most instructive outcome. Finally, we summarize the findings of generative inference approaches applied to a number of case studies.This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
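A generative ABC workflow of the kind advocated here can be sketched on a toy binomial model (all numbers are hypothetical): draw a parameter from the prior, simulate data under the generative model, and keep only draws whose simulated summary lands near the observation:

```python
import random

random.seed(1)

n, observed = 100, 30  # hypothetical data: 30 of 100 individuals carry a variant

def abc_posterior(n_draws=20_000, eps=2):
    """ABC rejection sampling: sample p from a Uniform(0, 1) prior, simulate a
    Binomial(n, p) count, and keep p when the count is within eps of the data."""
    accepted = []
    for _ in range(n_draws):
        p = random.random()
        sim = sum(random.random() < p for _ in range(n))
        if abs(sim - observed) <= eps:
            accepted.append(p)
    return accepted

post = abc_posterior()
print(len(post), sum(post) / len(post))  # posterior sample size and mean
```

The accepted draws approximate the posterior over the parameter; replacing the binomial simulator with a social-learning simulator and the count with a cultural-variant frequency gives the scheme the abstract describes.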
sick: The Spectroscopic Inference Crank
Casey, Andrew R.
2016-03-01
There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
Inferring network structure from cascades
Ghonge, Sushrut; Vural, Dervis Can
2017-07-01
Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.
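A deliberately naive version of the idea can be sketched as follows, treating u -> v as a candidate edge whenever v activates shortly after u in every cascade containing both (the paper's actual probabilistic formulas are far more general):

```python
def infer_edges(cascades, window=1):
    """Candidate directed edges u -> v: in every cascade containing both nodes,
    v activates within `window` time units after u (a naive precedence rule).
    Each cascade is a dict mapping node -> activation time."""
    nodes = set().union(*cascades)
    edges = set()
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            both = [c for c in cascades if u in c and v in c]
            if both and all(0 < c[v] - c[u] <= window for c in both):
                edges.add((u, v))
    return edges

# Two cascades over a hypothetical chain a -> b -> c (times are activation steps).
cascades = [{"a": 0, "b": 1, "c": 2}, {"b": 0, "c": 1}]
print(sorted(infer_edges(cascades)))
```

The rule recovers the chain's edges and correctly excludes the spurious shortcut a -> c, whose activation gap exceeds the window.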
SICK: THE SPECTROSCOPIC INFERENCE CRANK
Energy Technology Data Exchange (ETDEWEB)
Casey, Andrew R., E-mail: arc@ast.cam.ac.uk [Institute of Astronomy, University of Cambridge, Madingley Road, Cambdridge, CB3 0HA (United Kingdom)
2016-03-15
There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.
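The conditional-intensity approach compared in the paper can be sketched concretely. Below is a minimal Python illustration of a Hawkes conditional intensity with an exponential kernel, together with the log-likelihood that conditional-intensity-based (e.g., MCMC) inference evaluates; the parameter values (mu, alpha, beta) are illustrative, not taken from the paper:

```python
import math

def hawkes_intensity(t, events, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity with an exponential kernel:
    lambda(t) = mu + sum over past events t_i < t of alpha*exp(-beta*(t - t_i))."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

def log_likelihood(events, T, mu=0.5, alpha=0.8, beta=1.2):
    """Point-process log-likelihood on [0, T]: sum of log lambda(t_i)
    minus the integrated intensity (the compensator)."""
    log_term = sum(math.log(hawkes_intensity(ti, events, mu, alpha, beta))
                   for ti in events)
    # The exponential kernel integrates in closed form.
    compensator = mu * T + sum((alpha / beta) * (1 - math.exp(-beta * (T - ti)))
                               for ti in events)
    return log_term - compensator

events = [0.3, 1.1, 1.4, 2.9]
print(hawkes_intensity(3.0, events), log_likelihood(events, 5.0))
```

In a Bayesian treatment, this log-likelihood would be combined with priors on (mu, alpha, beta) inside an MCMC sampler.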
Inference in hybrid Bayesian networks
International Nuclear Information System (INIS)
Langseth, Helge; Nielsen, Thomas D.; Rumi, Rafael; Salmeron, Antonio
2009-01-01
Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (the so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
SICK: THE SPECTROSCOPIC INFERENCE CRANK
International Nuclear Information System (INIS)
Casey, Andrew R.
2016-01-01
There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
Strong-interaction nonuniversality
International Nuclear Information System (INIS)
Volkas, R.R.; Foot, R.; He, X.; Joshi, G.C.
1989-01-01
The universal QCD color theory is extended to an SU(3)_1 ⊗ SU(3)_2 ⊗ SU(3)_3 gauge theory, where quarks of the ith generation transform as triplets under SU(3)_i and singlets under the other two factors. The usual color group is then identified with the diagonal subgroup, which remains exact after symmetry breaking. The gauge bosons associated with the 16 broken generators then form two massive octets under ordinary color. The interactions between quarks and these heavy gluonlike particles are explicitly nonuniversal and thus an exploration of their physical implications allows us to shed light on the fundamental issue of strong-interaction universality. Nonuniversality and weak flavor mixing are shown to generate heavy-gluon-induced flavor-changing neutral currents. The phenomenology of these processes is studied, as they provide the major experimental constraint on the extended theory. Three symmetry-breaking scenarios are presented. The first has color breaking occurring at the weak scale, while the second and third divorce the two scales. The third model has the interesting feature of radiatively induced off-diagonal Kobayashi-Maskawa matrix elements
Wickens, F
Our friend and colleague John Strong was cruelly taken from us by a brain tumour on Monday 31st July, a few days before his 65th birthday. John started his career working with a group from Westfield College, under the leadership of Ted Bellamy. He obtained his PhD and spent the early part of his career on experiments at Rutherford Appleton Laboratory (RAL), but after the early 1970s his research was focussed on experiments at CERN. Over the years he made a number of notable contributions to experiments at CERN: the Omega spectrometer adopted a system John had originally developed for experiments at RAL, using vidicon cameras to record the sparks in the spark chambers; he contributed to the success of NA1 and NA7, where he became heavily involved in the electronic trigger systems; he was responsible for the second level trigger system for the ALEPH detector and spent five years leading a team that designed and built the system, which ran for twelve years with only minor interventions. Following ALEPH he tur...
Stirring Strongly Coupled Plasma
Fadafan, Kazem Bitaghsir; Rajagopal, Krishna; Wiedemann, Urs Achim
2009-01-01
We determine the energy it takes to move a test quark along a circle of radius L with angular frequency w through the strongly coupled plasma of N=4 supersymmetric Yang-Mills (SYM) theory. We find that for most values of L and w the energy deposited by stirring the plasma in this way is governed either by the drag force acting on a test quark moving through the plasma in a straight line with speed v=Lw or by the energy radiated by a quark in circular motion in the absence of any plasma, whichever is larger. There is a continuous crossover from the drag-dominated regime to the radiation-dominated regime. In the crossover regime we find evidence for significant destructive interference between energy loss due to drag and that due to radiation as if in vacuum. The rotating quark thus serves as a model system in which the relative strength of, and interplay between, two different mechanisms of parton energy loss is accessible via a controlled classical gravity calculation. We close by speculating on the implicati...
Plasma pressure and anisotropy inferred from the Tsyganenkomagnetic field model
Directory of Open Access Journals (Sweden)
F. Cao
A numerical procedure has been developed to deduce the plasma pressure and anisotropy from the Tsyganenko magnetic field model. The Tsyganenko empirical field model, which is based on vast satellite field data, provides a realistic description of magnetic field configuration in the magnetosphere. When the force balance under the static condition is assumed, the electromagnetic J×B force from the Tsyganenko field model can be used to infer the plasma pressure and anisotropy distributions consistent with the field model. It is found that the J×B force obtained from the Tsyganenko field model is not curl-free. The curl-free part of the J×B force in an empirical field model can be balanced by the gradient of the isotropic pressure, while the nonzero curl of the J×B force can only be associated with the pressure anisotropy. The plasma pressure and anisotropy in the near-Earth plasma sheet are numerically calculated to obtain a static equilibrium consistent with the Tsyganenko field model both in the noon-midnight meridian and in the equatorial plane. The plasma pressure distribution deduced from the Tsyganenko 1989 field model is highly anisotropic and shows this feature early in the substorm growth phase. The pressure anisotropy parameter α_P, defined as α_P = 1 − P_∥/P_⊥, is typically ~0.3 at x ≈ −4.5 R_E and gradually decreases to a small negative value with an increasing tailward distance. The pressure anisotropy from the Tsyganenko 1989 model accounts for 50% of the cross-tail current at maximum and only in a highly localized region near x ≈ −10 R_E. In comparison, the plasma pressure anisotropy inferred from the Tsyganenko 1987 model is much smaller. We also find that the boundary
Cloern, James E.; Jassby, Alan D.; Carstensen, Jacob; Bennett, William A.; Kimmerer, Wim; Mac Nally, Ralph; Schoellhamer, David H.; Winder, Monika
2012-01-01
We comment on a nonstandard statistical treatment of time-series data first published by Breton et al. (2006) in Limnology and Oceanography and, more recently, used by Glibert (2010) in Reviews in Fisheries Science. In both papers, the authors make strong inferences about the underlying causes of population variability based on correlations between cumulative sum (CUSUM) transformations of organism abundances and environmental variables. Breton et al. (2006) reported correlations between CUSUM-transformed values of diatom biomass in Belgian coastal waters and the North Atlantic Oscillation, and between meteorological and hydrological variables. Each correlation of CUSUM-transformed variables was judged to be statistically significant. On the basis of these correlations, Breton et al. (2006) developed "the first evidence of synergy between climate and human-induced river-based nitrate inputs with respect to their effects on the magnitude of spring Phaeocystis colony blooms and their dominance over diatoms."
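The CUSUM transformation at issue can be stated in a few lines. The sketch below (illustrative, not the authors' code) computes the cumulative sum of deviations from the series mean and a plain Pearson correlation; because CUSUM turns noise into smooth, trending curves, correlations between transformed series are typically inflated relative to the raw series, which is the statistical hazard the comment addresses:

```python
import random

def cusum(xs):
    """Cumulative sum of deviations from the series mean -- the transform
    applied before correlating in the criticized analyses."""
    m = sum(xs) / len(xs)
    out, running = [], 0.0
    for x in xs:
        running += x - m
        out.append(running)
    return out

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Two independent white-noise series: any correlation is spurious.
random.seed(1)
a = [random.gauss(0, 1) for _ in range(200)]
b = [random.gauss(0, 1) for _ in range(200)]
print(abs(pearson_r(a, b)), abs(pearson_r(cusum(a), cusum(b))))
```

The inflation is a tendency, not a guarantee for every realization, which is precisely why standard significance thresholds do not apply to CUSUM-transformed series.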
Cognitive Inference Device for Activity Supervision in the Elderly
Directory of Open Access Journals (Sweden)
Nilamadhab Mishra
2014-01-01
Human activity, life span, and quality of life are enhanced by innovations in science and technology. Aging individuals need to take advantage of these developments to lead a self-regulated life. However, maintaining a self-regulated life at old age involves a high degree of risk, and the elderly often fail at this goal. Thus, the objective of our study is to investigate the feasibility of implementing a cognitive inference device (CI-device) for effective activity supervision in the elderly. To frame the CI-device, we propose a device design framework along with an inference algorithm and implement the designs through an artificial neural model with different configurations, mapping the CI-device's functions to minimise the device's prediction error. An analysis and discussion are then provided to validate the feasibility of CI-device implementation for activity supervision in the elderly.
Rational Inference of Beliefs and Desires From Emotional Expressions.
Wu, Yang; Baker, Chris L; Tenenbaum, Joshua B; Schulz, Laura E
2018-04-01
We investigated people's ability to infer others' mental states from their emotional reactions, manipulating whether agents wanted, expected, and caused an outcome. Participants recovered agents' desires throughout. When the agent observed, but did not cause the outcome, participants' ability to recover the agent's beliefs depended on the evidence they got (i.e., her reaction only to the actual outcome or to both the expected and actual outcomes; Experiments 1 and 2). When the agent caused the event, participants' judgments also depended on the probability of the action (Experiments 3 and 4); when actions were improbable given the mental states, people failed to recover the agent's beliefs even when they saw her react to both the anticipated and actual outcomes. A Bayesian model captured human performance throughout (rs ≥ .95), consistent with the proposal that people rationally integrate information about others' actions and emotional reactions to infer their unobservable mental states. Copyright © 2017 Cognitive Science Society, Inc.
Lower complexity bounds for lifted inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2015-01-01
instances of the model. Numerous approaches for such "lifted inference" techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established earlier in Jaeger (2000; Jaeger, M. 2000. On the complexity of inference about …) … that under the assumption that NETIME≠ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier-, and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference and for knowledge bases not containing …
Statistical inference for financial engineering
Taniguchi, Masanobu; Ogata, Hiroaki; Taniai, Hiroyuki
2014-01-01
This monograph provides the fundamentals of statistical inference for financial engineering and covers some selected methods suitable for analyzing financial time series data. In order to describe the actual financial data, various stochastic processes, e.g. non-Gaussian linear processes, non-linear processes, long-memory processes, locally stationary processes etc. are introduced and their optimal estimation is considered as well. This book also includes several statistical approaches, e.g., discriminant analysis, the empirical likelihood method, control variate method, quantile regression, realized volatility etc., which have been recently developed and are considered to be powerful tools for analyzing the financial data, establishing a new bridge between time series and financial engineering. This book is well suited as a professional reference book on finance, statistics and statistical financial engineering. Readers are expected to have an undergraduate-level knowledge of statistics.
Type inference for correspondence types
DEFF Research Database (Denmark)
Hüttel, Hans; Gordon, Andy; Hansen, Rene Rydhof
2009-01-01
We present a correspondence type/effect system for authenticity in a π-calculus with polarized channels, dependent pair types and effect terms and show how one may, given a process P and an a priori type environment E, generate constraints that are formulae in the Alternating Least Fixed-Point (ALFP) logic. We then show how a reasonable model of the generated constraints yields a type/effect assignment such that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only if it has a finite model. As a consequence, we obtain the result that type/effect inference in our system is polynomial-time decidable.
Causal inference in public health.
Glass, Thomas A; Goodman, Steven N; Hernán, Miguel A; Samet, Jonathan M
2013-01-01
Causal inference has a central role in public health; the determination that an association is causal indicates the possibility for intervention. We review and comment on the long-used guidelines for interpreting evidence as supporting a causal association and contrast them with the potential outcomes framework that encourages thinking in terms of causes that are interventions. We argue that in public health this framework is more suitable, providing an estimate of an action's consequences rather than the less precise notion of a risk factor's causal effect. A variety of modern statistical methods adopt this approach. When an intervention cannot be specified, causal relations can still exist, but how to intervene to change the outcome will be unclear. In application, the often-complex structure of causal processes needs to be acknowledged and appropriate data collected to study them. These newer approaches need to be brought to bear on the increasingly complex public health challenges of our globalized world.
Inference Attacks and Control on Database Structures
Directory of Open Access Journals (Sweden)
Muhamed Turkanovic
2015-02-01
Today's databases store information with sensitivity levels that range from public to highly sensitive, hence ensuring confidentiality can be highly important, but also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy in relation to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since these models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.
LAIT: a local ancestry inference toolkit.
Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei
2017-09-06
Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing software packages require specific formatted input files and generate output files in various types, yielding practical inconvenience. We developed a tool set, Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats as well as standardize and summarize inference results for four popular local ancestry inference software: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that LAIT provides convenience to run multiple local ancestry inference software. In addition, we evaluated the performance of local ancestry software among different supported software packages, mainly focusing on inference accuracy and computational resources used. We provided a toolkit to facilitate the use of local ancestry inference software, especially for users with limited bioinformatics background.
Forward and backward inference in spatial cognition.
Directory of Open Access Journals (Sweden)
Will D Penny
This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
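For Gaussian beliefs, the forward-inference step that optimally combines location estimates from path integration with those from sensory input reduces to a precision-weighted average. A minimal sketch under that Gaussian assumption (the cue means and variances below are illustrative):

```python
def fuse(mu1, var1, mu2, var2):
    """Bayes-optimal fusion of two independent Gaussian estimates
    (e.g., path integration vs. sensory input): precisions add, and the
    fused mean is the precision-weighted average of the two means."""
    p1, p2 = 1.0 / var1, 1.0 / var2
    var = 1.0 / (p1 + p2)
    mu = var * (p1 * mu1 + p2 * mu2)
    return mu, var

# Path integration says position ~ N(2.0, 1.0); vision says ~ N(3.0, 0.25).
mu, var = fuse(2.0, 1.0, 3.0, 0.25)
print(mu, var)  # fused estimate lies closer to the more precise cue
```

The fused variance is always smaller than either input variance, which is why combining cues sharpens the location estimate.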
Generative Inferences Based on Learned Relations
Chen, Dawn; Lu, Hongjing; Holyoak, Keith J.
2017-01-01
A key property of relational representations is their "generativity": From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from…
Inference in models with adaptive learning
Chevillon, G.; Massmann, M.; Mavroeidis, S.
2010-01-01
Identification of structural parameters in models with adaptive learning can be weak, causing standard inference procedures to become unreliable. Learning also induces persistent dynamics, and this makes the distribution of estimators and test statistics non-standard. Valid inference can be
Fiducial inference - A Neyman-Pearson interpretation
Salome, D; VonderLinden, W; Dose; Fischer, R; Preuss, R
1999-01-01
Fisher's fiducial argument is a tool for deriving inferences in the form of a probability distribution on the parameter space, not based on Bayes's Theorem. Lindley established that in exceptional situations fiducial inferences coincide with posterior distributions; in the other situations fiducial
Uncertainty in prediction and in inference
Hilgevoord, J.; Uffink, J.
1991-01-01
The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in
Causal inference in economics and marketing.
Varian, Hal R
2016-07-05
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual-a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
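The counterfactual logic summarized here can be caricatured in a few lines. In this deliberately minimal sketch (not from the article), the "model" fitted to each group is just a group mean, standing in for whatever machine-learning regressor one would actually train on covariates; the control model's prediction for the treated units plays the role of the counterfactual:

```python
def fit_mean(rows):
    """Trivial stand-in for a fitted outcome model: the group's mean outcome.
    In practice this would be a regressor trained on covariates."""
    return sum(rows) / len(rows)

# Hypothetical outcomes for treated and control units.
treated = [5.1, 5.4, 4.9, 5.6]
control = [4.0, 3.8, 4.3, 4.1]

# Counterfactual for the treated group: what the control-fitted model predicts
# would have happened to them absent the treatment.
counterfactual = fit_mean(control)
effect = fit_mean(treated) - counterfactual
print(effect)  # roughly 1.2
```

The whole difficulty of causal inference lives in making that counterfactual prediction credible (confounding, selection); the arithmetic itself is the easy part.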
Nonparametric predictive inference in statistical process control
Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.
2000-01-01
New methods for statistical process control are presented, where the inferences have a nonparametric predictive nature. We consider several problems in process control in terms of uncertainties about future observable random quantities, and we develop inferences for these random quantities based on
The Impact of Disablers on Predictive Inference
Cummins, Denise Dellarosa
2014-01-01
People consider alternative causes when deciding whether a cause is responsible for an effect (diagnostic inference) but appear to neglect them when deciding whether an effect will occur (predictive inference). Five experiments were conducted to test a 2-part explanation of this phenomenon: namely, (a) that people interpret standard predictive…
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark
2006-01-01
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available \\primula\\ tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...
Using the Weibull distribution reliability, modeling and inference
McCool, John I
2012-01-01
Understand and utilize the latest developments in Weibull inferential methods While the Weibull distribution is widely used in science and engineering, most engineers do not have the necessary statistical training to implement the methodology effectively. Using the Weibull Distribution: Reliability, Modeling, and Inference fills a gap in the current literature on the topic, introducing a self-contained presentation of the probabilistic basis for the methodology while providing powerful techniques for extracting information from data. The author explains the use of the Weibull distribution
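The reliability quantities the book builds on have simple closed forms. A small sketch of the two-parameter Weibull reliability and hazard functions (the shape and characteristic-life values are illustrative, not from the book):

```python
import math

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)^beta): probability a unit survives past time t
    (beta = shape parameter, eta = characteristic life)."""
    return math.exp(-((t / eta) ** beta))

def weibull_hazard(t, beta, eta):
    """h(t) = (beta/eta) * (t/eta)^(beta-1); increasing in t when beta > 1,
    the classic wear-out regime."""
    return (beta / eta) * (t / eta) ** (beta - 1)

# A hypothetical bearing with shape 2.0 (wear-out) and characteristic life 1000 h:
print(weibull_reliability(1000.0, 2.0, 1000.0))  # exp(-1) ≈ 0.3679
```

By construction, reliability at the characteristic life eta is exp(-1) regardless of the shape parameter, which is one reason eta is a convenient scale quantity.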
Extended likelihood inference in reliability
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.
1978-10-01
Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist
Reinforcement learning or active inference?
Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J
2009-07-29
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
Reinforcement learning or active inference?
Directory of Open Access Journals (Sweden)
Karl J Friston
2009-07-01
Full Text Available This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
Ancient Biomolecules and Evolutionary Inference.
Cappellini, Enrico; Prohaska, Ana; Racimo, Fernando; Welker, Frido; Pedersen, Mikkel Winther; Allentoft, Morten E; de Barros Damgaard, Peter; Gutenbrunner, Petra; Dunne, Julie; Hammann, Simon; Roffet-Salque, Mélanie; Ilardo, Melissa; Moreno-Mayar, J Víctor; Wang, Yucheng; Sikora, Martin; Vinner, Lasse; Cox, Jürgen; Evershed, Richard P; Willerslev, Eske
2018-04-25
Over the last decade, studies of ancient biomolecules-particularly ancient DNA, proteins, and lipids-have revolutionized our understanding of evolutionary history. Though initially fraught with many challenges, the field now stands on firm foundations. Researchers now successfully retrieve nucleotide and amino acid sequences, as well as lipid signatures, from progressively older samples, originating from geographic areas and depositional environments that, until recently, were regarded as hostile to long-term preservation of biomolecules. Sampling frequencies and the spatial and temporal scope of studies have also increased markedly, and with them the size and quality of the data sets generated. This progress has been made possible by continuous technical innovations in analytical methods, enhanced criteria for the selection of ancient samples, integrated experimental methods, and advanced computational approaches. Here, we discuss the history and current state of ancient biomolecule research, its applications to evolutionary inference, and future directions for this young and exciting field. Expected final online publication date for the Annual Review of Biochemistry Volume 87 is June 20, 2018. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
Analogical Inference and Analogical Access.
1987-08-04
[OCR residue from the report's figures and documentation page; recoverable details: report UIUCDCS-R-87-1365, University of Illinois, monitored by Cognitive Sciences (Code 1142CS); approved for public release, distribution unlimited.]
Beyond statistical inference: a decision theory for science.
Killeen, Peter R
2006-08-01
Traditional null hypothesis significance testing does not yield the probability of the null or its alternative and, therefore, cannot logically ground scientific decisions. The decision theory proposed here calculates the expected utility of an effect on the basis of (1) the probability of replicating it and (2) a utility function on its size. It takes significance tests--which place all value on the replicability of an effect and none on its magnitude--as a special case, one in which the cost of a false positive is revealed to be an order of magnitude greater than the value of a true positive. More realistic utility functions credit both replicability and effect size, integrating them for a single index of merit. The analysis incorporates opportunity cost and is consistent with alternate measures of effect size, such as r2 and information transmission, and with Bayesian model selection criteria. An alternate formulation is functionally equivalent to the formal theory, transparent, and easy to compute.
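The idea of an index of merit can be sketched as follows. The replication-probability approximation p_rep = Phi(z / sqrt(2)) and the utility numbers below are illustrative assumptions, not Killeen's exact formulation: replicability credits the effect, and an assumed cost debits a non-replicable one.

```python
from math import erf, sqrt

def phi(z):
    # Standard normal CDF.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_rep(z_obs):
    # Probability of replicating the direction of an effect,
    # using the common approximation p_rep = Phi(z / sqrt(2)).
    return phi(z_obs / sqrt(2.0))

def expected_utility(z_obs, effect_size, cost_false_positive=10.0):
    # Illustrative index of merit: credit replicability and magnitude,
    # debit the (assumed) cost of pursuing a non-replicable effect.
    p = p_rep(z_obs)
    return p * effect_size - (1.0 - p) * cost_false_positive

# A large, replicable effect scores higher than a tiny, marginal one.
print(expected_utility(3.0, effect_size=0.8))
print(expected_utility(1.7, effect_size=0.1))
```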
Inferring climate variability from skewed proxy records
Emile-Geay, J.; Tingley, M.
2013-12-01
compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
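A simple screen for the kind of departure from normality discussed above can be written with basic moment statistics. This is a generic sketch, not the authors' method: under normality the sample skewness has standard error roughly sqrt(6/n), so a large standardized skewness flags a record that should not be modelled as Gaussian. The data here are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_skewness(x):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std(ddof=0)
    return np.mean(((x - m) / s) ** 3)

def departs_from_normality(x, z_crit=2.0):
    # Rough screen: under normality, skewness has standard error ~ sqrt(6/n).
    g1 = sample_skewness(x)
    se = np.sqrt(6.0 / len(x))
    return abs(g1 / se) > z_crit

symmetric = rng.normal(size=2000)
skewed = rng.lognormal(size=2000)   # strongly right-skewed, like some flood/ENSO proxies
print(departs_from_normality(symmetric), departs_from_normality(skewed))
```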
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample in V equal size sub-samples, and use this partitioning to define V splits in an estimation sample (one of the V subsamples) and corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V-sample specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
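The sample-splitting construction above can be sketched directly. The "data adaptive algorithm" below (a quantile-trimmed mean whose trimming points are learned from the parameter-generating sample) is an invented placeholder for an arbitrary adaptive step, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=500)   # toy i.i.d. sample

V = 5
folds = np.array_split(rng.permutation(len(x)), V)

def data_adaptive_algorithm(sample):
    # Placeholder for an adaptive step that maps a sample to a target
    # parameter; here it "discovers" a trimmed mean from the data.
    lo, hi = np.quantile(sample, [0.1, 0.9])
    return lambda s: s[(s >= lo) & (s <= hi)].mean()

estimates = []
for v in range(V):
    est_idx = folds[v]                               # estimation sample
    gen_idx = np.concatenate([folds[u] for u in range(V) if u != v])
    target_fn = data_adaptive_algorithm(x[gen_idx])  # parameter-generating sample
    estimates.append(target_fn(x[est_idx]))

psi = float(np.mean(estimates))  # sample-split data adaptive target parameter
print(psi)
```

Because the target is defined on held-out estimation samples, the exploratory step and the confirmatory evaluation never see the same data, which is the point of the construction.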
Sparse linear models: Variational approximate inference and Bayesian experimental design
International Nuclear Information System (INIS)
Seeger, Matthias W
2009-01-01
A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been given to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
Inferring the gene network underlying the branching of tomato inflorescence.
Directory of Open Access Journals (Sweden)
Laura Astola
The architecture of tomato inflorescence strongly affects flower production and subsequent crop yield. To understand the genetic activities involved, insight into the underlying network of genes that initiate and control the sympodial growth in the tomato is essential. In this paper, we show how the structure of this network can be derived from available data of the expressions of the involved genes. Our approach starts from employing biological expert knowledge to select the most probable gene candidates behind branching behavior. To find how these genes interact, we develop a stepwise procedure for computational inference of the network structure. Our data consists of expression levels from primary shoot meristems, measured at different developmental stages on three different genotypes of tomato. With the network inferred by our algorithm, we can explain the dynamics corresponding to all three genotypes simultaneously, despite their apparent dissimilarities. We also correctly predict the chronological order of expression peaks for the main hubs in the network. Based on the inferred network, using optimal experimental design criteria, we are able to suggest an informative set of experiments for further investigation of the mechanisms underlying branching behavior.
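Network inference from expression time series can be sketched with a generic regress-and-threshold step, which is a crude stand-in for the stepwise procedure described above. The three-gene cascade, its interaction matrix `W_true`, and all data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy expression time series for 3 genes, simulated from an assumed
# interaction matrix W_true (used only to generate data): dx/dt = W x.
W_true = np.array([[-1.0, 0.0, 0.0],
                   [ 0.8, -1.0, 0.0],
                   [ 0.0, 0.9, -1.0]])
dt, steps = 0.05, 200
X = np.zeros((steps, 3))
X[0] = [1.0, 0.2, 0.1]
for t in range(steps - 1):
    X[t + 1] = X[t] + dt * (W_true @ X[t]) + rng.normal(scale=0.001, size=3)

# Inference: regress finite-difference derivatives on expression levels,
# then threshold small coefficients to read off the network structure.
dX = (X[1:] - X[:-1]) / dt
W_est, *_ = np.linalg.lstsq(X[:-1], dX, rcond=None)
W_est = W_est.T
network = (np.abs(W_est) > 0.3).astype(int)
print(network)
```

With low noise the thresholded coefficient matrix recovers the planted cascade; with real meristem data, model selection over candidate links (as in the paper) replaces the fixed threshold.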
Bayesian inference of chemical kinetic models from proposed reactions
Galagali, Nikhil
2015-02-01
© 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure-such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
Order statistics and inference: estimation methods
Balakrishnan, N
1991-01-01
The literature on order statistics and inference is quite extensive and covers a large number of fields, but most of it is dispersed throughout numerous publications. This volume is the consolidation of the most important results and places an emphasis on estimation. Both theoretical and computational procedures are presented to meet the needs of researchers, professionals, and students. The methods of estimation discussed are well-illustrated with numerous practical examples from both the physical and life sciences, including sociology, psychology, and electrical and chemical engineering. A co
Bayesian Inference in Statistical Analysis
Box, George E P
2011-01-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob
Model selection and inference a practical information-theoretic approach
Burnham, Kenneth P
1998-01-01
This book is unique in that it covers the philosophy of model-based data analysis and an omnibus strategy for the analysis of empirical data. The book introduces information theoretic approaches and focuses critical attention on a priori modeling and the selection of a good approximating model that best represents the inference supported by the data. Kullback-Leibler information represents a fundamental quantity in science and is Hirotugu Akaike's basis for model selection. The maximized log-likelihood function can be bias-corrected to provide an estimate of expected, relative Kullback-Leibler information. This leads to Akaike's Information Criterion (AIC) and various extensions, and these are relatively simple and easy to use in practice, but little taught in statistics classes and far less understood in the applied sciences than should be the case. The information theoretic approaches provide a unified and rigorous theory, an extension of likelihood theory, an important application of information theory, and are ...
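The AIC recipe sketched above is short enough to demonstrate end to end: fit each candidate model by maximum likelihood, then score it as 2k minus twice the maximized log-likelihood. The polynomial-degree example and data below are invented; the Gaussian log-likelihood is written up to an additive constant, which cancels when comparing models.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 100)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=100)   # truth: a straight line

def aic(degree):
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    n = len(y)
    k = degree + 2                       # polynomial coefficients + noise variance
    # Gaussian log-likelihood at the MLE, up to an additive constant.
    loglik = -0.5 * n * np.log(np.mean(resid ** 2))
    return 2 * k - 2 * loglik

aics = {d: aic(d) for d in range(6)}
best_degree = min(aics, key=aics.get)
print(best_degree, aics)
```

Underfitting (degree 0) is penalized through the likelihood term, while overfitting (high degrees) is penalized through 2k, which is the bias-variance trade-off the criterion formalizes.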
All of statistics a concise course in statistical inference
Wasserman, Larry
2004-01-01
This book is for people who want to learn probability and statistics quickly It brings together many of the main ideas in modern statistics in one place The book is suitable for students and researchers in statistics, computer science, data mining and machine learning This book covers a much wider range of topics than a typical introductory text on mathematical statistics It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses The reader is assumed to know calculus and a little linear algebra No previous knowledge of probability and statistics is required The text can be used at the advanced undergraduate and graduate level Larry Wasserman is Professor of Statistics at Carnegie Mellon University He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bi...
The Multivariate Generalised von Mises Distribution: Inference and Applications
DEFF Research Database (Denmark)
Navarro, Alexandre Khae Wu; Frellsen, Jes; Turner, Richard
2017-01-01
Circular variables arise in a multitude of data-modelling contexts ranging from robotics to the social sciences, but they have been largely overlooked by the machine learning community. This paper partially redresses this imbalance by extending some standard probabilistic modelling tools to the c... These models can leverage standard modelling tools (e.g. kernel functions and automatic relevance determination). Third, we show that the posterior distribution in these models is a mGvM distribution which enables development of an efficient variational free-energy scheme for performing approximate inference ... and approximate maximum-likelihood learning.
Statistical inference: an integrated Bayesian/likelihood approach
Aitkin, Murray
2010-01-01
Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing. After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre
Quantum electrodynamics of strong fields
International Nuclear Information System (INIS)
Greiner, W.
1983-01-01
Quantum Electrodynamics of Strong Fields provides a broad survey of the theoretical and experimental work accomplished, presenting papers by a group of international researchers who have made significant contributions to this developing area. Exploring the quantum theory of strong fields, the volume focuses on the phase transition to a charged vacuum in strong electric fields. The contributors also discuss such related topics as QED at short distances, precision tests of QED, nonperturbative QCD and confinement, pion condensation, and strong gravitational fields. In addition, the volume features a historical paper on the roots of quantum field theory in the history of quantum physics by noted researcher Friedrich Hund.
Instabilities in strongly coupled plasmas
Kalman, G J
2003-01-01
The conventional Vlasov treatment of beam-plasma instabilities is inappropriate when the plasma is strongly coupled. In the strongly coupled liquid state, the strong correlations between the dust grains fundamentally affect the conditions for instability. In the crystalline state, the inherent anisotropy couples the longitudinal and transverse polarizations, and results in unstable excitations in both polarizations. We summarize analyses of resonant and non-resonant, as well as resistive instabilities. We consider both ion-dust streaming and dust beam-plasma instabilities. Strong coupling, in general, leads to an enhancement of the growth rates. In the crystalline phase, a resonant transverse instability can be excited.
Clinton, Virginia
2015-01-01
The purpose of this study was to examine the associations between reading motivation and inference generation while reading. Undergraduate participants (N = 69) read two science articles while thinking aloud, completed a standardized reading comprehension assessment, and self-reported their habitual reading motivation. Findings indicate that…
Short proofs of strong normalization
Wojdyga, Aleksander
2008-01-01
This paper presents simple, syntactic strong normalization proofs for the simply-typed lambda-calculus and the polymorphic lambda-calculus (system F) with the full set of logical connectives, and all the permutative reductions. The normalization proofs use translations of terms and types to systems, for which strong normalization property is known.
Inferring Domain Plans in Question-Answering
National Research Council Canada - National Science Library
Pollack, Martha E
1986-01-01
The importance of plan inference in models of conversation has been widely noted in the computational-linguistics literature, and its incorporation in question-answering systems has enabled a range...
Scalable inference for stochastic block models
Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.
2017-01-01
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference
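The stochastic block model setting described above is easy to reproduce on a toy scale. The spectral step below is a common baseline for recovering a planted two-community partition, not the scalable algorithm of the paper, and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-community stochastic block model: dense within blocks, sparse between.
n, p_in, p_out = 100, 0.30, 0.05
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops

# Simple spectral inference: the sign pattern of the eigenvector for the
# second-largest adjacency eigenvalue recovers the planted partition.
vals, vecs = np.linalg.eigh(A)
v2 = vecs[:, -2]
guess = (v2 > 0).astype(int)

accuracy = max(np.mean(guess == labels), np.mean((1 - guess) == labels))
print(accuracy)
```

For graphs at "big data" scale, the eigendecomposition itself becomes the bottleneck, which motivates the scalable inference schemes the abstract refers to.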
International Nuclear Information System (INIS)
DeSantis, G.N.
1995-01-01
The calculation determines the integrity of the safety latch that will hold the strong-back to the pump during lifting. The safety latch will be welded to the strong-back and will latch to a 1.5-in. dia cantilever rod welded to the pump baseplate. The static and dynamic analysis shows that the safety latch will hold the strong-back to the pump if the friction clamps fail and the pump becomes free from the strong-back. Thus, the safety latch will meet the requirements of the Lifting and Rigging Manual for under-the-hook lifting for static loading; it can withstand shock loads from the strong-back falling 0.25 inch.
Scientific inference learning from data
Vaughan, Simon
2013-01-01
Providing the knowledge and practical experience to begin analysing scientific data, this book is ideal for physical sciences students wishing to improve their data handling skills. The book focuses on explaining and developing the practice and understanding of basic statistical analysis, concentrating on a few core ideas, such as the visual display of information, modelling using the likelihood function, and simulating random data. Key concepts are developed through a combination of graphical explanations, worked examples, example computer code and case studies using real data. Students will develop an understanding of the ideas behind statistical methods and gain experience in applying them in practice. Further resources are available at www.cambridge.org/9781107607590, including data files for the case studies so students can practise analysing data, and exercises to test students' understanding.
Efficient algorithms for conditional independence inference
Czech Academy of Sciences Publication Activity Database
Bouckaert, R.; Hemmecke, R.; Lindner, S.; Studený, Milan
2010-01-01
Roč. 11, č. 1 (2010), s. 3453-3479 ISSN 1532-4435 R&D Projects: GA ČR GA201/08/0539; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : conditional independence inference * linear programming approach Subject RIV: BA - General Mathematics Impact factor: 2.949, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-efficient algorithms for conditional independence inference.pdf
On the criticality of inferred models
Mastromatteo, Iacopo; Marsili, Matteo
2011-10-01
Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
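The identity at the heart of the argument above, that the Fisher information of an inferred model equals a susceptibility, can be verified in the simplest possible case. The single-spin toy model below (p(s) proportional to exp(h*s) for s = ±1) is an invented illustration, not the interacting point processes analysed in the paper: the Fisher information of h equals the variance of s, which in turn equals the derivative of the magnetization with respect to h.

```python
import numpy as np

def magnetization(h):
    # Mean of a single +/-1 spin with p(s) proportional to exp(h*s).
    return np.tanh(h)

def fisher_information(h):
    # For this exponential family, the Fisher information equals Var(s).
    return 1.0 - np.tanh(h) ** 2

# Susceptibility = dm/dh by finite differences; it matches the Fisher
# information, illustrating the metric-susceptibility identity.
h = 0.7
eps = 1e-6
susceptibility = (magnetization(h + eps) - magnetization(h - eps)) / (2 * eps)
print(susceptibility, fisher_information(h))
```

In interacting systems the same identity holds, but the susceptibility (and hence the Fisher metric) can diverge near critical points, which is why inferred models accumulate there.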
Polynomial Chaos Surrogates for Bayesian Inference
Le Maitre, Olivier
2016-01-06
Bayesian inference is a popular probabilistic method to solve inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on the Bayes rule to update the prior density of the sought field, from observations, and derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov-chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g. by means of Karhunen-Loève decomposition) to recast the inference problem into the inference of a finite number of coordinates. Although proved effective in many situations, Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced modeling or surrogates for the (approximate) determination of the parametrized PDE solution and hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of the posterior sampling by means of Polynomial Chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making such an approach adaptive to further improve its efficiency.
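The surrogate-acceleration idea can be sketched in one dimension. The forward model, observation, and Legendre fit below are invented stand-ins: a fitted Legendre polynomial plays the role of a Polynomial Chaos expansion (a crude substitute, not the paper's construction), and a Metropolis sampler queries only the cheap surrogate instead of the expensive model.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(theta):
    # Stand-in for an expensive PDE solve mapping a parameter to an observable.
    return theta ** 3 + np.sin(theta)

# Cheap polynomial surrogate on the prior range [-1, 1].
nodes = np.linspace(-1.0, 1.0, 50)
coeffs = np.polynomial.legendre.legfit(nodes, forward_model(nodes), deg=10)
surrogate = lambda t: np.polynomial.legendre.legval(t, coeffs)

# One noisy observation, then Metropolis sampling using only the surrogate.
theta_true, sigma = 0.4, 0.05
y_obs = forward_model(theta_true) + rng.normal(scale=sigma)

def log_post(t):
    if abs(t) > 1.0:                       # uniform prior on [-1, 1]
        return -np.inf
    return -0.5 * ((y_obs - surrogate(t)) / sigma) ** 2

samples, t = [], 0.0
lp = log_post(t)
for _ in range(5000):
    prop = t + rng.normal(scale=0.1)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        t, lp = prop, lp_prop
    samples.append(t)

print(np.mean(samples[1000:]))
```

Every MCMC step costs a polynomial evaluation rather than a model solve, which is the speed-up the abstract describes; the surrogate's accuracy then controls the bias of the resulting posterior.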
A Bayesian Network Schema for Lessening Database Inference
National Research Council Canada - National Science Library
Chang, LiWu; Moskowitz, Ira S
2001-01-01
.... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
The Bayesian approach is increasingly becoming popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayes probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration with a couple of simplified examples.
Large orders in strong-field QED
Energy Technology Data Exchange (ETDEWEB)
Heinzl, Thomas [School of Mathematics and Statistics, University of Plymouth, Drake Circus, Plymouth PL4 8AA (United Kingdom); Schroeder, Oliver [Science-Computing ag, Hagellocher Weg 73, D-72070 Tuebingen (Germany)
2006-09-15
We address the issue of large-order expansions in strong-field QED. Our approach is based on the one-loop effective action encoded in the associated photon polarization tensor. We concentrate on the simple case of crossed fields aiming at possible applications of high-power lasers to measure vacuum birefringence. A simple next-to-leading order derivative expansion reveals that the indices of refraction increase with frequency. This signals normal dispersion in the small-frequency regime where the derivative expansion makes sense. To gain information beyond that regime we determine the factorial growth of the derivative expansion coefficients evaluating the first 82 orders by means of computer algebra. From this we can infer a nonperturbative imaginary part for the indices of refraction indicating absorption (pair production) as soon as energy and intensity become (super)critical. These results compare favourably with an analytic evaluation of the polarization tensor asymptotics. Kramers-Kronig relations finally allow for a nonperturbative definition of the real parts as well and show that absorption goes hand in hand with anomalous dispersion for sufficiently large frequencies and fields.
Quantum centipedes with strong global constraint
Grange, Pascal
2017-06-01
A centipede made of N quantum walkers on a one-dimensional lattice is considered. The distance between two consecutive legs is either one or two lattice spacings, and a global constraint is imposed: the maximal distance between the first and last leg is N + 1. This is the strongest global constraint compatible with walking. For an initial value of the wave function corresponding to a localized configuration at the origin, the probability law of the first leg of the centipede can be expressed in closed form in terms of Bessel functions. The dispersion relation and the group velocities are worked out exactly. Their maximal group velocity goes to zero when N goes to infinity, which is in contrast with the behaviour of group velocities of quantum centipedes without global constraint, which were recently shown by Krapivsky, Luck and Mallick to give rise to ballistic spreading of extremal wave-front at non-zero velocity in the large-N limit. The corresponding Hamiltonians are implemented numerically, based on a block structure of the space of configurations corresponding to compositions of the integer N. The growth of the maximal group velocity when the strong constraint is gradually relaxed is explored, and observed to be linear in the density of gaps allowed in the configurations. Heuristic arguments are presented to infer that the large-N limit of the globally constrained model can yield finite group velocities provided the allowed number of gaps is a finite fraction of N.
Ceres' Geophysical Evolution Inferred from Dawn Data
Castillo-Rogez, Julie; Bowling, Timothy; Ermakov, Anton I.; Fu, Roger; Park, Ryan; Raymond, Carol; De Sanctis, Maria Cristina; Ammannito, Eleonora; Ruesch, Ottaviano; Prettyman, Thomas H.; Y McSween, Harry; Toplis, Michael J.; Russell, Christopher T.; Dawn Team
2016-10-01
If Ceres formed as an ice-rich body, as suggested by its low density and the detection of ammoniated phyllosilicates [1], then it should have differentiated an ice-dominated shell, analogous to large icy satellites [2]. Instead, Dawn observations revealed an enrichment of Ceres' shell in strong materials, either a rocky component and/or salts and gas hydrates [3, 4, 5, 6]. We have explored several scenarios for the emplacement of Ceres' surface. Endogenic processes cannot account for its overall homogeneity. Instead we suggest that Ceres differentiated an icy shell upon freezing of its early ocean that was removed as a consequence of frequent exposure by impacting after the dwarf planet migrated from a cold accretional environment to the warmer outer main belt (or when the solar nebula dissipated, if Ceres formed in situ). This scenario implies that Ceres' current surface represents the interface between the original ice shell and the top of the frozen ocean, a region that is extremely rich chemistry-wise, as illustrated by the mineralogical observations returned by Dawn [7]. Thermal modeling shows that the shell could remain warm over the long term and offer a setting for the generation of brines that may be responsible for the emplacement of Ahuna Mons [8] and Occator's bright spots [7] on an otherwise homogeneous surface [9]. An important implication is that Ceres' surface offers an analog for better understanding the deep interior and chemical evolution of large ice-rich bodies.References: [1] De Sanctis et al., Nature, 2015; [2] McCord and Sotin, Journal of Geophysical Research, 2005; [3] Park et al., Nature, 2016 (in press); [4] Hiesinger et al., Science (submitted); [5] Bland et al., Nature Geoscience, 2016 (in press); [6] Fu et al., AGU Fall Meeting, 2015 [7] De Sanctis et al., Nature, 2016 (in press); [8] Ruesch et al., Science, in revision; [9] Ammannito et al., Science, 2016 (accepted).Acknowledgements: Part of this work is being carried out at the Jet
A formal model of interpersonal inference
Directory of Open Access Journals (Sweden)
Michael Moutoussis
2014-03-01
Introduction: We propose that active Bayesian inference – a general framework for decision-making – can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regard to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: 1. Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to 'mentalising' in the psychological literature, is based upon the outcomes of interpersonal exchanges. 2. We show how some well-known social-psychological phenomena (e.g. self-serving biases) can be explained in terms of active interpersonal inference. 3. Mentalising naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modelling intersubject variability in mentalising during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalising is distorted.
Intelligent machines in the twenty-first century: foundations of inference and inquiry.
Knuth, Kevin H
2003-12-15
The last century saw the application of Boolean algebra to the construction of computing machines, which work by applying logical transformations to information contained in their memory. The development of information theory and the generalization of Boolean algebra to Bayesian inference have enabled these computing machines, in the last quarter of the twentieth century, to be endowed with the ability to learn by making inferences from data. This revolution is just beginning as new computational techniques continue to make difficult problems more accessible. Recent advances in our understanding of the foundations of probability theory have revealed implications for areas other than logic. Of relevance to intelligent machines, we recently identified the algebra of questions as the free distributive algebra, which will now allow us to work with questions in a way analogous to that which Boolean algebra enables us to work with logical statements. In this paper, we examine the foundations of inference and inquiry. We begin with a history of inferential reasoning, highlighting key concepts that have led to the automation of inference in modern machine-learning systems. We then discuss the foundations of inference in more detail using a modern viewpoint that relies on the mathematics of partially ordered sets and the scaffolding of lattice theory. This new viewpoint allows us to develop the logic of inquiry and introduce a measure describing the relevance of a proposed question to an unresolved issue. Last, we will demonstrate the automation of inference, and discuss how this new logic of inquiry will enable intelligent machines to ask questions. Automation of both inference and inquiry promises to allow robots to perform science in the far reaches of our solar system and in other star systems by enabling them not only to make inferences from data, but also to decide which question to ask, which experiment to perform, or which measurement to take given what they have
Estimating mountain basin-mean precipitation from streamflow using Bayesian inference
Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.
2015-10-01
Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.
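The core idea of inferring precipitation from streamflow can be illustrated with a much simpler toy problem than the BATEA/FUSE setup described above: a sketch, under assumed names and a deliberately trivial linear runoff model, of Bayesian inference of a basin-mean precipitation multiplier from noisy streamflow observations. Everything here (the runoff ratio, the error model, the grid posterior) is an illustrative assumption, not the paper's methodology.

```python
import numpy as np

# Toy illustration (not BATEA/FUSE): infer a basin-mean precipitation
# multiplier from streamflow, assuming a trivial runoff model
# Q = multiplier * P * runoff_ratio plus Gaussian observation error.
rng = np.random.default_rng(0)
P_gauge = rng.uniform(5.0, 20.0, size=50)      # gauge precipitation (mm/day)
true_mult, runoff_ratio, sigma = 1.3, 0.6, 1.0
Q_obs = true_mult * P_gauge * runoff_ratio + rng.normal(0, sigma, 50)

# Grid approximation to the posterior under a flat prior on the multiplier.
grid = np.linspace(0.5, 2.5, 401)
log_like = np.array([
    -0.5 * np.sum((Q_obs - m * P_gauge * runoff_ratio) ** 2) / sigma**2
    for m in grid
])
post = np.exp(log_like - log_like.max())
post /= post.sum() * (grid[1] - grid[0])       # normalize to a density
m_hat = grid[np.argmax(post)]                  # posterior mode
```

In the real problem the "forward model" is a full hydrologic model with many parameters, which is why the paper needs the multimodel FUSE framework and a formal error analysis rather than a one-dimensional grid.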
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Inferring Mathematical Equations Using Crowdsourcing.
Wasik, Szymon; Fratczak, Filip; Krzyskow, Jakub; Wulnikowski, Jaroslaw
2015-01-01
Crowdsourcing, understood as outsourcing work to a large network of people in the form of an open call, has been utilized successfully many times, including a very interesting concept involving the implementation of computer games with the objective of solving a scientific problem by employing users to play a game-so-called crowdsourced serious games. Our main objective was to verify whether such an approach could be successfully applied to the discovery of mathematical equations that explain experimental data gathered during the observation of a given dynamic system. Moreover, we wanted to compare it with an approach based on artificial intelligence that uses symbolic regression to find such formulae automatically. To achieve this, we designed and implemented an Internet game in which players attempt to design a spaceship representing an equation that models the observed system. The game was designed while considering that it should be easy to use for people without strong mathematical backgrounds. Moreover, we tried to make use of the collective intelligence observed in crowdsourced systems by enabling many players to collaborate on a single solution. The idea was tested on several hundred players playing almost 10,000 games and conducting a user opinion survey. The results prove that the proposed solution has very high potential. The function generated during weeklong tests was almost as precise as the analytical solution of the model of the system and, up to a certain complexity level of the formulae, it explained data better than the solution generated automatically by Eureqa, the leading software application for the implementation of symbolic regression. Moreover, we observed benefits of using crowdsourcing; the chain of consecutive solutions that led to the best solution was obtained by the continuous collaboration of several players.
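The automated alternative the authors compare against (symbolic regression, as in Eureqa) can be caricatured in a few lines: enumerate a small family of candidate formulas, fit their coefficients by least squares, and keep the one that best explains the observed data. This is a deliberately minimal stand-in, not Eureqa's evolutionary search; the basis functions and data-generating system are invented for the sketch.

```python
import itertools
import numpy as np

# Minimal symbolic-regression stand-in: search over pairs of basis
# functions a*f(x) + b*g(x) and keep the best least-squares fit.
rng = np.random.default_rng(1)
x = np.linspace(0.1, 3.0, 40)
y = x**2 + 0.5 * x + rng.normal(0, 0.01, x.size)   # "observed" system

basis = {"x": lambda x: x, "x^2": lambda x: x**2, "sqrt(x)": np.sqrt}
best_name, best_err = None, np.inf
for (nf, f), (ng, g) in itertools.combinations(basis.items(), 2):
    A = np.column_stack([f(x), g(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # fit a, b
    err = np.mean((A @ coef - y) ** 2)
    if err < best_err:
        best_err = err
        best_name = f"{coef[0]:.2f}*{nf} + {coef[1]:.2f}*{ng}"
```

Real symbolic regression searches a vastly larger space of expression trees; the crowdsourcing approach in the paper replaces that search with human play.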
Deep Learning for Population Genetic Inference.
Sheehan, Sara; Song, Yun S
2016-03-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
Deep Learning for Population Genetic Inference
Sheehan, Sara; Song, Yun S.
2016-01-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908
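The likelihood-free idea above, simulate (parameter, summary-statistic) pairs and learn a regression from statistics to parameters, can be sketched with a tiny network. This is an assumed toy setup (synthetic statistics, a one-hidden-layer network in plain NumPy), not the authors' architecture or Drosophila data.

```python
import numpy as np

# Likelihood-free inference sketch: simulate parameters and summary
# statistics, then train a small neural network to regress the
# parameter of interest from the statistics.
rng = np.random.default_rng(2)
n = 2000
theta = rng.uniform(0.0, 1.0, (n, 1))            # parameter to infer
stats = np.hstack([theta, theta**2]) + rng.normal(0, 0.05, (n, 2))

# One-hidden-layer MLP trained by full-batch gradient descent on MSE.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    h = np.tanh(stats @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2
    grad = 2 * (pred - theta) / n                # d(MSE)/d(pred)
    gW2 = h.T @ grad; gb2 = grad.sum(0)
    gh = grad @ W2.T * (1 - h**2)                # backprop through tanh
    gW1 = stats.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(stats @ W1 + b1) @ W2 + b2 - theta) ** 2))
```

In the paper the inputs are hundreds of correlated summary statistics of genomic data and the network is genuinely deep; the principle of replacing an intractable likelihood with a learned simulator-to-parameter map is the same.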
Inferring Phylogenetic Networks Using PhyloNet.
Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay
2018-07-01
PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.
International Nuclear Information System (INIS)
Leonor, I; Frey, R; Sutton, P J; Jones, G; Marka, S; Marka, Z
2009-01-01
One of the ongoing searches performed using the LIGO-Virgo network of gravitational-wave interferometers is the search for gravitational-wave burst (GWB) counterparts to gamma-ray bursts (GRBs). This type of analysis makes use of GRB time and position information from gamma-ray satellite detectors to trigger the GWB search, and the GWB detection rates possible for such an analysis thus strongly depend on the GRB detection efficiencies of the satellite detectors. Using local GRB rate densities inferred from observations which are found in the science literature, we calculate estimates of the GWB detection rates for different configurations of the LIGO-Virgo network for this type of analysis.
Goal inferences about robot behavior : goal inferences and human response behaviors
Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.
2014-01-01
This exploratory research focused on the goal inferences human observers draw from a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors elicit different response behaviors from people.
Phylogeny and Divergence Times of Lemurs Inferred with Recent and Ancient Fossils in the Tree.
Herrera, James P; Dávalos, Liliana M
2016-09-01
Paleontological and neontological systematics seek to answer evolutionary questions with different data sets. Phylogenies inferred for combined extant and extinct taxa provide novel insights into the evolutionary history of life. Primates have an extensive, diverse fossil record and molecular data for living and extinct taxa are rapidly becoming available. We used two models to infer the phylogeny and divergence times for living and fossil primates, the tip-dating (TD) and fossilized birth-death process (FBD). We collected new morphological data, especially on the living and extinct endemic lemurs of Madagascar. We combined the morphological data with published DNA sequences to infer near-complete (88% of lemurs) time-calibrated phylogenies. The results suggest that primates originated around the Cretaceous-Tertiary boundary, slightly earlier than indicated by the fossil record and later than previously inferred from molecular data alone. We infer novel relationships among extinct lemurs, and strong support for relationships that were previously unresolved. Dates inferred with TD were significantly older than those inferred with FBD, most likely related to an assumption of a uniform branching process in the TD compared with a birth-death process assumed in the FBD. This is the first study to combine morphological and DNA sequence data from extinct and extant primates to infer evolutionary relationships and divergence times, and our results shed new light on the tempo of lemur evolution and the efficacy of combined phylogenetic analyses.
Using Alien Coins to Test Whether Simple Inference Is Bayesian
Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.
2016-01-01
Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…
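The Bayesian benchmark such experiments compare against is simple to state concretely. The sketch below is an assumed miniature of a coin-bias task (the actual stimuli and bias values in the study are not specified here): a discrete prior over candidate biases of an "alien coin," updated flip by flip with Bayes' rule.

```python
# A minimal Bayesian observer for a coin-bias task (illustrative values,
# not the study's actual design): discrete prior over candidate biases,
# updated by Bayes' rule after each observed flip.
biases = [0.25, 0.5, 0.75]          # hypothesized alien-coin biases
prior = [1 / 3, 1 / 3, 1 / 3]
flips = [1, 1, 0, 1, 1]             # 1 = heads, 0 = tails

posterior = prior[:]
for f in flips:
    # Multiply by the likelihood of this flip under each hypothesis...
    posterior = [p * (b if f else 1 - b) for p, b in zip(posterior, biases)]
    # ...and renormalize so the beliefs sum to one.
    z = sum(posterior)
    posterior = [p / z for p in posterior]

best = biases[posterior.index(max(posterior))]   # MAP bias after 4 heads, 1 tail
```

Comparing individual participants' sequential judgments against this kind of optimal update is what allows the authors to test whether simple human inference is Bayesian.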
International Nuclear Information System (INIS)
Aoki, Ken-ichi
1988-01-01
Existence of a strong coupling phase in QED has been suggested in solutions of the Schwinger-Dyson equation and in Monte Carlo simulation of lattice QED. In this article we recapitulate the previous arguments, and formulate the problem in the modern framework of renormalization theory, Wilsonian renormalization. This scheme of renormalization gives the best understanding of the basic structure of a field theory, especially when it has a multi-phase structure. We resolve some misleading arguments in the previous literature. Then we set up a strategy to attack the strong coupling phase, if any. We describe a trial: a coupled Schwinger-Dyson equation. A possible picture of the strong coupling phase of QED is presented. (author)
Statistical inference for noisy nonlinear ecological dynamic systems.
Wood, Simon N
2010-08-26
Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
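The synthetic-likelihood recipe described above can be sketched end to end: simulate the dynamic model many times at a candidate parameter, reduce each run to phase-insensitive summary statistics, fit a Gaussian to those statistics, and score the observed statistics under it. The stochastic Ricker map below is a standard example of near-chaotic ecological dynamics, but the particular statistics and constants are illustrative choices, not the paper's exact analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(r, n=100):
    """Stochastic Ricker map with lognormal process noise."""
    N = np.empty(n)
    N[0] = 1.0
    for t in range(1, n):
        N[t] = r * N[t - 1] * np.exp(-N[t - 1] + rng.normal(0, 0.1))
    return N

def summaries(N):
    # Phase-insensitive statistics of the series, as the method requires.
    return np.array([N.mean(), N.std(), np.corrcoef(N[:-1], N[1:])[0, 1]])

def synthetic_loglik(r, s_obs, reps=200):
    # Gaussian approximation to the sampling distribution of the summaries.
    S = np.array([summaries(simulate(r)) for _ in range(reps)])
    mu, cov = S.mean(0), np.cov(S.T) + 1e-9 * np.eye(3)
    d = s_obs - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

s_obs = summaries(simulate(5.0))        # "observed" data, true r = 5
ll_true = synthetic_loglik(5.0, s_obs)
ll_bad = synthetic_loglik(12.0, s_obs)
```

In the full method this synthetic likelihood is explored with an MCMC sampler over the parameters; here we only evaluate it at two points to show that the true parameter scores higher.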
Fuzzy logic controller using different inference methods
International Nuclear Information System (INIS)
Liu, Z.; De Keyser, R.
1994-01-01
This paper introduces the design of fuzzy controllers using different inference methods. The configuration of the fuzzy controllers includes a general rule base, which is a collection of fuzzy PI or PD rules, the triangular fuzzy data model and a centre-of-gravity defuzzification algorithm. The generalized modus ponens (GMP) is used with the minimum operator of the triangular norm. Under the sup-min inference rule, six fuzzy implication operators are employed to calculate the fuzzy look-up tables for each rule base. The performance is tested in simulated systems with MATLAB/SIMULINK. Results show the effects of using fuzzy controllers with different inference methods when applied to different test processes.
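The ingredients named in the abstract, triangular membership functions, sup-min inference, and centre-of-gravity defuzzification, fit in a short sketch. The three-rule controller below is an invented minimal example, not one of the paper's rule bases or implication operators.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Illustrative one-input rule base:
#   error Negative -> output Negative; Zero -> Zero; Positive -> Positive
err_sets = [(-2, -1, 0), (-1, 0, 1), (0, 1, 2)]
out_sets = [(-2, -1, 0), (-1, 0, 1), (0, 1, 2)]
u = np.linspace(-2, 2, 401)            # output universe of discourse

def fuzzy_control(error):
    agg = np.zeros_like(u)
    for e_set, o_set in zip(err_sets, out_sets):
        w = tri(np.array([error]), *e_set)[0]                 # rule firing strength
        agg = np.maximum(agg, np.minimum(w, tri(u, *o_set)))  # sup-min aggregation
    if agg.sum() == 0:
        return 0.0
    return float((u * agg).sum() / agg.sum())                 # centre of gravity

out = fuzzy_control(0.5)
```

A full fuzzy PI/PD controller would take error and error change as two inputs and use a two-dimensional rule table; the defuzzification step is identical.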
Uncertainty in prediction and in inference
International Nuclear Information System (INIS)
Hilgevoord, J.; Uffink, J.
1991-01-01
The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in inference can be obtained by means of the so-called statistical distance between probability distributions. When applied to quantum mechanics, this distance leads to a measure of the distinguishability of quantum states, which essentially is the absolute value of the matrix element between the states. The importance of this result to the quantum mechanical uncertainty principle is noted. The second part of the paper provides a derivation of the statistical distance on the basis of the so-called method of support
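For discrete probability distributions, the statistical distance mentioned above has a simple closed form: the angle whose cosine is the overlap between the square roots of the distributions. In the quantum case that overlap becomes the absolute value of the matrix element between the two states, as the paper notes. The sketch below is a generic discrete-distribution version, not the paper's derivation from the method of support.

```python
import numpy as np

def statistical_distance(p, q):
    """Angle-type statistical distance between two discrete distributions:
    arccos of the Bhattacharyya overlap sum_i sqrt(p_i * q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    overlap = np.sum(np.sqrt(p * q))
    return float(np.arccos(np.clip(overlap, 0.0, 1.0)))

d_same = statistical_distance([0.5, 0.5], [0.5, 0.5])   # identical: distance 0
d_far = statistical_distance([1.0, 0.0], [0.0, 1.0])    # disjoint: maximal distance
```

Identical distributions are indistinguishable (distance 0), while distributions with disjoint support are maximally distinguishable (distance pi/2), matching the link to resolving power drawn in the paper.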
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performances of the algorithm proposed in this paper since it has a very high probability of parsing valid sentences.
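The minimum-description-length criterion used to steer generalization can be illustrated with a toy comparison: a rote grammar that memorizes every sentence versus a compact grammar that factors sentences into categories. The bit-cost model below (two bits per rule symbol, plus the negative log-probability of each sentence) is an invented simplification, nothing like the paper's multimodal grammar formalism or learning operators.

```python
from math import log2

# Toy MDL comparison between two candidate grammars for four sentences.
samples = ["point here", "point there", "speak here", "speak there"]

grammars = {
    # Rote: one rule per sentence (4 rules of 2 words). Assigns each
    # sentence probability 1/4.
    "rote": ([s.split() for s in samples], 1.0 / 4),
    # General: S -> V P, with V in {point, speak}, P in {here, there}.
    # Also assigns each sentence probability 1/4, but with fewer symbols.
    "general": ([["V", "P"], ["point"], ["speak"], ["here"], ["there"]], 1.0 / 4),
}

def mdl(rules, p_sentence, n_sentences=4):
    model_bits = sum(2 * len(r) for r in rules)       # cost of the grammar
    data_bits = n_sentences * -log2(p_sentence)       # cost of data given grammar
    return model_bits + data_bits

score_rote = mdl(*grammars["rote"])
score_general = mdl(*grammars["general"])
```

MDL prefers the general grammar because it buys the same data fit with a shorter description, which is exactly how the metric guards against both rote memorization and over-generalization in the paper's algorithm.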
Examples in parametric inference with R
Dixit, Ulhas Jayram
2016-01-01
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimator are discussed. Chapter 6 discusses Bayes, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory cou...
Grammatical inference algorithms, routines and applications
Wieczorek, Wojciech
2017-01-01
This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.
Statistical inference based on divergence measures
Pardo, Leandro
2005-01-01
The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach.Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data with emphasis is on problems in contingency tables and loglinear models using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
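The relationship between phi-divergence statistics and classical tests can be made concrete: with phi(t) = (t - 1)^2 / 2, the statistic 2n times the phi-divergence between the empirical and null distributions is exactly Pearson's chi-square. The multinomial counts below are made up for the sketch.

```python
import numpy as np

def phi_divergence(p_hat, p0, phi):
    """D_phi(p_hat, p0) = sum_i p0_i * phi(p_hat_i / p0_i)."""
    return float(np.sum(p0 * phi(p_hat / p0)))

counts = np.array([18, 30, 52])        # observed multinomial counts (made up)
p0 = np.array([0.2, 0.3, 0.5])         # null-hypothesis probabilities
n = counts.sum()
p_hat = counts / n

phi = lambda t: (t - 1) ** 2 / 2       # this choice recovers Pearson's X^2
stat = 2 * n * phi_divergence(p_hat, p0, phi)

# Classical Pearson form, for comparison:
pearson = float(np.sum((counts - n * p0) ** 2 / (n * p0)))
```

Other choices of phi recover the likelihood-ratio statistic and the rest of the power-divergence family, which is what makes the divergence viewpoint a unifying one.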
Smyth, Steve; Smyth, Jen
2016-01-01
Science Opens Doors is the creation of Clive Thompson of the Horners' Livery Company. The Science Opens Doors project philosophy is strongly based upon the King's College London ASPIRES project, which established that children like doing science in junior school (ages 7-11), but that by the age of 12-14 they are firmly against becoming scientists.…
Strong interactions at high energy
International Nuclear Information System (INIS)
Anselmino, M.
1995-01-01
Spin effects in strong interaction high energy processes are subtle phenomena which involve both short and long distance physics and test perturbative and non perturbative aspects of QCD. Moreover, depending on quantities like interferences between different amplitudes and relative phases, spin observables always test a theory at a fundamental quantum mechanical level; it is then no surprise that spin data are often difficult to accommodate within the existing models. A report is made on the main issues and contributions discussed in the parallel Session on "Strong interactions at high energy" in this Conference. copyright 1995 American Institute of Physics
Strong-field dissociation dynamics
International Nuclear Information System (INIS)
DiMauro, L.F.; Yang, Baorui.
1993-01-01
The strong-field dissociation behavior of diatomic molecules is examined under two distinctive physical scenarios. In the first scenario, the dissociation of the isolated hydrogen and deuterium molecular ions is discussed. The dynamics of above-threshold dissociation (ATD) are investigated over a wide range of green and infrared intensities and compared to a dressed-state model. The second situation arises when strong-field neutral dissociation is followed by ionization of the atomic fragments. The study results in a direct measure of the atomic fragment's ac-Stark shift by observing the intensity-dependent shifts in the electron or nuclear fragment kinetic energy. 8 figs., 14 refs
Improved Inference of Heteroscedastic Fixed Effects Models
Directory of Open Access Journals (Sweden)
Afshan Saeed
2016-12-01
Heteroscedasticity is a severe problem that distorts estimation and testing of the panel data model (PDM). Arellano (1987) proposed the White (1980) estimator for PDMs with heteroscedastic errors, but it provides erroneous inference for data sets that include high-leverage points. In this paper, we attempt to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel data sets with high-leverage points. To draw robust inference for the PDM, we focus on improving the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used to assess the results.
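As a rough illustration of the heteroscedasticity-consistent covariance idea this line of work builds on, here is a minimal cross-sectional sketch of the White (1980) HC0 sandwich estimator. The panel and kernel-bootstrap refinements of the paper are not reproduced; all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated regression with heteroscedastic errors: error s.d. grows with x
n = 500
x = rng.uniform(1.0, 5.0, n)
X = np.column_stack([np.ones(n), x])
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x)

beta = np.linalg.solve(X.T @ X, X.T @ y)  # OLS point estimate
resid = y - X @ beta

# White (1980) HC0 sandwich: (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
cov_hc0 = XtX_inv @ meat @ XtX_inv
se_hc0 = np.sqrt(np.diag(cov_hc0))
```

The "meat" uses squared residuals site by site rather than a single pooled variance, which is what makes the standard errors robust to the error variance changing with x.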
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified …-based maximum likelihood inference and the effect of specifying different reference Poisson models…
IMAGINE: Interstellar MAGnetic field INference Engine
Steininger, Theo
2018-03-01
IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.
Inferring causality from noisy time series data
DEFF Research Database (Denmark)
Mønster, Dan; Fusaroli, Riccardo; Tylén, Kristian
2016-01-01
Convergent Cross-Mapping (CCM) has shown high potential to perform causal inference in the absence of models. We assess the strengths and weaknesses of the method by varying coupling strength and noise levels in coupled logistic maps. We find that CCM fails to infer accurate coupling strength and even causality direction in synchronized time-series and in the presence of intermediate coupling. We find that the presence of noise deterministically reduces the level of cross-mapping fidelity, while the convergence rate exhibits higher levels of robustness. Finally, we propose that controlled noise…
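The coupled logistic maps used as a testbed in studies like this one can be sketched as follows. The growth rates (3.8, 3.5), coupling strengths, and initial conditions below are illustrative choices, not necessarily the paper's parameter values.

```python
import numpy as np

def coupled_logistic(n_steps, beta_xy, beta_yx, noise_sd=0.0, seed=1):
    """Simulate two coupled logistic maps, x driving y with strength beta_xy
    and y driving x with strength beta_yx, plus optional observational noise."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = 0.4, 0.2
    for t in range(n_steps - 1):
        x[t + 1] = x[t] * (3.8 - 3.8 * x[t] - beta_yx * y[t])
        y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - beta_xy * x[t])
    # observational noise is added after the dynamics, as in noisy CCM tests
    return (x + rng.normal(0.0, noise_sd, n_steps),
            y + rng.normal(0.0, noise_sd, n_steps))

# Unidirectional coupling: x drives y, y does not drive x
x, y = coupled_logistic(1000, beta_xy=0.1, beta_yx=0.0)
```

Varying `beta_xy`, `beta_yx`, and `noise_sd` reproduces the kind of parameter sweep the abstract describes: synchronization at strong coupling, noise-degraded cross-mapping, and so on.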
On parametrised cold dense matter equation of state inference
Riley, Thomas E.; Raaijmakers, Geert; Watts, Anna L.
2018-04-01
Constraining the equation of state of cold dense matter in compact stars is a major science goal for observing programmes being conducted using X-ray, radio, and gravitational wave telescopes. We discuss Bayesian hierarchical inference of parametrised dense matter equations of state. In particular we generalise and examine two inference paradigms from the literature: (i) direct posterior equation of state parameter estimation, conditioned on observations of a set of rotating compact stars; and (ii) indirect parameter estimation, via transformation of an intermediary joint posterior distribution of exterior spacetime parameters (such as gravitational masses and coordinate equatorial radii). We conclude that the former paradigm is not only tractable for large-scale analyses, but is principled and flexible from a Bayesian perspective whilst the latter paradigm is not. The thematic problem of Bayesian prior definition emerges as the crux of the difference between these paradigms. The second paradigm should in general only be considered as an ill-defined approach to the problem of utilising archival posterior constraints on exterior spacetime parameters; we advocate for an alternative approach whereby such information is repurposed as an approximative likelihood function. We also discuss why conditioning on a piecewise-polytropic equation of state model - currently standard in the field of dense matter study - can easily violate conditions required for transformation of a probability density distribution between spaces of exterior (spacetime) and interior (source matter) parameters.
The aggregate site frequency spectrum for comparative population genomic inference.
Xue, Alexander T; Hickerson, Michael J
2015-12-01
Understanding how assemblages of species responded to past climate change is a central goal of comparative phylogeography and comparative population genomics, an endeavour that has increasing potential to integrate with community ecology. New sequencing technology now provides the potential to perform complex demographic inference at unprecedented resolution across assemblages of nonmodel species. To this end, we introduce the aggregate site frequency spectrum (aSFS), an expansion of the site frequency spectrum to use single nucleotide polymorphism (SNP) data sets collected from multiple, co-distributed species for assemblage-level demographic inference. We describe how the aSFS is constructed over an arbitrary number of independent population samples and then demonstrate how the aSFS can differentiate various multispecies demographic histories under a wide range of sampling configurations while allowing effective population sizes and expansion magnitudes to vary independently. We subsequently couple the aSFS with a hierarchical approximate Bayesian computation (hABC) framework to estimate degree of temporal synchronicity in expansion times across taxa, including an empirical demonstration with a data set consisting of five populations of the threespine stickleback (Gasterosteus aculeatus). Corroborating what is generally understood about the recent postglacial origins of these populations, the joint aSFS/hABC analysis strongly suggests that the stickleback data are most consistent with synchronous expansion after the Last Glacial Maximum (posterior probability = 0.99). The aSFS will have general application for multilevel statistical frameworks to test models involving assemblages and/or communities, and as large-scale SNP data from nonmodel species become routine, the aSFS expands the potential for powerful next-generation comparative population genomic inference. © 2015 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
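A toy construction in the spirit of the aggregate site frequency spectrum can be sketched as follows: compute each species' folded SFS from a 0/1 SNP matrix, normalise, then sort values across species within each frequency class before concatenating. The SNP matrices are random stand-ins, and the aggregation step is a simplification of the published procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

def folded_sfs(snp_matrix):
    """Folded site frequency spectrum from a (haplotypes x sites) 0/1 matrix."""
    n, _ = snp_matrix.shape
    counts = snp_matrix.sum(axis=0)          # derived-allele count per site
    minor = np.minimum(counts, n - counts)   # fold to minor-allele counts
    return np.bincount(minor, minlength=n // 2 + 1)[1:]  # drop monomorphic class

# Hypothetical SNP data for three co-distributed species, ten haplotypes each
species_sfs = []
for _ in range(3):
    snps = rng.integers(0, 2, size=(10, 200))
    sfs = folded_sfs(snps)
    species_sfs.append(sfs / sfs.sum())      # normalise each species' SFS

# Aggregate: within each frequency class, sort the normalised bins across
# species in descending order, then concatenate the sorted classes
agg = np.concatenate([np.sort(col)[::-1] for col in np.array(species_sfs).T])
```

The sorting step is what makes the aggregate statistic exchangeable across species, so it can be compared against simulations without tracking species identities.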
Strong Decomposition of Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jørgensen, Jørgen; Kagan, Abram M.; Pitt, Loren D.
2007-01-01
A random variable X is strongly decomposable if X=Y+Z where Y=Φ(X) and Z=X-Φ(X) are independent non-degenerate random variables (called the components). It is shown that at least one of the components is singular, and we derive a necessary and sufficient condition for strong decomposability of a discrete random variable.
Strong coupling electroweak symmetry breaking
International Nuclear Information System (INIS)
Barklow, T.L.; Burdman, G.; Chivukula, R.S.
1997-04-01
The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models
The colours of strong interaction
International Nuclear Information System (INIS)
1995-01-01
The aim of this session is to draw a consistent framework about the different ways to consider strong interaction. A large part is dedicated to theoretical work and the latest experimental results obtained at the first electron collider HERA are discussed. (A.C.)
Strong cosmic censorship and the strong curvature singularities
International Nuclear Information System (INIS)
Krolak, A.
1987-01-01
Conditions are given under which any asymptotically simple and empty space-time that has a partial Cauchy surface with an asymptotically simple past is globally hyperbolic. It is shown that this result suggests that the Cauchy horizons of the type occurring in Reissner-Nordström and Kerr space-times are unstable. This in turn gives support for the validity of the strong cosmic censorship hypothesis.
McLaury, Ralph L.
2011-01-01
This study investigates beliefs about teaching held by preservice science teachers and their influences on self-perceived microteaching outcomes within interactive secondary science teaching methods courses. Hermeneutic methodology was used in cooperation with seven preservice science teachers (N = 7) to infer participant beliefs about teaching…
Inferring biological functions of guanylyl cyclases with computational methods
Alquraishi, May Majed; Meier, Stuart Kurt
2013-01-01
A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
Chandan, D.; Peltier, W. R.
2013-12-01
The issue of tectonic contamination of geological inferences of relative sea level history is an important one. The issue arises on timescales that range from the 21-26 kyrs that have passed since the Last Glacial Maximum, to the most recent time when periods as warm as the present are expected to have existed, such as the mid-Pliocene. The coral based record from Barbados, for example, is known to be contaminated by continuing tectonic uplift of the island at a rate of approximately 0.34 mm/yr. For the Pliocene warm period at ~3 Myr, records from geological sites, such as the Orangeburg Scarp in North Carolina, have played a prominent role in arguments underpinning the design of the ongoing international PlioMIP program. In connection with the latter site, Rowley et al. (2013) have recently argued that this record is contaminated by a tectonic imprint sufficiently strong to suggest that the usual inferences of Pliocene eustatic sea level based upon it (e.g. Miller et al., 2012) must be seen as highly suspect. Here we employ a tomographically constrained model of the mantle convection process to revisit the issue of the tectonic imprint on relative sea level at the Orangeburg site, as well as other similar locations. Our analysis is based upon the inferred time dependence of dynamic topography forced by the mantle's internal density heterogeneities delivered by the S20RTS seismic tomography model. We begin by comparing the static, present day dynamic topography predicted by the (linear) internal loading theory based on the formalism of Pari and Peltier (2000) with that predicted using a full three-dimensional version of the nonlinear time-dependent mantle convection model of Shahnas and Peltier (2010, 2011). We demonstrate first that these two methodologies produce extremely similar results for the static field. We then proceed to run the nonlinear convection model in data assimilation mode while continuously nudging the internal density field back towards the…
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent's behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
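The model-averaging computation described here reduces to a few lines: convert model evidences into posterior model probabilities (under a uniform model prior) and take the probability-weighted average of the models' predictions. The evidences and predictions below are hypothetical numbers for illustration.

```python
import numpy as np

# Hypothetical log evidences (marginal likelihoods) for three candidate models
log_evidence = np.array([-10.2, -11.5, -14.0])

# Posterior model probabilities under a uniform model prior
# (subtract the max before exponentiating for numerical stability)
w = np.exp(log_evidence - log_evidence.max())
w /= w.sum()

# Each model's prediction of some quantity of interest; the Bayesian
# model average is the evidence-weighted combination of these predictions
predictions = np.array([0.8, 0.5, 0.1])
bma_prediction = np.sum(w * predictions)
```

Because the weights come from evidence rather than fit alone, an accurate but over-complex model is automatically down-weighted, which is the accuracy/complexity trade-off the abstract emphasises.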
Campbell's and Rubin's Perspectives on Causal Inference
West, Stephen G.; Thoemmes, Felix
2010-01-01
Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on…
Bayesian structural inference for hidden processes
Strelioff, Christopher C.; Crutchfield, James P.
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
International Nuclear Information System (INIS)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.; Marshall, Philip J.; Bard, Deborah J.; Meyers, Joshua; Lang, Dustin
2015-01-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Interest, Inferences, and Learning from Texts
Clinton, Virginia; van den Broek, Paul
2012-01-01
Topic interest and learning from texts have been found to be positively associated with each other. However, the reason for this positive association is not well understood. The purpose of this study is to examine a cognitive process, inference generation, that could explain the positive association between interest and learning from texts. In…
Ignorability in Statistical and Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed...
Inverse Ising inference with correlated samples
International Nuclear Information System (INIS)
Obermayer, Benedikt; Levine, Erel
2014-01-01
Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)
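The mean-field route to the inverse Ising problem mentioned at the end of this abstract can be sketched in a few lines: couplings are read off from the inverse of the sample correlation matrix, J_ij ≈ -(C⁻¹)_ij for i ≠ j. The ±1 "spin" samples below are a synthetic stand-in generated from a latent Gaussian, not sequence-alignment data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Correlated +/-1 samples via thresholded latent Gaussians (stand-in data)
n_samples, n_spins = 20000, 5
latent_cov = 0.3 * np.ones((n_spins, n_spins)) + 0.7 * np.eye(n_spins)
samples = np.sign(rng.multivariate_normal(np.zeros(n_spins), latent_cov, n_samples))

# Mean-field (naive) inverse Ising: invert the sample covariance matrix and
# take the negative off-diagonal entries as coupling estimates
C = np.cov(samples, rowvar=False)
J_mf = -np.linalg.inv(C)
np.fill_diagonal(J_mf, 0.0)
```

Inverting the full covariance is what separates direct interactions from indirect correlation chains; correcting for non-independent (e.g. phylogenetically related) samples, the paper's main subject, requires the additional machinery it describes.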
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
Culture and Pragmatic Inference in Interpersonal Communication
African Journals Online (AJOL)
cognitive process, and that the human capacity for inference is crucially important ... been noted that research in interpersonal communication is currently pushing the ... communicative actions, the social-cultural world of everyday life is not only ... personal experiences of the authors', as documented over time and recreated ...
Inference and the Introductory Statistics Course
Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross
2011-01-01
This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its…
Statistical Inference on the Canadian Middle Class
Directory of Open Access Journals (Sweden)
Russell Davidson
2018-03-01
Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class, and also their share in total income. Here, distribution-free methods are developed for inference on these shares by deriving expressions for the asymptotic variances of the sample estimates and for the covariance between them. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
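The bootstrap side of this kind of analysis can be sketched with a toy example: estimate a population share from a sample and resample to get a confidence interval. The incomes are synthetic, and the "middle class" definition used here (0.75 to 1.5 times the median) is one common convention, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic incomes drawn from a lognormal distribution
incomes = rng.lognormal(mean=10.0, sigma=0.5, size=2000)

def middle_share(x):
    """Share of observations between 0.75 and 1.5 times the median."""
    med = np.median(x)
    return np.mean((x >= 0.75 * med) & (x <= 1.5 * med))

point = middle_share(incomes)

# Nonparametric bootstrap: recompute the share on resampled data sets
boot = np.array([
    middle_share(rng.choice(incomes, size=incomes.size, replace=True))
    for _ in range(500)
])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

The rounding pathology the paper reports would show up here as many tied values in `incomes`, which degrades the resampling distribution in exactly the way the abstract describes.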
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial…
Cortical information flow during inferences of agency
Dogge, Myrthel; Hofman, Dennis; Boersma, Maria; Dijkerman, H Chris; Aarts, Henk
2014-01-01
Building on the recent finding that agency experiences do not merely rely on sensorimotor information but also on cognitive cues, this exploratory study uses electroencephalographic recordings to examine functional connectivity during agency inference processing in a setting where action and outcome
Quasi-Experimental Designs for Causal Inference
Kim, Yongnam; Steiner, Peter
2016-01-01
When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…
The importance of learning when making inferences
Directory of Open Access Journals (Sweden)
Jorg Rieskamp
2008-03-01
The assumption that people possess a repertoire of strategies to solve the inference problems they face has been made repeatedly. The experimental findings of two previous studies on strategy selection are reexamined from a learning perspective, which argues that people learn to select strategies for making probabilistic inferences. This learning process is modeled with the strategy selection learning (SSL) theory, which assumes that people develop subjective expectancies for the strategies they have. They select strategies proportional to their expectancies, which are updated on the basis of experience. For the study by Newell, Weston, and Shanks (2003) it can be shown that people did not anticipate the success of a strategy from the beginning of the experiment. Instead, the behavior observed at the end of the experiment was the result of a learning process that can be described by the SSL theory. For the second study, by Bröder and Schiffer (2006), the SSL theory is able to provide an explanation for why participants only slowly adapted to new environments in a dynamic inference situation. The reanalysis of the previous studies illustrates the importance of learning for probabilistic inferences.
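A toy version of the SSL idea described above — expectancy-proportional strategy selection with payoff-based updating — might look like this. The strategy names, success probabilities, and payoff scheme are invented for illustration.

```python
import random

random.seed(4)

# Two hypothetical inference strategies with different success probabilities;
# payoff is 1 for a correct inference, 0 otherwise
success_prob = {"take-the-best": 0.8, "weighted-additive": 0.6}

# Initial expectancies; a strategy is selected with probability proportional
# to its expectancy, and the chosen strategy's expectancy grows by its payoff
expectancy = {"take-the-best": 1.0, "weighted-additive": 1.0}

choices = []
for trial in range(2000):
    total = sum(expectancy.values())
    r = random.random() * total
    name = "take-the-best" if r < expectancy["take-the-best"] else "weighted-additive"
    payoff = 1.0 if random.random() < success_prob[name] else 0.0
    expectancy[name] += payoff
    choices.append(name)

# After learning, the more successful strategy dominates selection
late_share_ttb = choices[-500:].count("take-the-best") / 500
```

Because selection is only proportional to expectancy, the shift toward the better strategy is gradual, which mirrors the slow adaptation the reanalysis reports.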
Colligation, Or the Logical Inference of Interconnection
DEFF Research Database (Denmark)
Falster, Peter
1998-01-01
laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...
Colligation or, The Logical Inference of Interconnection
DEFF Research Database (Denmark)
Franksen, Ole Immanuel; Falster, Peter
2000-01-01
laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...
Inferring motion and location using WLAN RSSI
Kavitha Muthukrishnan, K.; van der Zwaag, B.J.; Havinga, Paul J.M.; Fuller, R.; Koutsoukos, X.
2009-01-01
We present novel algorithms to infer movement by making use of inherent fluctuations in the received signal strengths from existing WLAN infrastructure. We evaluate the performance of the presented algorithms based on classification metrics such as recall and precision using annotated traces.
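One simple way to exploit RSSI fluctuations for motion inference, in the spirit of this work, is a rolling-variance threshold: a stationary receiver sees small signal-strength fluctuations while a moving one sees large ones. The trace, window size, and threshold below are synthetic illustrations, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic RSSI trace (dBm): 200 samples while still, then 200 while moving
still = rng.normal(-60.0, 0.8, 200)
moving = rng.normal(-60.0, 5.0, 200)
rssi = np.concatenate([still, moving])

def infer_motion(trace, window=20, threshold=2.0):
    """Label each non-overlapping window 'moving' when its RSSI standard
    deviation exceeds the threshold, else 'still'."""
    labels = []
    for start in range(0, len(trace) - window + 1, window):
        sd = np.std(trace[start:start + window])
        labels.append("moving" if sd > threshold else "still")
    return labels

labels = infer_motion(rssi)
```

With annotated traces like those the abstract mentions, recall and precision fall out of comparing `labels` against the ground-truth labels per window.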
Flavour Democracy in Strong Unification
Abel, S A; Abel, Steven; King, Steven
1998-01-01
We show that the fermion mass spectrum may naturally be understood in terms of flavour democratic fixed points in supersymmetric theories which have a large domain of attraction in the presence of "strong unification". Our approach provides an alternative to the approximate Yukawa texture zeroes of the Froggatt-Nielsen mechanism. We discuss a particular model based on a broken gauged $SU(3)_L\\times SU(3)_R$ family symmetry which illustrates our approach.
Zhou, Haotian; Majka, Elizabeth A; Epley, Nicholas
2017-04-01
People use at least two strategies to solve the challenge of understanding another person's mind: inferring that person's perspective by reading his or her behavior (theorization) and getting that person's perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger's emotional reactions toward 50 pictures. They could either infer the stranger's perspective by reading his or her facial expressions or simulate the stranger's perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors' miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people's reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.
Inference of Well-Typings for Logic Programs with Application to Termination Analysis
DEFF Research Database (Denmark)
Bruynooghe, M.; Gallagher, John Patrick; Humbeeck, W. Van
2005-01-01
A method is developed to infer a polymorphic well-typing for a logic program. Our motivation is to improve the automation of termination analysis by deriving types from which norms can automatically be constructed. Previous work on type-based termination analysis used either types declared by the user, or automatically generated monomorphic types describing the success set of predicates. The latter types are less precise and result in weaker termination conditions than those obtained from declared types. Our type inference procedure involves solving set constraints generated from the program and derives a well-typing in contrast to a success-set approximation. Experiments so far show that our automatically inferred well-typings are close to the declared types and result in termination conditions that are as strong as those obtained with declared types. We describe the method, its implementation…
String dynamics at strong coupling
International Nuclear Information System (INIS)
Hull, C.M.
1996-01-01
The dynamics of superstring, supergravity and M-theories and their compactifications are probed by studying the various perturbation theories that emerge in the strong- and weak-coupling limits for various directions in coupling constant space. The results support the picture of an underlying non-perturbative theory that, when expanded perturbatively in different coupling constants, gives different perturbation theories, which can be perturbative superstring theories or superparticle theories. The p-brane spectrum is considered in detail and a criterion found to establish which p-branes govern the strong-coupling dynamics. In many cases there are competing conjectures in the literature, and this analysis decides between them. In other cases, new results are found. The chiral 6-dimensional theory resulting from compactifying the type IIB string on K3 is studied in detail and it is found that certain strong-coupling limits appear to give new theories, some of which hint at the possibility of a 12-dimensional origin. (orig.)
Active inference, sensory attenuation and illusions.
Brown, Harriet; Adams, Rick A; Parees, Isabel; Edwards, Mark; Friston, Karl
2013-11-01
Active inference provides a simple and neurobiologically plausible account of how action and perception are coupled in producing (Bayes) optimal behaviour. This can be seen most easily as minimising prediction error: we can either change our predictions to explain sensory input through perception, or actively change sensory input to fulfil our predictions. In active inference, this action is mediated by classical reflex arcs that minimise proprioceptive prediction error created by descending proprioceptive predictions. However, this creates a conflict between action and perception, in that self-generated movements require predictions to override the sensory evidence that one is not actually moving. Yet ignoring sensory evidence means that externally generated sensations will not be perceived. Conversely, attending to (proprioceptive and somatosensory) sensations enables the detection of externally generated events but precludes generation of actions. This conflict can be resolved by attenuating the precision of sensory evidence during movement or, equivalently, attending away from the consequences of self-made acts. We propose that this Bayes optimal withdrawal of precise sensory evidence during movement is the cause of psychophysical sensory attenuation. Furthermore, it explains the force-matching illusion and reproduces empirical results almost exactly. Finally, if attenuation is removed, the force-matching illusion disappears and false (delusional) inferences about agency emerge. This is important, given the negative correlation between sensory attenuation and delusional beliefs in normal subjects--and the reduction in the magnitude of the illusion in schizophrenia. Active inference therefore links the neuromodulatory optimisation of precision to sensory attenuation and illusory phenomena during the attribution of agency in normal subjects. It also provides a functional account of deficits in syndromes characterised by false inference.
Repositioning Information Science.
Ibekwe-Sanjuan , Fidelia; Buckland , Michael; Latham , Kiersten
2010-01-01
International audience; During the twentieth century there was a strong desire for information studies to become scientific, to move from librarianship, bibliography, and documentation to an information science. In 1968 the American Documentation Institute was renamed American Society for Information Science. By the twenty-first century, however, departments of (library and) information science had turned instead towards the social sciences, but have not been successful in providing a coheren...
Akdemir, Bayram; Doǧan, Sercan; Aksoy, Muharrem H.; Canli, Eyüp; Özgören, Muammer
2015-03-01
Liquid behaviors are very important in many areas, especially in mechanical engineering. A fast camera is one way to observe and study liquid behavior: the camera traces dust or colored markers travelling in the liquid and takes as many pictures per second as possible. Every image carries a large amount of data due to its resolution, and for fast liquid velocities it is not easy to evaluate the captured images or to produce a fluent sequence of frames from them. Artificial intelligence is widely used in science to solve nonlinear problems, and the adaptive neural fuzzy inference system (ANFIS) is a common artificial-intelligence technique in the literature. Any particle velocity in a liquid has a two-dimensional speed and its derivatives. In this study, an adaptive neural fuzzy inference system was used offline to create artificial frames between the previous and following frames in order to improve image continuity: it uses velocities and vorticities to create a crossing-point vector between the previous and following points, and fills virtual frames among the real frames. This makes the images much more understandable at chaotic or vorticity points. After applying the adaptive neural fuzzy inference system, the image dataset doubles in size, with virtual and real frames alternating. The results were evaluated using R² testing and mean squared error; R² measures statistical similarity, and values of 0.82, 0.81, 0.85 and 0.8 were obtained for the velocities and their derivatives, respectively.
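The frame-interpolation idea in the abstract above can be illustrated with a much simpler stand-in: a cubic Hermite midpoint built from velocities and their time derivatives, scored with R² against held-out true midframes on a synthetic field. This is a hedged sketch under stated assumptions, not the paper's ANFIS model; the function name and test data are hypothetical.

```python
import numpy as np

def virtual_frame(v_prev, v_next, dvdt_prev, dvdt_next, dt=1.0):
    """Cubic-Hermite midpoint between two velocity frames, using the
    velocities and their time derivatives at the bracketing frames."""
    h00, h10, h01, h11 = 0.5, 0.125, 0.5, -0.125   # Hermite basis at tau = 0.5
    return (h00 * v_prev + h10 * dt * dvdt_prev
            + h01 * v_next + h11 * dt * dvdt_next)

# Synthetic check: sample a smooth field, drop midpoints, reconstruct, score R^2
t = np.linspace(0, 2 * np.pi, 201)
v = np.sin(t)                               # a smooth "velocity" signal
prev, nxt = v[:-2:2], v[2::2]               # even-index frames act as "real" frames
dprev, dnxt = np.cos(t[:-2:2]), np.cos(t[2::2])
dt = t[2] - t[0]
pred = virtual_frame(prev, nxt, dprev, dnxt, dt)
true_mid = v[1:-1:2]                        # the held-out odd-index frames
r2 = 1 - np.sum((true_mid - pred) ** 2) / np.sum((true_mid - np.mean(true_mid)) ** 2)
print(f"R^2 = {r2:.4f}")
```

On a smooth field the interpolated virtual frames nearly coincide with the held-out true frames, which is the kind of R² agreement the abstract reports.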
Science Olympiad students' nature of science understandings
Philpot, Cindy J.
2007-12-01
Recent reform efforts in science education focus on scientific literacy for all citizens. In order to be scientifically literate, an individual must have informed understandings of nature of science (NOS), scientific inquiry, and science content matter. This study specifically focused on Science Olympiad students' understanding of NOS as one piece of scientific literacy. Research consistently shows that science students do not have informed understandings of NOS (Abd-El-Khalick, 2002; Bell, Blair, Crawford, and Lederman, 2002; Kilcrease and Lucy, 2002; Schwartz, Lederman, and Thompson, 2001). However, McGhee-Brown, Martin, Monsaas and Stombler (2003) found that Science Olympiad students had in-depth understandings of science concepts, principles, processes, and techniques. Science Olympiad teams compete nationally and are found in rural, urban, and suburban schools. In an effort to learn from students who are generally considered high achieving students and who enjoy science, as opposed to the typical science student, the purpose of this study was to investigate Science Olympiad students' understandings of NOS and the experiences that formed their understandings. An interpretive, qualitative, case study method was used to address the research questions. The participants were purposefully and conveniently selected from the Science Olympiad team at a suburban high school. Data collection consisted of the Views of Nature of Science -- High School Questionnaire (VNOS-HS) (Schwartz, Lederman, & Thompson, 2001), semi-structured individual interviews, and a focus group. The main findings of this study were similar to much of the previous research in that the participants had informed understandings of the tentative nature of science and the role of inferences in science, but they did not have informed understandings of the role of human imagination and creativity, the empirical nature of science, or theories and laws. High level science classes and participation in
Oravetz, David
2005-01-01
This article is for teachers looking for new ways to motivate students, increase science comprehension, and understanding without using the old standard expository science textbook. This author suggests reading a science fiction novel in the science classroom as a way to engage students in learning. Using science fiction literature and language…
International Nuclear Information System (INIS)
L'Huillier, A.
2002-01-01
When a high-power laser is focused into a gas of atoms, the electromagnetic field becomes of the same magnitude as the Coulomb field which binds a 1s electron in a hydrogen atom. Three highly non-linear phenomena can then occur: 1) above-threshold ionization (ATI): electrons initially in the ground state absorb a large number of photons, many more than the minimum number required for ionization; 2) multiple ionization: many electrons can be emitted one at a time, in a sequential process, or simultaneously in a mechanism called direct or non-sequential; and 3) high-order harmonic generation (HHG): efficient photon emission in the extreme ultraviolet range, in the form of high-order harmonics of the fundamental laser field. The theoretical problem consists in solving the time-dependent Schroedinger equation (TDSE) that describes the interaction of a many-electron atom with a laser field. A number of methods have been proposed to solve this problem in the case of a hydrogen atom or a single-active-electron atom in a strong laser field. A large effort is presently being devoted to going beyond the single-active-electron approximation. The understanding of the physics of the interaction between atoms and strong laser fields has been provided by a very simple model called the "simple man's theory". A unified view of HHG, ATI, and non-sequential ionization, originating from the simple man's model and the strong field approximation, expressed in terms of electron trajectories or quantum paths, is slowly emerging. (A.C.)
Rydberg atoms in strong fields
International Nuclear Information System (INIS)
Kleppner, D.; Tsimmerman, M.
1985-01-01
Experimental and theoretical achievements in studying Rydberg atoms in external fields are considered. Only static (or quasistatic) fields and "one-electron" atoms, i.e. atoms that are well described by one-electron states, are discussed. Mainly the behaviour of alkali metal atoms in an electric field is considered. The state of theoretical investigations for the hydrogen atom in a magnetic field is described, with experimental data for alkali metal atoms presented as an illustration. Results of the latest experimental and theoretical investigations into the structure of Rydberg atoms in strong fields are presented.
Strong versions of Bell's theorem
International Nuclear Information System (INIS)
Stapp, H.P.
1994-01-01
Technical aspects of a recently constructed strong version of Bell's theorem are discussed. The theorem assumes neither hidden variables nor factorization, and neither determinism nor counterfactual definiteness. It deals directly with logical connections. Hence its relationship with modal logic needs to be described. It is shown that the proof can be embedded in an orthodox modal logic, and hence its compatibility with modal logic assured, but that this embedding weakens the theorem by introducing as added assumptions the conventionalities of the particular modal logic that is adopted. This weakening is avoided in the recent proof by using directly the set-theoretic conditions entailed by the locality assumption
Strongly interacting light dark matter
International Nuclear Information System (INIS)
Bruggisser, Sebastian; Riva, Francesco; Urbano, Alfredo
2016-07-01
In the presence of approximate global symmetries that forbid relevant interactions, strongly coupled light Dark Matter (DM) can appear weakly coupled at small-energy and generate a sizable relic abundance. Fundamental principles like unitarity restrict these symmetries to a small class, where the leading interactions are captured by effective operators up to dimension-8. Chiral symmetry, spontaneously broken global symmetries and non-linearly realized supersymmetry are examples of this. Their DM candidates (composite fermions, pseudo-Nambu-Goldstone Bosons and Goldstini) are interesting targets for LHC missing-energy searches.
Weak consistency and strong paraconsistency
Directory of Open Access Journals (Sweden)
Gemma Robles
2009-11-01
In a standard sense, consistency and paraconsistency are understood as, respectively, the absence of any contradiction and the absence of the ECQ ("E contradictione quodlibet") rule that allows us to conclude any well-formed formula from any contradiction. The aim of this paper is to explain the concepts of weak consistency alternative to the standard one, the concepts of paraconsistency related to them, and the concept of strong paraconsistency, all of which have been defined by the author together with José M. Méndez.
Energy Technology Data Exchange (ETDEWEB)
Dowrick, N.J. (Dept. of Physics, Oxford (United Kingdom)); McDougall, N.A. (National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan))
1992-07-09
We show that two well-known solutions to the strong CP problem, the axion and a massless quark, may be understood in terms of the mechanism recently proposed by Samuel where long-range interactions between topological charges may be responsible for the removal of CP violation. We explain how the axion and a QCD meson (identified as the η′ if all quarks are massless) suppress fluctuations in global topological charge by almost identical dynamics, although the masses, couplings and relevant length scales are very different. Furthermore, we elucidate the precise origin of the η′ mass. (orig.)
Scalar strong interaction hadron theory
Hoh, Fang Chao
2015-01-01
The scalar strong interaction hadron theory, SSI, is a first principles' and nonlocal theory at quantum mechanical level that provides an alternative to low energy QCD and Higgs related part of the standard model. The quark-quark interaction is scalar rather than color-vectorial. A set of equations of motion for mesons and another set for baryons have been constructed. This book provides an account of the present state of a theory supposedly still at its early stage of development. This work will facilitate researchers interested in entering into this field and serve as a basis for possible future development of this theory.
Estimation of strong ground motion
International Nuclear Information System (INIS)
Watabe, Makoto
1993-01-01
Fault model has been developed to estimate a strong ground motion in consideration of characteristics of seismic source and propagation path of seismic waves. There are two different approaches in the model. The first one is a theoretical approach, while the second approach is a semi-empirical approach. Though the latter is more practical than the former to be applied to the estimation of input motions, it needs at least the small-event records, the value of the seismic moment of the small event and the fault model of the large event
Strong Mechanoluminescence from Oxynitridosilicate Phosphors
Energy Technology Data Exchange (ETDEWEB)
Zhang Lin; Xu Chaonan; Yamada, Hiroshi, E-mail: cn-xu@aist.go.jp [National Institute of Advanced Industrial Science and Technology (AIST), 807-1 Shuku, Tosu, Saga 841-0052 (Japan)
2011-10-29
We successfully developed a novel mechanoluminescence (ML) material with water resistance, the oxynitridosilicate BaSi₂O₂N₂:Eu²⁺. The crystal structure, photoluminescence (PL) and ML properties were characterized. The ML of BaSi₂O₂N₂:Eu²⁺ is so strong that the blue-green emission can be clearly observed by the naked eye. In addition, it shows superior water resistance: no changes were found in the ML intensities during the full water treatment test.
Pakistan strong industrial base urged for economic progress
2001-01-01
A conference organized by the Pakistan Nuclear Society urged that Pakistan should develop a strong industrial base and the capability to export equipment for economic progress. The chairman of PAEC pointed out that Pakistan is already showing remarkable progress in the export of science-related equipment to CERN. He also asked scientists to wage a war against Pakistan's inability to acquire indigenous technology (1 page).
Effective lagrangian for strong interactions
International Nuclear Information System (INIS)
Jain, P.
1988-01-01
We attempt to construct a realistic phenomenological Lagrangian in order to describe strong interactions. This is in general a very complicated problem and we shall explore its various aspects. We first include the vector mesons by writing down the most general chiral invariant terms proportional to the Levi-Civita symbol ε_{μναβ}. These terms involve three unknown coefficients, which are calculated by using the experimental results of strong interaction processes. We then calculate the static nucleon properties by finding the solitonic excitations of this model. The results turn out to be, as is also the case for most other vector-pseudoscalar Lagrangians, better than the Skyrme model but are still somewhat different from the experiments. Another aspect that we shall study is the incorporation of the scale anomaly of QCD into the Skyrme model. We thus introduce a scalar glueball in our Lagrangian. Here we find an interesting result that the effective glue field dynamically forms a bag for the soliton. Depending on the values of the parameters, we get either a deep bag or a shallow bag. However by including the scalar meson, we find that to get a realistic scalar sector we must have the shallow bag. Finally we show some intriguing connections between the chiral quark model, in which the nucleon is described as a solitonic excitation, and the ordinary potential binding quark model
EDITORIAL: Strongly correlated electron systems Strongly correlated electron systems
Ronning, Filip; Batista, Cristian
2011-03-01
Strongly correlated electrons is an exciting and diverse field in condensed matter physics. This special issue aims to capture some of that excitement and recent developments in the field. Given that this issue was inspired by the 2010 International Conference on Strongly Correlated Electron Systems (SCES 2010), we briefly give some history in order to place this issue in context. The 2010 International Conference on Strongly Correlated Electron Systems was held in Santa Fe, New Mexico, a reunion of sorts from the 1989 International Conference on the Physics of Highly Correlated Electron Systems that also convened in Santa Fe. SCES 2010—co-chaired by John Sarrao and Joe Thompson—followed the tradition of earlier conferences, in this century, hosted by Buzios (2008), Houston (2007), Vienna (2005), Karlsruhe (2004), Krakow (2002) and Ann Arbor (2001). Every three years since 1997, SCES has joined the International Conference on Magnetism (ICM), held in Recife (2000), Rome (2003), Kyoto (2006) and Karlsruhe (2009). Like its predecessors, SCES 2010 topics included strongly correlated f- and d-electron systems, heavy-fermion behaviors, quantum-phase transitions, non-Fermi liquid phenomena, unconventional superconductivity, and emergent states that arise from electronic correlations. Recent developments from studies of quantum magnetism and cold atoms complemented the traditional subjects and were included in SCES 2010. 2010 celebrated the 400th anniversary of Santa Fe as well as the birth of astronomy. So what's the connection to SCES? The Dutch invention of the first practical telescope and its use by Galileo in 1610 and subsequent years overturned dogma that the sun revolved about the earth. This revolutionary, and at the time heretical, conclusion required innovative combinations of new instrumentation, observation and mathematics. These same combinations are just as important 400 years later and are the foundation of scientific discoveries that were discussed
Strong Selective Adsorption of Polymers.
Ge, Ting; Rubinstein, Michael
2015-06-09
A scaling theory is developed for selective adsorption of polymers induced by the strong binding between specific monomers and complementary surface adsorption sites. By "selective" we mean specific attraction between a subset of all monomers, called "sticky", and a subset of surface sites, called "adsorption sites". We demonstrate that, in addition to the expected dependence on the polymer volume fraction ϕ_bulk in the bulk solution, selective adsorption strongly depends on the ratio between two characteristic length scales: the root-mean-square distance l between neighboring sticky monomers along the polymer, and the average distance d between neighboring surface adsorption sites. The role of the ratio l/d arises from the fact that a polymer needs to deform to enable the spatial commensurability between its sticky monomers and the surface adsorption sites for selective adsorption. We study strong selective adsorption of both telechelic polymers with two end monomers being sticky and multisticker polymers with many sticky monomers between sticky ends. For telechelic polymers, we identify four adsorption regimes for l/d < 1. For l/d > 1, we expect that the adsorption layer at exponentially low ϕ_bulk consists of separated unstretched loops, while as ϕ_bulk increases the layer crosses over to a brush of extended loops with a second layer of weakly overlapping tails. For multisticker chains, in the limit of exponentially low ϕ_bulk, adsorbed polymers are well separated from each other. As l/d increases, the conformation of an individual polymer changes from a single-end-adsorbed "mushroom" to a random walk of loops. For high ϕ_bulk, adsorbed polymers at small l/d are mushrooms that cover all the adsorption sites. At sufficiently large l/d, adsorbed multisticker polymers strongly overlap. We anticipate the formation of a self-similar carpet and, with increasing l/d, a two-layer structure with a brush of loops covered by a self-similar carpet. As l/d exceeds the
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled… is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models…
An Intuitive Dashboard for Bayesian Network Inference
International Nuclear Information System (INIS)
Reddy, Vikas; Farr, Anna Charisse; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K D V
2014-01-01
Current Bayesian network software packages provide good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++
The NIFTY way of Bayesian signal inference
International Nuclear Information System (INIS)
Selig, Marco
2014-01-01
We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy
The NIFTy way of Bayesian signal inference
Selig, Marco
2014-12-01
We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
Inferring genetic interactions from comparative fitness data.
Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko
2017-12-20
Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax , the fungus Aspergillus niger , and the TEM-family of β-lactamase associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.
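The simplest case of the interactions discussed above is pairwise epistasis in a two-locus biallelic system, which can be read off directly from four fitness values, or, as the paper emphasizes, from their rank order alone. The fitness numbers below are hypothetical, not from the analyzed datasets.

```python
# Genotypes are labelled by their two loci: "00" wild type, "11" double mutant.
f = {"00": 1.00, "01": 1.15, "10": 1.10, "11": 1.05}  # hypothetical fitnesses

# Additive (non-epistatic) expectation for the double mutant
expected_11 = f["01"] + f["10"] - f["00"]
epistasis = f["11"] - expected_11          # negative => antagonistic interaction
print(f"epistasis = {epistasis:+.2f}")

# Rank-order check: without epistasis, each mutation's effect keeps the same
# sign regardless of the genetic background it occurs in.
effect_in_00_background = f["01"] - f["00"]
effect_in_10_background = f["11"] - f["10"]
sign_epistasis_detected = (effect_in_00_background > 0) != (effect_in_10_background > 0)
print("sign epistasis inferred from rank order:", sign_epistasis_detected)
```

Here the second mutation is beneficial on the wild-type background but deleterious on the mutant background, so the interaction is detectable from the fitness ordering alone, without the exact values.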
An emergent approach to analogical inference
Thibodeau, Paul H.; Flusberg, Stephen J.; Glick, Jeremy J.; Sternberg, Daniel A.
2013-03-01
In recent years, a growing number of researchers have proposed that analogy is a core component of human cognition. According to the dominant theoretical viewpoint, analogical reasoning requires a specific suite of cognitive machinery, including explicitly coded symbolic representations and a mapping or binding mechanism that operates over these representations. Here we offer an alternative approach: we find that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery. The results also parallel findings from the developmental literature on analogy, demonstrating a shift from an initial reliance on surface feature similarity to the use of relational similarity later in training. Variants of the model allow us to consider and rule out alternative accounts of its performance. We conclude by discussing how these findings can potentially refine our understanding of the processes that are required to perform analogical inference.
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
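The flavour of this approach can be sketched on the simplest inference problem, estimating a Gaussian mean in the presence of outliers: each point gets a per-observation "reinforcement" that grows with its abnormality and discounts its influence. This is a loose sketch in the spirit of PPRs, not the paper's exact formulation; the update rule and threshold below are my own simplifications.

```python
import numpy as np

def robust_mean(x, lam=2.0, iters=20):
    """Mean estimate with per-point reinforcements: abnormally large residuals
    earn a reinforcement r_i > 0 and are down-weighted by 1 / (1 + r_i)."""
    mu = float(np.median(x))               # robust starting point
    for _ in range(iters):
        resid2 = (x - mu) ** 2
        # reinforcement grows only for residuals abnormally large vs the average
        r = np.maximum(resid2 / np.mean(resid2) - lam, 0.0)
        w = 1.0 / (1.0 + r)                # reinforced points count less
        mu = float(np.sum(w * x) / np.sum(w))
    return mu, r                           # r doubles as an abnormality degree

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 95), np.full(5, 20.0)])  # 5 outliers
mu, r = robust_mean(data)
print("plain mean:", round(float(np.mean(data)), 2), " robust mean:", round(mu, 2))
```

The plain sample mean is dragged towards the outliers while the reinforced estimate stays near zero, and the returned abnormality degrees single out the five planted outliers, mirroring the manual-filtering use the abstract mentions.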
Statistical inference from imperfect photon detection
International Nuclear Information System (INIS)
Audenaert, Koenraad M R; Scheel, Stefan
2009-01-01
We consider the statistical properties of photon detection with imperfect detectors that exhibit dark counts and less than unit efficiency, in the context of tomographic reconstruction. In this context, the detectors are used to implement certain positive operator-valued measures (POVMs) that would allow us to reconstruct the quantum state or quantum process under consideration. Here we look at the intermediate step of inferring outcome probabilities from measured outcome frequencies, and show how this inference can be performed in a statistically sound way in the presence of detector imperfections. Merging outcome probabilities for different sets of POVMs into a consistent quantum state picture has been treated elsewhere (Audenaert and Scheel 2009 New J. Phys. 11 023028). Single-photon pulsed measurements as well as continuous wave measurements are covered.
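For the simplest case, a binary (click / no-click) detector, the inference from observed frequencies back to outcome probabilities is a one-line inversion of the detector model. The model below (efficiency η, independent dark-count probability d) and the clipping choice are my own illustrative assumptions, not the POVM construction of the paper.

```python
def infer_photon_probability(clicks, trials, eta=0.6, dark=0.01):
    """Invert the click model q = 1 - (1 - dark) * (1 - eta * p) to recover the
    photon probability p from an observed click frequency, clipped to [0, 1]
    because finite-sample frequencies can stray outside the physical range."""
    q = clicks / trials                            # observed click frequency
    p = (1.0 - (1.0 - q) / (1.0 - dark)) / eta
    return min(max(p, 0.0), 1.0)

# A source firing a photon half the time yields q = 1 - 0.99 * 0.7 = 0.307
print(infer_photon_probability(307, 1000))
```

The clipping step is where the "statistically sound" inference of the abstract becomes nontrivial: a naive inversion can return negative probabilities, and a principled treatment replaces clipping with constrained maximum-likelihood estimation.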
An Intuitive Dashboard for Bayesian Network Inference
Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.
2014-03-01
Current Bayesian network software packages provide good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++.
Working with sample data exploration and inference
Chaffe-Stengel, Priscilla
2014-01-01
Managers and analysts routinely collect and examine key performance measures to better understand their operations and make good decisions. Being able to render the complexity of operations data into a coherent account of significant events requires an understanding of how to work well with raw data and to make appropriate inferences. Although some statistical techniques for analyzing data and making inferences are sophisticated and require specialized expertise, there are methods that are understandable and applicable by anyone with basic algebra skills and the support of a spreadsheet package. By applying these fundamental methods themselves rather than turning over both the data and the responsibility for analysis and interpretation to an expert, managers will develop a richer understanding and potentially gain better control over their environment. This text is intended to describe these fundamental statistical techniques to managers, data analysts, and students. Statistical analysis of sample data is enh...
Parametric inference for biological sequence analysis.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
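The sum-product algorithm the article builds on can be illustrated on a chain-structured (hidden Markov) model; this is a generic textbook sketch of the sum-product (forward) recursion, not the polytope propagation algorithm itself:

```python
def forward(obs, init, trans, emit):
    """Sum-product (forward) recursion on a chain: returns P(obs sequence).

    init[s]     - prior probability of hidden state s
    trans[s][t] - transition probability s -> t
    emit[s][o]  - probability that state s emits symbol o
    """
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[t][o] * sum(alpha[s] * trans[s][t] for s in range(n))
                 for t in range(n)]
    return sum(alpha)
```

Polytope propagation replaces the sum and product over probabilities with operations over Newton polytopes, but the message-passing structure is the same.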
Inferences on Children’s Reading Groups
Directory of Open Access Journals (Sweden)
Javier González García
2009-05-01
This article focuses on the non-literal information of a text, which can be inferred from key elements or clues offered by the text itself. This kind of content is called implicit text, or inference, after the thinking process it stimulates. The explicit resources that lead to information retrieval are related to others of implicit information, which have increased their relevance. In this study, conducted over two courses, we analyzed how two teachers interpret three stories and how they establish a debate by dividing the class into three student groups. The sample was formed by two classes from two urban public schools in Burgos (Spain) and two from public schools in Tampico (Mexico). This allowed us to observe an increasing percentage of the group focused on text comprehension, and a smaller percentage of the group perceiving comprehension as a secondary objective.
Inferring Genetic Ancestry: Opportunities, Challenges, and Implications
Royal, Charmaine D.; Novembre, John; Fullerton, Stephanie M.; Goldstein, David B.; Long, Jeffrey C.; Bamshad, Michael J.; Clark, Andrew G.
2010-01-01
Increasing public interest in direct-to-consumer (DTC) genetic ancestry testing has been accompanied by growing concern about issues ranging from the personal and societal implications of the testing to the scientific validity of ancestry inference. The very concept of “ancestry” is subject to misunderstanding in both the general and scientific communities. What do we mean by ancestry? How exactly is ancestry measured? How far back can such ancestry be defined and by which genetic tools? How ...
Spatial Inference Based on Geometric Proportional Analogies
Mullally, Emma-Claire; O'Donoghue, Diarmuid P.
2006-01-01
We describe an instance-based reasoning solution to a variety of spatial reasoning problems. The solution centers on identifying an isomorphic mapping between labelled graphs that represent some problem data and a known solution instance. We describe a number of spatial reasoning problems that are solved by generating non-deductive inferences, integrating topology with area (and other) features. We report the accuracy of our algorithm on different categories of spatial reasoning tasks from th...
Inferring ontology graph structures using OWL reasoning.
Rodríguez-García, Miguel Ángel; Hoehndorf, Robert
2018-01-05
Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
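The core idea (deriving implied, unasserted edges from asserted axioms) can be caricatured with plain transitive closure over subclass assertions; note that Onto2Graph itself uses a full OWL reasoner over existential axioms, so this is only a schematic stand-in with illustrative names:

```python
def ontology_to_graph(asserted):
    """Return asserted subclass edges plus the implied (transitive) ones.

    A crude stand-in for querying the deductive closure: a real OWL reasoner
    also handles equivalence axioms, existential restrictions, etc.
    """
    edges = set(asserted)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(edges):
            for (c, d) in list(edges):
                if b == c and (a, d) not in edges:
                    edges.add((a, d))   # a SubClassOf d is implied, not asserted
                    changed = True
    return edges
```

The resulting edge set is what graph-based analyses (enrichment, semantic similarity) would then consume.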
Constrained bayesian inference of project performance models
Sunmola, Funlade
2013-01-01
Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability, such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information, including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...
Using metacognitive cues to infer others' thinking
André Mata; Tiago Almeida
2014-01-01
Three studies tested whether people use cues about the way other people think (for example, whether others respond fast vs. slow) to infer what responses other people might give to reasoning problems. People who solve reasoning problems using deliberative thinking have better insight than intuitive problem-solvers into the responses that other people might give to the same problems. Presumably because deliberative responders think of intuitive responses before they think o...
Thermodynamics of statistical inference by cells.
Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj
2014-10-03
The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.
Bootstrap inference when using multiple imputation.
Schomaker, Michael; Heumann, Christian
2018-04-16
Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
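One of the intuitively appealing combinations the abstract alludes to, resampling the incomplete data first and then imputing each bootstrap sample, can be sketched as follows; this toy uses a single mean imputation rather than proper model-based multiple imputation, and all names and defaults are illustrative:

```python
import random
import statistics

def boot_mi_mean_ci(data, n_boot=2000, alpha=0.05, seed=1):
    """'Bootstrap then impute' sketch: resample the incomplete data
    (None marks a missing value), impute each resample with its observed
    mean, and take a percentile interval for the mean."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        sample = [rng.choice(data) for _ in data]        # bootstrap resample
        observed = [x for x in sample if x is not None]
        fill = statistics.mean(observed)                 # crude single imputation
        completed = [x if x is not None else fill for x in sample]
        estimates.append(statistics.mean(completed))
    estimates.sort()
    lo = estimates[int((alpha / 2) * n_boot)]
    hi = estimates[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In a real analysis each bootstrap sample would receive several model-based imputations whose estimates are pooled, which is exactly the design choice whose validity the paper examines.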
Inferring epidemic network topology from surveillance data.
Directory of Open Access Journals (Sweden)
Xiang Wan
The transmission of infectious diseases can be affected by many, even hidden, factors, making it difficult to accurately predict when and where outbreaks may emerge. One current approach is to develop and deploy surveillance systems in an effort to detect outbreaks as early as possible. This enables policy makers to modify and implement strategies for the control of transmission. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful, as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model to the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays a significant effect on the propagation of infectious diseases.
Role of Speaker Cues in Attention Inference
Directory of Open Access Journals (Sweden)
Jin Joo Lee
2017-10-01
Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old) storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.
Cortical information flow during inferences of agency
Directory of Open Access Journals (Sweden)
Myrthel eDogge
2014-08-01
Building on the recent finding that agency experiences do not merely rely on sensorimotor information but also on cognitive cues, this exploratory study uses electroencephalographic recordings to examine functional connectivity during agency-inference processing in a setting where action and outcome are independent. Participants completed a computerized task in which they pressed a button followed by one of two color words (red or blue) and rated their experienced agency over producing the color. Before executing the action, a matching or mismatching color word was pre-activated by explicitly instructing participants to produce the color (goal condition) or by briefly presenting the color word (prime condition). In both conditions, experienced agency was higher in matching versus mismatching trials. Furthermore, increased electroencephalography (EEG)-based connectivity strength was observed between parietal and frontal nodes and within the (pre)frontal cortex when color outcomes matched with goals and participants reported high agency. This pattern of increased connectivity was not identified in trials where outcomes were pre-activated through primes. These results suggest that different connections are involved in the experience and in the loss of agency, as well as in inferences of agency resulting from different types of pre-activation. Moreover, the findings provide novel support for the involvement of a fronto-parietal network in agency inferences.
Phylogenetic Inference of HIV Transmission Clusters
Directory of Open Access Journals (Sweden)
Vlad Novitsky
2017-10-01
Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, it could both inform public-health strategies and be clinically relevant (e.g., drug-resistance testing).
Causal inference of asynchronous audiovisual speech
Directory of Open Access Journals (Sweden)
John F Magnotti
2013-11-01
During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
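The causal inference step can be sketched as a Bayesian model comparison between a common-cause and an independent-cause explanation of the measured asynchrony; the Gaussian likelihoods and the parameter values below are assumptions for illustration, not the paper's fitted model:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_common_cause(asynchrony, prior=0.5, sigma_common=0.05, sigma_indep=0.5):
    """Posterior probability that voice and face share a common cause,
    given the measured audiovisual asynchrony (in seconds).

    Common cause: true asynchrony is 0, so the measurement is narrowly
    distributed around 0; independent causes: a much broader distribution.
    """
    like_c = normal_pdf(asynchrony, 0.0, sigma_common)
    like_i = normal_pdf(asynchrony, 0.0, sigma_indep)
    return prior * like_c / (prior * like_c + (1 - prior) * like_i)
```

Small asynchronies favor the common-cause hypothesis (and hence integration); large ones favor independent causes, which is the behavior a synchrony judgment task probes.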
Functional neuroanatomy of intuitive physical inference.
Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy
2016-08-23
To engage with the world (to understand the scene in front of us, plan actions, and predict what will happen next) we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events: a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.
Integrating distributed Bayesian inference and reinforcement learning for sensor management
Grappiolo, C.; Whiteson, S.; Pavlin, G.; Bakker, B.
2009-01-01
This paper introduces a sensor management approach that integrates distributed Bayesian inference (DBI) and reinforcement learning (RL). DBI is implemented using distributed perception networks (DPNs), a multiagent approach to performing efficient inference, while RL is used to automatically
Strong growth for Queensland mining
Energy Technology Data Exchange (ETDEWEB)
1990-10-01
The Queensland mining industry experienced strong growth during 1989-90, as shown in the latest statistics released by the Department of Resource Industries. The total value of Queensland mineral and energy production rose to a new record of $5.1 billion, an increase of 16.5% on 1988-89 production. A major contributing factor was a 20.9% increase in the value of coal production. While the quantity of coal produced rose only 1.1%, the substantial increase in the value of coal production is attributable to higher coal prices negotiated for export contracts. In Australian dollar terms, coal, gold, lead, zinc and crude oil on average experienced higher international prices than in the previous year. Only copper and silver prices declined. 3 tabs.
Strong moduli stabilization and phenomenology
Dudas, Emilian; Mambrini, Yann; Mustafayev, Azar; Olive, Keith A
2013-01-01
We describe the resulting phenomenology of string theory/supergravity models with strong moduli stabilization. The KL model with F-term uplifting is one such example. Models of this type predict universal scalar masses equal to the gravitino mass. In contrast, A-terms receive highly suppressed gravity-mediated contributions. Under certain conditions, the same conclusion is valid for gaugino masses, which, like A-terms, are then determined by anomalies. In such models, we are forced to relatively large gravitino masses (30-1000 TeV). We compute the low-energy spectrum as a function of m_{3/2}. We see that the Higgs mass naturally takes values between 125 and 130 GeV. The lower limit is obtained from the requirement of chargino masses greater than 104 GeV, while the upper limit is determined by the relic density of dark matter (wino-like).
Strongly interacting W's and Z's
International Nuclear Information System (INIS)
Gaillard, M.K.
1984-01-01
The study focussed primarily on the dynamics of a strongly interacting W, Z (SIW) sector, with the aim of sharpening predictions for total W, Z yield and W, Z multiplicities expected from WW fusion for various scenarios. Specific issues raised in the context of the general problem of modeling SIW included the specificity of the technicolor (or, equivalently, QCD) model, whether or not a composite scalar model can be evaded, and whether the standard model necessarily implies an I = J = 0 state (≅ Higgs particle) that is relatively "light" (M ≤ hundreds of TeV). The consensus on the last issue was that existing arguments are inconclusive. While the author briefly addresses compositeness and alternatives to the technicolor model, quantitative estimates are of necessity based on technicolor or an extrapolation of pion data.
Uniquely Strongly Clean Group Rings
Institute of Scientific and Technical Information of China (English)
WANG XIU-LAN
2012-01-01
A ring R is called clean if every element is the sum of an idempotent and a unit, and R is called uniquely strongly clean (USC for short) if every element is uniquely the sum of an idempotent and a unit that commute. In this article, some conditions on a ring R and a group G such that RG is clean are given. It is also shown that if G is a locally finite group, then the group ring RG is USC if and only if R is USC and G is a 2-group. The left uniquely exchange group ring, as a middle ring of the uniquely clean ring and the USC ring, does not possess this property, and neither does the uniquely exchange group ring.
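The clean-ring definition can be checked directly by brute force in the finite rings Z/nZ; this small sketch (illustrative, not from the article) enumerates all clean decompositions of an element:

```python
from math import gcd

def clean_decompositions_Zn(n, x):
    """All (idempotent, unit) pairs in Z/nZ whose sum is x.

    Z/nZ is commutative, so every idempotent commutes with every unit and
    'clean' and 'strongly clean' coincide here.
    """
    idempotents = [e for e in range(n) if (e * e) % n == e]
    units = [u for u in range(n) if gcd(u, n) == 1]
    return [(e, u) for e in idempotents for u in units if (e + u) % n == x]
```

For instance, every element of Z/12Z has at least one decomposition (so Z/12Z is clean), while 2 in Z/3Z has two (2 = 0 + 2 = 1 + 1), so Z/3Z is clean but not uniquely clean.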
Electrophoresis in strong electric fields.
Barany, Sandor
2009-01-01
Two kinds of non-linear electrophoresis (ef) that can be detected in strong electric fields (several hundred V/cm) are considered. The first ("classical" non-linear ef) is due to the interaction of the outer field with field-induced ionic charges in the electric double layer (EDL) under conditions when field-induced variations of electrolyte concentration remain small compared to its equilibrium value. According to the Shilov theory, the non-linear component of the electrophoretic velocity for dielectric particles is proportional to the cubic power of the applied field strength (cubic electrophoresis) and to the second power of the particle radius; it is independent of the zeta-potential but is determined by the surface conductivity of the particles. The second one, the so-called "superfast electrophoresis", is connected with the interaction of a strong outer field with a secondary diffuse layer of counterions (space charge) that is induced outside the primary (classical) diffuse EDL by the external field itself because of concentration polarization. The Dukhin-Mishchuk theory of "superfast electrophoresis" predicts quadratic dependence of the electrophoretic velocity of unipolar (ionically or electronically) conducting particles on the external field gradient and linear dependence on the particle's size in strong electric fields. These are in sharp contrast to the laws of classical electrophoresis (no dependence of V(ef) on the particle's size and linear dependence on the electric field gradient). A new method to measure the ef velocity of particles in strong electric fields is developed that is based on separation of the effects of sedimentation and electrophoresis using video imaging, a new flow cell, and the use of short electric pulses. To test the "classical" non-linear electrophoresis, we have measured the ef velocity of non-conducting polystyrene, aluminium-oxide and (semiconductor) graphite particles as well as Saccharomyces cerevisiae yeast cells as a
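The field and size scalings quoted in the abstract can be summarized schematically (prefactors omitted; a is the particle radius, E the applied field strength):

```latex
v_{\mathrm{classical}} \propto E \quad (\text{no dependence on } a), \qquad
v_{\mathrm{cubic}} \propto a^{2} E^{3}, \qquad
v_{\mathrm{superfast}} \propto a\, E^{2}.
```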
Bootstrapping phylogenies inferred from rearrangement data.
Lin, Yu; Rajan, Vaibhav; Moret, Bernard Me
2012-08-29
Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale and its receiver
Strong plasma turbulence in the earth's electron foreshock
Robinson, P. A.; Newman, D. L.
1991-01-01
A quantitative model is developed to account for the distribution in magnitude and location of the intense plasma waves observed in the earth's electron foreshock given the observed rms levels of waves. In this model, nonlinear strong-turbulence effects cause solitonlike coherent wave packets to form and decouple from incoherent background beam-excited weak turbulence, after which they convect downstream with the solar wind while collapsing to scales as short as 100 m and fields as high as 2 V/m. The existence of waves with energy densities above the strong-turbulence wave-collapse threshold is inferred from observations from IMP 6 and ISEE 1 and quantitative agreement is found between the predicted distribution of fields in an ensemble of such wave packets and the actual field distribution observed in situ by IMP 6. Predictions for the polarization of plasma waves and the bandwidth of ion-sound waves are also consistent with the observations. It is shown that strong-turbulence effects must be incorporated in any comprehensive theory of the propagation and evolution of electron beams in the foreshock. Previous arguments against the existence of strong turbulence in the foreshock are refuted.
Strong plasma turbulence in the earth's electron foreshock
International Nuclear Information System (INIS)
Robinson, P.A.; Newman, D.L.
1991-01-01
A quantitative model is developed to account for the distribution in magnitude and location of the intense plasma waves observed in the Earth's electron foreshock given the observed rms levels of waves. In this model, nonlinear strong-turbulence effects cause solitonlike coherent wave packets to form and decouple from incoherent background beam-excited weak turbulence, after which they convect downstream with the solar wind while collapsing to scales as short as 100 m and fields as high as 2 V m⁻¹. The existence of waves with energy densities above the strong-turbulence wave-collapse threshold is inferred from observations from IMP 6 and ISEE 1 and quantitative agreement is found between the predicted distribution of fields in an ensemble of such wave packets and the actual field distribution observed in situ by IMP 6. Predictions for the polarization of plasma waves and the bandwidth of ion-sound waves are also consistent with the observations. It is shown that strong-turbulence effects must be incorporated in any comprehensive theory of the propagation and evolution of electron beams in the foreshock. Previous arguments against the existence of strong turbulence in the foreshock are refuted.
Type Inference for Session Types in the Pi-Calculus
DEFF Research Database (Denmark)
Graversen, Eva Fajstrup; Harbo, Jacob Buchreitz; Huttel, Hans
2014-01-01
In this paper we present a direct algorithm for session type inference for the π-calculus. Type inference for session types has previously been achieved by either imposing limitations and restrictions on the π-calculus, or by reducing the type inference problem to that for linear types. Our approach...
Reasoning about Informal Statistical Inference: One Statistician's View
Rossman, Allan J.
2008-01-01
This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…
Statistical Inference at Work: Statistical Process Control as an Example
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
Malle, Bertram F; Holbrook, Jess
2012-04-01
People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences, from intentionality and desire to belief to personality, that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.
Strong Ideal Convergence in Probabilistic Metric Spaces
Indian Academy of Sciences (India)
In the present paper we introduce the concepts of strongly ideal convergent sequence and strong ideal Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong ideal limit points and the strong ideal cluster points of a sequence in this ...
Strong Statistical Convergence in Probabilistic Metric Spaces
Şençimen, Celaleddin; Pehlivan, Serpil
2008-01-01
In this article, we introduce the concepts of strongly statistically convergent sequence and strong statistically Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong statistical limit points and the strong statistical cluster points of a sequence in this space and investigate the relations between these concepts.
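In the notation usual for probabilistic metric spaces, with F_{xy} the distance distribution function and strong t-neighborhoods N_x(t) = {y : F_{xy}(t) > 1 − t}, strong statistical convergence can be written as a natural-density condition; the authors' precise formulation may differ in detail, so take this as a sketch:

```latex
x_n \xrightarrow{\;\mathrm{st}\;} x
\quad\Longleftrightarrow\quad
\forall t > 0:\;
\lim_{n\to\infty} \frac{1}{n}\,
\bigl|\{\, k \le n \;:\; F_{x_k x}(t) \le 1 - t \,\}\bigr| = 0 .
```

That is, the indices at which x_k falls outside every strong t-neighborhood of x form a set of natural density zero.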
2006-01-01
Our friend and colleague John Strong was cruelly taken from us by a brain tumour on 31 July, a few days before his 65th birthday. John started his career and obtained his PhD in a group from Westfield College, initially working on experiments at Rutherford Appleton Laboratory (RAL). From the early 1970s onwards, however, his research was focused on experiments in CERN, with several particularly notable contributions. The Omega spectrometer adopted a system John had originally developed for experiments at RAL using vidicon cameras (a type of television camera) to record the sparks in the spark chambers. This highly automated system allowed Omega to be used in a similar way to bubble chambers. He contributed to the success of NA1 and NA7, where he became heavily involved in the electronic trigger systems. In these experiments the Westfield group joined forces with Italian colleagues to measure the form factors of the pion and the kaon, and the lifetime of some of the newly discovered charm particles. Such h...
Remnants of strong tidal interactions
International Nuclear Information System (INIS)
Mcglynn, T.A.
1990-01-01
This paper examines the properties of stellar systems that have recently undergone a strong tidal shock, i.e., a shock which removes a significant fraction of the particles in the system, and where the shocked system has a much smaller mass than the producer of the tidal field. N-body calculations of King models shocked in a variety of ways are performed, and the consequences of the shocks are investigated. The results confirm the prediction of Jaffe for shocked systems. Several models are also run where the tidal forces on the system are constant, simulating a circular orbit around a primary, and the development of tidal radii under these static conditions appears to be a mild process which does not dramatically affect material that is not stripped. The tidal radii are about twice as large as classical formulas would predict. Remnant density profiles are compared with a sample of elliptical galaxies, and the implications of the results for the development of stellar populations and galaxies are considered. 38 refs
Strongly correlated perovskite fuel cells
Zhou, You; Guan, Xiaofei; Zhou, Hua; Ramadoss, Koushik; Adam, Suhare; Liu, Huajun; Lee, Sungsik; Shi, Jian; Tsuchiya, Masaru; Fong, Dillon D.; Ramanathan, Shriram
2016-06-01
Fuel cells convert chemical energy directly into electrical energy with high efficiencies and environmental benefits, as compared with traditional heat engines. Yttria-stabilized zirconia is perhaps the material with the most potential as an electrolyte in solid oxide fuel cells (SOFCs), owing to its stability and near-unity ionic transference number. Although there exist materials with superior ionic conductivity, they are often limited by their ability to suppress electronic leakage when exposed to the reducing environment at the fuel interface. Such electronic leakage reduces fuel cell power output and the associated chemo-mechanical stresses can also lead to catastrophic fracture of electrolyte membranes. Here we depart from traditional electrolyte design that relies on cation substitution to sustain ionic conduction. Instead, we use a perovskite nickelate as an electrolyte with high initial ionic and electronic conductivity. Since many such oxides are also correlated electron systems, we can suppress the electronic conduction through a filling-controlled Mott transition induced by spontaneous hydrogen incorporation. Using such a nickelate as the electrolyte in free-standing membrane geometry, we demonstrate a low-temperature micro-fabricated SOFC with high performance. The ionic conductivity of the nickelate perovskite is comparable to the best-performing solid electrolytes in the same temperature range, with a very low activation energy. The results present a design strategy for high-performance materials exhibiting emergent properties arising from strong electron correlations.
Strong seismic ground motion propagation
International Nuclear Information System (INIS)
Seale, S.; Archuleta, R.; Pecker, A.; Bouchon, M.; Mohammadioun, G.; Murphy, A.; Mohammadioun, B.
1988-10-01
At the McGee Creek, California, site, 3-component strong-motion accelerometers are located at depths of 166 m, 35 m and 0 m. The surface material is glacial moraine, to a depth of 30.5 m, overlying hornfels. Accelerations were recorded from two California earthquakes: Round Valley, M_L 5.8, November 23, 1984, 18:08 UTC, and Chalfant Valley, M_L 6.4, July 21, 1986, 14:42 UTC. By separating out the SH components of acceleration, we were able to determine the orientations of the downhole instruments. By separating out the SV component of acceleration, we were able to determine the approximate angle of incidence of the signal at 166 m. A constant phase velocity Haskell-Thomson model was applied to generate synthetic SH seismograms at the surface using the accelerations recorded at 166 m. In the frequency band 0.0-10.0 Hz, we compared the filtered synthetic records to the filtered surface data. The onset of the SH pulse is clearly seen, as are the reflections from the interface at 30.5 m. The synthetic record closely matches the data in amplitude and phase. The fit between the synthetic accelerogram and the data shows that the seismic amplification at the surface is a result of the contrast of the impedances (shear stiffnesses) of the near-surface materials.
Blei, David M; Smyth, Padhraic
2017-08-07
Data science has attracted a lot of attention, promising to turn vast amounts of data into useful predictions and insights. In this article, we ask why scientists should care about data science. To answer, we discuss data science from three perspectives: statistical, computational, and human. Although each of the three is a critical component of data science, we argue that the effective combination of all three components is the essence of what data science is about.
Improved functional overview of protein complexes using inferred epistatic relationships
LENUS (Irish Health Repository)
Ryan, Colm
2011-05-23
Background: Epistatic Miniarray Profiling (E-MAP) quantifies the net effect on growth rate of disrupting pairs of genes, often producing phenotypes that may be more (negative epistasis) or less (positive epistasis) severe than the phenotype predicted based on single gene disruptions. Epistatic interactions are important for understanding cell biology because they define relationships between individual genes, and between sets of genes involved in biochemical pathways and protein complexes. Each E-MAP screen quantifies the interactions between a logically selected subset of genes (e.g. genes whose products share a common function). Interactions that occur between genes involved in different cellular processes are not as frequently measured, yet these interactions are important for providing an overview of cellular organization. Results: We introduce a method for combining overlapping E-MAP screens and inferring new interactions between them. We use this method to infer with high confidence 2,240 new strongly epistatic interactions and 34,469 weakly epistatic or neutral interactions. We show that accuracy of the predicted interactions approaches that of replicate experiments and that, like measured interactions, they are enriched for features such as shared biochemical pathways and knockout phenotypes. We constructed an expanded epistasis map for yeast cell protein complexes and show that our new interactions increase the evidence for previously proposed inter-complex connections, and predict many new links. We validated a number of these in the laboratory, including new interactions linking the SWR-C chromatin modifying complex and the nuclear transport apparatus. Conclusion: Overall, our data support a modular model of yeast cell protein network organization and show how prediction methods can considerably extend the information that can be extracted from overlapping E-MAP screens.
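The epistasis scores that E-MAP-style analyses work with measure the deviation of a double-disruption phenotype from the expectation under a multiplicative null model. A minimal sketch with hypothetical relative growth rates; the actual E-MAP scoring pipeline is considerably more involved:

```python
def epistasis(w_a, w_b, w_ab):
    """Epistasis score under a multiplicative null model:
    negative = double disruption more severe than expected (aggravating),
    positive = less severe than expected (alleviating)."""
    return w_ab - w_a * w_b

# Hypothetical relative growth rates (wild type = 1.0).
assert abs(epistasis(0.9, 0.8, 0.72)) < 1e-9   # neutral: no interaction
assert epistasis(0.9, 0.8, 0.40) < 0           # negative epistasis
assert epistasis(0.9, 0.8, 0.85) > 0           # positive epistasis
```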
Strongly interacting photons and atoms
International Nuclear Information System (INIS)
Alge, W.
1999-05-01
This thesis contains the main results of the research topics I have pursued during my PhD studies at the University of Innsbruck and partly in collaboration with the Institut d'Optique in Orsay, France. It is divided into three parts. The first and largest part discusses the possibility of using strong standing waves as a tool to cool and trap neutral atoms in optical cavities. This is very important in the field of nonlinear optics where several successful experiments with cold atoms in cavities have been performed recently. A discussion of the optical parametric oscillator in a regime where the nonlinearity dominates the evolution is the topic of the second part. We investigated mainly the statistical properties of the cavity output of the three interacting cavity modes. Very recently a system has been proposed which promises fantastic properties. It should exhibit a giant Kerr nonlinearity with negligible absorption, thus leading to a photonic turnstile device based on cold atoms in a cavity. We have shown that this model suffers from overly simplistic assumptions and developed several more comprehensive approaches to study the behavior of this system. Apart from the division into three parts of different contents, the thesis is divided into publications, supplements and invisible stuff. The intention of the supplements is to reach researchers who work in related areas and provide them with more detailed information about the concepts and the numerical tools we used. It is written especially for diploma and PhD students to give them a chance to use the third part of our work, which is actually the largest one. They consist of a large number of computer programs we wrote to investigate the behavior of the systems in parameter regions where no hope exists to solve the equations analytically. (author)
Topics in strong Langmuir turbulence
International Nuclear Information System (INIS)
Skoric, M.M.
1981-01-01
This thesis discusses certain aspects of the turbulence of a fully ionised non-isothermal plasma dominated by the Langmuir mode. Some of the basic properties of strongly turbulent plasmas are reviewed. In particular, interest is focused on the state of Langmuir turbulence, that is the turbulence of a simple externally unmagnetized plasma. The problem of the existence and dynamics of Langmuir collapse is discussed, often met as a non-linear stage of the modulational instability in the framework of the Zakharov equations (i.e. simple time-averaged dynamical equations). Possible macroscopic consequences of such dynamical turbulent models are investigated. In order to study highly non-linear collapse dynamics in its advanced stage, a set of generalized Zakharov equations are derived. Going beyond the original approximation, the author includes the effects of higher electron non-linearities and a breakdown of slow-timescale quasi-neutrality. He investigates how these corrections may influence the collapse stabilisation. Recently, it has been realised that the modulational instability in a Langmuir plasma will be accompanied by the collisionless-generation of a slow-timescale magnetic field. Accordingly, a novel physical situation has emerged which is investigated in detail. The stability of monochromatic Langmuir waves in a self-magnetized Langmuir plasma, is discussed, and the existence of a novel magneto-modulational instability shown. The wave collapse dynamics is investigated and a physical interpretation of the basic results is given. A problem of the transient analysis of an interaction of time-dependent electromagnetic pulses with linear cold plasma media is investigated. (Auth.)
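The Zakharov equations referred to above, in their standard normalized textbook form (not the generalized system derived in the thesis), couple the slowly varying Langmuir field envelope E to the low-frequency density perturbation n:

```latex
i\,\partial_t E + \nabla^2 E = n\,E,
\qquad
\partial_t^2 n - \nabla^2 n = \nabla^2 |E|^2 .
```

The first equation is a nonlinear Schrödinger-type equation for the envelope, driven by the density; the second is a driven ion-acoustic wave equation whose source is the ponderomotive pressure of the field. Modulational instability and collapse arise from this coupling.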
Promoting Strong Written Communication Skills
Narayanan, M.
2015-12-01
The reason that an improvement in the quality of technical writing is still needed in the classroom is that universities are facing challenging problems not only on the technological front but also on the socio-economic front. The universities are actively responding to the changes that are taking place in the global consumer marketplace. Obviously, there are numerous benefits of promoting strong written communication skills. They can be summarized into the following six categories. First, and perhaps the most important: the university achieves learner satisfaction. The learner has documented, verbally, that the necessary knowledge has been successfully acquired. This results in learner loyalty that in turn will attract more qualified learners. Second, quality communication lowers the cost per pupil, consequently resulting in increased productivity backed by a stronger economic structure and forecast. Third, quality communications help to improve the cash flow and cash reserves of the university. Fourth, having high quality communication enables the university to justify the need for high costs of tuition and fees. Fifth, better quality in written communication skills results in attracting top-quality learners. This will lead to happier and satisfied learners, not to mention greater prosperity for the university as a whole. Sixth, quality written communication skills result in reduced complaints, thus meaning fewer hours spent on answering or correcting the situation. The university faculty and staff are thus able to devote more time to scholarly activities, meaningful research and productive community service.
Nonparametric inference of network structure and dynamics
Peixoto, Tiago P.
The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion, that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among
Impact of noise on molecular network inference.
Directory of Open Access Journals (Sweden)
Radhakrishnan Nagarajan
Molecular entities work in concert as a system and mediate phenotypic outcomes and disease states. There has been recent interest in modelling the associations between molecular entities from their observed expression profiles as networks using a battery of algorithms. These networks have proven to be useful abstractions of the underlying pathways and signalling mechanisms. Noise is ubiquitous in molecular data and can have a pronounced effect on the inferred network. Noise can be an outcome of several factors including: inherent stochastic mechanisms at the molecular level, variation in the abundance of molecules, heterogeneity, sensitivity of the biological assay or measurement artefacts prevalent especially in high-throughput settings. The present study investigates the impact of discrepancies in noise variance on pair-wise dependencies, conditional dependencies and constraint-based Bayesian network structure learning algorithms that incorporate conditional independence tests as a part of the learning process. Popular network motifs and fundamental connections, namely: (a) common-effect, (b) three-chain, and (c) coherent type-I feed-forward loop (FFL), are investigated. The choice of these elementary networks can be attributed to their prevalence across more complex networks. Analytical expressions elucidating the impact of discrepancies in noise variance on pairwise dependencies and conditional dependencies for special cases of these motifs are presented. Subsequently, the impact of noise on two popular constraint-based Bayesian network structure learning algorithms such as Grow-Shrink (GS) and Incremental Association Markov Blanket (IAMB) that implicitly incorporate tests for conditional independence is investigated. Finally, the impact of noise on networks inferred from publicly available single cell molecular expression profiles is investigated. While discrepancies in noise variance are overlooked in routine molecular network inference, the
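The basic mechanism behind noise distorting pairwise dependencies can be illustrated with a hypothetical simulation (a toy sketch, not the paper's motif-specific analysis): the larger the measurement-noise variance added to one variable, the weaker the sample correlation a pairwise test sees, so a true edge can fall below a detection threshold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)   # true dependence: y is driven by x

def corr_with_noise(sigma):
    """Sample correlation of x and y after adding measurement noise
    of standard deviation sigma to y."""
    return np.corrcoef(x, y + sigma * rng.normal(size=n))[0, 1]

r_clean, r_noisy = corr_with_noise(0.0), corr_with_noise(2.0)
# Theoretical values are about 0.89 and 0.44: the dependence is attenuated.
assert r_clean > r_noisy > 0.0
```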
Bayesian Estimation and Inference using Stochastic Hardware
Directory of Open Access Journals (Sweden)
Chetan Singh Thakur
2016-03-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
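The Bayesian recursive equation that a BEAST-style tracker solves online has two steps per observation: predict with the transition model, then weight by the observation likelihood and renormalise. A hypothetical discrete 3-position example (a software sketch, not the stochastic-hardware implementation):

```python
import numpy as np

def hmm_filter_step(belief, transition, likelihood):
    """One step of the Bayesian recursive equation: predict with the
    transition model, weight by the observation likelihood, renormalise."""
    predicted = transition @ belief          # prior for this time step
    posterior = likelihood * predicted       # weight by the sensor model
    return posterior / posterior.sum()

# Hypothetical 3-position track; the target tends to stay where it is.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
belief = np.full(3, 1 / 3)                   # uniform initial belief
obs_lik = np.array([0.7, 0.2, 0.1])          # noisy sensor favours position 0
belief = hmm_filter_step(belief, T, obs_lik)
assert belief.argmax() == 0
assert abs(belief.sum() - 1.0) < 1e-12
```

Learning the transition and observation models, as the paper describes, amounts to estimating T and the likelihood table from data while this recursion runs.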
Bayesian Estimation and Inference Using Stochastic Electronics.
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes in the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
Strong Bayesian evidence for the normal neutrino hierarchy
Energy Technology Data Exchange (ETDEWEB)
Simpson, Fergus; Jimenez, Raul; Verde, Licia [ICCUB, University of Barcelona (UB-IEEC), Marti i Franques 1, Barcelona, 08028 (Spain); Pena-Garay, Carlos, E-mail: fergus2@gmail.com, E-mail: raul.jimenez@icc.ub.edu, E-mail: penagaray@gmail.com, E-mail: liciaverde@icc.ub.edu [I2SysBio, CSIC-UVEG, P.O. 22085, Valencia, 46071 (Spain)
2017-06-01
The configuration of the three neutrino masses can take two forms, known as the normal and inverted hierarchies. We compute the Bayesian evidence associated with these two hierarchies. Previous studies found a mild preference for the normal hierarchy, and this was driven by the asymmetric manner in which cosmological data has confined the available parameter space. Here we identify the presence of a second asymmetry, which is imposed by data from neutrino oscillations. By combining constraints on the squared-mass splittings [1] with the limit on the sum of neutrino masses of Σm_ν < 0.13 eV [2], and using a minimally informative prior on the masses, we infer odds of 42:1 in favour of the normal hierarchy, which is classified as 'strong' on the Jeffreys scale. We explore how these odds may evolve in light of higher precision cosmological data, and discuss the implications of this finding with regards to the nature of neutrinos. Finally, the individual masses are inferred to be m_1 = 3.80^{+26.2}_{−3.73} meV, m_2 = 8.8^{+18}_{−1.2} meV, m_3 = 50.4^{+5.8}_{−1.2} meV (95% credible intervals).
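The identity underlying such a comparison is the usual relation between posterior odds, the Bayes factor, and prior odds; with a prior that is symmetric between the two hierarchies, the posterior odds equal the Bayes factor (42:1 here):

```latex
\underbrace{\frac{P(\mathrm{NH}\mid D)}{P(\mathrm{IH}\mid D)}}_{\text{posterior odds}}
\;=\;
\underbrace{\frac{P(D\mid \mathrm{NH})}{P(D\mid \mathrm{IH})}}_{\text{Bayes factor}}
\times
\underbrace{\frac{\pi(\mathrm{NH})}{\pi(\mathrm{IH})}}_{\text{prior odds}}
```

Here D stands for the combined oscillation and cosmological data, and each evidence P(D | hierarchy) is the likelihood marginalised over the masses under that hierarchy's prior.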
Robust Inference with Multi-way Clustering
A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller; Doug Miller
2009-01-01
In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our...
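The combination at the heart of the estimator is additive-subtractive: variance clustered on the first dimension, plus variance clustered on the second, minus variance clustered on their intersection, so that terms counted twice are removed. A toy sketch for the variance of a sample mean of residuals, with hypothetical function names; the paper's estimator applies the same combination to full sandwich matrices for OLS and nonlinear estimators:

```python
def cluster_var_of_mean(u, groups):
    """Cluster-robust variance of the mean of residuals u: squared
    within-cluster sums of residuals, summed and divided by n^2."""
    n = len(u)
    sums = {}
    for ui, g in zip(u, groups):
        sums[g] = sums.get(g, 0.0) + ui
    return sum(s * s for s in sums.values()) / n ** 2

def twoway_var_of_mean(u, g1, g2):
    """Two-way combination: V(first way) + V(second way)
    minus V(intersection of the two groupings)."""
    inter = list(zip(g1, g2))
    return (cluster_var_of_mean(u, g1) + cluster_var_of_mean(u, g2)
            - cluster_var_of_mean(u, inter))

# Hypothetical residuals on a 3 x 4 grid of non-nested clusters.
u = [0.5, -1.2, 0.3, 0.4, -0.6, 1.1, -0.2, 0.7, 0.1, -0.9, 0.2, -0.4]
g1 = [i // 4 for i in range(12)]   # cluster on one dimension
g2 = [i % 4 for i in range(12)]    # cluster on the other
v = twoway_var_of_mean(u, g1, g2)
# Sanity check: clustering both ways on the same dimension collapses
# to ordinary one-way clustering (V + V - V = V).
assert abs(twoway_var_of_mean(u, g1, g1) - cluster_var_of_mean(u, g1)) < 1e-12
```

Note that, as in the full estimator, the combined quantity is not guaranteed positive in finite samples, which is a known caveat of the two-way construction.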
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.
Abductive Inference using Array-Based Logic
DEFF Research Database (Denmark)
Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert L.
The notion of abduction has found its usage within a wide variety of AI fields. Computing abductive solutions has, however, shown to be highly intractable in logic programming. To avoid this intractability we present a new approach to logicbased abduction; through the geometrical view of data...... employed in array-based logic we embrace abduction in a simple structural operation. We argue that a theory of abduction on this form allows for an implementation which, at runtime, can perform abductive inference quite efficiently on arbitrary rules of logic representing knowledge of finite domains....
Russia needs a strong counterpart
International Nuclear Information System (INIS)
Slovak, K.; Marcan, P.
2008-01-01
In this paper an interview with the head of OMV, Wolfgang Ruttenstorfer, is published. Here is an extract from this interview: Q: There have been attempts to take over MOL for quite a long time. Do you think you can still succeed? Since the beginning we kept saying that this would not happen from one day to the next. But it may take two to three years. And we are positive that it is justified. Q: Resistance from MOL and the Hungarian government is strong. We have tried to persuade the Hungarian government. We offered them a split company management. A part of the management would be in Budapest. We would locate the management of the largest division, the refinery, there. And of course only the best could be part of the management. We would not nominate people according to their nationality; it would not matter whether the person was Austrian, Hungarian or Slovak. We want a Central European company, not a Hungarian, Romanian or Slovak company. Q: Would the transaction still be attractive if, because of pressure exercised by Brussels, you had to sell Slovnaft or your refinery in Szazhalombatta? We do not intend to sell any refineries. Q: Rumours are spreading that the Commission may ask you to sell a refinery? We do not want to speculate. Let us wait and see what happens. We do not want to sell refineries. Q: It is said that OMV is coordinating or at least consulting its attempts to acquire MOL with Gazprom. There are many rumours in Central Europe. But I can tell you this is not true. We are interested in this merger because we feel the increasing pressure exercised by Kazakhstan and Russia. We, of course, have a good relationship with Gazprom, which we have enjoyed for over forty years. As indeed Slovakia has. Q: A few weeks ago the Austrian daily Wirtschaftsblatt published an article about Gazprom's interest in OMV shares. That is gossip that is more than ten years old. Similarly to the rumours that Gazprom is a shareholder of MOL. There are no negotiations with Gazprom
Gunji, Yukio-Pegio; Shinohara, Shuji; Haruna, Taichi; Basios, Vasileios
2017-02-01
To overcome the dualism between mind and matter and to implement consciousness in science, a physical entity has to be embedded with a measurement process. Although quantum mechanics has been regarded as a candidate for implementing consciousness, nature at its macroscopic level is inconsistent with quantum mechanics. We propose a measurement-oriented inference system comprising Bayesian and inverse Bayesian inferences. While Bayesian inference contracts probability space, the newly defined inverse one relaxes the space. These two inferences allow an agent to make a decision corresponding to an immediate change in their environment. They generate a particular pattern of joint probability for data and hypotheses, comprising multiple diagonal and noisy matrices. This is expressed as a nondistributive orthomodular lattice equivalent to quantum logic. We also show that an orthomodular lattice can reveal information generated by inverse syllogism as well as the solutions to the frame and symbol-grounding problems. Our model is the first to connect macroscopic cognitive processes with the mathematical structure of quantum mechanics with no additional assumptions.
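The contraction/relaxation pairing can be sketched in a few lines. This is a loose illustration, not the authors' construction: the "inverse Bayesian" step here is simplified to mixing the posterior back toward a uniform distribution over a discrete hypothesis set.

```python
def bayes_update(prior, likelihood):
    """Bayesian step: contracts the distribution toward hypotheses
    that explain the observation well."""
    post = [p * l for p, l in zip(prior, likelihood)]
    s = sum(post)
    return [p / s for p in post]

def inverse_bayes_relax(post, gamma=0.2):
    """Inverse-Bayesian step (sketch only): relaxes the distribution by
    mixing mass back toward uniform, keeping the agent open to change."""
    n = len(post)
    return [(1 - gamma) * p + gamma / n for p in post]

prior = [1 / 3, 1 / 3, 1 / 3]
post = bayes_update(prior, likelihood=[0.9, 0.05, 0.05])  # contraction
relaxed = inverse_bayes_relax(post)                       # relaxation
```

The alternation keeps the agent responsive: contraction commits to a hypothesis, relaxation preserves the ability to track an environment that changes.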
POPPER, a simple programming language for probabilistic semantic inference in medicine.
Robson, Barry
2015-01-01
Our previous reports described the use of the Hyperbolic Dirac Net (HDN) as a method for probabilistic inference from medical data, and a proposed probabilistic medical Semantic Web (SW) language Q-UEL to provide that data. Rather like a traditional Bayes Net, the HDN provided estimates of joint and conditional probabilities, and was static, with no need for evolution due to "reasoning". Use of the SW will require, however, (a) at least the semantic triple with more elaborate relations than conditional ones, as seen in use of most verbs and prepositions, and (b) rules for logical, grammatical, and definitional manipulation that can generate changes in the inference net. Here we describe the simple POPPER language for medical inference. It can be written automatically by Q-UEL, or by hand. Based on studies with our medical students, it is believed that a tool like this may help in medical education and that a physician unfamiliar with SW science can understand it. It is here used to explore the considerable challenges of assigning probabilities, and not least what the meaning and utility of inference net evolution would be for a physician.
Quantum Enhanced Inference in Markov Logic Networks.
Wittek, Peter; Gogolin, Christian
2017-04-19
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
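The Gibbs sampling step that the quantum protocols aim to speed up can be illustrated with a toy Ising-style Markov network; the fields and coupling below are invented for illustration, not an MLN from the paper.

```python
import math
import random

def gibbs_ising(J, h, n_sweeps, seed=0):
    """Gibbs sampling on a small Ising-style Markov network.
    J maps node pairs (i, j) to coupling weights; h is a list of fields.
    Returns the empirical mean of each +/-1 spin over all sweeps."""
    rng = random.Random(seed)
    n = len(h)
    neigh = {i: [] for i in range(n)}
    for (i, j), w in J.items():
        neigh[i].append((j, w))
        neigh[j].append((i, w))
    s = [1] * n
    totals = [0.0] * n
    for _ in range(n_sweeps):
        for i in range(n):
            # Conditional: P(s_i = +1 | rest) = sigmoid(2 * (h_i + sum_j J_ij * s_j))
            field = h[i] + sum(w * s[j] for j, w in neigh[i])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * field))
            s[i] = 1 if rng.random() < p_up else -1
            totals[i] += s[i]
    return [t / n_sweeps for t in totals]

# Positive fields plus a positive coupling bias both spins toward +1.
means = gibbs_ising({(0, 1): 0.5}, h=[1.0, 1.0], n_sweeps=2000)
```

Each sweep resamples every variable from its exact conditional given its neighbours; the lifted and quantum-enhanced methods discussed in the abstract attack the cost of many such sweeps on much larger generated networks.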
Inferring network topology from complex dynamics
International Nuclear Information System (INIS)
Shandilya, Srinivas Gorur; Timme, Marc
2011-01-01
Inferring the network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method for inferring the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required and the type of dynamics is hardly restricted in any way. In particular, the observed dynamics may be arbitrarily complex; stationary, invariant or transient; synchronous or asynchronous and chaotic or periodic. Presupposing a knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology from observing a time series of state variables only. Robust reconstruction is achieved in any sufficiently long generic observation of the system. We extend our method to simultaneously reconstruct both the entire network topology and all parameters that appear linearly in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of external noise that distorts the original dynamics substantially. The method provides a conceptually new step towards reconstructing a variety of real-world networks, including gene and protein interaction networks and neuronal circuits.
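The core inverse problem can be sketched in its simplest setting: a noiseless linear system x' = A x observed at Euler-step resolution, where finite-difference derivatives pin down the coupling matrix exactly. The framework in the abstract is far more general; this toy only shows the "solve for couplings from observed states and derivatives" step.

```python
def infer_coupling_2d(traj, dt):
    """Recover the 2x2 coupling matrix A of x' = A x from a trajectory
    simulated with Euler steps, where (x[t+1]-x[t])/dt = A x[t] holds
    exactly; two linearly independent snapshots then determine A."""
    x0, x1, x2 = traj[0], traj[1], traj[2]
    d0 = [(x1[i] - x0[i]) / dt for i in range(2)]  # derivative at t0
    d1 = [(x2[i] - x1[i]) / dt for i in range(2)]  # derivative at t1
    # Solve A [x0 x1] = [d0 d1] using the explicit 2x2 inverse.
    det = x0[0] * x1[1] - x1[0] * x0[1]
    inv = [[x1[1] / det, -x1[0] / det],
           [-x0[1] / det, x0[0] / det]]
    return [[d0[r] * inv[0][c] + d1[r] * inv[1][c] for c in range(2)]
            for r in range(2)]

# Simulate a known linear system, then reconstruct its coupling matrix.
A_true = [[-1.0, 0.5], [0.0, -2.0]]
dt = 0.01
x = [1.0, 2.0]
traj = [x[:]]
for _ in range(2):
    x = [x[i] + dt * sum(A_true[i][j] * x[j] for j in range(2)) for i in range(2)]
    traj.append(x[:])
A_est = infer_coupling_2d(traj, dt)
```

With noise and nonlinear coupling functions the same idea becomes a (least-squares) regression of observed derivatives on the known functional forms, which is what makes the reconstruction robust over long observations.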
Inferring climate sensitivity from volcanic events
Energy Technology Data Exchange (ETDEWEB)
Boer, G.J. [Environment Canada, University of Victoria, Canadian Centre for Climate Modelling and Analysis, Victoria, BC (Canada); Stowasser, M.; Hamilton, K. [University of Hawaii, International Pacific Research Centre, Honolulu, HI (United States)
2007-04-15
The possibility of estimating the equilibrium climate sensitivity of the earth-system from observations following explosive volcanic eruptions is assessed in the context of a perfect model study. Two modern climate models (the CCCma CGCM3 and the NCAR CCSM2) with different equilibrium climate sensitivities are employed in the investigation. The models are perturbed with the same transient volcano-like forcing and the responses analysed to infer climate sensitivities. For volcano-like forcing the global mean surface temperature responses of the two models are very similar, despite their differing equilibrium climate sensitivities, indicating that climate sensitivity cannot be inferred from the temperature record alone even if the forcing is known. Equilibrium climate sensitivities can be reasonably determined only if both the forcing and the change in heat storage in the system are known very accurately. The geographic patterns of clear-sky atmosphere/surface and cloud feedbacks are similar for both the transient volcano-like and near-equilibrium constant forcing simulations showing that, to a considerable extent, the same feedback processes are invoked, and determine the climate sensitivity, in both cases. (orig.)
Facility Activity Inference Using Radiation Networks
Energy Technology Data Exchange (ETDEWEB)
Rao, Nageswara S. [ORNL; Ramirez Aviles, Camila A. [ORNL
2017-11-01
We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility’s ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts or measurements are inherently random with parameters determined by the intensity at the sensor’s location. We utilize the measurements to estimate the intensity at the stack, and use it in a one-sided Sequential Probability Ratio Test (SPRT) to infer on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by utilizing the Poisson distribution of the counts. We quantify the performance improvements of the network detection over individual sensors using the packing number of the intensity space.
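A one-sided SPRT on Poisson counts can be sketched as follows. The rates, thresholds, and counts are hypothetical, not those of the described deployment; the sketch only shows how the log-likelihood ratio accumulates until a boundary is crossed.

```python
import math

def sprt_poisson(counts, lam0, lam1, alpha=0.01, beta=0.01):
    """Sequential Probability Ratio Test on Poisson sensor counts:
    H0 (background rate lam0) versus H1 (elevated rate lam1).
    Returns the decision and the number of samples consumed."""
    upper = math.log((1 - beta) / alpha)  # cross above: accept H1
    lower = math.log(beta / (1 - alpha))  # cross below: accept H0
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # Log-likelihood ratio contribution of one Poisson count k.
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(counts)

# Counts near the elevated rate trigger a fast "facility on" decision.
decision, steps = sprt_poisson([9, 11, 10, 12, 9], lam0=2.0, lam1=10.0)
```

Because the test stops as soon as the evidence is decisive, it typically needs far fewer samples than a fixed-size test at the same error rates, which is one reason SPRT-style fusion can outperform per-sensor thresholding.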
Models for inference in dynamic metacommunity systems
Dorazio, Robert M.; Kery, Marc; Royle, J. Andrew; Plattner, Matthias
2010-01-01
A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity.
Causal inference, probability theory, and graphical insights.
Baker, Stuart G
2013-11-10
Causal inference from observational studies is a fundamental topic in biostatistics. The causal graph literature typically views probability theory as insufficient to express causal concepts in observational studies. In contrast, the view here is that probability theory is a desirable and sufficient basis for many topics in causal inference for the following two reasons. First, probability theory is generally more flexible than causal graphs: besides explaining such causal graph topics as M-bias (adjusting for a collider) and bias amplification and attenuation (when adjusting for an instrumental variable), probability theory is also the foundation of the paired availability design for historical controls, which does not fit into a causal graph framework. Second, probability theory is the basis for insightful graphical displays including the BK-Plot for understanding Simpson's paradox with a binary confounder, the BK2-Plot for understanding bias amplification and attenuation in the presence of an unobserved binary confounder, and the PAD-Plot for understanding the principal stratification component of the paired availability design.
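Simpson's paradox with a binary confounder, which the BK-Plot is designed to visualize, is easy to reproduce numerically. The counts below are the classic kidney-stone treatment numbers often used to illustrate the paradox, not data from this paper.

```python
def rate(success, total):
    """Fraction of successes."""
    return success / total

# Stratified (success, total) counts for a binary confounder with levels A, B.
treated = {"A": (81, 87), "B": (192, 263)}
control = {"A": (234, 270), "B": (55, 80)}

# Treatment wins within every stratum...
better_within = all(rate(*treated[s]) > rate(*control[s]) for s in ("A", "B"))

# ...yet loses in the pooled table, because stratum membership is
# unevenly distributed between the two arms.
treated_overall = rate(81 + 192, 87 + 263)
control_overall = rate(234 + 55, 270 + 80)
```

The reversal arises purely from the marginal imbalance over the confounder, which is exactly the kind of structure a probability-based display can make visible.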
Inferring relevance in a changing world
Directory of Open Access Journals (Sweden)
Robert C Wilson
2012-01-01
Reinforcement learning models of human and animal learning usually concentrate on how we learn the relationship between different stimuli or actions and rewards. However, in real world situations stimuli are ill-defined. On the one hand, our immediate environment is extremely multi-dimensional. On the other hand, in every decision-making scenario only a few aspects of the environment are relevant for obtaining reward, while most are irrelevant. Thus a key question is how do we learn these relevant dimensions, that is, how do we learn what to learn about? We investigated this process of representation learning experimentally, using a task in which one stimulus dimension was relevant for determining reward at each point in time. As in real life situations, in our task the relevant dimension can change without warning, adding ever-present uncertainty engendered by a constantly changing environment. We show that human performance on this task is better described by a suboptimal strategy based on selective attention and serial hypothesis testing rather than a normative strategy based on probabilistic inference. From this, we conjecture that the problem of inferring relevance in general scenarios is too computationally demanding for the brain to solve optimally. As a result the brain utilizes approximations, employing these even in simplified scenarios in which optimal representation learning is tractable, such as the one in our experiment.
Automated adaptive inference of phenomenological dynamical models
Daniels, Bryan
Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.
Graphical models for inferring single molecule dynamics
Directory of Open Access Journals (Sweden)
Gonzalez Ruben L
2010-10-01
Background: The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM). The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results: The VBEM algorithm returns the model's evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model's parameters learned from the data. ME/VBEM provide several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions: The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.
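The HMM-with-Gaussian-observables machinery can be illustrated with a tiny Viterbi decoder; full VBEM inference is much more involved, and the two FRET-like emission levels below are invented for illustration.

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a normal observation."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def viterbi(obs, trans, mus, sigmas, pi):
    """Most likely hidden state path for an HMM with Gaussian emissions."""
    K = len(pi)
    logp = [math.log(pi[k]) + math.log(gaussian_pdf(obs[0], mus[k], sigmas[k]))
            for k in range(K)]
    back = []
    for x in obs[1:]:
        prev = logp[:]
        step, logp = [], []
        for k in range(K):
            # Best predecessor state for landing in state k.
            best = max(range(K), key=lambda j: prev[j] + math.log(trans[j][k]))
            step.append(best)
            logp.append(prev[best] + math.log(trans[best][k])
                        + math.log(gaussian_pdf(x, mus[k], sigmas[k])))
        back.append(step)
    path = [max(range(K), key=lambda k: logp[k])]
    for step in reversed(back):
        path.append(step[path[-1]])
    return path[::-1]

# Two FRET-like levels around 0.2 and 0.8; the step in the signal
# is decoded as a hidden-state switch.
obs = [0.18, 0.22, 0.21, 0.79, 0.83, 0.80]
trans = [[0.9, 0.1], [0.1, 0.9]]
states = viterbi(obs, trans, mus=[0.2, 0.8], sigmas=[0.05, 0.05], pi=[0.5, 0.5])
```

VBEM replaces the fixed parameters here with posterior distributions and uses the model evidence to choose, for example, the number of hidden states.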
Causal Inference in the Perception of Verticality.
de Winkel, Ksander N; Katliar, Mikhail; Diers, Daniel; Bülthoff, Heinrich H
2018-04-03
The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, by combining estimates of the upright provided by the visual system and the body's inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of a vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.
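The Forced Fusion (vector-sum) baseline is inverse-variance weighting of the cues. A minimal sketch with hypothetical visual and inertial tilt estimates (the numbers are invented, not from the study):

```python
def forced_fusion(estimates, variances):
    """Reliability-weighted (inverse-variance) cue combination: the
    Bayesian 'vector sum' reading of the perceptual upright. Returns
    the fused estimate and its variance."""
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / wsum
    fused_var = 1.0 / wsum  # fused estimate is more reliable than either cue
    return mean, fused_var

# Visual tilt estimate of 10 deg (reliable) and inertial estimate of
# 0 deg (less reliable): the fusion lands closer to the reliable cue.
fused, var = forced_fusion([10.0, 0.0], [1.0, 4.0])
```

Causal Inference models, by contrast, weight this fused estimate against the single-cue estimates according to the inferred probability that the cues share a common cause, which is why FF and CI diverge at large discrepancies.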
Saros, Jasmine E.; Stone, Jeffery R.; Pederson, Gregory T.; Slemmons, Krista; Spanbauer, Trisha; Schliep, Anna; Cahl, Douglas; Williamson, Craig E.; Engstrom, Daniel R.
2015-01-01
Over the 20th century, surface water temperatures have increased in many lake ecosystems around the world, but long-term trends in the vertical thermal structure of lakes remain unclear, despite the strong control that thermal stratification exerts on the biological response of lakes to climate change. Here we used both neo- and paleoecological approaches to develop a fossil-based inference model for lake mixing depths and thereby refine understanding of lake thermal structure change. We focused on three common planktonic diatom taxa, the distributions of which previous research suggests might be affected by mixing depth. Comparative lake surveys and growth rate experiments revealed that these species respond to lake thermal structure when nitrogen is sufficient, with species optima ranging from shallower to deeper mixing depths. The diatom-based mixing depth model was applied to sedimentary diatom profiles extending back to 1750 AD in two lakes with moderate nitrate concentrations but differing climate settings. Thermal reconstructions were consistent with expected changes, with shallower mixing depths inferred for an alpine lake where treeline has advanced, and deeper mixing depths inferred for a boreal lake where wind strength has increased. The inference model developed here provides a new tool to expand and refine understanding of climate-induced changes in lake ecosystems.
Directory of Open Access Journals (Sweden)
Yinyin Yuan
Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power since the number of variables far exceeds the number of the samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
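Partial correlation, the first ingredient of DPC, conditions out shared drivers before judging a direct link. A small residual-based sketch with an invented regulator z driving two genes (this is not the DPC package itself):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def residualize(y, x):
    """Residuals of y after removing its least-squares fit on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def partial_corr(x, y, z):
    """Correlation of x and y after conditioning out a common driver z."""
    return pearson(residualize(x, z), residualize(y, z))

# Two genes driven by the same regulator z look strongly correlated,
# but much of the correlation vanishes after conditioning on z.
z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
e1 = [0.5, -0.5, 0.5, -0.5, 0.5, -0.5]
e2 = [0.5, 0.5, -0.5, -0.5, -0.5, 0.5]
x = [2 * zi + e for zi, e in zip(z, e1)]
y = [3 * zi + e for zi, e in zip(z, e2)]
raw = pearson(x, y)
partial = partial_corr(x, y, z)
```

DPC then layers Granger-style directionality and induced-perturbation evidence on top of this conditional-independence skeleton.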
Constraint Satisfaction Inference : Non-probabilistic Global Inference for Sequence Labelling
Canisius, S.V.M.; van den Bosch, A.; Daelemans, W.; Basili, R.; Moschitti, A.
2006-01-01
We present a new method for performing sequence labelling based on the idea of using a machine-learning classifier to generate several possible output sequences, and then applying an inference procedure to select the best sequence among those. Most sequence labelling methods following a similar
Allday, Jonathan
2003-01-01
Offers some suggestions as to how science fiction, especially television science fiction programs such as "Star Trek" and "Star Wars", can be drawn into physics lessons to illuminate some interesting issues.
Mathematical inference and control of molecular networks from perturbation experiments
Mohammed-Rasheed, Mohammed
in order to affect the time evolution of molecular activity in a desirable manner. In this proposal, we address both the inference and control problems of GRNs. In the first part of the thesis, we consider the control problem. We assume that we are given a general topology network structure, whose dynamics follow a discrete-time Markov chain model. We subsequently develop a comprehensive framework for optimal perturbation control of the network. The aim of the perturbation is to drive the network away from undesirable steady-states and to force it to converge to a unique desirable steady-state. The proposed framework does not make any assumptions about the topology of the initial network (e.g., ergodicity, weak and strong connectivity), and is thus applicable to general topology networks. We define the optimal perturbation as the minimum-energy perturbation measured in terms of the Frobenius norm between the initial and perturbed networks. We subsequently demonstrate that there exists at most one optimal perturbation that forces the network into the desirable steady-state. In the event where the optimal perturbation does not exist, we construct a family of sub-optimal perturbations that approximate the optimal solution arbitrarily closely. In the second part of the thesis, we address the inference problem of GRNs from time series data. We model the dynamics of the molecules using a system of ordinary differential equations corrupted by additive white noise. For large-scale networks, we formulate the inference problem as a constrained maximum likelihood estimation problem. We derive the molecular interactions that maximize the likelihood function while constraining the network to be sparse. We further propose a procedure to recover weak interactions based on the Bayesian information criterion. For small-size networks, we investigated the inference of a globally stable 7-gene melanoma genetic regulatory network from genetic perturbation experiments. We considered five
Normal Science in a Multiverse
Carroll, Sean
2016-06-01
A number of theories in contemporary physics and cosmology place an emphasis on features that are hard, and arguably impossible, to test. These include the cosmological multiverse as well as some approaches to quantum gravity. Worries have been raised that these models attempt to sidestep the purportedly crucial principle of falsifiability. Proponents of these theories sometimes suggest that we are seeing a new approach to science, while opponents fear that we are abandoning science altogether. I will argue that in fact these theories are straightforwardly scientific and can be evaluated in absolutely conventional ways, based on empiricism, abduction (inference to the best explanation), and Bayesian reasoning. The integrity of science remains intact.
On Valdivia strong version of Nikodym boundedness property
Czech Academy of Sciences Publication Activity Database
Kąkol, Jerzy; López-Pellicer, M.
2017-01-01
Vol. 446, No. 1 (2017), pp. 1-17. ISSN 0022-247X. R&D Projects: GA ČR GF16-34860L. Institutional support: RVO:67985840. Keywords: finitely additive scalar measure; Nikodym and strong Nikodym property; increasing tree. Subject: General Mathematics (OECD: Pure mathematics). Impact factor: 1.064 (2016). http://www.sciencedirect.com/science/article/pii/S0022247X16304413
Human brain lesion-deficit inference remapped.
Mah, Yee-Haur; Husain, Masud; Rees, Geraint; Nachev, Parashkev
2014-09-01
Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury-the commonest aetiology in lesion-deficit studies-where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant
Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.
Michaud, J-P; Schoenly, Kenneth G; Moreau, G
2012-01-01
Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.
Testing inferences in developmental evolution: the forensic evidence principle.
Larsson, Hans C E; Wagner, Günter P
2012-09-01
Developmental evolution (DE) examines the influence of developmental mechanisms on biological evolution. Here we consider the question: "what is the evidence that allows us to decide whether a certain developmental scenario for an evolutionary change is in fact "correct" or at least falsifiable?" We argue that the comparative method linked with what we call the "forensic evidence principle" (FEP) is sufficient to conduct rigorous tests of DE scenarios. The FEP states that different genetically mediated developmental causes of an evolutionary transformation will leave different signatures in the development of the derived character. Although similar inference rules have been used in practically every empirical science, we expand this approach here in two ways: (1) we justify the validity of this principle with reference to a well-known result from mathematical physics, known as the symmetry principle, and (2) propose a specific form of the FEP for DE: given two or more developmental explanations for a certain evolutionary event, say an evolutionary novelty, then the evidence discriminating between these hypotheses will be found in the most proximal internal drivers of the derived character. Hence, a detailed description of the ancestral and derived states, and their most proximal developmental drivers are necessary to discriminate between various evolutionary developmental hypotheses. We discuss how this stepwise order of testing is necessary, establishes a formal test, and how skipping this order of examination may violate a more accurate examination of DE. We illustrate the approach with an example from avian digit evolution.
Conflicting Epistemologies and Inference in Coupled Human and Natural Systems
Garcia, M. E.
2017-12-01
Last year, I presented a model that projects per capita water consumption based on changes in price, population, building codes, and water stress salience. This model applied methods from hydrological science and engineering to relationships both within and beyond their traditional scope. Epistemologically, the development of mathematical models of natural or engineered systems is objectivist, while research examining relationships between observations, perceptions and action is commonly constructivist or subjectivist. Drawing on multiple epistemologies is common in, and perhaps central to, the growing fields of coupled human and natural systems, and socio-hydrology. Critically, these philosophical perspectives vary in their view of the nature of the system as mechanistic, adaptive or constructed, and the split between aleatory and epistemic uncertainty. Interdisciplinary research is commonly cited as a way to address the critical and domain-crossing challenge of sustainability, as synthesis across perspectives can offer a more comprehensive view of system dynamics. However, combining methods and concepts from multiple ontologies and epistemologies can introduce contradictions into the logic of inference. These contradictions challenge the evaluation of research products, and the implications for the practical application of research findings are not fully understood. Reflections on the evaluation, application, and generalization of the water consumption model described above are used to ground these broader questions and offer thoughts on the way forward.
Convergent cross-mapping and pairwise asymmetric inference.
McCracken, James M; Weigel, Robert S
2014-12-01
Convergent cross-mapping (CCM) is a technique for computing specific kinds of correlations between sets of time series. It was introduced by Sugihara et al. [Science 338, 496 (2012)] and is reported to be "a necessary condition for causation" capable of distinguishing causality from standard correlation. We show that the relationships between CCM correlations proposed by Sugihara et al. do not, in general, agree with intuitive concepts of "driving" and as such should not be considered indicative of causality. We also show that whether the CCM algorithm implies causality depends on system parameters for simple linear and nonlinear systems. For example, in a circuit containing a single resistor and inductor, both voltage and current can be identified as the driver depending on the frequency of the source voltage. The CCM algorithm can, however, be modified to identify relationships between pairs of time series that are consistent with intuition for the considered example systems for which CCM causality analysis provided nonintuitive driver identifications. This modification of the CCM algorithm is introduced as "pairwise asymmetric inference" (PAI) and examples of its use are presented.
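A minimal sketch of the cross-mapping step at the heart of CCM may make the discussion concrete. The embedding dimension, lag, exponential weighting, and the self-mapping sanity check below are illustrative choices, not the authors' settings:

```python
import math

# Minimal cross-mapping sketch in the spirit of Sugihara et al.'s CCM.
# All parameter choices (E, tau, the toy series) are illustrative.

def cross_map_skill(x, y, E=2, tau=1):
    """Estimate how well the delay embedding of y reconstructs x.
    Returns the Pearson correlation between x and its cross-mapped estimate."""
    start = (E - 1) * tau
    idx = list(range(start, len(y)))
    # Delay-embed y: m[t] = (y[t], y[t-tau], ..., y[t-(E-1)tau])
    manifold = {t: [y[t - k * tau] for k in range(E)] for t in idx}
    preds, actual = [], []
    for t in idx:
        # E+1 nearest neighbours of m[t] on y's shadow manifold (excluding t)
        dists = sorted(
            (math.dist(manifold[t], manifold[s]), s) for s in idx if s != t
        )[: E + 1]
        d1 = dists[0][0] or 1e-12
        weights = [math.exp(-d / d1) for d, _ in dists]
        total = sum(weights)
        # Predict x[t] from x at the neighbours' time indices
        preds.append(sum(w * x[s] for w, (d, s) in zip(weights, dists)) / total)
        actual.append(x[t])
    return _pearson(preds, actual)

def _pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = math.sqrt(sum((u - ma) ** 2 for u in a))
    vb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (va * vb)

# Sanity check: a chaotic logistic map cross-mapped against itself has high skill.
x = [0.4]
for _ in range(299):
    x.append(3.8 * x[-1] * (1.0 - x[-1]))
print(round(cross_map_skill(x, x), 2))
```

In a full CCM analysis the skill is tracked as the library length grows; convergence of the skill with increasing data, not its raw value, is the proposed causality signature that the paper critiques.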
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
Meta-learning framework applied in bioinformatics inference system design.
Arredondo, Tomás; Ormazábal, Wladimir
2015-01-01
This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions, which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several optimisation methods with various parameters. The resulting inference systems were also compared with standard classification methods and showed accurate prediction capabilities.
Active Inference, homeostatic regulation and adaptive behavioural control.
Pezzulo, Giovanni; Rigoli, Francesco; Friston, Karl
2015-11-01
We review a theory of homeostatic regulation and adaptive behavioural control within the Active Inference framework. Our aim is to connect two research streams that are usually considered independently; namely, Active Inference and associative learning theories of animal behaviour. The former uses a probabilistic (Bayesian) formulation of perception and action, while the latter calls on multiple (Pavlovian, habitual, goal-directed) processes for homeostatic and behavioural control. We offer a synthesis of these classical processes and cast them as successive hierarchical contextualisations of sensorimotor constructs, using the generative models that underpin Active Inference. This dissolves any apparent mechanistic distinction between the optimization processes that mediate classical control or learning. Furthermore, we generalize the scope of Active Inference by emphasizing interoceptive inference and homeostatic regulation. The ensuing homeostatic (or allostatic) perspective provides an intuitive explanation for how priors act as drives or goals to enslave action, and emphasises the embodied nature of inference. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Bayesian inference data evaluation and decisions
Harney, Hanns Ludwig
2016-01-01
This new edition offers a comprehensive introduction to the analysis of data using Bayes' rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background or the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives, and fitting with the multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with man...
Bayesian inference and updating of reliability data
International Nuclear Information System (INIS)
Sabri, Z.A.; Cullingford, M.C.; David, H.T.; Husseiny, A.A.
1980-01-01
A Bayes methodology for inference of reliability values using available but scarce current data is discussed. The method can be used to update failure rates as more information becomes available from field experience, assuming that the performance of a given component (or system) exhibits a nonhomogeneous Poisson process. Bayes' theorem is used to summarize the historical evidence and current component data in the form of a posterior distribution suitable for prediction and for smoothing or interpolation. An example is given. It may be appropriate to apply the methodology developed here to human error data, in which case the exponential model might be used to describe the learning behavior of the operator or maintenance crew personnel
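The conjugate Gamma-Poisson update behind this kind of reliability analysis can be sketched as follows. The prior parameters and field data are hypothetical, and a homogeneous Poisson model is assumed for simplicity (the paper treats the nonhomogeneous case):

```python
# Bayesian update of a component failure rate: Gamma prior + Poisson likelihood.
# The prior parameters and observed field data below are illustrative only.

def update_failure_rate(alpha, beta, failures, exposure_hours):
    """Return the posterior Gamma(alpha', beta') after observing `failures`
    events in `exposure_hours` of operation (homogeneous Poisson model)."""
    return alpha + failures, beta + exposure_hours

# Historical evidence summarized as a Gamma(2, 1000 h) prior
alpha0, beta0 = 2.0, 1000.0

# New field experience: 3 failures in 5000 hours
alpha1, beta1 = update_failure_rate(alpha0, beta0, 3, 5000.0)

posterior_mean = alpha1 / beta1  # point estimate of the failure rate (per hour)
print(alpha1, beta1, round(posterior_mean, 6))  # 5.0 6000.0 0.000833
```

As more field experience accumulates, the same update can be applied repeatedly, which is the smoothing/interpolation use the abstract describes.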
Automatic inference of indexing rules for MEDLINE
Directory of Open Access Journals (Sweden)
Shooshan Sonya E
2008-11-01
Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI, a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
Progression inference for somatic mutations in cancer
Directory of Open Access Journals (Sweden)
Leif E. Peterson
2017-04-01
Computational methods were employed to determine progression inference of genomic alterations in commonly occurring cancers. Using cross-sectional TCGA data, we computed evolutionary trajectories involving selectivity relationships among pairs of gene-specific genomic alterations such as somatic mutations, deletions, amplifications, downregulation, and upregulation among the top 20 driver genes associated with each cancer. Results indicate that the majority of hierarchies involved TP53, PIK3CA, ERBB2, APC, KRAS, EGFR, IDH1, VHL, etc. Research into the order and accumulation of genomic alterations among cancer driver genes will only increase as the costs of next-generation sequencing subside and personalized/precision medicine incorporates whole-genome scans into the diagnosis and treatment of cancer. Keywords: Oncology, Cancer research, Genetics, Computational biology
Inferring Phylogenetic Networks from Gene Order Data
Directory of Open Access Journals (Sweden)
Alexey Anatolievich Morozov
2013-01-01
Existing algorithms allow us to infer phylogenetic networks from sequences (DNA, protein, or binary), sets of trees, and distance matrices, but there are no methods to build them using gene order data as an input. Here we describe several methods to build split networks from gene order data, perform simulation studies, and use our methods for analyzing and interpreting different real gene order datasets. All proposed methods are based on intermediate data, which can be generated from genome structures under study and used as an input for network construction algorithms. Three intermediates are used: a set of jackknife trees, a distance matrix, and a binary encoding. According to simulations and case studies, the best intermediates are jackknife trees and the distance matrix (when used with the Neighbor-Net algorithm). Binary encoding can also be useful, but only when the methods mentioned above cannot be used.
Supplier Selection Using Fuzzy Inference System
Directory of Open Access Journals (Sweden)
Hamidreza Kadhodazadeh
2014-01-01
Suppliers are one of the most vital parts of the supply chain, whose operation has a significant indirect effect on customer satisfaction. Since customers' expectations of an organization differ, organizations should consider different standards accordingly. There have been many studies in this field using different standards and methods in recent years. The purpose of this study is to propose an approach for choosing a supplier in a food manufacturing company considering cost, quality, service, type of relationship, and organizational structure standards of the supplier. To evaluate suppliers according to the above standards, a fuzzy inference system has been used. Input data of this system include the supplier's score on each standard, obtained by the AHP approach, and the output is the final score of each supplier. Finally, a supplier has been selected that, although not the best in price and quality, achieved good scores across all of the standards.
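A toy fuzzy-scoring sketch in the spirit of the approach above. The membership functions, the two-rule base, and the Sugeno-style singleton defuzzification are illustrative simplifications, not the study's actual system; the inputs stand in for the AHP-derived scores described in the abstract:

```python
# Minimal fuzzy inference sketch for supplier scoring.
# Membership breakpoints and rules below are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def supplier_score(cost, quality):
    """Inputs on a 0-10 scale (AHP-style scores); returns a crisp 0-10 score."""
    # Fuzzify inputs: degree to which each criterion is "good"
    cost_good = tri(cost, 5, 10, 15)
    qual_good = tri(quality, 5, 10, 15)
    # Rule 1: IF cost good AND quality good THEN score high (min = fuzzy AND)
    high = min(cost_good, qual_good)
    # Rule 2: IF cost bad OR quality bad THEN score low (max = fuzzy OR)
    low = max(1 - cost_good, 1 - qual_good)
    # Sugeno-style weighted average over singleton outputs (high = 9, low = 3)
    return (high * 9.0 + low * 3.0) / (high + low)

print(round(supplier_score(8.0, 9.0), 2))
```

Singleton (Sugeno) defuzzification is used here for brevity; a Mamdani system with centroid defuzzification over full output membership sets would follow the same fuzzify-infer-defuzzify structure.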
Gene expression inference with deep learning.
Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui
2016-06-15
Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationships between expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX. Contact: xhx@ics.uci.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
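In miniature, the linear-regression baseline that D-GEX is compared against looks like this. The data are synthetic and only a single landmark gene is used; the real setting regresses thousands of target genes on ∼1000 landmarks, and D-GEX itself replaces this with a multi-layer neural network:

```python
import random

# Toy version of the LR baseline: predict a target gene's expression
# from one landmark gene by ordinary least squares. Synthetic data.

def fit_ols(x, y):
    """Closed-form slope and intercept for y ≈ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

random.seed(0)
landmark = [random.gauss(0.0, 1.0) for _ in range(500)]
# True relationship: slope 2.0, intercept 0.5, small measurement noise
target = [2.0 * v + 0.5 + random.gauss(0.0, 0.1) for v in landmark]

slope, intercept = fit_ols(landmark, target)
mae = sum(abs(slope * v + intercept - t)
          for v, t in zip(landmark, target)) / len(target)
print(round(slope, 2), round(intercept, 2), round(mae, 3))  # slope ≈ 2.0, intercept ≈ 0.5
```

The paper's point is precisely that such a linear fit cannot capture nonlinear gene-gene relationships, which is where the deep model gains its 15.33% improvement.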
Systematic parameter inference in stochastic mesoscopic modeling
Energy Technology Data Exchange (ETDEWEB)
Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
State-Space Inference and Learning with Gaussian Processes
Turner, R; Deisenroth, MP; Rasmussen, CE
2010-01-01
State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. C...
Probabilistic logic networks a comprehensive framework for uncertain inference
Goertzel, Ben; Goertzel, Izabela Freire; Heljakka, Ari
2008-01-01
This comprehensive book describes Probabilistic Logic Networks (PLN), a novel conceptual, mathematical and computational approach to uncertain inference. A broad scope of reasoning types are considered.
Parametric statistical inference basic theory and modern approaches
Zacks, Shelemyahu; Tsokos, C P
1981-01-01
Parametric Statistical Inference: Basic Theory and Modern Approaches presents the developments and modern trends in statistical inference to students who do not have advanced mathematical and statistical preparation. The topics discussed in the book are basic and common to many fields of statistical inference and thus serve as a jumping board for in-depth study. The book is organized into eight chapters. Chapter 1 provides an overview of how the theory of statistical inference is presented in subsequent chapters. Chapter 2 briefly discusses statistical distributions and their properties. Chapt
Multi-Modal Inference in Animacy Perception for Artificial Object
Directory of Open Access Journals (Sweden)
Kohske Takahashi
2011-10-01
Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment would contribute to animacy judgment for them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared with 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving an impression of responding to the flash. The second object responded to bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy compared to the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor for animacy perception in the object and may serve as the basis of multi-modal and cross-modal inference of animacy.
Modeling and inferring cleavage patterns in proliferating epithelia.
Directory of Open Access Journals (Sweden)
Ankit B Patel
2009-06-01
The regulation of cleavage plane orientation is one of the key mechanisms driving epithelial morphogenesis. Still, many aspects of the relationship between local cleavage patterns and tissue-level properties remain poorly understood. Here we develop a topological model that simulates the dynamics of a 2D proliferating epithelium from generation to generation, enabling the exploration of a wide variety of biologically plausible cleavage patterns. We investigate a spectrum of models that incorporate the spatial impact of neighboring cells and the temporal influence of parent cells on the choice of cleavage plane. Our findings show that cleavage patterns generate "signature" equilibrium distributions of polygonal cell shapes. These signatures enable the inference of local cleavage parameters such as neighbor impact, maternal influence, and division symmetry from global observations of the distribution of cell shape. Applying these insights to the proliferating epithelia of five diverse organisms, we find that strong division symmetry and moderate neighbor/maternal influence are required to reproduce the predominance of hexagonal cells and low variability in cell shape seen empirically. Furthermore, we present two distinct cleavage pattern models, one stochastic and one deterministic, that can reproduce the empirical distribution of cell shapes. Although the proliferating epithelia of the five diverse organisms show a highly conserved cell shape distribution, there are multiple plausible cleavage patterns that can generate this distribution, and experimental evidence suggests that indeed plants and fruit flies use distinct division mechanisms.
The origins of probabilistic inference in human infants.
Denison, Stephanie; Xu, Fei
2014-03-01
Reasoning under uncertainty is the bread and butter of everyday life. Many areas of psychology, from cognitive, developmental, social, to clinical, are interested in how individuals make inferences and decisions with incomplete information. The ability to reason under uncertainty necessarily involves probability computations, be they exact calculations or estimations. What are the developmental origins of probabilistic reasoning? Recent work has begun to examine whether infants and toddlers can compute probabilities; however, previous experiments have confounded quantity and probability: in most cases young human learners could have relied on simple comparisons of absolute quantities, as opposed to proportions, to succeed in these tasks. We present four experiments providing evidence that infants younger than 12 months show sensitivity to probabilities based on proportions. Furthermore, infants use this sensitivity to make predictions and fulfill their own desires, providing the first demonstration that even preverbal learners use probabilistic information to navigate the world. These results provide strong evidence for a rich quantitative and statistical reasoning system in infants. Copyright © 2013 Elsevier B.V. All rights reserved.
The evolutionary history of ferns inferred from 25 low-copy nuclear genes.
Rothfels, Carl J; Li, Fay-Wei; Sigel, Erin M; Huiet, Layne; Larsson, Anders; Burge, Dylan O; Ruhsam, Markus; Deyholos, Michael; Soltis, Douglas E; Stewart, C Neal; Shaw, Shane W; Pokorny, Lisa; Chen, Tao; dePamphilis, Claude; DeGironimo, Lisa; Chen, Li; Wei, Xiaofeng; Sun, Xiao; Korall, Petra; Stevenson, Dennis W; Graham, Sean W; Wong, Gane K-S; Pryer, Kathleen M
2015-07-01
• Understanding fern (monilophyte) phylogeny and its evolutionary timescale is critical for broad investigations of the evolution of land plants, and for providing the point of comparison necessary for studying the evolution of the fern sister group, seed plants. Molecular phylogenetic investigations have revolutionized our understanding of fern phylogeny; however, to date, these studies have relied almost exclusively on plastid data. • Here we take a curated phylogenomics approach to infer the first broad fern phylogeny from multiple nuclear loci, by combining broad taxon sampling (73 ferns and 12 outgroup species) with focused character sampling (25 loci comprising 35,877 bp), along with rigorous alignment, orthology inference and model selection. • Our phylogeny corroborates some earlier inferences and provides novel insights; in particular, we find strong support for Equisetales as sister to the rest of ferns, Marattiales as sister to leptosporangiate ferns, and Dennstaedtiaceae as sister to the eupolypods. Our divergence-time analyses reveal that divergences among the extant fern orders all occurred prior to ∼200 MYA. Finally, our species-tree inferences are congruent with analyses of concatenated data, but generally with lower support. Those cases where species-tree support values are higher than expected involve relationships that have been supported by smaller plastid datasets, suggesting that deep coalescence may be reducing support from the concatenated nuclear data. • Our study demonstrates the utility of a curated phylogenomics approach to inferring fern phylogeny, and highlights the need to consider underlying data characteristics, along with data quantity, in phylogenetic studies. © 2015 Botanical Society of America, Inc.
DEFF Research Database (Denmark)
Bataillon, Thomas; Duan, Jinjie; Hvilsom, Christina
2015-01-01
of recent gene flow from Western into Eastern chimpanzees. The striking contrast in X-linked vs. autosomal polymorphism and divergence previously reported in Central chimpanzees is also found in Eastern and Western chimpanzees. We show that the direction of selection (DoS) statistic exhibits a strong non-monotonic relationship with the strength of purifying selection S, making it inappropriate for estimating S. We instead use counts in synonymous vs. non-synonymous frequency classes to infer the distribution of S coefficients acting on non-synonymous mutations in each subspecies. The strength of purifying selection we infer is congruent with the differences in effective sizes of each subspecies: Central chimpanzees are undergoing the strongest purifying selection, followed by Eastern and Western chimpanzees. Coding indels show stronger selection against indels changing the reading frame than observed in human...
Modeling and notation of DEA with strong and weak disposable outputs.
Kuntz, Ludwig; Sülz, Sandra
2011-12-01
Recent articles published in Health Care Management Science have described DEA applications under the assumption of strong and weak disposable outputs. Since we find that these papers include some methodological deficiencies, we aim to illustrate a revised approach.
Properties of the subglacial till inferred from supraglacial lake drainage
Neufeld, J. A.; Hewitt, D.
2017-12-01
The buildup and drainage of supraglacial lakes along the margins of the Greenland ice sheet has been previously observed using detailed GPS campaigns which show that rapid drainage events are often preceded by localised, transient uplift followed by rapid, and much broader scale, uplift and flexure associated with the main drainage event [1,2]. Previous models of these events have focused on fracturing during rapid lake drainage from an impermeable bedrock [3] or a thin subglacial film [4]. We present a new model of supraglacial drainage that couples the water flux from rapid lake drainage events to a simplified model of the pore-pressure in a porous, subglacial till along with a simplified model of the flexure of glacial ice. Using a hybrid mathematical model we explore the internal transitions between turbulent and laminar flow throughout the evolving subglacial cavity and porous till. The model predicts that an initially small water flux may locally increase pore-pressure in the till leading to uplift and a local divergence in the ice velocity that may ultimately be responsible for large hydro-fracturing and full-scale drainage events. Furthermore, we find that during rapid drainage while the presence of a porous, subglacial till is crucial for propagation, the manner of spreading is remarkably insensitive to the properties of the subglacial till. This is in stark contrast to the post-drainage relaxation of the pore pressure, and hence sliding velocity, which is highly sensitive to the permeability, compressibility and thickness of subglacial till. We use our model, and the inferred sensitivity to the properties of the subglacial till after the main drainage event, to infer the properties of the subglacial till. The results suggest that a detailed interpretation of supraglacial lake drainage may provide important insights into the hydrology of the subglacial till along the margins of the Greenland ice sheet, and the coupling of pore pressure in subglacial till
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we reduce the runtime to and the space to where is the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
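The core trick can be sketched as follows: combine the per-variable count distributions pairwise in a balanced tree of convolutions. Naive quadratic convolution is used here for clarity; the paper's asymptotic gains come from using fast (FFT-based) convolution inside the tree, and the PMFs below are toy examples:

```python
# Sketch of a probabilistic convolution tree: the exact distribution of a sum
# of independent count variables, built by pairwise convolution of PMFs.

def convolve(p, q):
    """Exact convolution of two PMFs given as lists indexed by count."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def convolution_tree(pmfs):
    """Combine PMFs pairwise, level by level, like a balanced binary tree."""
    layer = list(pmfs)
    while len(layer) > 1:
        nxt = [convolve(layer[i], layer[i + 1])
               for i in range(0, len(layer) - 1, 2)]
        if len(layer) % 2:  # an odd element carries over to the next level
            nxt.append(layer[-1])
        layer = nxt
    return layer[0]

# Four binary variables, each 1 with probability 0.5: the sum is Binomial(4, 0.5)
coin = [0.5, 0.5]
total = convolution_tree([coin] * 4)
print([round(p, 4) for p in total])  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```

The tree structure is what enables the backward (posterior) pass and the dynamic-programming speedups described above; this sketch shows only the forward pass.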
Making inference from wildlife collision data: inferring predator absence from prey strikes
Directory of Open Access Journals (Sweden)
Peter Caley
2017-02-01
Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to enable valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.
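The flavor of the predictive calculation can be sketched with a toy Poisson model. The proportional fox-per-lagomorph strike ratio below is hypothetical; the paper fits a biologically realistic numerical response rather than this simple form:

```python
import math

# Toy version of the inference logic: if mainland data suggested an expected
# ratio r of predator strikes per prey strike, model predator strikes as
# Poisson and ask how surprising zero predator strikes is after n prey strikes.
# The ratio 0.46 below is hypothetical, not an estimate from the paper.

def prob_zero_strikes(ratio, n_prey_strikes):
    lam = ratio * n_prey_strikes  # expected predator strikes
    return math.exp(-lam)         # Poisson P(X = 0)

p = prob_zero_strikes(0.46, 15)
print(f"{p:.4f}")
```

With this hypothetical ratio the predictive probability of zero fox strikes after 15 lagomorph strikes comes out near the 0.001 figure reported above, which is the sense in which continued absence of fox strikes argues against a widespread fox population.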
Information Science: Science or Social Science?
Sreeramana Aithal; Paul P.K.,; Bhuimali A.
2017-01-01
Collection, selection, processing, management, and dissemination of information are the central roles of Information Science and of similar fields such as Information Studies, Information Management, Library Science, Communication Science, and so on. However, Information Science has characteristics that set it apart from these subjects. It is a highly interdisciplinary science, combining many knowledge clusters and domains. Information Science is a broad disci...
Phase transition transistors based on strongly-correlated materials
Nakano, Masaki
2013-03-01
The field-effect transistor (FET) provides electrical switching functions through linear control of the number of charges at a channel surface by external voltage. Controlling electronic phases of condensed matters in a FET geometry has long been a central issue of physical science. In particular, FET based on a strongly correlated material, namely a "Mott transistor," has attracted considerable interest, because it potentially provides gigantic and diverse electronic responses due to a strong interplay between charge, spin, orbital and lattice. We have investigated electric-field effects on such materials aiming at novel physical phenomena and electronic functions originating from strong correlation effects. Here we demonstrate electrical switching of bulk state of matter over the first-order metal-insulator transition. We fabricated FETs based on VO2 with use of a recently developed electric-double-layer transistor technique, and found that the electrostatically induced carriers at a channel surface drive all preexisting localized carriers of 10²² cm⁻³ even inside a bulk to motion, leading to bulk carrier delocalization beyond the electrostatic screening length. This non-local switching of bulk phases is achieved with just around 1 V, and moreover, a novel non-volatile memory like character emerges in a voltage-sweep measurement. These observations are apparently distinct from those of conventional FETs based on band insulators, capturing the essential feature of collective interactions in strongly correlated materials. This work was done in collaboration with K. Shibuya, D. Okuyama, T. Hatano, S. Ono, M. Kawasaki, Y. Iwasa, and Y. Tokura. This work was supported by the Japan Society for the Promotion of Science (JSPS) through its "Funding Program for World-Leading Innovative R&D on Science and Technology (FIRST Program)."
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2017-05-01
Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects' choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates.
DEFF Research Database (Denmark)
Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.
2015-01-01
abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale...... through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...
Strong Bisimilarity of Simple Process Algebras
DEFF Research Database (Denmark)
Srba, Jirí
2003-01-01
We study bisimilarity and regularity problems of simple process algebras. In particular, we show PSPACE-hardness of the following problems: (i) strong bisimilarity of Basic Parallel Processes (BPP), (ii) strong bisimilarity of Basic Process Algebra (BPA), (iii) strong regularity of BPP, and (iv) strong regularity of BPA. We also demonstrate NL-hardness of strong regularity problems for the normed subclasses of BPP and BPA. Bisimilarity problems of simple process algebras are introduced in a general framework of process rewrite systems, and a uniform description of the new techniques used...
Application of strong phosphoric acid to radiochemistry
International Nuclear Information System (INIS)
Terada, Kikuo
1977-01-01
Not only inorganic and organic compounds but also natural substances, such as accumulations in soil, are completely decomposed and distilled by heating with strong phosphoric acid for 30 to 50 minutes. As applications of strong phosphoric acid to radiochemistry, determination of uranium and boron by use of the solubilization effect of this substance, titration of uranyl ion by use of the iron(II) sulfate contained in this substance, application to tracer experiments, and determination of radioactive ruthenium in environmental samples are reviewed. Strong phosphoric acid is also applied to activation analysis, for example, determination of N in pyrographite with the potassium iodate-strong phosphoric acid method, separation of Os and Ru with the cerium(IV) sulfate-strong phosphoric acid method or the potassium dichromate-strong phosphoric acid method, and analysis of Se, As and Sb in rocks and accumulations with the ammonium bromide, sodium chloride and sodium bromide-strong phosphoric acid method. (Kanao, N.)
International Nuclear Information System (INIS)
Arbo, D.G.; Toekesi, K.; Miraglia, J.E.; FCEN, University of Buenos Aires
2008-01-01
Complete text of publication follows. We presented a theoretical study of the ionization of hydrogen atoms as a result of the interaction with an ultrashort external electric field. Doubly-differential momentum distributions and angular momentum distributions of ejected electrons calculated in the framework of the Coulomb-Volkov and strong field approximations, as well as classical calculations are compared with the exact solution of the time dependent Schroedinger equation. We have shown that the Coulomb-Volkov approximation (CVA) describes the quantum atomic ionization probabilities exactly when the external field is described by a sudden momentum transfer [1]. The velocity distribution of emitted electrons right after ionization by a sudden momentum transfer is given through the strong field approximation (SFA) within both the CVA and CTMC methods. In this case, the classical and quantum time dependent evolutions of an atom subject to a sudden momentum transfer are identical. The difference between the classical and quantum final momentum distributions resides in the time evolution of the escaping electron under the subsequent action of the Coulomb field. Furthermore, classical mechanics is incapable of reproducing the quantum angular momentum distribution due to the improper initial radial distribution used in the CTMC calculations, i.e., the microcanonical ensemble. We find that in the limit of high momentum transfer, based on the SFA, there is a direct relation between the cylindrical radial distribution dP/dρ and the final angular momentum distribution dP/dL. This leads to a closed analytical expression for the partial wave populations (dP/dL)^(SFA-Q) given by dP^(SFA-Q)/dL = 4Z³L²/(Δp)³ K₁(2ZL/Δp) which, together with the prescription L = l + 1/2, reproduces quite accurately the quantum (CVA) results. Considering the inverse problem, knowing the final angular momentum distribution can lead to the inference of the initial probability distribution
Making Inferences in Adulthood: Falling Leaves Mean It's Fall.
Zandi, Taher; Gregory, Monica E.
1988-01-01
Assessed age differences in making inferences from prose. Older adults correctly answered mean of 10 questions related to implicit information and 8 related to explicit information. Young adults answered mean of 7 implicit and 12 explicit information questions. In spite of poorer recall of factual details, older subjects made inferences to greater…
Causal Effect Inference with Deep Latent-Variable Models
Louizos, C; Shalit, U.; Mooij, J.; Sontag, D.; Zemel, R.; Welling, M.
2017-01-01
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of
A Comparative Analysis of Fuzzy Inference Engines in Context of ...
African Journals Online (AJOL)
Fuzzy inference engine has found successful applications in a wide variety of fields, such as automatic control, data classification, decision analysis, expert engines, time series prediction, robotics, pattern recognition, etc. This paper presents a comparative analysis of three fuzzy inference engines, max-product, max-min ...
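The max-min and max-product engines compared in this record differ only in how rule firing strengths are combined before aggregation; a minimal sketch over a hypothetical two-input rule base (the rules and membership degrees below are invented for illustration):

```python
from math import prod

def fire(rules, mode="max-min"):
    """Activate output labels of a toy fuzzy rule base.
    Each rule is (antecedent membership degrees, output label).
    Firing strength = min of the antecedents (max-min) or their
    product (max-product); activation per label = max over its rules."""
    combine = min if mode == "max-min" else prod
    activation = {}
    for antecedents, label in rules:
        strength = combine(antecedents)
        activation[label] = max(activation.get(label, 0.0), strength)
    return activation

# Hypothetical rules: two vote for "fast", one for "slow".
rules = [((0.8, 0.6), "fast"), ((0.4, 0.9), "fast"), ((0.3, 0.2), "slow")]
min_out = fire(rules, "max-min")       # per-rule strengths 0.6, 0.4, 0.2
prod_out = fire(rules, "max-product")  # per-rule strengths 0.48, 0.36, 0.06
```

Max-product always yields strengths no larger than max-min for the same rule, which is one source of the behavioural differences such comparative studies measure.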
General Purpose Probabilistic Programming Platform with Effective Stochastic Inference
2018-04-01
[Report front matter residue: references, list of acronyms, and list of figures, including "The problem of inferring curves from data while simultaneously choosing the..." , "(bottom path) as the inverse problem to computer graphics (top path)", and "An illustration of generative probabilistic graphics for 3D".] Building these systems involves simultaneously developing mathematical models, inference algorithms and optimized software implementations. Small changes
A Comparative Analysis of Fuzzy Inference Engines in Context of ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
Fuzzy inference engine is an important part of reasoning systems capable of extracting correct conclusions from ... is known as the inference, or rule definition portion, of fuzzy .... minimal set of decision rules based on input- ... The study uses Mamdani FIS model and Sugeno FIS ... control of induction motor drive. [18] study.
Deontic Introduction: A Theory of Inference from Is to Ought
Elqayam, Shira; Thompson, Valerie A.; Wilkinson, Meredith R.; Evans, Jonathan St. B. T.; Over, David E.
2015-01-01
Humans have a unique ability to generate novel norms. Faced with the knowledge that there are hungry children in Somalia, we easily and naturally infer that we ought to donate to famine relief charities. Although a contentious and lively issue in metaethics, such inference from "is" to "ought" has not been systematically…
Causal inference in survival analysis using pseudo-observations
DEFF Research Database (Denmark)
Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T
2017-01-01
Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs ...
Design, science and naturalism
Deming, David
2008-09-01
The Design Argument is the proposition that the presence of order in the universe is evidence for the existence of God. The Argument dates at least to the pre-Socratic Greek philosophers, and is largely based on analogical reasoning. Following the appearance of Aquinas' Summa Theologica in the 13th century, the Christian Church in Europe embraced a Natural Theology based on observation and reason that allowed it to dominate the entire world of knowledge. Science in turn advanced itself by demonstrating that it could be of service to theology, the recognized queen of the sciences. During the heyday of British Natural Theology in the 17th and 18th centuries, the watchmaker, shipbuilder, and architect analogies were invoked reflexively by philosophers, theologians, and scientists. The Design Argument was not systematically and analytically criticized until David Hume wrote Dialogues Concerning Natural Religion in the 1750s. After Darwin published Origin of Species in 1859, Design withered on the vine. But in recent years, the Argument has been resurrected under the appellation "intelligent design," and has been the subject of political and legal controversy in the United States. Design advocates have argued that intelligent design can be formulated as a scientific hypothesis, that new scientific discoveries validate a design inference, and that naturalism must be removed as a methodological requirement in science. If science is defined by a model of concentric epistemological zonation, design cannot be construed as a scientific hypothesis because it is inconsistent with the core aspects of scientific methodology: naturalism, uniformity, induction, and efficient causation. An analytical examination of claims by design advocates finds no evidence of any type to support either scientific or philosophical claims that design can be unambiguously inferred from nature. The apparent irreducible complexity of biological mechanisms may be explained by exaptation or scaffolding. The argument
Science writing in the real world
Directory of Open Access Journals (Sweden)
Mike Mentis
2014-02-01
The objective of this contribution is to consider guides to technical writing. Since the professional writes what he does and does what he writes, guides to how you execute the one relate to how you perform the other, so this article is about more than just writing. While there is need for idiosyncrasy and individualism, there are some rules. Documents must have an explicit purpose stated at the outset. By their nature, documents in the applied sciences and business address real-world problems, but elsewhere activity may be laissez faire for which the cost-effectiveness in yielding innovations is contestable. A hallmark of written science and technology is that every statement is capable of being tested and capable of being shown to be wrong, and that methods yield repeatable results. Caution should be observed in requiring authoritative referencing for every notion, partly because of the unsatisfying infinite regress in searching for ultimate sources, and partly to avoid squashing innovation. It is not only the content of messages that matters, but reliability too. Probability theory must be built into design to assure that strong inference can be drawn from outcomes. Research, business and infrastructure projects must substitute the frequent optimistic ‘everything goes according to plan’ (EGAP) with a more realistic ‘most likely development’ (MLD) and the risks of even that not happening. A cornerstone of science and technology is parsimony. No description, experiment, explanation, hypothesis, idea, instrument, machine, method, model, prediction, statement, technique, test or theory should be more elaborate than necessary to satisfy its purpose. Antifragility – the capacity to survive and benefit from shocks – must be designed into project and organizational structure and function by manipulating such factors as complexity and interdependency to evade failure in a turbulent and unpredictable world. The role of writing is to integrate
Bayesian inference of radiation belt loss timescales.
Camporeale, E.; Chandorkar, M.
2017-12-01
Electron fluxes in the Earth's radiation belts are routinely studied using the classical quasi-linear radial diffusion model. Although this simplified linear equation has proven to be an indispensable tool in understanding the dynamics of the radiation belt, it requires specification of quantities such as the diffusion coefficient and electron loss timescales that are never directly measured. Researchers have so far assumed a priori parameterisations for radiation belt quantities and derived the best fit using satellite data. The state of the art in this domain lacks a coherent formulation of this problem in a probabilistic framework. We present some recent progress that we have made in performing Bayesian inference of radial diffusion parameters. We achieve this by making extensive use of the theory connecting Gaussian Processes and linear partial differential equations, and performing Markov Chain Monte Carlo sampling of radial diffusion parameters. These results are important for understanding the role and the propagation of uncertainties in radiation belt simulations and, eventually, for providing a probabilistic forecast of energetic electron fluxes in a Space Weather context.
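The sampling step underlying such an analysis can be sketched with a generic random-walk Metropolis kernel. This is the standard MCMC building block only; the paper's actual machinery couples Gaussian processes to the radial diffusion equation, which is not reproduced here, and the target below is a stand-in:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=1):
    """Random-walk Metropolis sampler for a scalar parameter."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        lp_new = log_post(proposal)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < lp_new - lp:
            x, lp = proposal, lp_new
        samples.append(x)
    return samples

# Stand-in log-posterior: Gaussian around a "loss timescale" of 2.0.
samples = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, 0.0, 5000)
```

After a burn-in period the chain's samples approximate draws from the posterior, so posterior means and credible intervals come from simple summaries of the sample list.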
Scalable inference for stochastic block models
Peng, Chengbin
2017-12-08
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
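The model being fitted at scale in this record can be stated in a few lines; below is the Bernoulli stochastic-block-model likelihood for a fixed partition (the record's contribution is the scalable multi-stage maximum-likelihood and message-passing inference, which this sketch does not attempt):

```python
from itertools import combinations
from math import log

def sbm_log_likelihood(edges, labels, p_in, p_out):
    """Log-likelihood of a node partition under a two-parameter
    stochastic block model: within-block pairs link with probability
    p_in, cross-block pairs with p_out. `labels` maps node -> block."""
    edge_set = {frozenset(e) for e in edges}
    ll = 0.0
    for u, v in combinations(sorted(labels), 2):
        p = p_in if labels[u] == labels[v] else p_out
        ll += log(p) if frozenset((u, v)) in edge_set else log(1.0 - p)
    return ll

# A triangle plus a separate pair: the matching partition scores higher.
edges = [(0, 1), (1, 2), (0, 2), (3, 4)]
good = sbm_log_likelihood(edges, {0: "a", 1: "a", 2: "a", 3: "b", 4: "b"}, 0.9, 0.05)
bad = sbm_log_likelihood(edges, {0: "a", 1: "b", 2: "a", 3: "b", 4: "a"}, 0.9, 0.05)
```

Maximum-likelihood community recovery searches over partitions (and the p parameters) for the assignment that maximizes exactly this quantity; the quadratic pair loop here is what the paper's linear-time algorithm avoids.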
Probabilistic learning and inference in schizophrenia
Averbeck, Bruno B.; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S.
2010-01-01
Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behavior often called jumping to conclusions. The underlying basis for this behaviour remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping to conclusions behaviour, and a stochastic sequence learning task, with a similar decision theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses, while receiving noisy feedback on their choices. We fit a Bayesian decision making model to the sequence task and compared model parameters to the choice behavior in the beads task in both patients and healthy subjects. We found that patients did show a jumping to conclusions style; and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the likelihood of patients selecting early because they have a low threshold for making decisions, and that they make choices on the basis of relatively little evidence. PMID:20810252
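The beads task mentioned here has a standard Bayesian treatment that makes the "low decision threshold" interpretation concrete; a sketch with the conventional 85/15 jars (illustrative parameters, not the model fitted in the study):

```python
def jar_posterior(draws, p_majority=0.85, prior=0.5):
    """Posterior probability that the beads come from the mostly-red
    jar, updated bead by bead ('r' = red, 'b' = blue)."""
    post = prior
    for bead in draws:
        like_red = p_majority if bead == "r" else 1.0 - p_majority
        like_blue = 1.0 - like_red
        post = post * like_red / (post * like_red + (1.0 - post) * like_blue)
    return post

# A liberal threshold of 0.8 is crossed after a single red bead,
# while a stricter 0.97 requires three reds in a row:
one, two, three = jar_posterior("r"), jar_posterior("rr"), jar_posterior("rrr")
```

A "jumping to conclusions" style then corresponds to committing as soon as the posterior clears a low threshold, i.e. after very little evidence.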
Aesthetic quality inference for online fashion shopping
Chen, Ming; Allebach, Jan
2014-03-01
On-line fashion communities in which participants post photos of personal fashion items for viewing and possible purchase by others are becoming increasingly popular. Generally, these photos are taken by individuals who have no training in photography with low-cost mobile phone cameras. It is desired that photos of the products have high aesthetic quality to improve the users' online shopping experience. In this work, we design features for aesthetic quality inference in the context of online fashion shopping. Psychophysical experiments are conducted to construct a database of the photos' aesthetic evaluation, specifically for photos from an online fashion shopping website. We then extract both generic low-level features and high-level image attributes to represent the aesthetic quality. Using a support vector machine framework, we train a predictor of the aesthetic quality rating based on the feature vector. Experimental results validate the efficacy of our approach. Metadata such as the product type are also used to further improve the result.
Information-Theoretic Inference of Common Ancestors
Directory of Open Access Journals (Sweden)
Bastian Steudel
2015-04-01
A directed acyclic graph (DAG) partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is, if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version.
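The quantity driving the result above is ordinary mutual information; a minimal computation from a joint distribution (the stochastic version only; the record's proof also covers an algorithmic notion):

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution {(x, y): probability}.
    Per the inequality discussed above, large mutual information among
    observed variables forces a common ancestor in any DAG that is
    consistent with the observations."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

copied = {(0, 0): 0.5, (1, 1): 0.5}            # perfectly correlated bits
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
```

Perfectly correlated bits share one full bit of information, while independent bits share none; it is the former situation that, in Reichenbach's terms, demands a common cause.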
Probabilistic learning and inference in schizophrenia.
Averbeck, Bruno B; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S
2011-04-01
Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behavior often called jumping to conclusions. The underlying basis for this behavior remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping to conclusions behavior, and a stochastic sequence learning task, with a similar decision theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses, while receiving noisy feedback on their choices. We fit a Bayesian decision making model to the sequence task and compared model parameters to the choice behavior in the beads task in both patients and healthy subjects. We found that patients did show a jumping to conclusions style; and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the likelihood of patients selecting early because they have a low threshold for making decisions, and that they make choices on the basis of relatively little evidence. Published by Elsevier B.V.
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry-and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
Inferring gene networks from discrete expression data
Zhang, L.
2013-07-18
The modeling of gene networks from transcriptional expression data is an important tool in biomedical research to reveal signaling pathways and to identify treatment targets. Current gene network modeling is primarily based on the use of Gaussian graphical models applied to continuous data, which give a closed-form marginal likelihood. In this paper, we extend network modeling to discrete data, specifically data from serial analysis of gene expression, and RNA-sequencing experiments, both of which generate counts of mRNA transcripts in cell samples. We propose a generalized linear model to fit the discrete gene expression data and assume that the log ratios of the mean expression levels follow a Gaussian distribution. We restrict the gene network structures to decomposable graphs and derive the graphs by selecting the covariance matrix of the Gaussian distribution with the hyper-inverse Wishart priors. Furthermore, we incorporate prior network models based on gene ontology information, which avails existing biological information on the genes of interest. We conduct simulation studies to examine the performance of our discrete graphical model and apply the method to two real datasets for gene network inference. © The Author 2013. Published by Oxford University Press. All rights reserved.
Bayesian Inference of a Multivariate Regression Model
Directory of Open Access Journals (Sweden)
Marick S. Sinay
2014-01-01
We explore Bayesian inference of a multivariate linear regression model with use of a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior, multivariate normal distribution for the regression coefficients and inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability, to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.
Logical inference techniques for loop parallelization
Oancea, Cosmin E.; Rauchwerger, Lawrence
2012-01-01
This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
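The independence condition S = Ø that the framework above tries to prove statically (over USR set expressions), falling back to run-time evaluation when needed, can be sketched as a dynamic test over per-iteration read/write index sets. This is a toy stand-in for the USR machinery, not the paper's implementation:

```python
def loop_is_independent(n, writes, reads):
    """True if no iteration writes an array cell that a different
    iteration reads or writes, i.e. the cross-iteration dependence
    set S is empty and the loop may safely run in parallel.
    `writes(i)`/`reads(i)` return the cells touched by iteration i."""
    w = [set(writes(i)) for i in range(n)]
    r = [set(reads(i)) for i in range(n)]
    return all(
        not (w[i] & w[j]) and not (w[i] & r[j])
        for i in range(n) for j in range(n) if i != j
    )

# a[i] = a[i] + 1 is independent; a[i] = a[i-1] carries a dependence:
ok = loop_is_independent(8, lambda i: [i], lambda i: [i])
bad = loop_is_independent(8, lambda i: [i], lambda i: [i - 1])
```

The point of the paper's factorization is precisely to avoid paying this quadratic dynamic cost whenever a cheaper sufficient condition can be discharged statically.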
BAYESIAN INFERENCE OF CMB GRAVITATIONAL LENSING
Energy Technology Data Exchange (ETDEWEB)
Anderes, Ethan [Department of Statistics, University of California, Davis, CA 95616 (United States); Wandelt, Benjamin D.; Lavaux, Guilhem [Sorbonne Universités, UPMC Univ Paris 06 and CNRS, UMR7095, Institut d’Astrophysique de Paris, F-75014, Paris (France)
2015-08-01
The Planck satellite, along with several ground-based telescopes, has mapped the cosmic microwave background (CMB) at sufficient resolution and signal-to-noise so as to allow a detection of the subtle distortions due to the gravitational influence of the intervening matter distribution. A natural modeling approach is to write a Bayesian hierarchical model for the lensed CMB in terms of the unlensed CMB and the lensing potential. So far there has been no feasible algorithm for inferring the posterior distribution of the lensing potential from the lensed CMB map. We propose a solution that allows efficient Markov Chain Monte Carlo sampling from the joint posterior of the lensing potential and the unlensed CMB map using the Hamiltonian Monte Carlo technique. The main conceptual step in the solution is a re-parameterization of CMB lensing in terms of the lensed CMB and the “inverse lensing” potential. We demonstrate a fast implementation on simulated data, including noise and a sky cut, that uses a further acceleration based on a very mild approximation of the inverse lensing potential. We find that the resulting Markov Chain has short correlation lengths and excellent convergence properties, making it promising for applications to high-resolution CMB data sets in the future.
Virtual reality and consciousness inference in dreaming.
Hobson, J Allan; Hong, Charles C-H; Friston, Karl J
2014-01-01
This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that - through experience-dependent plasticity - becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep - and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain's generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis - evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research.
Inferring human mobility using communication patterns
Palchykov, Vasyl; Mitrović, Marija; Jo, Hang-Hyun; Saramäki, Jari; Pan, Raj Kumar
2014-08-01
Understanding the patterns of mobility of individuals is crucial for a number of reasons, from city planning to disaster management. There are two common ways of quantifying the amount of travel between locations: by direct observations that often involve privacy issues, e.g., tracking mobile phone locations, or by estimations from models. Typically, such models build on accurate knowledge of the population size at each location. However, when this information is not readily available, their applicability is rather limited. As mobile phones are ubiquitous, our aim is to investigate if mobility patterns can be inferred from aggregated mobile phone call data alone. Using data released by Orange for Ivory Coast, we show that human mobility is well predicted by a simple model based on the frequency of mobile phone calls between two locations and their geographical distance. We argue that the strength of the model comes from directly incorporating the social dimension of mobility. Furthermore, as only aggregated call data is required, the model helps to avoid potential privacy problems.
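The abstract does not give the model's exact functional form; as a loose illustration of the idea (the function name, the gravity-style form, and the distance exponent are all assumptions, not the paper's fitted model), a call-based mobility predictor might look like:

```python
def predicted_trips(calls, distance, alpha=1.0):
    """Hypothetical sketch: predicted travel between two locations
    scales with mobile-phone call volume between them and decays
    with their geographical distance (exponent alpha assumed)."""
    if distance <= 0:
        raise ValueError("distance must be positive")
    return calls / distance ** alpha

# More calls, or a shorter distance, imply more predicted travel.
flow = predicted_trips(calls=1000, distance=10.0)
```

The key point of the paper, that aggregated call counts alone carry the social dimension of mobility, is what such a sketch tries to capture.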
Inference-based procedural modeling of solids
Biggers, Keith
2011-11-01
As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.
Multiple sequence alignment accuracy and phylogenetic inference.
Ogden, T Heath; Rosenberg, Michael S
2006-04-01
Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian methods, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increases, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.
Phylogenetic inference with weighted codon evolutionary distances.
Criscuolo, Alexis; Michel, Christian J
2009-04-01
We develop a new approach to estimate a matrix of pairwise evolutionary distances from a codon-based alignment based on a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. Then these three distance matrices are weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a unique distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices that have a significantly better treelikeness compared to those obtained by standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to induce a fast evolutionary rate. Simulation results show that fast distance-based tree reconstruction algorithms on distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not more accurate than, those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed and shows that our codon evolutionary distances allow building a phylogenetic tree which is similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and also significantly improved compared to standard nucleotide evolutionary distance estimates.
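The weighting-and-averaging step can be sketched as follows (a minimal illustration; the exact formula mapping estimated per-position rates to weights is an assumption here):

```python
import numpy as np

def combine_codon_distances(d1, d2, d3, rates):
    """Sketch of the averaging step: normalize per-position weights
    derived from the estimated evolutionary rates and combine the
    three codon-position distance matrices into one."""
    weights = np.asarray(rates, dtype=float)
    weights = weights / weights.sum()          # weights sum to 1
    mats = [np.asarray(d, dtype=float) for d in (d1, d2, d3)]
    return sum(w * m for w, m in zip(weights, mats))
```

Setting one weight to zero reproduces the paper's alternative weighting that discards a noisy position (e.g., the fast-evolving third position).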
Primate diversification inferred from phylogenies and fossils.
Herrera, James P
2017-12-01
Biodiversity arises from the balance between speciation and extinction. Fossils record the origins and disappearance of organisms, and the branching patterns of molecular phylogenies allow estimation of speciation and extinction rates, but the patterns of diversification are frequently incongruent between these two data sources. I tested two hypotheses about the diversification of primates based on ∼600 fossil species and 90% complete phylogenies of living species: (1) diversification rates increased through time; (2) a significant extinction event occurred in the Oligocene. Consistent with the first hypothesis, analyses of phylogenies supported increasing speciation rates and negligible extinction rates. In contrast, fossils showed that while speciation rates increased, speciation and extinction rates tended to be nearly equal, resulting in zero net diversification. Partially supporting the second hypothesis, the fossil data recorded a clear pattern of diversity decline in the Oligocene, although diversification rates were near zero. The phylogeny supported increased extinction ∼34 Ma, but also elevated extinction ∼10 Ma, coinciding with diversity declines in some fossil clades. The results demonstrated that estimates of speciation and extinction ignoring fossils are insufficient to infer diversification and information on extinct lineages should be incorporated into phylogenetic analyses. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Logical inference techniques for loop parallelization
Oancea, Cosmin E.
2012-01-01
This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
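The full USR/predicate machinery is beyond the scope of an abstract, but the core independence condition S = Ø reduces, at run time, to checking that the cross-iteration read and write index sets of an array are disjoint. A toy stand-in for that final check (all names are hypothetical, and real implementations work on symbolic set expressions rather than explicit index lists):

```python
def indices_independent(writes, reads):
    """Toy version of the S = Ø test: a loop's iterations are
    independent with respect to one array only if no index is both
    written and read across iterations."""
    return set(writes).isdisjoint(reads)

# Writes to a[0..2] and reads from a[5..6] do not overlap: parallel OK.
ok = indices_independent([0, 1, 2], [5, 6])
# Index 2 is both written and read: a cross-iteration dependence.
bad = indices_independent([0, 1, 2], [2, 3])
```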
Inferring Molecular Processes Heterogeneity from Transcriptional Data.
Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna
2017-01-01
RNA microarrays and RNA-seq are nowadays standard technologies to study the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportions of subpopulations of cells that manifest various functional behaviour in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability under specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate the most important networks from transcriptomic data and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs.
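The paper's actual algorithm is not specified in the abstract; a generic deconvolution sketch of the same idea (the setup is assumed: known subpopulation expression signatures, bulk profile modeled as their mixture, proportions recovered by least squares) is:

```python
import numpy as np

def subpopulation_proportions(signatures, bulk):
    """Generic mixture-deconvolution sketch (not the paper's method):
    solve signatures @ w ≈ bulk for weights w, then clip to be
    nonnegative and renormalize so the proportions sum to 1."""
    S = np.asarray(signatures, dtype=float)   # genes x subpopulations
    b = np.asarray(bulk, dtype=float)         # bulk expression per gene
    w, *_ = np.linalg.lstsq(S, b, rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()
```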
Quantum-Like Representation of Non-Bayesian Inference
Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.
2013-01-01
This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are some experimental studies, and these statistical data cannot be described by classical probability theory. The process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making were proposed. Our previous work represented the classical Bayesian inference in a natural way in the framework of quantum mechanics. By using this representation, in this paper, we try to discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of usual Bayesian inference.
Statistical causal inferences and their applications in public health research
Wu, Pan; Chen, Ding-Geng
2016-01-01
This book compiles and presents new developments in statistical causal inference. The accompanying data and computer programs are publicly available so readers may replicate the model development and data analysis presented in each chapter. In this way, methodology is taught so that readers may implement it directly. The book brings together experts engaged in causal inference research to present and discuss recent issues in causal inference methodological development. This is also a timely look at causal inference applied to scenarios that range from clinical trials to mediation and public health research more broadly. In an academic setting, this book will serve as a reference and guide to a course in causal inference at the graduate level (Master's or Doctorate). It is particularly relevant for students pursuing degrees in Statistics, Biostatistics and Computational Biology. Researchers and data analysts in public health and biomedical research will also find this book to be an important reference.
Human Inferences about Sequences: A Minimal Transition Probability Model.
Directory of Open Access Journals (Sweden)
Florent Meyniel
2016-12-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
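A minimal sketch of such a "forgetful" transition-probability learner for a binary sequence (the exponential-decay counting below stands in for the paper's Bayesian inference scheme; the decay constant plays the role of its single free parameter):

```python
def transition_estimates(sequence, decay=0.9):
    """Sketch: keep exponentially decaying counts of each observed
    transition in a binary sequence and convert them to local
    transition-probability estimates (weak uniform prior assumed)."""
    counts = {(a, b): 1.0 for a in (0, 1) for b in (0, 1)}
    for prev, nxt in zip(sequence, sequence[1:]):
        for key in counts:
            counts[key] *= decay          # forget old evidence
        counts[(prev, nxt)] += 1.0        # record the new transition
    probs = {}
    for a in (0, 1):
        total = counts[(a, 0)] + counts[(a, 1)]
        for b in (0, 1):
            probs[(a, b)] = counts[(a, b)] / total
    return probs
```

On a strictly alternating sequence, such a learner assigns high probability to alternations, illustrating the repetition/alternation asymmetry the model is meant to explain.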
Dowd, Jason E; Thompson, Robert J; Schiff, Leslie A; Reynolds, Julie A
2018-01-01
Developing critical-thinking and scientific-reasoning skills is a core learning objective of science education, but little empirical evidence exists regarding the interrelationships between these constructs. Writing effectively fosters students' development of these constructs, and it offers a unique window into studying how they relate. In this study of undergraduate thesis writing in biology at two universities, we examine how scientific reasoning exhibited in writing (assessed using the Biology Thesis Assessment Protocol) relates to general and specific critical-thinking skills (assessed using the California Critical Thinking Skills Test), and we consider implications for instruction. We find that scientific reasoning in writing is strongly related to inference, while other aspects of science reasoning that emerge in writing (epistemological considerations, writing conventions, etc.) are not significantly related to critical-thinking skills. Science reasoning in writing is not merely a proxy for critical thinking. In linking features of students' writing to their critical-thinking skills, this study 1) provides a bridge to prior work suggesting that engagement in science writing enhances critical thinking and 2) serves as a foundational step for subsequently determining whether instruction focused explicitly on developing critical-thinking skills (particularly inference) can actually improve students' scientific reasoning in their writing. © 2018 J. E. Dowd et al. CBE—Life Sciences Education © 2018 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
Fortunato, Santo; Bergstrom, Carl T; Börner, Katy; Evans, James A; Helbing, Dirk; Milojević, Staša; Petersen, Alexander M; Radicchi, Filippo; Sinatra, Roberta; Uzzi, Brian; Vespignani, Alessandro; Waltman, Ludo; Wang, Dashun; Barabási, Albert-László
2018-03-02
Identifying fundamental drivers of science and developing predictive models to capture its evolution are instrumental for the design of policies that can improve the scientific enterprise-for example, through enhanced career paths for scientists, better performance evaluation for organizations hosting research, discovery of novel effective funding vehicles, and even identification of promising regions along the scientific frontier. The science of science uses large-scale data on the production of science to search for universal and domain-specific patterns. Here, we review recent developments in this transdisciplinary field. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset
2017-01-06
In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow using four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engines, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there are a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims in the research manuscript. Even though protein
Ultimate and proximate explanations of strong reciprocity.
Vromen, Jack
2017-08-23
Strong reciprocity (SR) has recently been subject to heated debate. In this debate, the "West camp" (West et al. in Evol Hum Behav 32(4):231-262, 2011), which is critical of the case for SR, and the "Laland camp" (Laland et al. in Science, 334(6062):1512-1516, 2011, Biol Philos 28(5):719-745, 2013), which is sympathetic to the case of SR, seem to take diametrically opposed positions. The West camp criticizes advocates of SR for conflating proximate and ultimate causation. SR is said to be a proximate mechanism that is put forward by its advocates as an ultimate explanation of human cooperation. The West camp thus accuses advocates of SR for not heeding Mayr's original distinction between ultimate and proximate causation. The Laland camp praises advocates of SR for revising Mayr's distinction. Advocates of SR are said to replace Mayr's uni-directional view on the relation between ultimate and proximate causes by the bi-directional one of reciprocal causation. The paper argues that both the West camp and the Laland camp misrepresent what advocates of SR are up to. The West camp is right that SR is a proximate cause of human cooperation. But rather than putting forward SR as an ultimate explanation, as the West camp argues, advocates of SR believe that SR itself is in need of ultimate explanation. Advocates of SR tend to take gene-culture co-evolutionary theory as the correct meta-theoretical framework for advancing ultimate explanations of SR. Appearances notwithstanding, gene-culture coevolutionary theory does not imply Laland et al.'s notion of reciprocal causation. "Reciprocal causation" suggests that proximate and ultimate causes interact simultaneously, while advocates of SR assume that they interact sequentially. I end by arguing that the best way to understand the debate is by disambiguating Mayr's ultimate-proximate distinction. I propose to reserve "ultimate" and "proximate" for different sorts of explanations, and to use other terms for distinguishing
Explosion source strong ground motions in the Mississippi embayment
Langston, C.A.; Bodin, P.; Powell, C.; Withers, M.; Horton, S.; Mooney, W.
2006-01-01
Two strong-motion arrays were deployed for the October 2002 Embayment Seismic Excitation Experiment to study the spatial variation of strong ground motions in the deep, unconsolidated sediments of the Mississippi embayment because there are no comparable strong-motion data from natural earthquakes in the area. Each linear array consisted of eight three-component K2 accelerographs spaced 15 m apart, situated 1.2 and 2.5 km from 2268-kg and 1134-kg borehole explosion sources, respectively. The array data show distinct body-wave and surface-wave arrivals that propagate within the thick, unconsolidated sedimentary column, the high-velocity basement rocks, and small-scale structure near the surface. Time-domain coherence of body-wave and surface-wave arrivals is computed for acceleration, velocity, and displacement time windows. Coherence is high for relatively low-frequency vertical-component Rayleigh waves and high-frequency P waves propagating across the array. Prominent high-frequency PS conversions seen on radial components, a proxy for the direct S wave from earthquake sources, lose coherence quickly over the 105-m length of the array. Transverse component signals are least coherent for any ground motion and appear to be highly scattered. Horizontal phase velocity is computed by using the ratio of particle velocity to estimates of the strain based on a plane-wave-propagation model. The resulting time-dependent phase-velocity map is a useful way to infer the propagation mechanisms of individual seismic phases and time windows of three-component waveforms. Displacement gradient analysis is a complementary technique for processing general spatial-array data to obtain horizontal slowness information.
Directory of Open Access Journals (Sweden)
Greene Anthony J
2007-09-01
Recent advances have led to an understanding that the hippocampus is involved more broadly than explicit or declarative memory alone. Tasks which involve the acquisition of complex associations involve the hippocampus whether the learning is explicit or implicit. One hippocampal-dependent implicit task is transitive inference (TI). Recently it was suggested that implicit transitive inference does not depend upon the hippocampus (Frank, M. J., O'Reilly, R. C., & Curran, T. 2006. When memory fails, intuition reigns: midazolam enhances implicit inference in humans. Psychological Science, 17, 700–707). The authors demonstrated that intravenous midazolam, which is thought to inactivate the hippocampus, may enhance TI performance. Three critical assumptions are required but not met: (1) that deactivations of other regions could not account for the effect; (2) that intravenous midazolam does indeed deactivate the hippocampus; and (3) that midazolam influences explicit but not implicit memory. Each of these assumptions is seriously flawed. Consequently, the suggestion that implicit TI does not depend upon the hippocampus is unfounded.
Causal inference in nonlinear systems: Granger causality versus time-delayed mutual information
Li, Songting; Xiao, Yanyang; Zhou, Douglas; Cai, David
2018-05-01
The Granger causality (GC) analysis has been extensively applied to infer causal interactions in dynamical systems arising from economy and finance, physics, bioinformatics, neuroscience, social science, and many other fields. In the presence of potential nonlinearity in these systems, the validity of the GC analysis in general is questionable. To illustrate this, here we first construct minimal nonlinear systems and show that the GC analysis fails to infer causal relations in these systems—it gives rise to all types of incorrect causal directions. In contrast, we show that the time-delayed mutual information (TDMI) analysis is able to successfully identify the direction of interactions underlying these nonlinear systems. We then apply both methods to neuroscience data collected from experiments and demonstrate that the TDMI analysis but not the GC analysis can identify the direction of interactions among neuronal signals. Our work exemplifies inference hazards in the GC analysis in nonlinear systems and suggests that the TDMI analysis can be an appropriate tool in such a case.
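For discrete-valued series, TDMI can be estimated directly from empirical joint and marginal frequencies; a minimal sketch follows (real analyses bin continuous signals and scan a range of delays, and all names here are illustrative):

```python
import math
from collections import Counter

def tdmi(x, y, delay):
    """Histogram estimate of time-delayed mutual information
    I(x(t); y(t + delay)) for discrete-valued sequences."""
    pairs = list(zip(x[:len(x) - delay] if delay else x, y[delay:]))
    n = len(pairs)
    pxy = Counter(pairs)                      # joint counts
    px = Counter(a for a, _ in pairs)         # marginal counts of x
    py = Counter(b for _, b in pairs)         # marginal counts of y
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab * n * n / (px[a] * py[b]))
    return mi
```

A perfectly coupled pair gives the entropy of the signal, while a constant (uninformative) partner gives zero, which is the asymmetry-free behavior the authors exploit to read off interaction directions.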
Strong Stationary Duality for Diffusion Processes
Fill, James Allen; Lyzinski, Vince
2014-01-01
We develop the theory of strong stationary duality for diffusion processes on compact intervals. We analytically derive the generator and boundary behavior of the dual process and recover a central tenet of the classical Markov chain theory in the diffusion setting by linking the separation distance in the primal diffusion to the absorption time in the dual diffusion. We also exhibit our strong stationary dual as the natural limiting process of the strong stationary dual sequence of a well ch...
Strongly correlating liquids and their isomorphs
Pedersen, Ulf R.; Gnan, Nicoletta; Bailey, Nicholas P.; Schröder, Thomas B.; Dyre, Jeppe C.
2010-01-01
This paper summarizes the properties of strongly correlating liquids, i.e., liquids with strong correlations between virial and potential energy equilibrium fluctuations at constant volume. We proceed to focus on the experimental predictions for strongly correlating glass-forming liquids. These predictions include i) density scaling, ii) isochronal superposition, iii) that there is a single function from which all frequency-dependent viscoelastic response functions may be calculated, iv) that...
Atom collisions in a strong electromagnetic field
International Nuclear Information System (INIS)
Smirnov, V.S.; Chaplik, A.V.
1976-01-01
It is shown that the long-range part of interatomic interaction is considerably altered in a strong electromagnetic field. Instead of the van der Waals law, the potential asymptote can best be described by a dipole-dipole R^-3 law. Impact broadening and the line shift in a strong nonresonant field are calculated. The possibility of bound states of two atoms being formed in a strong light field is discussed.
Vertically Integrated Seismological Analysis II : Inference
Arora, N. S.; Russell, S.; Sudderth, E.
2009-12-01
Methods for automatically associating detected waveform features with hypothesized seismic events, and localizing those events, are a critical component of efforts to verify the Comprehensive Test Ban Treaty (CTBT). As outlined in our companion abstract, we have developed a hierarchical model which views detection, association, and localization as an integrated probabilistic inference problem. In this abstract, we provide more details on the Markov chain Monte Carlo (MCMC) methods used to solve this inference task. MCMC generates samples from a posterior distribution π(x) over possible worlds x by defining a Markov chain whose states are the worlds x, and whose stationary distribution is π(x). In the Metropolis-Hastings (M-H) method, transitions in the Markov chain are constructed in two steps. First, given the current state x, a candidate next state x′ is generated from a proposal distribution q(x′ | x), which may be (more or less) arbitrary. Second, the transition to x′ is not automatic, but occurs with an acceptance probability α(x′ | x) = min(1, π(x′)q(x | x′)/(π(x)q(x′ | x))). The seismic event model outlined in our companion abstract is quite similar to those used in multitarget tracking, for which MCMC has proved very effective. In this model, each world x is defined by a collection of events, a list of properties characterizing those events (times, locations, magnitudes, and types), and the association of each event to a set of observed detections. The target distribution π(x) = P(x | y) is the posterior distribution over worlds x given the observed waveform data y at all stations. Proposal distributions then implement several types of moves between worlds. For example, birth moves create new events; death moves delete existing events; split moves partition the detections for an event into two new events; merge moves combine event pairs; swap moves modify the properties and associations for pairs of events. Importantly, the rules for
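The two-step M-H rule described above can be sketched for a simple one-dimensional target (a generic toy, not the authors' seismic sampler; with a symmetric random-walk proposal the q terms cancel from the acceptance ratio):

```python
import math
import random

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ Normal(x, step^2),
    then accept with probability min(1, pi(x')/pi(x)); on rejection
    the chain stays at the current state."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        log_alpha = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = proposal              # accept the candidate state
        samples.append(x)             # rejected moves repeat x
    return samples

# Toy target: standard normal, log pi(x) = -x^2/2 up to a constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
```

The birth/death/split/merge/swap moves in the abstract are exactly such proposals q(x′ | x), just over structured "worlds" of events rather than a real number.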
International Nuclear Information System (INIS)
Dashti, Imad
2003-01-01
This paper uses a Bayesian stochastic frontier model to obtain confidence intervals on firm efficiency measures of electric utilities rather than the point estimates reported in most previous studies. Results reveal that the stochastic frontier model yields imprecise measures of firm efficiency. However, the application produces much more precise inference on pairwise efficiency comparisons of firms due to a sometimes strong positive covariance of efficiency measures across firms. In addition, we examine the sensitivity to functional form by repeating the analysis for Cobb-Douglas, translog and Fourier frontiers, with and without imposing monotonicity and concavity.
Inferring tie strength from online directed behavior.
Directory of Open Access Journals (Sweden)
Jason J Jones
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user's friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions).
Network inference via adaptive optimal design
Directory of Open Access Journals (Sweden)
Stigter Johannes D
2012-09-01
Background: Current research in network reverse engineering for genetic or metabolic networks very often does not include a proper experimental and/or input design. In this paper we address this issue in more detail and suggest a method that includes an iterative design of experiments based on the most recent data that become available. The presented approach allows a reliable reconstruction of the network and addresses an important issue, i.e., the analysis and the propagation of uncertainties as they exist in both the data and in our own knowledge. These two types of uncertainties have their immediate ramifications for the uncertainties in the parameter estimates and, hence, are taken into account from the very beginning of our experimental design. Findings: The method is demonstrated for two small networks that include a genetic network for mRNA synthesis and degradation and an oscillatory network describing a molecular network underlying adenosine 3'-5' cyclic monophosphate (cAMP) as observed in populations of Dictyostelium cells. In both cases a substantial reduction in parameter uncertainty was observed. Extension to larger scale networks is possible but needs a more rigorous parameter estimation algorithm that includes sparsity as a constraint in the optimization procedure. Conclusion: We conclude that a careful experiment design very often (but not always) pays off in terms of reliability in the inferred network topology. For large scale networks a better parameter estimation algorithm is required that includes sparsity as an additional constraint. These algorithms are available in the literature and can also be used in an adaptive optimal design setting as demonstrated in this paper.
On the Hardness of Topology Inference
Acharya, H. B.; Gouda, M. G.
Many systems require information about the topology of networks on the Internet, for purposes like management, efficiency, testing of new protocols and so on. However, ISPs usually do not share the actual topology maps with outsiders; thus, in order to obtain the topology of a network on the Internet, a system must reconstruct it from publicly observable data. The standard method employs traceroute to obtain paths between nodes; next, a topology is generated such that the observed paths occur in the graph. However, traceroute has the problem that some routers refuse to reveal their addresses, and appear as anonymous nodes in traces. Previous research on the problem of topology inference with anonymous nodes has demonstrated that it is at best NP-complete. In this paper, we improve upon this result. In our previous research, we showed that in the special case where nodes may be anonymous in some traces but not in all traces (so all node identifiers are known), there exist trace sets that are generable from multiple topologies. This paper extends our theory of network tracing to the general case (with strictly anonymous nodes), and shows that the problem of computing the network that generated a trace set, given the trace set, has no general solution. The weak version of the problem, which allows an algorithm to output a "small" set of networks, any one of which is the correct one, is also not solvable. Any algorithm guaranteed to output the correct topology outputs at least an exponential number of networks. Our results are surprisingly robust: they hold even when the network is known to have exactly two anonymous nodes, and every node as well as every edge in the network is guaranteed to occur in some trace. On the basis of this result, we suggest that exact reconstruction of network topology requires more powerful tools than traceroute.
Inferring modules from human protein interactome classes
Directory of Open Access Journals (Sweden)
Chaurasia Gautam
2010-07-01
Full Text Available Abstract Background The integration of protein-protein interaction networks derived from high-throughput screening approaches and complementary sources is a key topic in systems biology. Although integration of protein interaction data is conventionally performed, the effects of this procedure on the result of network analyses have not yet been examined. In particular, in order to optimize the fusion of heterogeneous interaction datasets, it is crucial to consider not only their degree of coverage and accuracy, but also their mutual dependencies and additional salient features. Results We examined this issue based on the analysis of modules detected by network clustering methods applied to both integrated and individual (disaggregated) data sources, which we call interactome classes. Due to class diversity, we deal with variable dependencies of data features arising from structural specificities and biases, but also from possible overlaps. Since highly connected regions of the human interactome may point to potential protein complexes, we have focused on the concept of modularity, and elucidated the detection power of module extraction algorithms by independent validations based on GO, MIPS and KEGG. From the combination of protein interactions with gene expression data, a confidence scoring scheme is proposed before proceeding, via GO, with further classification into permanent and transient modules. Conclusions Disaggregated interactomes are shown to be informative for inferring modularity, thus contributing to perform an effective integrative analysis. Validation of the extracted modules by multiple annotation allows for the assessment of confidence measures assigned to the modules in a protein pathway context. Notably, the proposed multilayer confidence scheme can be used for network calibration by enabling a transition from unweighted to weighted interactomes based on biological evidence.
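The idea of extracting modules as highly connected regions of an interaction graph can be illustrated with a deliberately simple clustering rule. Label propagation below is a stand-in for the (unspecified) clustering methods the abstract refers to, not the authors' pipeline, and the two-clique toy graph is an invented example; note that synchronous label propagation can oscillate on some graphs, which the iteration cap guards against.

```python
from collections import Counter

def label_propagation(adj, n_iter=50):
    """Deterministic synchronous label propagation: every node adopts the
    most common label among its neighbours (ties broken by smallest label).
    Densely interconnected regions converge to a shared label, i.e. a module."""
    labels = {v: v for v in adj}
    for _ in range(n_iter):
        new = {}
        for v in sorted(adj):
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            new[v] = min(l for l, c in counts.items() if c == top)
        if new == labels:        # converged
            break
        labels = new
    return labels

# Toy "interactome": two 4-cliques (hypothetical complexes) joined by one edge
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7), (3, 4)]
adj = {v: set() for v in range(8)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

modules = label_propagation(adj)
print(modules)
```

On this graph the two cliques end up with two distinct labels, matching the intuition that modules are regions denser inside than across.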
Narasimhan, T. N.
2008-01-01
Summary: In a world with water resources severely impacted by technology, science must actively contribute to water law. To this end, this paper is an earth scientist's attempt to comprehend essential elements of water law, and to examine their connections to science. Science and law share a common logical framework of starting with a priori prescribed tenets, and drawing consistent inferences. In science, observationally established physical laws constitute the tenets, while in law, they stem from social values. The foundations of modern water law in Europe and the New World were formulated nearly two thousand years ago by Roman jurists who were inspired by Greek philosophy of reason. Recognizing that vital natural elements such as water, air, and the sea were governed by immutable natural laws, they reasoned that these elements belonged to all humans, and therefore cannot be owned as private property. Legally, such public property was to be governed by jus gentium, the law of all people or the law of all nations. In contrast, jus civile or civil law governed private property. Remarkably, jus gentium continues to be relevant in our contemporary society in which science plays a pivotal role in exploiting vital resources common to all. This paper examines the historical roots of modern water law, follows their evolution through the centuries, and examines how the spirit of science inherent in jus gentium is profoundly influencing evolving water and environmental laws in Europe, the United States and elsewhere. In a technological world, scientific knowledge has to lie at the core of water law. Yet, science cannot formulate law. It is hoped that a philosophical understanding of the relationships between science and law will contribute to their constructively coming together in the service of society.
DEFF Research Database (Denmark)
Lefsrud, Lianne M.; Meyer, Renate
2012-01-01
This paper examines the framings and identity work associated with professionals’ discursive construction of climate change science, their legitimation of themselves as experts on ‘the truth’, and their attitudes towards regulatory measures. Drawing from survey responses of 1077 professional......, legitimation strategies, and use of emotionality and metaphor. By linking notions of the science or science fiction of climate change to the assessment of the adequacy of global and local policies and of potential organizational responses, we contribute to the understanding of ‘defensive institutional work...
Inference of Cancer-specific Gene Regulatory Networks Using Soft Computing Rules
Directory of Open Access Journals (Sweden)
Xiaosheng Wang
2010-03-01
Full Text Available Perturbations of gene regulatory networks are essentially responsible for oncogenesis. Therefore, inferring the gene regulatory networks is a key step to overcoming cancer. In this work, we propose a method for inferring directed gene regulatory networks based on soft computing rules, which can identify important cause-effect regulatory relations of gene expression. First, we identify important genes associated with a specific cancer (colon cancer) using a supervised learning approach. Next, we reconstruct the gene regulatory networks by inferring the regulatory relations among the identified genes, and their regulated relations by other genes within the genome. We obtain two meaningful findings. One is that upregulated genes are regulated by more genes than downregulated ones, while downregulated genes regulate more genes than upregulated ones. The other one is that tumor suppressors suppress tumor activators and activate other tumor suppressors strongly, while tumor activators activate other tumor activators and suppress tumor suppressors weakly, indicating the robustness of biological systems. These findings provide valuable insights into the pathogenesis of cancer.
García, Miguel A; Costea, Mihai; Kuzmina, Maria; Stefanović, Saša
2014-04-01
The parasitic genus Cuscuta, containing some 200 species circumscribed traditionally in three subgenera, is nearly cosmopolitan, occurring in a wide range of habitats and hosts. Previous molecular studies, on subgenera Grammica and Cuscuta, delimited major clades within these groups. However, the sequences used were unalignable among subgenera, preventing the phylogenetic comparison across the genus. We conducted a broad phylogenetic study using rbcL and nrLSU sequences covering the morphological, physiological, and geographical diversity of Cuscuta. We used parsimony methods to reconstruct ancestral states for taxonomically important characters. Biogeographical inferences were obtained using statistical and Bayesian approaches. Four well-supported major clades are resolved. Two of them correspond to subgenera Monogynella and Grammica. Subgenus Cuscuta is paraphyletic, with section Pachystigma sister to subgenus Grammica. Previously described cases of strongly supported discordance between plastid and nuclear phylogenies, interpreted as reticulation events, are confirmed here and three new cases are detected. Dehiscent fruits and globose stigmas are inferred as ancestral character states, whereas the ancestral style number is ambiguous. Biogeographical reconstructions suggest an Old World origin for the genus and subsequent spread to the Americas as a consequence of one long-distance dispersal. Hybridization may play an important yet underestimated role in the evolution of Cuscuta. Our results disagree with scenarios of evolution (polarity) previously proposed for several taxonomically important morphological characters, and with their usage and significance. While several cases of long-distance dispersal are inferred, vicariance or dispersal to adjacent areas emerges as the dominant biogeographical pattern.
Indexing the Environmental Quality Performance Based on A Fuzzy Inference Approach
Iswari, Lizda
2018-03-01
Environmental performance bears strongly on the quality of human life. In Indonesia, this performance is quantified through the Environmental Quality Index (EQI), which consists of three indicators, i.e. a river quality index, an air quality index, and coverage of land cover. Currently, the data processing for this instrument is done by averaging and weighting each index to represent the EQI at the provincial level. However, we found that EQI interpretations may contain uncertainties and cover a range of circumstances that are possibly less appropriate to process under a common statistical approach. In this research, we aim to manage the indicators of the EQI with a more intuitive computation technique and to make some inferences related to the environmental performance of the 33 provinces of Indonesia. Research was conducted in the three stages of a Mamdani Fuzzy Inference System (MAFIS), i.e. fuzzification, data inference, and defuzzification. The data input consists of 10 environmental parameters and the output is an index of Environmental Quality Performance (EQP). The approach was applied to the environmental condition data set for 2015, quantifying the results on a scale of 0 to 100: 10 provinces at good performance with an EQP above 80, dominated by provinces in the eastern part of Indonesia; 22 provinces with an EQP between 50 and 80; and one province in Java Island with an EQP below 20. This research shows that environmental quality performance can be quantified without eliminating the nature of the data set and is simultaneously able to show environmental behavior along with its spatial pattern distribution.
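The Mamdani pipeline named above (fuzzification, rule inference, defuzzification) can be sketched end to end. The membership functions, the two rules, and the use of only two inputs below are invented for illustration; the paper uses 10 environmental parameters and its own fuzzy sets.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical fuzzy sets on a 0-100 scale (illustrative, not the paper's)
def low(x):  return tri(x, -1, 0, 50)
def high(x): return tri(x, 50, 100, 101)

def eqp_mamdani(air, water):
    y = np.linspace(0, 100, 1001)            # output universe (EQP score)
    # Rule 1: IF air is high AND water is high THEN EQP is high (min for AND)
    w1 = min(high(air), high(water))
    # Rule 2: IF air is low OR water is low THEN EQP is low (max for OR)
    w2 = max(low(air), low(water))
    # Min implication clips each consequent; max aggregates the rules
    agg = np.maximum(np.minimum(w1, high(y)), np.minimum(w2, low(y)))
    # Centroid defuzzification turns the fuzzy output into a crisp score
    return float(np.sum(y * agg) / np.sum(agg))

print(eqp_mamdani(90, 85))   # good air and water -> high EQP
print(eqp_mamdani(20, 30))   # poor air and water -> low EQP
```

The crisp score degrades gracefully as inputs worsen, which is the "without eliminating the nature of the data" property the abstract claims for the fuzzy approach.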
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Horn, Sebastian S; Ruggeri, Azzurra; Pachur, Thorsten
2016-09-01
Judgments about objects in the world are often based on probabilistic information (or cues). A frugal judgment strategy that utilizes memory (i.e., the ability to discriminate between known and unknown objects) as a cue for inference is the recognition heuristic (RH). The usefulness of the RH depends on the structure of the environment, particularly the predictive power (validity) of recognition. Little is known about developmental differences in use of the RH. In this study, the authors examined (a) to what extent children and adolescents recruit the RH when making judgments, and (b) around what age adaptive use of the RH emerges. Primary schoolchildren (M = 9 years), younger adolescents (M = 12 years), and older adolescents (M = 17 years) made comparative judgments in task environments with either high or low recognition validity. Reliance on the RH was measured with a hierarchical multinomial model. Results indicated that primary schoolchildren already made systematic use of the RH. However, only older adolescents adaptively adjusted their strategy use between environments and were better able to discriminate between situations in which the RH led to correct versus incorrect inferences. These findings suggest that the use of simple heuristics does not progress unidirectionally across development but strongly depends on the task environment, in line with the perspective of ecological rationality. Moreover, adaptive heuristic inference seems to require experience and a developed base of domain knowledge. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
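The RH described above is a one-reason decision rule, so it fits in a few lines. The city names and the deterministic fallback for guessing are illustrative assumptions, not materials from the study.

```python
def recognition_heuristic(a, b, recognized, knowledge=None):
    """If exactly one of two objects is recognized, infer that it scores
    higher on the criterion (the RH); otherwise fall back on further
    knowledge, or on a guess."""
    ra, rb = a in recognized, b in recognized
    if ra != rb:                       # exactly one recognized: RH applies
        return a if ra else b
    if knowledge is not None:          # both or neither recognized
        return knowledge(a, b)
    return a                           # deterministic stand-in for guessing

# Hypothetical city-size task: the RH is adaptive only in environments
# where recognition correlates with the criterion (high recognition validity)
recognized = {"Berlin", "Munich"}
print(recognition_heuristic("Berlin", "Bielefeld", recognized))
```

The developmental question in the abstract is then whether a judge applies this rule everywhere or only in environments where recognition validity is high.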
Inference of cancer-specific gene regulatory networks using soft computing rules.
Wang, Xiaosheng; Gotoh, Osamu
2010-03-24
Perturbations of gene regulatory networks are essentially responsible for oncogenesis. Therefore, inferring the gene regulatory networks is a key step to overcoming cancer. In this work, we propose a method for inferring directed gene regulatory networks based on soft computing rules, which can identify important cause-effect regulatory relations of gene expression. First, we identify important genes associated with a specific cancer (colon cancer) using a supervised learning approach. Next, we reconstruct the gene regulatory networks by inferring the regulatory relations among the identified genes, and their regulated relations by other genes within the genome. We obtain two meaningful findings. One is that upregulated genes are regulated by more genes than downregulated ones, while downregulated genes regulate more genes than upregulated ones. The other one is that tumor suppressors suppress tumor activators and activate other tumor suppressors strongly, while tumor activators activate other tumor activators and suppress tumor suppressors weakly, indicating the robustness of biological systems. These findings provide valuable insights into the pathogenesis of cancer.
Geography and environmental science
Milinčić, Miroljub; Souliotis, Lily; Mihajlović, Ljiljana; Požar, Tea
2014-01-01
Geography is one of the oldest academic disciplines with a strong holistic approach in conceptualizing the interaction between nature and society, i.e. animate and inanimate parts of the environment. Over time, geography has been increasing and improving its conceptual and terminological abilities for studying and understanding complex relationships among environmental systems. For this reason, geography has advanced from a well-known science about nature and society into a relevant science a...
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
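The ensemble step, a linear combination of several inference measures fitted against ground truth, can be sketched on synthetic data. The three "methods" below are just independently noisy views of a known connectivity vector, a stand-in assumption for the mutual-information and frequency-based measures in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs = 2000
truth = (rng.random(n_pairs) < 0.1).astype(float)   # sparse "synapses"

# Three hypothetical single-method scores, each seeing the truth
# through a different amount of noise
scores = np.column_stack([truth + rng.normal(0, s, n_pairs)
                          for s in (0.8, 1.0, 1.2)])

# Stacking: fit linear weights on a "training" half with known connectivity
half = n_pairs // 2
w, *_ = np.linalg.lstsq(scores[:half], truth[:half], rcond=None)
ensemble = scores[half:] @ w                        # held-out predictions

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

best_single = max(corr(scores[half:, j], truth[half:]) for j in range(3))
print(corr(ensemble, truth[half:]), best_single)
```

Because the methods' errors are (here, by construction) independent, the weighted combination tracks ground truth better than the best individual score, which is the mechanism behind the sensitivity gain the abstract reports.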
Causal inference in biology networks with integrated belief propagation.
Chang, Rui; Karr, Jonathan R; Schadt, Eric E
2015-01-01
Inferring causal relationships among molecular and higher order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks in which the form of the functional interactions that are possible is assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network with a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot.
A graphical user interface for a method to infer kinetics and network architecture (MIKANA).
Mourão, Márcio A; Srividhya, Jeyaraman; McSharry, Patrick E; Crampin, Edmund J; Schnell, Santiago
2011-01-01
One of the main challenges in the biomedical sciences is the determination of reaction mechanisms that constitute a biochemical pathway. During the last decades, advances have been made in building complex diagrams showing the static interactions of proteins. The challenge for systems biologists is to build realistic models of the dynamical behavior of reactants, intermediates and products. For this purpose, several methods have been recently proposed to deduce the reaction mechanisms or to estimate the kinetic parameters of the elementary reactions that constitute the pathway. One such method is MIKANA: Method to Infer Kinetics And Network Architecture. MIKANA is a computational method to infer both reaction mechanisms and estimate the kinetic parameters of biochemical pathways from time course data. To make it available to the scientific community, we developed a Graphical User Interface (GUI) for MIKANA. Among other features, the GUI validates and processes input time course data, displays the inferred reactions, generates the differential equations for the chemical species in the pathway and plots the prediction curves on top of the input time course data. We also added a new feature to MIKANA that allows the user to exclude a priori known reactions from the inferred mechanism. This addition improves the performance of the method. In this article, we illustrate the GUI for MIKANA with three examples: an irreversible Michaelis-Menten reaction mechanism; the interaction map of chemical species of the muscle glycolytic pathway; and the glycolytic pathway of Lactococcus lactis. We also describe the code and methods in sufficient detail to allow researchers to further develop the code or reproduce the experiments described. The code for MIKANA is open source, free for academic and non-academic use and is available for download (Information S1).
Accounting for the Effect of Earth's Rotation in Magnetotelluric Inference
Riegert, D. L.; Thomson, D. J.
2017-12-01
The study of geomagnetism has been documented as far back as 1722, when the watchmaker G. Graham constructed a more sensitive compass and showed that variations in geomagnetic direction follow an irregular daily pattern. Increased interest in geomagnetism began at the end of the 19th century (Lamb, Schuster, Chapman, and Price). The magnetotelluric method was first introduced in the 1950s (Cagniard and Tikhonov) and, at its core, is simply a regression problem. The result of this method is a transfer function estimate which describes the earth's response to magnetic field variations. This estimate can then be used to infer the earth's subsurface structure, useful for applications such as natural resource exploration. The statistical problem of estimating a transfer function between geomagnetic and induced current measurements has evolved since the 1950s due to a variety of problems: non-stationarity, outliers, and violation of Gaussian assumptions. To address some of these issues, robust regression methods (Chave and Thomson, 2004) and the remote reference method (Gamble, 1979) have been proposed and used. The current method seems to provide reasonable estimates, but still requires a large amount of data. Using the multitaper method of spectral analysis (Thomson, 1982), taking long (greater than 4 months) blocks of geomagnetic data, and concentrating on frequencies below 1000 microhertz to avoid ultraviolet effects, one finds that: (1) the cross-spectra are dominated by many offset frequencies, including plus and minus 1 and 2 cycles per day; (2) the coherence at these offset frequencies is often stronger than at zero offset; (3) there are strong couplings from the "quasi two-day" cycle; (4) the frequencies are usually not symmetric; (5) the spectra are dominated by the normal modes of the Sun. This talk will discuss the method of incorporating these observations into the transfer function estimation model, some of the difficulties that arose, their
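The core regression, estimating a transfer function as T(f) = S_xy(f)/S_xx(f) from averaged spectra, can be sketched with a plain Welch-style segment average. This is a simplified stand-in for the multitaper and robust machinery discussed in the abstract, and the flat 0.5 response of the synthetic "earth" is a toy assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def transfer_estimate(x, y, nseg=64, seglen=256):
    """Estimate T(f) = S_xy / S_xx by averaging windowed periodograms
    over nseg non-overlapping segments (Welch-style)."""
    Sxx = np.zeros(seglen // 2 + 1)
    Sxy = np.zeros(seglen // 2 + 1, dtype=complex)
    win = np.hanning(seglen)
    for k in range(nseg):
        xs = x[k * seglen:(k + 1) * seglen] * win
        ys = y[k * seglen:(k + 1) * seglen] * win
        X, Y = np.fft.rfft(xs), np.fft.rfft(ys)
        Sxx += (X * np.conj(X)).real     # auto-spectrum of the input
        Sxy += Y * np.conj(X)            # cross-spectrum output vs input
    return Sxy / Sxx

n = 64 * 256
x = rng.normal(size=n)                    # "magnetic field" input
y = 0.5 * x + 0.05 * rng.normal(size=n)   # flat 0.5 response plus noise
T = transfer_estimate(x, y)
print(np.abs(T[1:]).mean())
```

Averaging across segments is what buys variance reduction here; the multitaper approach in the text achieves the same end with better leakage control from a single long record.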
Holland, G. J.; McCaffrey, M. S.; Kiehl, J. T.; Schmidt, C.
2010-12-01
We are in an era of rapidly changing communication media, which is driving a major evolution in the modes of communicating science. In the past, a mainstay of scientific communication in popular media was through science "translators": science journalists and presenters. These have now nearly disappeared and are being replaced by widespread dissemination through, e.g., the internet, blogs, YouTube and journalists who often have little scientific background and sharp deadlines. Thus, scientists are required to assume increasing responsibility for translating their scientific findings and calibrating their communications to non-technical audiences, a task for which they are often ill prepared, especially when it comes to controversial societal issues such as tobacco, evolution, and most recently climate change (Oreskes and Conway 2010). Such issues have been politicized and hijacked by ideological belief systems to such an extent that constructive dialogue is often impossible. Many scientists are excellent communicators, to their peers. But this requires careful attention to detail and logical explanation, open acknowledgement of uncertainties, and dispassionate delivery. These qualities become liabilities when communicating to a non-scientific audience, where entertainment, attention grabbing, 15-second sound bites, and self-assuredness reign (e.g. Olson 2009). Here we report on a program initiated by NCAR and UCAR to develop new approaches to science communication and to equip present and future scientists with the requisite skills. If we start from a sound scientific finding with general scientific consensus, such as the warming of the planet by greenhouse gases, then the primary emphasis moves from the "science" to the "art" of communication. The art cannot have free reign, however, as there remains a strong requirement for objectivity, honesty, consistency, and above all a resistance to advocating particular policy positions. Targeting audience
On the Strong Direct Summand Conjecture
McCullough, Jason
2009-01-01
In this thesis, our aim is to study the Vanishing of Maps of Tor Conjecture of Hochster and Huneke. We mainly focus on an equivalent characterization called the Strong Direct Summand Conjecture, due to N. Ranganathan. Our results are separated into three chapters. In Chapter 3, we prove special cases of the Strong Direct Summand Conjecture in…
Physics challenges in the strong interactions
International Nuclear Information System (INIS)
Ellis, S.D.
1992-01-01
The study of strong interactions is now a mature field, for which scientists now know that the correct underlying theory is QCD. Here, an overview of the challenges to be faced in the area of the strong interactions during the 1990s is presented. As an illustrative example, special attention is given to the analysis of jets as studied at hadron colliders.
Physics challenges in the strong interactions
Energy Technology Data Exchange (ETDEWEB)
Ellis, S.D. [Univ. of Washington, Seattle (United States)
1992-12-31
The study of strong interactions is now a mature field, for which scientists now know that the correct underlying theory is QCD. Here, an overview of the challenges to be faced in the area of the strong interactions during the 1990s is presented. As an illustrative example, special attention is given to the analysis of jets as studied at hadron colliders.
Theoretical studies of strongly correlated fermions
Energy Technology Data Exchange (ETDEWEB)
Logan, D [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)
1997-04-01
Strongly correlated fermions are investigated. An understanding of strongly correlated fermions underpins a diverse range of phenomena such as metal-insulator transitions, high-temperature superconductivity, magnetic impurity problems and the properties of heavy-fermion systems, in all of which local moments play an important role. (author).
The strong reflecting property and Harrington's Principle
Cheng, Yong
2015-01-01
In this paper we characterize the strong reflecting property for $L$-cardinals for all $\omega_n$, characterize Harrington's Principle $HP(L)$ and its generalization, and discuss the relationship between the strong reflecting property for $L$-cardinals and Harrington's Principle $HP(L)$.
Strong Nash Equilibria and the Potential Maximizer
van Megen, F.J.C.; Facchini, G.; Borm, P.E.M.; Tijs, S.H.
1996-01-01
A class of non-cooperative games characterized by a 'congestion effect' is studied, in which there exists a strong Nash equilibrium, and the set of Nash equilibria, the set of strong Nash equilibria and the set of strategy profiles maximizing the potential function coincide. The structure of the class
Large N baryons, strong coupling theory, quarks
International Nuclear Information System (INIS)
Sakita, B.
1984-01-01
It is shown that in QCD the large N limit is the same as the static strong coupling limit. By using the static strong coupling techniques some of the results of large N baryons are derived. The results are consistent with the large N SU(6) static quark model. (author)
The lambda sigma calculus and strong normalization
DEFF Research Database (Denmark)
Schack-Nielsen, Anders; Schürmann, Carsten
Explicit substitution calculi can be classified into several distinct categories depending on whether they are confluent, meta-confluent, strong normalization preserving, strongly normalizing, simulating, fully compositional, and/or local. In this paper we present a variant of the λσ-calculus, ...
Optimization of strong and weak coordinates
Swart, M.; Bickelhaupt, F.M.
2006-01-01
We present a new scheme for the geometry optimization of equilibrium and transition state structures that can be used for both strong and weak coordinates. We use a screening function that depends on atom-pair distances to differentiate strong coordinates from weak coordinates. This differentiation
Tarlowski, Andrzej
2018-01-01
There is a lively debate concerning the role of conceptual and perceptual information in young children's inductive inferences. While most studies focus on the role of basic level categories in induction the present research contributes to the debate by asking whether children's inductions are guided by ontological constraints. Two studies use a novel inductive paradigm to test whether young children have an expectation that all animals share internal commonalities that do not extend to perceptually similar inanimates. The results show that children make category-consistent responses when asked to project an internal feature from an animal to either a dissimilar animal or a similar toy replica. However, the children do not have a universal preference for category-consistent responses in an analogous task involving vehicles and vehicle toy replicas. The results also show the role of context and individual factors in inferences. Children's early reliance on ontological commitments in induction cannot be explained by perceptual similarity or by children's sensitivity to the authenticity of objects.
Directory of Open Access Journals (Sweden)
Andrzej Tarlowski
2018-04-01
Full Text Available There is a lively debate concerning the role of conceptual and perceptual information in young children's inductive inferences. While most studies focus on the role of basic level categories in induction the present research contributes to the debate by asking whether children's inductions are guided by ontological constraints. Two studies use a novel inductive paradigm to test whether young children have an expectation that all animals share internal commonalities that do not extend to perceptually similar inanimates. The results show that children make category-consistent responses when asked to project an internal feature from an animal to either a dissimilar animal or a similar toy replica. However, the children do not have a universal preference for category-consistent responses in an analogous task involving vehicles and vehicle toy replicas. The results also show the role of context and individual factors in inferences. Children's early reliance on ontological commitments in induction cannot be explained by perceptual similarity or by children's sensitivity to the authenticity of objects.
Bayesian inference of substrate properties from film behavior
International Nuclear Information System (INIS)
Aggarwal, R; Demkowicz, M J; Marzouk, Y M
2015-01-01
We demonstrate that by observing the behavior of a film deposited on a substrate, certain features of the substrate may be inferred with quantified uncertainty using Bayesian methods. We carry out this demonstration on an illustrative film/substrate model where the substrate is a Gaussian random field and the film is a two-component mixture that obeys the Cahn–Hilliard equation. We construct a stochastic reduced order model to describe the film/substrate interaction and use it to infer substrate properties from film behavior. This quantitative inference strategy may be adapted to other film/substrate systems. (paper)
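The inference step, updating belief about a substrate property from film observations, can be illustrated with a grid-based Bayesian posterior over one parameter. The linear forward map and the Gaussian noise level below are toy assumptions standing in for the stochastic reduced-order Cahn-Hilliard model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy forward model: a substrate parameter s produces an observable film
# quantity f(s) = 2*s + 1, seen through Gaussian measurement noise
def forward(s):
    return 2.0 * s + 1.0

s_true, sigma = 1.5, 0.1
data = forward(s_true) + rng.normal(0, sigma, size=20)   # repeated observations

s_grid = np.linspace(0.0, 3.0, 601)                      # uniform prior on grid
loglik = np.array([-0.5 * np.sum((data - forward(s)) ** 2) / sigma**2
                   for s in s_grid])
post = np.exp(loglik - loglik.max())                     # unnormalized posterior
post /= post.sum()

mean = float((s_grid * post).sum())                      # posterior mean
sd = float(np.sqrt(((s_grid - mean) ** 2 * post).sum())) # quantified uncertainty
print(mean, sd)
```

The posterior standard deviation is the "quantified uncertainty" of the abstract: it shrinks as more film observations accumulate, and it widens if the forward model is less sensitive to the substrate parameter.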