Generic Patch Inference
DEFF Research Database (Denmark)
Andersen, Jesper; Lawall, Julia Laetitia
2008-01-01
A key issue in maintaining Linux device drivers is the need to update drivers in response to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spfind, that identifies common changes made...... developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect...
Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.
Ganusov, Vitaly V
2016-01-01
While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to logically follow from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, may it allow one to understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model shows an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why the rejected models failed to explain the data; and (4) to suggest experiments that would allow discrimination between the remaining alternative models. The use of strong inference is likely to provide better robustness of the predictions of mathematical models, and it should be strongly encouraged in mathematical modeling-based publications in the twenty-first century.
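Steps (1)-(3) above can be sketched as a simple model-comparison loop. The data, the two candidate models, the grid-search fit, and the delta-AIC cutoff below are all illustrative assumptions for the sketch, not taken from the paper:

```python
import math

# Synthetic observations of a declining population (illustrative data).
times = [0, 1, 2, 3, 4, 5]
data = [100, 61, 37, 22, 14, 8]

# Step 1: develop multiple alternative models for the phenomenon.
def exponential(t, r):
    return 100 * math.exp(-r * t)   # N(t) = N0 * exp(-r*t)

def linear(t, r):
    return 100 - r * t              # N(t) = N0 - r*t

def fit(model):
    """Crude one-parameter fit: grid search over the rate r."""
    return min((sum((model(t, r) - d) ** 2 for t, d in zip(times, data)), r)
               for r in (i / 100 for i in range(1, 3001)))

def aic(rss, n, k=1):
    """Akaike information criterion for a least-squares fit with k parameters."""
    return n * math.log(rss / n) + 2 * k

# Step 2: compare each model with the available data.
scores = {name: aic(fit(model)[0], len(data))
          for name, model in [("exponential", exponential), ("linear", linear)]}

# Step 3: reject models far worse than the best (delta-AIC > 10 is a common cutoff).
best = min(scores.values())
surviving = sorted(m for m, s in scores.items() if s - best <= 10)
print(surviving)
```

Step (4), designing experiments that discriminate between the survivors, would then operate only on the models left in `surviving`.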
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
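As a minimal illustration of the discrete model discussed above, the sketch below simulates mutant-allele frequencies under the Wright-Fisher model with selection. The population size, selection coefficient, and starting frequency are invented for the example; this is not the authors' approximation method:

```python
import random

def wright_fisher(n_pop, s, p0, generations, seed=1):
    """Discrete Wright-Fisher model: each generation, selection reweights the
    mutant frequency, then the next generation is a binomial draw of n_pop."""
    rng = random.Random(seed)
    p, trajectory = p0, [p0]
    for _ in range(generations):
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))      # selection step
        k = sum(rng.random() < p_sel for _ in range(n_pop))  # binomial sampling
        p = k / n_pop
        trajectory.append(p)
    return trajectory

# Strong selection (s = 0.5): the mutant sweeps toward fixation within a few
# tens of generations, the regime where diffusion approximations break down.
traj = wright_fisher(n_pop=1000, s=0.5, p0=0.05, generations=40)
print(traj[0], traj[-1])
```

Rerunning with a small coefficient (for example `s=0.001`) recovers the near-neutral drift regime in which, per the abstract, all three approximations perform comparably.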
Natural Gas Storage Facilities, US, 2010, Platts
U.S. Environmental Protection Agency — The Platts Natural Gas Storage Facilities geospatial data layer contains points that represent locations of facilities used for natural gas storage in the United...
Probing the Small-scale Structure in Strongly Lensed Systems via Transdimensional Inference
Daylan, Tansu; Cyr-Racine, Francis-Yan; Diaz Rivero, Ana; Dvorkin, Cora; Finkbeiner, Douglas P.
2018-02-01
Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a pipeline to model strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps—for instance, when constraining the subhalo mass function. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent in a traditional catalog. The implemented software, Probabilistic Cataloger (PCAT), is made publicly available at https://github.com/tdaylan/pcat.
DEFF Research Database (Denmark)
Møller, Jesper
2010-01-01
Chapter 9: This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation free procedures as well as more comprehensive methods based on a maximum likelihood or Bayesian approach combined with Markov chain Monte Carlo...... (MCMC) techniques. Due to space limitations the focus is on spatial point processes....
DEFF Research Database (Denmark)
Møller, Jesper
(This text written by Jesper Møller, Aalborg University, is submitted for the collection ‘Stochastic Geometry: Highlights, Interactions and New Perspectives', edited by Wilfrid S. Kendall and Ilya Molchanov, to be published by Clarendon Press, Oxford, and planned to appear as Section 4.1 with the title ‘Inference'.) This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation free procedures as well as more comprehensive methods using Markov chain Monte Carlo (MCMC) simulations. Due to space limitations the focus...
DEFF Research Database (Denmark)
Lenoir, Jonathan; Graae, Bente; Aarrestad, Per
2013-01-01
-change impacts. Is this local spatial buffering restricted to topographically complex terrains? To answer this, we here study fine-grained thermal variability across a 2500-km wide latitudinal gradient in Northern Europe encompassing a large array of topographic complexities. We first combined plant community...... data, Ellenberg temperature indicator values, locally measured temperatures (LmT) and globally interpolated temperatures (GiT) in a modelling framework to infer biologically relevant temperature conditions from plant assemblages (community-inferred temperatures: CiT). We...... temperature indicator values in combination with plant assemblages explained 46-72% of variation in LmT and 92-96% of variation in GiT during the growing season (June, July, August). Growing-season CiT range within 1-km² units peaked at 60-65°N and increased with terrain roughness, averaging 1.97 °C (SD = 0...
Spatiotemporal Assessment of Groundwater Resources in the South Platte Basin, Colorado
Ruybal, C. J.; McCray, J. E.; Hogue, T. S.
2015-12-01
The South Platte Basin is one of the most economically diverse and fastest growing basins in Colorado. Strong competition for water resources in an over-appropriated system brings challenges to meeting future water demands. Balancing the conjunctive use of surface water and groundwater from the South Platte alluvial aquifer and the Denver Basin aquifer system is critical for meeting future demands. Over the past decade, energy development in the basin has added to the competition for water resources, highlighting the need to advance our understanding of the availability and sustainability of groundwater resources. Current work includes evaluating groundwater storage changes and recharge regimes throughout the South Platte Basin under competing uses (e.g., agriculture, oil and gas, urban, recreation, and the environment). Data from the Gravity Recovery and Climate Experiment satellites, in conjunction with existing groundwater data, are used to evaluate spatiotemporal variability in groundwater storage and identify areas of high water stress. Spatiotemporal data will also be utilized to develop a high-resolution groundwater model of the region. Results will ultimately help stakeholders in the South Platte Basin better understand groundwater resource challenges and contribute to Colorado's strategic future water planning.
Johnson, Leigh A; Chan, Lauren M; Weese, Terri L; Busby, Lisa D; McMurry, Samuel
2008-09-01
Members of the phlox family (Polemoniaceae) serve as useful models for studying various evolutionary and biological processes. Despite its biological importance, no family-wide phylogenetic estimate based on multiple DNA regions with complete generic sampling is available. Here, we analyze one nuclear and five chloroplast DNA sequence regions (nuclear ITS, chloroplast matK, trnL intron plus trnL-trnF intergenic spacer, and the trnS-trnG, trnD-trnT, and psbM-trnD intergenic spacers) using parsimony and Bayesian methods, as well as assessments of congruence and long branch attraction, to explore phylogenetic relationships among 84 ingroup species representing all currently recognized Polemoniaceae genera. Relationships inferred from the ITS and concatenated chloroplast regions are similar overall. A combined analysis provides strong support for the monophyly of Polemoniaceae and subfamilies Acanthogilioideae, Cobaeoideae, and Polemonioideae. Relationships among subfamilies, and thus for the precise root of Polemoniaceae, remain poorly supported. Within the largest subfamily, Polemonioideae, four clades corresponding to tribes Polemonieae, Phlocideae, Gilieae, and Loeselieae receive strong support. The monogeneric Polemonieae appears sister to Phlocideae. Relationships within Polemonieae, Phlocideae, and Gilieae are mostly consistent between analyses and data permutations. Many relationships within Loeselieae remain uncertain. Overall, inferred phylogenetic relationships support a higher-level classification for Polemoniaceae proposed in 2000.
Pechenick, Eitan Adam; Danforth, Christopher M; Dodds, Peter Sheridan
2015-01-01
It is tempting to treat frequency trends from the Google Books data sets as indicators of the "true" popularity of various words and phrases. Doing so allows us to draw quantitatively strong conclusions about the evolution of cultural perception of a given topic, such as time or gender. However, the Google Books corpus suffers from a number of limitations which make it an obscure mask of cultural popularity. A primary issue is that the corpus is in effect a library, containing one of each book. A single, prolific author is thereby able to noticeably insert new phrases into the Google Books lexicon, whether the author is widely read or not. With this understood, the Google Books corpus remains an important data set to be considered more lexicon-like than text-like. Here, we show that a distinct problematic feature arises from the inclusion of scientific texts, which have become an increasingly substantive portion of the corpus throughout the 1900s. The result is a surge of phrases typical to academic articles but less common in general, such as references to time in the form of citations. We use information theoretic methods to highlight these dynamics by examining and comparing major contributions via a divergence measure of English data sets between decades in the period 1800-2000. We find that only the English Fiction data set from the second version of the corpus is not heavily affected by professional texts. Overall, our findings call into question the vast majority of existing claims drawn from the Google Books corpus, and point to the need to fully characterize the dynamics of the corpus before using these data sets to draw broad conclusions about cultural and linguistic evolution.
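The decade-to-decade comparison described above can be illustrated with a toy Jensen-Shannon divergence between two word-frequency samples. The word counts below are invented, and the paper's actual divergence measure and corpus processing may differ; this only shows how citation-style tokens pull two distributions apart:

```python
import math

def jsd(counts_a, counts_b):
    """Base-2 Jensen-Shannon divergence between two word-count dicts.
    Always in [0, 1]; 0 means the two distributions are identical."""
    vocab = set(counts_a) | set(counts_b)
    ta, tb = sum(counts_a.values()), sum(counts_b.values())
    pa = {w: counts_a.get(w, 0) / ta for w in vocab}
    pb = {w: counts_b.get(w, 0) / tb for w in vocab}
    mix = {w: 0.5 * (pa[w] + pb[w]) for w in vocab}  # midpoint distribution
    def kl(p):  # Kullback-Leibler divergence from p to the midpoint
        return sum(p[w] * math.log2(p[w] / mix[w]) for w in vocab if p[w] > 0)
    return 0.5 * kl(pa) + 0.5 * kl(pb)

# Toy decade samples: the 1990s sample is padded with citation-style tokens
# such as "(1998)" to mimic the growing share of scientific texts.
decade_1900s = {"the": 50, "love": 10, "time": 8}
decade_1990s = {"the": 50, "love": 4, "time": 8, "(1998)": 12, "et al.": 9}
print(jsd(decade_1900s, decade_1900s))  # identical samples -> 0.0
print(jsd(decade_1900s, decade_1990s))  # positive divergence
```

Computing this for successive decade pairs and inspecting the largest per-word contributions to the sum is one way to surface which tokens drive a divergence spike.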
Kass, Mason A.; Bloss, Benjamin R.; Irons, Trevor P.; Cannia, James C.; Abraham, Jared D.
2014-01-01
This report is a release of digital data and associated survey descriptions from a series of magnetic resonance soundings (MRS, also known as surface nuclear magnetic resonance) that was conducted during October and November of 2012 in areas of western Nebraska as part of a cooperative hydrologic study by the North Platte Natural Resource District (NRD), South Platte NRD, Twin Platte NRD, the Nebraska Environmental Trust, and the U.S. Geological Survey (USGS). The objective of the study was to delineate the base-of-aquifer and refine the understanding of the hydrologic properties in the aquifer system. The MRS technique non-invasively measures water content in the subsurface, which makes it a useful tool for hydrologic investigations in the near surface (up to depths of approximately 150 meters). In total, 14 MRS production-level soundings were acquired by the USGS over an area of approximately 10,600 square kilometers. The data are presented here in digital format, along with acquisition information, survey and site descriptions, and metadata.
Summary of Bed-Sediment Measurements Along the Platte River, Nebraska, 1931-2009
Kinzel, P.J.; Runge, J.T.
2010-01-01
Rivers are conduits for water and sediment supplied from upstream sources. The sizes of the sediments composing a river bed typically decrease in a downstream direction because of natural sorting. However, other factors can affect the caliber of bed sediment, including changes in upstream water-resource development, land use, and climate that alter the watershed yield of water or sediment. Bed sediments provide both a geologic and stratigraphic record of past fluvial processes and a basis for quantifying current sediment-transport relations. The objective of this fact sheet is to describe and compare longitudinal measurements of bed-sediment sizes made along the Platte River, Nebraska, from 1931 to 2009. The Platte River begins at the junction of the North Platte and South Platte Rivers near North Platte, Nebr., and flows east for approximately 500 kilometers before joining the Missouri River at Plattsmouth, Nebr. The confluence of the Loup River with the Platte River serves to divide the middle (or central) Platte River (the Platte River upstream from the confluence with the Loup River) and lower Platte River (the Platte River downstream from the confluence with the Loup River). The Platte River provides water for a variety of needs, including irrigation, infiltration to public water-supply wells, power generation, recreation, and wildlife habitat. The Platte River Basin includes habitat for four federally listed species: the whooping crane (Grus americana), interior least tern (Sterna antillarum), piping plover (Charadrius melodus), and pallid sturgeon (Scaphirhynchus albus). A habitat recovery program for the federally listed species in the Platte River was initiated in 2007. One strategy identified by the recovery program to manage and enhance habitat is the manipulation of streamflow. Understanding the longitudinal and temporal changes in the size gradation of the bed sediment will help to explain the effects of past flow regimes and anticipated
Schrago, Carlos G; Menezes, Albert N; Furtado, Carolina; Bonvicino, Cibele R; Seuanez, Hector N
2014-11-05
Neotropical primates (NP) are presently distributed in the New World from Mexico to northern Argentina, comprising three large families, Cebidae, Atelidae, and Pitheciidae, as a consequence of their diversification following their separation from Old World anthropoids near the Eocene/Oligocene boundary, some 40 Ma. The evolution of NP has been intensively investigated in the last decade by studies focusing on their phylogeny and timescale. However, despite major efforts, the phylogenetic relationship between these three major clades and the age of their last common ancestor are still controversial because these inferences were based on limited numbers of loci and on dating analyses that did not consider the evolutionary variation associated with the distribution of gene trees within the proposed phylogenies. We show, by multispecies coalescent analyses of selected genome segments spanning 92,496,904 bp, that the early diversification of extant NP was marked by a 2-fold increase of their effective population size and that Atelids and Cebids are more closely related to each other than either is to Pitheciids. The molecular phylogeny of NP has been difficult to resolve because of population-level phenomena at the early evolution of the lineage. Associating evolutionary variation with the distribution of gene trees within proposed phylogenies is crucial for distinguishing the mean genetic divergence between species (the mean coalescent time between loci) from the speciation time. This approach, based on extensive genomic data provided by new-generation DNA sequencing, provides more accurate reconstructions of phylogenies and timescales for all organisms. © The Author(s) 2014. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Directory of Open Access Journals (Sweden)
Z. T. Guo
2009-02-01
We correlate the China loess and Antarctic ice records to address the inter-hemispheric climate link over the past 800 ka. The results show a broad coupling between Asian and Antarctic climates at the glacial-interglacial scale. However, a number of decoupled aspects are revealed, among which marine isotope stage (MIS) 13 exhibits a strong anomaly compared with the other interglacials. It is characterized by unusually positive benthic oxygen (δ18O) and carbon isotope (δ13C) values in the world oceans, cooler Antarctic temperature, lower summer sea surface temperature in the South Atlantic, and lower CO2 and CH4 concentrations, but by extremely strong Asian, Indian and African summer monsoons, the weakest Asian winter monsoon, and the lowest Asian dust and iron fluxes. Pervasive warm conditions were also evidenced by the records from northern high-latitude regions. These consistently indicate a warmer Northern Hemisphere and a cooler Southern Hemisphere, and hence a strong asymmetry of hemispheric climates during MIS-13. Similar anomalies of lesser extent also occurred during MIS-11 and MIS-5e. Thus, MIS-13 provides a case in which the Northern Hemisphere experienced substantial warming under relatively low concentrations of greenhouse gases. It suggests that the global climate system possesses a natural variability that is not predictable from the simple response of northern summer insolation and atmospheric CO2 changes. During MIS-13, both hemispheres responded in different ways, leading to anomalous continental, marine and atmospheric conditions at the global scale. The correlations also suggest that the marine δ18O record is not always a reliable indicator of northern ice-volume changes, and that the asymmetry of hemispheric climates is one of the prominent factors controlling the strength of the Asian, Indian and African monsoon circulations, most likely through modulating the position of
Kamphuis, W.; Houttuin, K.
2007-01-01
In this report, we introduce a newly developed task environment for experimental team research: the Planning Task for Teams (PLATT). PLATT is a scenario based, computerized, complex planning task for three-person teams. PLATT has been designed to be able to do experimental laboratory research on
The South Platte Watershed from the Headwaters to the Denver Metropolitan Area (Colorado) is a location of the Urban Waters Federal Partnership (UWFP), which reconnects urban communities with their waterways by improving coordination among federal agencies and collaborating
Peterson, Steven M.; Flynn, Amanda T.; Vrabel, Joseph; Ryter, Derek W.
2015-08-12
The North Platte Natural Resources District (NPNRD) has been actively collecting data and studying groundwater resources because of concerns about the future availability of the highly inter-connected surface-water and groundwater resources. This report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, describes a groundwater-flow model of the North Platte River valley from Bridgeport, Nebraska, extending west to 6 miles into Wyoming. The model was built to improve the understanding of the interaction of surface-water and groundwater resources, and as an optimization tool, the model is able to analyze the effects of water-management options on the simulated stream base flow of the North Platte River. The groundwater system and related sources and sinks of water were simulated using a Newton formulation of the U.S. Geological Survey modular three-dimensional groundwater model, referred to as MODFLOW–NWT, which provided an improved ability to solve nonlinear unconfined aquifer simulations with wetting and drying of cells. Using previously published aquifer-base-altitude contours in conjunction with newer test-hole and geophysical data, a new base-of-aquifer altitude map was generated because of the strong effect of the aquifer-base topography on groundwater-flow direction and magnitude. The largest inflow to groundwater is recharge originating from water leaking from canals, which is much larger than recharge originating from infiltration of precipitation. The largest component of groundwater discharge from the study area is to the North Platte River and its tributaries, with smaller amounts of discharge to evapotranspiration and groundwater withdrawals for irrigation. Recharge from infiltration of precipitation was estimated with a daily soil-water-balance model. Annual recharge from canal seepage was estimated using available records from the Bureau of Reclamation and then modified with canal
Social-ecological resilience and law in the Platte River Basin
Birge, Hannah E.; Allen, Craig R.; Craig, Robin; Garmestani, Ahjond S.; Hamm, Joseph A.; Babbitt, Christina; Nemec, Kristine T.; Schlager, Edella
2014-01-01
Efficiency and resistance to rapid change are hallmarks of both the judicial and legislative branches of the United States government. These defining characteristics, while bringing stability and predictability, pose challenges when it comes to managing dynamic natural systems. As our understanding of ecosystems improves, we must devise ways to account for the non-linearities and uncertainties rife in complex social-ecological systems. This paper takes an in-depth look at the Platte River basin over time to explore how the system's resilience—the capacity to absorb disturbance without losing defining structures and functions—responds to human-driven change. Beginning with pre-European settlement, the paper explores how water laws, policies, and infrastructure influenced the region's ecology and society. While much of the post-European development in the Platte River basin came at a high ecological cost to the system, the recent tri-state and federal collaborative Platte River Recovery and Implementation Program is a first step towards flexible and adaptive management of the social-ecological system. Using the Platte River basin as an example, we make the case that inherent flexibility and adaptability are vital for the next iteration of natural resources management policies affecting stressed basins. We argue that this can be accomplished by nesting policy in a resilience framework, which we describe and attempt to operationalize for use across systems and at different levels of jurisdiction. As our current natural resources policies fail under the weight of looming global change, unprecedented demand for natural resources, and shifting land use, the need for a new generation of adaptive, flexible natural resources governance emerges. Here we offer a prescription for just that, rooted in the social, ecological and political realities of the Platte River basin.
Smith, B.D.; Abraham, J.D.; Cannia, J.C.; Minsley, B.J.; Deszcz-Pan, M.; Ball, L.B.
2010-01-01
This report is a release of digital data from a helicopter electromagnetic and magnetic survey that was conducted during June 2009 in areas of western Nebraska as part of a joint hydrologic study by the North Platte Natural Resource District (NRD), South Platte NRD, and U.S. Geological Survey (USGS). Flight lines for the survey totaled 937 line kilometers (582 line miles). The objective of the contracted survey, conducted by Fugro Airborne, Ltd., is to improve the understanding of the relation between surface-water and groundwater systems critical to developing groundwater models used in management programs for water resources. A unique aspect of the survey is the flight line layout. One set of flight lines was flown in a zig-zag pattern extending along the length of the previously collected airborne data. The success of this survey design depended on a well-understood regional hydrogeologic framework and model developed by the Cooperative Hydrologic Study of the Platte River Basin and the airborne geophysical data collected in 2008. Resistivity variations along lines could be related to this framework. In addition to these lines, more traditional surveys consisting of parallel flight lines, separated by about 400 meters were carried out for three blocks in the North Platte NRD, the South Platte NRD and in the area of Crescent Lakes. These surveys helped to establish the spatial variations of the resistivity of hydrostratigraphic units. An additional survey was flown over the Crescent Lake area. The objective of this survey, funded by the USGS Office of Groundwater, was to map shallow hydrogeologic features of the southwestern part of the Sand Hills that contain a mix of fresh to saline lakes.
O'Shaughnessy, Richard; Gerosa, Davide; Wysocki, Daniel
2017-07-07
The inferred parameters of the binary black hole GW151226 are consistent with nonzero spin for the most massive black hole, misaligned from the binary's orbital angular momentum. If the black holes formed through isolated binary evolution from an initially aligned binary star, this misalignment would then arise from a natal kick imparted to the first-born black hole at its birth during stellar collapse. We use simple kinematic arguments to constrain the characteristic magnitude of this kick, and find that a natal kick v_k ≳ 50 km/s must be imparted to the black hole at birth to produce misalignments consistent with GW151226. Such large natal kicks exceed those adopted by default in most of the current supernova and binary evolution models.
Book review: Implementing the Endangered Species Act on the Platte Basin water commons
Sherfy, Mark H.
2014-01-01
The Platte River is a unique midcontinent ecosystem that is world-renowned for its natural resources, particularly the spectacular spring concentrations of migratory birds, such as sandhill cranes (Grus canadensis), ducks, and geese. The Platte River basin also provides habitat for four federally listed endangered or threatened species—interior least tern (Sternula antillarum athalassos), piping plover (Charadrius melodus), whooping crane (G. americana), and pallid sturgeon (Scaphirhynchus albus)—that require specific hydrological conditions in order for habitat to be suitable. Flows on the Platte River are subject to regulation by a number of dams, and the river is heavily relied upon for irrigation in Colorado, Wyoming, and Nebraska. Accordingly, it also has become a political battleground for the simple reason that the demand for water exceeds supply. David Freeman’s book takes a detailed look at water-use issues on the Platte River, focusing on how implementation of the Endangered Species Act influences decision-making about water allocations.
Geological report on water conditions at Platt National Park, Oklahoma
Gould, Charles Newton; Schoff, Stuart Leeson
1939-01-01
Platt National Park, located in southern Oklahoma, containing 842 acres, was established by Acts of Congress in 1902, 1904, and 1906. The reason for the setting aside of this area was the presence in the area of some 30 'mineral' springs, the water from which contains sulphur, bromide, salt, and other minerals, which are believed to possess medicinal qualities. For many generations the sulphur springs of the Chickasaw Nation had been known for their reputed healing qualities. It had long been the custom for families to come from considerable distances on horseback and in wagons and camp near the springs, in order to drink the water. In course of time a primitive town, known as Sulphur Springs, grew up near a group of springs known since as Pavilion Springs at the mouth of Sulphur Creek, now known as Travertine Creek. This town was still in existence at the time of my first visit to the locality in July, 1901. At this time, in company with Joseph A. Taff, of the United States Geological Survey, I spent a week riding over the country making a preliminary survey looking toward the setting aside of the area for a National Park. After the establishment of the National Park, the old town of Sulphur Springs was abandoned, and when the present boundaries of the park had been established the present town of Sulphur, now county seat of Murray County, grew up. In July 1906, on request of Superintendent Joseph F. Swords, I visited the park and made an examination of the various springs and submitted a report, dated August 15, 1906, to Secretary of the Interior E.A. Hitchcock. Copies of this report are on file in the Regional Office and at Platt National Park. In this report I set forth the approximate amount of flow of the various springs, the character of the water in each, and the conditions of the springs as of that date. I also made certain recommendations regarding proposed improvements of each spring. In this report I say: 'In the town of Sulphur, four wells have been
Kumagai, Hiroyuki; Pulido, Nelson; Fukuyama, Eiichi; Aoi, Shin
2013-01-01
To investigate source processes of the 2011 Tohoku-Oki earthquake, we utilized a source location method using high-frequency (5-10 Hz) seismic amplitudes. In this method, we assumed far-field isotropic radiation of S waves, and conducted a spatial grid search to find the best-fitting source locations along the subducted slab in each successive time window. Our application of the method to the Tohoku-Oki earthquake resulted in artifact source locations at shallow depths near the trench caused by limited station coverage and noise effects. We then assumed various source node distributions along the plate, and found that the observed seismograms were most reasonably explained when assuming deep source nodes. This result suggests that the high-frequency seismic waves were radiated at greater depths during the earthquake, a feature which is consistent with results obtained from teleseismic back-projection and strong-motion source model studies. We identified three high-frequency subevents, and compared them with the moment-rate function estimated from low-frequency seismograms. Our comparison indicated that no significant moment release occurred during the first high-frequency subevent and the largest moment-release pulse occurred almost simultaneously with the second high-frequency subevent. We speculated that the initial slow rupture propagated bilaterally from the hypocenter toward the land and trench. The landward subshear rupture propagation consisted of three successive high-frequency subevents. The trenchward propagation ruptured the strong asperity and released the largest moment near the trench.
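The grid-search scheme summarized in this abstract can be illustrated with a toy 2-D sketch: assuming far-field isotropic radiation, amplitude decays roughly as 1/r, so each candidate node is scored by its least-squares amplitude misfit against the observations. Everything below (station geometry, source strength, node spacing) is hypothetical, not the authors' code or data.

```python
import math

def locate_source(stations, observed_amps, nodes):
    """Grid search for a high-frequency source location.

    For each candidate node, fit the best source strength A in the
    model amp = A / r by least squares, then keep the node whose
    predicted amplitudes best match the observations.
    """
    best_node, best_misfit = None, float("inf")
    for node in nodes:
        preds = [1.0 / math.dist(node, s) for s in stations]
        a = sum(p * o for p, o in zip(preds, observed_amps)) / \
            sum(p * p for p in preds)
        misfit = sum((o - a * p) ** 2
                     for o, p in zip(observed_amps, preds))
        if misfit < best_misfit:
            best_node, best_misfit = node, misfit
    return best_node

# Synthetic check: four stations, true source at (35, 45), strength 5
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
true_src = (35.0, 45.0)
amps = [5.0 / math.dist(true_src, s) for s in stations]
grid = [(x, y) for x in range(5, 100, 10) for y in range(5, 100, 10)]
found = locate_source(stations, amps, grid)  # recovers (35, 45)
```

In the study itself the search runs over nodes along the subducted slab in successive time windows; this sketch only shows the misfit-minimization idea in one window.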
Gu, Yingxin; Wylie, Bruce K.; Bliss, Norman B.
2013-01-01
This study assessed and described a relationship between satellite-derived growing season averaged Normalized Difference Vegetation Index (NDVI) and annual productivity for grasslands within the Greater Platte River Basin (GPRB) of the United States. We compared growing season averaged NDVI (GSN) with Soil Survey Geographic (SSURGO) database rangeland productivity and flux tower Gross Primary Productivity (GPP) for grassland areas. The GSN was calculated for each of nine years (2000–2008) using the 7-day composite 250-m eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. Strong correlations exist between the nine-year mean GSN (MGSN) and SSURGO annual productivity for grasslands (R2 = 0.74 for approximately 8000 pixels randomly selected from eight homogeneous regions within the GPRB; R2 = 0.96 for the 14 cluster-averaged points). Results also reveal a strong correlation between GSN and flux tower growing season averaged GPP (R2 = 0.71). Finally, we developed an empirical equation to estimate grassland productivity based on the MGSN. Spatially explicit estimates of grassland productivity over the GPRB were generated, which improved the regional consistency of SSURGO grassland productivity data and can help scientists and land managers to better understand the actual biophysical and ecological characteristics of grassland systems in the GPRB. This final estimated grassland production map can also be used as an input for biogeochemical, ecological, and climate change models.
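The empirical equation described above relates mean growing season NDVI (MGSN) to grassland productivity; at its core this is a single-predictor regression. The sketch below uses made-up sample points and coefficients, not the study's data or its published equation.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x with one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical (MGSN, productivity) pairs; real inputs would be
# dimensionless NDVI versus SSURGO rangeland production values.
mgsn = [0.30, 0.40, 0.50, 0.60, 0.70]
prod = [900.0, 1400.0, 1900.0, 2400.0, 2900.0]
a, b = fit_line(mgsn, prod)
estimate = a + b * 0.55  # productivity predicted for MGSN = 0.55
```

Once fitted over representative pixels, such an equation can be applied to every grassland pixel's MGSN to produce the spatially explicit productivity map the abstract describes.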
Multimodel inference and adaptive management
Rehme, S.E.; Powell, L.A.; Allen, Craig R.
2011-01-01
Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent of JWM articles and 14% of CB articles used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found that the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
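In practice, multimodel inference of the kind surveyed above is often carried out by converting information-criterion scores into model weights; near-equal weights across competing models are one signature of the "equivocal" results the abstract mentions. The review does not specify a particular criterion, so the following is a generic Akaike-weight sketch with hypothetical AIC values.

```python
import math

def akaike_weights(aic_scores):
    """Akaike weights: the relative likelihood of each model in a
    candidate set, normalized to sum to 1."""
    best = min(aic_scores)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_scores]
    total = sum(rel)
    return [r / total for r in rel]

# Three hypothetical competing models
weights = akaike_weights([210.0, 211.5, 218.0])
# The best model gets the largest weight, but the second is close
# behind -- support is split, i.e., inference is weak.
```

Reporting the full weight vector, rather than only the top-ranked model, is one way to keep model-selection uncertainty visible in management recommendations.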
Evapotranspiration Rates of Riparian Forests, Platte River, Nebraska, 2002-06
Landon, Matthew K.; Rus, David L.; Dietsch, Benjamin J.; Johnson, Michaela R.; Eggemeyer, Kathleen D.
2009-01-01
Evapotranspiration (ET) in riparian areas is a poorly understood component of the regional water balance in the Platte River Basin, where competing demands have resulted in water shortages in the ground-water/surface-water system. From April 2002 through March 2006, the U.S. Geological Survey, Nebraska Platte River Cooperative Hydrology Study Group, and Central Platte Natural Resources District conducted a micrometeorological study of water and energy balances at two sites in central Nebraska near Odessa and Gothenburg to improve understanding of ET rates and factors affecting them in Platte River riparian forests. A secondary objective of the study was to constrain estimates of ground-water use by riparian vegetation to satisfy ET consumptive demands, a useful input to regional ground-water flow models. Both study sites are located on large islands within the Platte River characterized by a cottonwood-dominated forest canopy on primarily sandy alluvium. Although both sites are typical of riparian forests along the Platte River in Nebraska, the Odessa understory is dominated by deciduous shrubs, whereas the Gothenburg understory is dominated by eastern redcedars. Additionally, seasonal ground-water levels fluctuated more at Odessa than at Gothenburg. The study period of April 2002 through March 2006 encompassed precipitation conditions ranging from dry to wet. This study characterized the components of the water balance in the riparian zone of each site. ET was evaluated from eddy-covariance sensors installed on towers above the forest canopy at a height of 26.1 meters. Precipitation was measured both above and below the forest canopy. A series of sensors measured soil-moisture availability within the unsaturated zone in two different vertical profiles at each site. Changes in ground-water altitude were evaluated from piezometers. The areal footprint represented in the water balance extended up to 800 meters from each tower. During the study, ET was less variable
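Eddy-covariance systems like those on the towers described above report ET as a latent-heat flux; converting that flux to a water depth is a one-line calculation. The sketch below is a generic unit conversion, not the study's processing code, and the 2.45e6 J/kg value is an assumed typical latent heat of vaporization.

```python
def latent_heat_flux_to_et(le_w_per_m2, lambda_v=2.45e6):
    """Convert a daily-average latent-heat flux (W/m^2) to an
    evapotranspiration depth in mm/day.

    lambda_v: latent heat of vaporization (J/kg); 2.45e6 is a common
    value near 20 degrees C. With water density ~1000 kg/m^3,
    1 kg of evaporated water per m^2 corresponds to a 1 mm depth.
    """
    return le_w_per_m2 * 86400.0 / lambda_v  # kg/m^2/day == mm/day

# A daily-average flux of 100 W/m^2 is roughly 3.5 mm/day of ET
et = latent_heat_flux_to_et(100.0)
```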
Steele, G.V.; Cannia, J.C.
1997-01-01
In 1993, the U.S. Geological Survey and the North Platte Natural Resources District began a 3-year study to determine the geohydrology and water quality of the North Platte River alluvial aquifer near Oshkosh, Garden County, Nebraska. The objectives of the study were to determine the geohydrologic properties of the North Platte River alluvial aquifer, to establish a well network for long-term monitoring of concentrations of agricultural chemicals including nitrate and herbicides, and to establish baseline concentrations of major ions in the ground water. To meet these objectives, monitor wells were installed at 11 sites near Oshkosh. The geohydrologic properties of the aquifer were estimated from water-level measurements at selected irrigation wells located in the study area and short-term constant-discharge aquifer tests at two monitor wells. Water samples were collected bimonthly and analyzed for specific conductance, pH, water temperature, dissolved oxygen, and nutrients including dissolved nitrate. Samples were collected semiannually for analysis of major ions, and annually for triazine and acetamide herbicides. Evaluation of the aquifer-test data indicates the hydraulic conductivities of the North Platte River alluvial aquifer ranged between 169 and 184 feet per day and transmissivities ranged from 12,700 to 26,700 feet squared per day. The average specific yield for the alluvial aquifer, based on the two aquifer tests, was 0.2. Additional hydrologic data for the alluvial aquifer include a horizontal gradient of about 0.002 foot per foot and estimated ground-water flow velocities of about 0.1 to 1.8 feet per day. Evaluation of the water-quality data indicates that nitrate concentrations exceed the U.S. Environmental Protection Agency's (USEPA) Maximum Contaminant Level of 10 milligrams per liter for drinking water in areas to the east and west of Oshkosh. In these areas, nitrate concentrations generally are continuing to rise. West of Oshkosh the highest
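The reported flow velocities are consistent with the standard average-linear-velocity relation v = K*i/n_e, using specific yield as a stand-in for effective porosity. A quick check with the upper-end values from this abstract:

```python
def average_linear_velocity(k_ft_per_day, gradient, effective_porosity):
    """Average linear ground-water velocity v = K*i/n_e: the Darcy
    flux (hydraulic conductivity times hydraulic gradient) divided by
    effective porosity. Specific yield is used here as an approximation
    of effective porosity, which is common for unconfined sandy
    aquifers."""
    return k_ft_per_day * gradient / effective_porosity

# Upper-end reported values: K = 184 ft/day, i = 0.002, Sy = 0.2
v = average_linear_velocity(184.0, 0.002, 0.2)  # ~1.8 ft/day
```

This reproduces the top of the 0.1 to 1.8 feet per day range given above; the lower end reflects smaller local gradients and conductivities.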
Caticha, Ariel
2011-03-01
In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
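The logarithmic relative entropy the abstract refers to can be written as follows (the notation here is generic, not copied from the tutorial):

```latex
S[p\,\|\,q] \;=\; -\int \mathrm{d}x\; p(x)\,\ln\frac{p(x)}{q(x)}
```

The ME method selects the posterior p that maximizes S[p||q] relative to the prior q, subject to normalization and whatever constraints encode the new information. Constraining expected values recovers MaxEnt; constraining a joint prior q(x, θ) to agree with observed data x′ yields p(θ) ∝ q(θ) q(x′|θ), which is Bayes' rule — the sense in which ME unifies the two methods.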
Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.
1995-01-01
The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is
Caticha, Ariel
2010-01-01
In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEn...
Mapping grasslands suitable for cellulosic biofuels in the Greater Platte River Basin, United States
Wylie, Bruce K.; Gu, Yingxin
2012-01-01
Biofuels are an important component in the development of alternative energy supplies, which is needed to achieve national energy independence and security in the United States. The most common biofuel product today in the United States is corn-based ethanol; however, its development is limited because of concerns about global food shortages, livestock and food price increases, and water demand increases for irrigation and ethanol production. Corn-based ethanol also potentially contributes to soil erosion, and pesticides and fertilizers affect water quality. Studies indicate that future potential production of cellulosic ethanol is likely to be much greater than grain- or starch-based ethanol. As a result, economics and policy incentives could, in the near future, encourage expansion of cellulosic biofuels production from grasses, forest woody biomass, and agricultural and municipal wastes. If production expands, cultivation of cellulosic feedstock crops, such as switchgrass (Panicum virgatum L.) and miscanthus (Miscanthus species), is expected to increase dramatically. The main objective of this study is to identify grasslands in the Great Plains that are potentially suitable for cellulosic feedstock (such as switchgrass) production. Producing ethanol from noncropland holdings (such as grassland) will minimize the effects of biofuel developments on global food supplies. Our pilot study area is the Greater Platte River Basin, which includes a broad range of plant productivity from semiarid grasslands in the west to the fertile corn belt in the east. The Greater Platte River Basin was the subject of related U.S. Geological Survey (USGS) integrated research projects.
Steele, Gregory V.; Cannia, James C.
1995-01-01
In 1993, a 3-year study was begun to describe the geohydrology and water quality of the North Platte River alluvial aquifer near Oshkosh, Garden County, Nebraska. The study's objectives are to evaluate the geohydrologic characteristics of the alluvial aquifer and to establish a network of observation wells for long-term monitoring of temporal variations and spatial distributions of nitrate and major-ion concentrations. Monitor wells were installed at 11 sites near Oshkosh. The geohydrology of the aquifer was characterized based on water-level measurements and two short-term aquifer tests. Bimonthly water samples were collected and analyzed for pH, specific conductance, water temperature, dissolved oxygen, and nutrients that included dissolved nitrate. Concentrations of major ions were defined from analyses of semiannual water samples. Analyses of the geohydrologic and water-quality data indicate that the aquifer is vulnerable to nitrate contamination. These data also show that nitrate concentrations in ground water flowing into and out of the study area are less than the U.S. Environmental Protection Agency's Maximum Contaminant Level of 10 milligrams per liter for drinking water. Ground water from Lost Creek Valley may be mixing with ground water in the North Platte River Valley, somewhat moderating nitrate concentrations near Oshkosh.
Wild birds have been shown to be significant sources of numerous types of pathogens that are relevant to humans and agriculture. The presence of large numbers of migratory birds in such a sensitive and important ecosystem as the Platte River in central Nebraska, USA, could potent...
Aggelopoulos, Nikolaos C
2015-08-01
Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Channel and island change in the lower Platte River, Eastern Nebraska, USA: 1855–2005
Joeckel, R. M.; Henebry, G. M.
2008-12-01
The lower Platte River has undergone considerable change in channel and bar characteristics since the mid-1850s in four 20-25 km-long study stretches. The same net effect of historical channel shrinkage that was detected upstream from Grand Island, Nebraska, can also be detected in the lower river, but differences in the behaviors of study stretches upstream and downstream from major tributaries are striking. The least relative decrease occurred downstream from the Loup River confluence, and the stretch downstream from the Elkhorn River confluence actually showed an increase in channel area during the 1940s. Bank erosion was also greater downstream of the tributaries between ca. 1860 and 1938/1941, particularly in stretch RG, which showed more lateral migration. The cumulative island area and the ratio of island area to channel area relative to the 1938/1941 baseline data showed comparatively great fluctuations in median island size in both downstream stretches. The erratic behavior of island size distributions over time indicates that large islands were accreted to the banks at different times, and that some small, newly-stabilized islands were episodically "flushed" out of the system. In the upstream stretches the stabilization of mobile bars to create new, small islands had a more consistent impact over time. Channel decrease by the abandonment of large, long-lived anabranches and by the in-place narrowing resulting from island accretion was more prominent in these upstream stretches. Across all of the study area, channel area appears to be stabilizing gradually as the rate of decrease lessens. This trend began earliest in stretch RG in the late 1950s and was accompanied by shifts in the size distributions of stabilized islands in that stretch into the 1960s. Elsewhere, even in the easternmost study stretch, stabilization was occurring by the late 1960s, the same time frame documented by investigations of the Platte system upstream of the study area. Comprehensive
A Concept for a Long Term Hydrologic Observatory in the South Platte River Basin
Ramirez, J. A.
2004-12-01
The intersection between: (1) the Rocky Mountains and developments occurring in high altitude fragile environments; (2) the metropolitan areas emerging at the interface of the mountains and the plains; (3) the irrigation occurring along rivers as they break from the mountains and snake across the Great Plains; and (4) the grasslands and the dryland farming that covers the vast amount of the Great Plains, represents a dynamic, complex, highly integrated ecosystem, stretching from Montana and North Dakota to New Mexico and Texas. This swath of land, and the rivers that cross it (headwaters of the Missouri, the Yellowstone, the North Platte, the South Platte, the Arkansas, the Cimarron, the Red, and the Pecos Rivers), represent a significant percentage of the landmass of the United States. Within this large area, besides tremendous increases in population in metropolitan areas, there are new energy developments, old hard rock mining concerns, new recreation developments, irrigation farms selling water to meet urban demands, new in-stream flow programs, struggling rural areas, and continued "mining" of ground water. The corresponding impacts are creating endangered and threatened species conflicts which require new knowledge to fully understand the measures needed to mitigate harmful ecosystem conditions. Within the Rocky Mountain/Great Plains interface, water is limiting and land is plentiful, presenting natural resource managers with a number of unique problems which demand a scale of integrated science not achieved in the past. For example, water is imported into a number of the streams flowing east from the Rocky Mountains. Nitrogen is deposited in pristine watersheds that rise up high in the Rocky Mountains. Cities capture spring runoff in reservoirs to use at a steady rate over the entire year, putting water into river systems normally moving low flows in the winter. Irrigation of both urban landscapes and farm fields may be at a scale that impacts climate
Hamel, M. J.; Rugg, M.L.; Pegg, M.A.; Patino, Reynaldo; Hammen, J.J.
2015-01-01
We assessed reproductive status, fecundity, egg size, and spawning dynamics of shovelnose sturgeon Scaphirhynchus platorynchus in the lower Platte River. Shovelnose sturgeon were captured throughout each year during 2011 and 2012 using a multi-gear approach designed to collect a variety of fish of varying sizes and ages. Fish were collected monthly for a laboratory assessment of reproductive condition. Female shovelnose sturgeon reached fork length at 50% maturity (FL50) at 547 mm and at a minimum length of 449 mm. The average female spawning cycle was 3–5 years. Mean egg count for adult females was 16 098 ± 1103 (SE), and mean egg size was 2.401 ± 0.051 (SE) mm. Total fecundity was positively correlated with length (r2 = 0.728; P < 0.05). Male shovelnose sturgeon reached FL50 at 579 mm and at a minimum length of 453 mm. The average male spawning cycle was 1–2 years. Reproductively viable male and female sturgeon occurred during the spring (March–May) and autumn (September–October) in both years, indicating spring and potential autumn spawning events. Shovelnose sturgeon in the lower Platte River are maturing at a shorter length and younger age compared to populations elsewhere. Although it is unknown if the change is plastic or evolutionary, unfavorable environmental conditions or over-harvest may lead to hastened declines compared to other systems.
Schaepe, Nathaniel J.; Soenksen, Philip J.; Rus, David L.
2014-01-01
The lower Platte River, Nebraska, provides drinking water, irrigation water, and in-stream flows for recreation, wildlife habitat, and vital habitats for several threatened and endangered species. The U.S. Geological Survey (USGS), in cooperation with the Lower Platte River Corridor Alliance (LPRCA) developed site-specific regression models for water-quality constituents at four sites (Shell Creek near Columbus, Nebraska [USGS site 06795500]; Elkhorn River at Waterloo, Nebr. [USGS site 06800500]; Salt Creek near Ashland, Nebr. [USGS site 06805000]; and Platte River at Louisville, Nebr. [USGS site 06805500]) in the lower Platte River corridor. The models were developed by relating continuously monitored water-quality properties (surrogate measurements) to discrete water-quality samples. These models enable existing web-based software to provide near-real-time estimates of stream-specific constituent concentrations to support natural resources management decisions. Since 2007, USGS, in cooperation with the LPRCA, has continuously monitored four water-quality properties seasonally within the lower Platte River corridor: specific conductance, water temperature, dissolved oxygen, and turbidity. During 2007 through 2011, the USGS and the Nebraska Department of Environmental Quality collected and analyzed discrete water-quality samples for nutrients, major ions, pesticides, suspended sediment, and bacteria. These datasets were used to develop the regression models. This report documents the collection of these various water-quality datasets and the development of the site-specific regression models. Regression models were developed for all four monitored sites. Constituent models for Shell Creek included nitrate plus nitrite, total phosphorus, orthophosphate, atrazine, acetochlor, suspended sediment, and Escherichia coli (E. coli) bacteria. Regression models that were developed for the Elkhorn River included nitrate plus nitrite, total Kjeldahl nitrogen, total phosphorus
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Moser, Matthew T.
2014-01-01
The central Platte River is an important resource in Nebraska. Its water flows among multiple channels and supports numerous beneficial uses such as drinking water, irrigation for agriculture, groundwater recharge, and recreational activities. The central Platte River valley is an important stopover for migratory waterfowl and cranes, such as the Whooping (Grus americana) and Sandhill Cranes (Grus canadensis), in their annual northward traversal of the Central Flyway. Waterfowl, cranes, and other migratory birds moving across international and intercontinental borders may provide long-range transportation for any microbial pathogen they harbor, particularly through the spread of feces. Samples were collected weekly in the study reach from three sites (upstream, middle, and downstream from the roosting locations) during the spring of 2009 and 2010. The samples were analyzed for avian influenza, Escherichia coli, Cryptosporidium, Giardia, Campylobacter, and Legionella. Analysis indicates that several types of fecal indicator bacteria and a range of viral, protozoan, and bacterial pathogens were present in Sandhill Crane excreta. These bacteria and pathogens were present at significantly higher frequencies and densities in water and sediments when the Sandhill Cranes were present, particularly during evening roosts within the Platte River environment.
Berry, Margaret E.; Lundstrom, Scott C.; Slate, Janet L.; Muhs, Daniel R.; Sawyer, David A.; VanSistine, D. Paco
2011-01-01
The Greater Platte River Basin area spans a central part of the Midcontinent and Great Plains from the Rocky Mountains on the west to the Missouri River on the east, and is defined to include drainage areas of the Platte, Niobrara, and Republican Rivers, the Rainwater Basin, and other adjoining areas overlying the northern High Plains aquifer. The Greater Platte River Basin contains abundant surficial deposits that were sensitive to, or are reflective of, the climate under which they formed: deposits from multiple glaciations in the mountain headwaters of the North and South Platte Rivers and from continental ice sheets in eastern Nebraska; fluvial terraces (ranging from Tertiary to Holocene in age) along the rivers and streams; vast areas of eolian sand in the Nebraska Sand Hills and other dune fields (recording multiple episodes of dune activity); thick sequences of windblown silt (loess); and sediment deposited in numerous lakes and wetlands. In addition, the Greater Platte River Basin overlies and contributes surface water to the High Plains aquifer, a nationally important groundwater system that underlies parts of eight states and sustains one of the major agricultural areas of the United States. The area also provides critical nesting habitat for birds such as plovers and terns, and roosting habitat for cranes and other migratory birds that travel through the Central Flyway of North America. This broad area, containing fragile ecosystems that could be further threatened by changes in climate and land use, has been identified by the USGS and the University of Nebraska-Lincoln as a region where intensive collaborative research could lead to a better understanding of climate change and what might be done to adapt to or mitigate its adverse effects to ecosystems and to humans. The need for robust data on the geologic framework of ecosystems in the Greater Platte River Basin has been acknowledged in proceedings from the 2008 Climate Change Workshop and in draft
Gu, Yingxin; Wylie, Bruce K.; Zhang, Li; Gilmanov, Tagir G.
2012-01-01
This study evaluates the carbon fluxes and trends and examines the environmental sustainability (e.g., carbon budget, source or sink) of the potential biofuel feedstock sites identified in the Greater Platte River Basin (GPRB). A 9-year (2000–2008) time series of net ecosystem production (NEP), a measure of net carbon absorption or emission by ecosystems, was used to assess the historical trends and budgets of carbon flux for grasslands in the GPRB. The spatially averaged annual NEP (ANEP) for grassland areas that are possibly suitable for biofuel expansion (productive grasslands) was 71–169 g C m−2 year−1 during 2000–2008, indicating a carbon sink (more carbon is absorbed than released) in these areas. The spatially averaged ANEP for areas not suitable for biofuel feedstock development (less productive or degraded grasslands) was −47 to 69 g C m−2 year−1 during 2000–2008, showing a weak carbon source or a weak carbon sink (carbon emitted is nearly equal to carbon absorbed). The 9-year pre-harvest cumulative ANEP was 1166 g C m−2 for the suitable areas (a strong carbon sink) and 200 g C m−2 for the non-suitable areas (a weak carbon sink). Results demonstrate and confirm that our method of dynamic modeling of ecosystem performance can successfully identify areas desirable and sustainable for future biofuel feedstock development. This study provides useful information for land managers and decision makers to make optimal land use decisions regarding biofuel feedstock development and sustainability.
Emotional inferences by pragmatics
Iza-Miqueleiz, Mauricio
2017-01-01
It has long been taken for granted that, in the course of reading a text, world knowledge is often required in order to establish coherent links between sentences (McKoon & Ratcliff 1992, Iza & Ezquerro 2000). The content grasped from a text turns out to be strongly dependent upon the reader's additional knowledge, which allows a coherent interpretation of the text as a whole. The world knowledge directing the inference may be of a distinctive nature. Gygax et al. (2007) showed that m...
Hall, Brent M.; Rus, David L.
2013-01-01
The Platte River is a vital natural resource for the people, plants, and animals of Nebraska. A recent study quantified water use by riparian woodlands along central reaches of the Platte River, Nebraska, finding that water use was mainly regulated below maximum predicted levels. A comparative study was launched through a cooperative partnership between the U.S. Geological Survey, the Central Platte Natural Resources District, the Nebraska Department of Natural Resources, and the Nebraska Environmental Trust to compare water use of a riparian woodland with that of a grazed riparian grassland along the central Platte River. This report describes the results of the 3-year study by the U.S. Geological Survey to measure the evapotranspiration (ET) rates in the two riparian vegetation communities. Evapotranspiration was measured during 2008–09 and 2011 using the eddy-covariance method at a riparian woodland near Odessa, hereinafter referred to as the “woodland site,” and a riparian grassland pasture near Elm Creek, hereinafter referred to as the “grassland site.” Overall, annual ET totals at the grassland site were 90 percent of the annual ET measured at the woodland site, with averages of 653 millimeters (mm) and 726 mm, respectively. Evapotranspiration rates were similar at the grassland site and the woodland site during the spring and fall seasons, but at the woodland site ET rates were higher than those of the grassland site during the peak-growth summer months of June through August. These seasonal differences and the slightly lower ET rates at the grassland site were likely the result of differing plant communities, disturbance effects related to grazing and flooding, and climatic differences between the sites. The annual water balance was calculated for each site and indicated that the predominant factors in the water balance at both sites were ET and precipitation. Annual precipitation for the study period ranged from near to above the normal
Smith, Bruce D.; Abraham, Jared D.; Cannia, James C.; Hill, Patricia
2009-01-01
This report is a release of digital data from a helicopter electromagnetic and magnetic survey that was conducted during June 2008 in areas of western Nebraska as part of a joint hydrologic study by the North Platte Natural Resource District, South Platte Natural Resource District, and U.S. Geological Survey. The objective of the contracted survey, conducted by Fugro Airborne, Ltd., was to improve the understanding of the relationship between surface water and groundwater systems critical to developing groundwater models used in management programs for water resources. The survey covered 1,375 line km (854 line mi). A unique aspect of this survey is the flight line layout. One set of flight lines was flown paralleling each side of the east-west trending North Platte River and Lodgepole Creek. The survey also included widely separated (10 km) perpendicular north-south lines. The success of this survey design depended on a well-understood regional hydrogeologic framework and model developed by the Cooperative Hydrologic Study of the Platte River Basin. Resistivity variations along lines could be related to this framework. In addition to these lines, more traditional surveys consisting of parallel flight lines separated by about 270 m were carried out for one block in each of the drainages. These surveys helped to establish the spatial variations of the resistivity of hydrostratigraphic units. The electromagnetic equipment consisted of six different coil-pair orientations that measured resistivity at separate frequencies from about 400 Hz to about 140,000 Hz. The electromagnetic data along flight lines were converted to electrical resistivity. The resulting line data were converted to geo-referenced grids and maps, which are included with this report. In addition to the electromagnetic data, total field magnetic data and digital elevation data were collected. Data released in this report consist of data along flight lines, digital grids, and digital maps of the
Sherfy, Mark H.; Anteau, Michael J.; Shaffer, Terry L.; Sovada, Marsha A.; Stucker, Jennifer H.
2012-01-01
Federally listed least terns (Sternula antillarum) and piping plovers (Charadrius melodus) nest on riverine sandbars on many major midcontinent river systems. On the Central Platte River, availability of sandbar habitat is limited, and both species nest on excavated sandpits in the river's floodplain. However, the extent to which sandpit-nesting birds use riverine habitats for foraging is unknown. We evaluated use of foraging habitats by least terns and piping plovers by collecting data on movements, behavior, foraging habitat, and productivity. We radiomarked 16 piping plovers and 23 least terns in 2009-2010 and monitored their movements using a network of fixed telemetry dataloggers. Piping plovers were detected primarily by the datalogger located in their nesting sandpit, whereas least terns were more frequently detected on dataloggers outside of the nesting sandpit. Telemetry data and behavioral observations showed that least terns tended to concentrate at the Kearney Canal Diversion Gates, where forage fish were apparently readily available. Fish sampling data suggested that forage fish were more abundant in riverine than in sandpit habitats, and behavioral observations showed that least terns foraged more frequently in riverine than in sandpit habitats. Piping plovers tended to forage in wet substrates along sandpit shorelines, but also used dry substrates and sandpit interior habitats. The greater mobility of least terns makes a wider range of potential foraging habitats available during brood rearing, making them able to exploit concentrations of fish outside the nesting colony. Thus, our data suggest that different spatial scales should be considered in managing nesting and foraging habitat complexes for piping plovers and least terns.
The annual Sandhill crane (Grus canadensis) migration through Nebraska is thought to be a major source of fecal pollution to the Platte River, but of unknown human health risk. To better understand potential risks, the presence of Campylobacter species and fecal bacteria were exa...
Galatowitsch, Susan M.; Larson, Diane L.; Larson, Jennifer L.
2016-01-01
Invasive plants, such as Phragmites australis, can profoundly affect channel environments of large rivers by stabilizing sediments and altering water flows. Invasive plant removal is considered necessary where restoration of dynamic channels is needed to provide critical habitat for species of conservation concern. However, these programs are widely reported to be inefficient. Post-control reinvasion is frequent, suggesting increased attention is needed to prevent seed regeneration. To develop more effective responses to this invader in the Central Platte River (Nebraska, USA), we investigated several aspects of Phragmites seed ecology potentially linked to post-control reinvasion, in comparison to other common species: extent of viable seed production, importance of water transport, and regeneration responses to hydrology. We observed that although Phragmites seed does not mature until very late in the ice-free season, populations produce significant amounts of viable seed (>50 % of filled seed). Most seed transported via water in the Platte River are invasive perennial species, although Phragmites abundances are much lower than species such as Lythrum salicaria, Cyperus esculentus and Phalaris arundinacea. Seed regeneration of Phragmites varies greatly depending on hydrology, especially timing of water level changes. Flood events coinciding with the beginning of seedling emergence reduced establishment by as much as 59 % compared to flood events that occurred a few weeks later. Results of these investigations suggest that prevention of seed set (i.e., by removal of flowering culms) should be a priority in vegetation stands not being treated annually. After seeds are in the seedbank, preventing reinvasion using prescribed flooding has a low chance of success given that Phragmites can regenerate in a wide variety of hydrologic microsites.
DEFF Research Database (Denmark)
Katajainen, Jyrki
2008-01-01
In this project the goal is to develop the safe * family of containers for the CPH STL. The containers to be developed should be safer and more reliable than any of the existing implementations. A special focus should be put on strong exception safety since none of the existing prototypes available...
International Nuclear Information System (INIS)
Froissart, Marcel
1976-01-01
Strong interactions are introduced through their most obvious aspect: nuclear forces. Within the hadron family, the nucleon octet, the Ω⁻ decuplet, and the quark triplet are considered in turn. With pion exchange placed at the origin of nuclear forces, low-energy phenomena are described, the force being explained as the exchange of a structure corresponding to a Regge trajectory in a variable rotation state rather than the exchange of a well-defined particle. At high energies the concepts of pomeron, parton, and stratons are introduced, and pionization and fragmentation are briefly distinguished [fr]
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km2 [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km2 and fecal DNA, 6.65 ± 2.37 tigers/100 km2). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
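As a hedged illustration of why combining sources sharpens the estimate, a naive inverse-variance pooling of the two single-source estimates quoted above reproduces the qualitative result. This is not the authors' joint spatial capture–recapture model, which shares information at the level of individual detections; it is only a back-of-envelope sketch showing that the pooled standard deviation falls below either input's.

```python
# Naive inverse-variance pooling of two independent density estimates.
# NOT the authors' joint spatial capture-recapture model -- only a
# back-of-envelope check that combining sources increases precision.

def pool(estimates):
    """estimates: iterable of (mean, sd) pairs; returns (pooled_mean, pooled_sd)."""
    weights = [1.0 / sd ** 2 for _, sd in estimates]
    total = sum(weights)
    mean = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    return mean, total ** -0.5

photo = (12.02, 3.02)  # photographic estimate, tigers/100 km^2
scat = (6.65, 2.37)    # fecal-DNA estimate, tigers/100 km^2
mean, sd = pool([photo, scat])
print(f"pooled: {mean:.2f} +/- {sd:.2f} tigers/100 km^2")
```

The pooled uncertainty here is smaller than either single-source uncertainty, matching the paper's finding that the combined model is the most precise.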
Battaglin, W. A.; Bradley, P. M.; Paschke, S.; Plumlee, G. S.; Kimbrough, R.
2016-12-01
In September 2013, heavy rainfall caused severe flooding in Rocky Mountain National Park (ROMO) and environs extending downstream into the main stem of the South Platte River. In ROMO, flooding damaged infrastructure and local roads. In the tributary canyons, flooding damaged homes, septic systems, and roads. On the plains, flooding damaged several wastewater treatment plants. The occurrence and fate of pharmaceuticals and other contaminants of emerging concern (CECs) in streams during flood conditions is poorly understood. We assessed the occurrence and fate of CECs in this flood by collecting water samples (post-peak flow) from 4 headwaters sites in ROMO, 7 sites on tributaries to the South Platte River, and 6 sites on the main stem of the South Platte; and by collecting flood sediment samples (post-flood depositional) from 14 sites on tributaries and 10 sites on the main stem. Water samples were analysed for 110 pharmaceuticals and 69 wastewater indicators. Sediment samples were analysed for 57 wastewater indicators. Concentrations and numbers of CECs detected in water increased markedly as floodwaters moved downstream and some were not diluted despite the large flow increases in downstream reaches of the affected rivers. For example, in the Cache la Poudre River in ROMO, no pharmaceuticals and 1 wastewater indicator compound (camphor) were detected. At Greeley, the Cache la Poudre was transporting 19 pharmaceuticals [total concentration of 0.69 parts-per-billion (ppb)] and 22 wastewater indicators (total concentration of 2.81 ppb). In the South Platte downstream from Greeley, 24 pharmaceuticals (total concentration of 1.47 ppb) and 24 wastewater indicators (total concentration of 2.35 ppb) were detected. Some CECs such as the combustion products pyrene, fluoranthene, and benzo(a)pyrene were detected only at sub-ppb concentrations in water, but were detected at concentrations in the hundreds of ppb in flood sediment samples.
DEFF Research Database (Denmark)
Andersen, Jesper
2009-01-01
Collateral evolution is the problem of updating several library-using programs in response to API changes in the used library. In this dissertation we address the issue of understanding collateral evolutions by automatically inferring a high-level specification of the changes evident in a given set ...... specifications inferred by spdiff in Linux are shown. We find that the inferred specifications concisely capture the actual collateral evolution performed in the examples....
System Support for Forensic Inference
Gehani, Ashish; Kirchner, Florent; Shankar, Natarajan
Digital evidence is playing an increasingly important role in prosecuting crimes. The reasons are manifold: financially lucrative targets are now connected online, systems are so complex that vulnerabilities abound and strong digital identities are being adopted, making audit trails more useful. If the discoveries of forensic analysts are to hold up to scrutiny in court, they must meet the standard for scientific evidence. Software systems are currently developed without consideration of this fact. This paper argues for the development of a formal framework for constructing “digital artifacts” that can serve as proxies for physical evidence; a system so imbued would facilitate sound digital forensic inference. A case study involving a filesystem augmentation that provides transparent support for forensic inference is described.
Burton, Bethany L.; Johnson, Michaela R.; Vrabel, Joseph; Imig, Brian H.; Payne, Jason; Tompkins, Ryan E.
2009-01-01
Due to water resources of portions of the North Platte River basin being designated as over-appropriated by the State of Nebraska Department of Natural Resources (DNR), the North Platte Natural Resources District (NPNRD), in cooperation with the DNR, is developing an Integrated Management Plan (IMP) for groundwater and surface water in the NPNRD. As part of the IMP, a three-dimensional numerical finite difference groundwater-flow model is being developed to evaluate the effectiveness of using leakage of water from selected irrigation canal systems to manage groundwater recharge. To determine the relative leakage potential of the upper 8 m of the selected irrigation canals within the North Platte River valley in western Nebraska and eastern Wyoming, the U.S. Geological Survey performed a land-based capacitively coupled (CC) resistivity survey along nearly 630 km of 13 canals and 2 laterals in 2004 and from 2007 to 2009. These 13 canals were selected from the 27 irrigation canals in the North Platte valley due to their location, size, irrigated area, and relation to the active North Platte valley flood plain and related paleochannels and terrace deposits where most of the saturated thickness in the alluvium exists. The resistivity data were then compared to continuous cores at 62 test holes down to a maximum depth of 8 m. Borehole electrical conductivity (EC) measurements at 36 of those test holes were done to correlate resistivity values with grain sizes in order to determine potential vertical leakage along the canals as recharge to the underlying alluvial aquifer. The data acquired in 2004, as well as the 25 test hole cores from 2004, are presented elsewhere. These data were reprocessed using the same updated processing and inversion algorithms used on the 2007 through 2009 datasets, providing a consistent and complete dataset for all collection periods. Thirty-seven test hole cores and borehole electrical conductivity measurements were acquired based on the 2008
Energy Technology Data Exchange (ETDEWEB)
Petrov, S.
1996-10-01
Languages with a solvable implication problem but without complete and consistent systems of inference rules ("poor" languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a "poor" language is stated independently of the syntax of the language or its rules. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
Bayesian statistical inference
Directory of Open Access Journals (Sweden)
Bruno De Finetti
2017-04-01
This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian statistical inference is one of the last fundamental philosophical papers in which we can find the essential De Finetti approach to statistical inference.
Geometric statistical inference
International Nuclear Information System (INIS)
Periwal, Vipul
1999-01-01
A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined
Bailer-Jones, Coryn A. L.
2017-04-01
Preface; 1. Probability basics; 2. Estimation and uncertainty; 3. Statistical models and inference; 4. Linear models, least squares, and maximum likelihood; 5. Parameter estimation: single parameter; 6. Parameter estimation: multiple parameters; 7. Approximating distributions; 8. Monte Carlo methods for inference; 9. Parameter estimation: Markov chain Monte Carlo; 10. Frequentist hypothesis testing; 11. Model comparison; 12. Dealing with more complicated problems; References; Index.
Nagao, Makoto
1990-01-01
Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in the artificial intellig
Logical inference and evaluation
International Nuclear Information System (INIS)
Perey, F.G.
1981-01-01
Most methodologies of evaluation currently used are based upon the theory of statistical inference. It is generally perceived that this theory is not capable of dealing satisfactorily with what are called systematic errors. Theories of logical inference should be capable of treating all of the information available, including that not involving frequency data. A theory of logical inference is presented as an extension of deductive logic via the concept of plausibility and the application of group theory. Some conclusions, based upon the application of this theory to evaluation of data, are also given
Probability and Statistical Inference
Prosper, Harrison B.
2006-01-01
These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.
On quantum statistical inference
Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have
2018-02-15
expressed a variety of inference techniques on discrete and continuous distributions: exact inference, importance sampling, Metropolis-Hastings (MH...without redoing any math or rewriting any code. And although our main goal is composable reuse, our performance is also good because we can use...control paths. • The Hakaru language can express mixtures of discrete and continuous distributions, but the current disintegration transformation
Introductory statistical inference
Mukhopadhyay, Nitis
2014-01-01
This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist
Gu, Yingxin; Wylie, Bruce K.; Howard, Daniel M.; Phuyal, Khem P.; Ji, Lei
2013-01-01
In this study, we developed a new approach that adjusted normalized difference vegetation index (NDVI) pixel values that were near saturation to better characterize the cropland performance (CP) in the Greater Platte River Basin (GPRB), USA. The relationship between NDVI and the ratio vegetation index (RVI) at high NDVI values was investigated, and an empirical equation for estimating saturation-adjusted NDVI (NDVIsat_adjust) based on RVI was developed. A 10-year (2000–2009) NDVIsat_adjust data set was developed using 250-m 7-day composite historical eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. The growing season averaged NDVI (GSN), which is a proxy for ecosystem performance, was estimated and long-term NDVI non-saturation- and saturation-adjusted cropland performance (CPnon_sat_adjust, CPsat_adjust) maps were produced over the GPRB. The final CP maps were validated using National Agricultural Statistics Service (NASS) crop yield data. The relationship between CPsat_adjust and the NASS average corn yield data (r = 0.78, 113 samples) is stronger than the relationship between CPnon_sat_adjust and the NASS average corn yield data (r = 0.67, 113 samples), indicating that the new CPsat_adjust map reduces the NDVI saturation effects and is in good agreement with the corn yield ground observations. Results demonstrate that the NDVI saturation adjustment approach improves the quality of the original GSN map and better depicts the actual vegetation conditions of the GPRB cropland systems.
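The empirical saturation-adjustment equation itself is not reproduced in the abstract, but the exact algebraic identity linking NDVI and RVI illustrates why RVI remains sensitive where NDVI saturates near 1. The band reflectance values below are invented for illustration.

```python
# Exact identities relating NDVI and the ratio vegetation index (RVI).
# Reflectance values are invented; the paper's empirical adjustment
# equation is not reproduced here.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def rvi(nir, red):
    return nir / red

def ndvi_from_rvi(r):
    # NDVI = (RVI - 1) / (RVI + 1), so RVI keeps growing as NDVI -> 1
    return (r - 1.0) / (r + 1.0)

for nir, red in [(0.5, 0.05), (0.5, 0.02)]:  # increasingly dense canopy
    n, r = ndvi(nir, red), rvi(nir, red)
    assert abs(n - ndvi_from_rvi(r)) < 1e-12
    print(f"NDVI={n:.3f}  RVI={r:.1f}")  # NDVI 0.818 -> 0.923; RVI 10 -> 25
```

A doubling-plus of RVI (10 to 25) moves NDVI by only about 0.1, which is the saturation behavior the adjustment approach exploits.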
Dietsch, Benjamin J.; Godberson, Julie A.; Steele, Gregory V.
2009-01-01
The Nebraska Department of Natural Resources approved instream-flow appropriations on the Platte River to maintain fish communities, whooping crane roost habitat, and wet meadows used by several wild bird species. In the lower Platte River region, the Nebraska Game and Parks Commission owns an appropriation filed to maintain streamflow for fish communities between the Platte River confluence with the Elkhorn River and the mouth of the Platte River. Because Elkhorn River flow is an integral part of the flow in the reach addressed by this appropriation, the Upper Elkhorn and Lower Elkhorn Natural Resources Districts are involved in overall management of anthropogenic effects on the availability of surface water for instream requirements. The Physical Habitat Simulation System (PHABSIM) and other estimation methodologies were used previously to determine instream requirements for Platte River biota, which led to the filing of five water appropriations applications with the Nebraska Department of Natural Resources in 1993 by the Nebraska Game and Parks Commission. One of these requested instream-flow appropriations of 3,700 cubic feet per second was for the reach from the Elkhorn River to the mouth of the Platte River. Four appropriations were granted with modifications in 1998, by the Nebraska Department of Natural Resources. Daily streamflow data for the periods of record were summarized for 17 streamflow-gaging stations in Nebraska to evaluate streamflow characteristics, including low-flow intervals for consecutive durations of 1, 3, 7, 14, 30, 60, and 183 days. Temporal trends in selected streamflow statistics were not adjusted for variability in precipitation. Results indicated significant positive temporal trends in annual flow for the period of record at eight streamflow-gaging stations - Platte River near Duncan (06774000), Platte River at North Bend (06796000), Elkhorn River at Neligh (06798500), Logan Creek near Uehling (06799500), Maple Creek near Nickerson
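The consecutive-duration low-flow statistics mentioned above are conventionally computed as the minimum of the n-day moving average of daily streamflow. A minimal sketch, with invented daily values:

```python
# n-day low flow: minimum of the n-day moving average of daily streamflow.
# Daily flow values below are invented for illustration.

def n_day_low_flow(daily_flow, n):
    means = [sum(daily_flow[i:i + n]) / n
             for i in range(len(daily_flow) - n + 1)]
    return min(means)

flows = [120, 95, 80, 76, 74, 90, 150, 200, 180, 160]  # hypothetical cfs
print(n_day_low_flow(flows, 1))  # 74: the lowest single day
print(n_day_low_flow(flows, 3))  # lowest 3-day average, about 76.7
```

The same function applied with n of 1, 3, 7, 14, 30, 60, and 183 over each year of record yields the duration series summarized in the report.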
Type Inference with Inequalities
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff
1991-01-01
Type inference can be phrased as constraint-solving over types. We consider an implicitly typed language equipped with recursive types, multiple inheritance, 1st order parametric polymorphism, and assignments. Type correctness is expressed as satisfiability of a possibly infinite collection of (monotonic) inequalities on the types of variables and expressions. A general result about systems of inequalities over semilattices yields a solvable form. We distinguish between deciding typability (the existence of solutions) and type inference (the computation of a minimal solution). In our case, both...
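The constraint-solving view can be sketched concretely: find the least assignment of types satisfying monotonic inequalities over a semilattice, by iterating the join (least upper bound) to a fixed point. The lattice, constraints, and variable names below are invented for illustration and are not from the paper.

```python
# Toy instance of type inference as constraint solving: compute the minimal
# solution of monotonic inequalities over a small type semilattice by
# Kleene iteration of the join. Lattice and constraints are illustrative.

# Lattice: Bot <= Int <= Num <= Any and Bot <= Str <= Any.
# Each type maps to its upward closure, listed in order of increasing rank.
ORDER = {
    "Bot": ["Bot", "Int", "Num", "Str", "Any"],
    "Int": ["Int", "Num", "Any"],
    "Num": ["Num", "Any"],
    "Str": ["Str", "Any"],
    "Any": ["Any"],
}

def join(a, b):
    # least upper bound: lowest-ranked common element of the upward closures
    return next(t for t in ORDER[a] if t in ORDER[b])

def minimal_solution(constraints):
    # constraints: pairs (low, var) meaning "low <= var", where low is a
    # type constant or another variable
    env = {var: "Bot" for _, var in constraints}
    changed = True
    while changed:  # monotone updates on a finite lattice: terminates
        changed = False
        for low, var in constraints:
            bound = env.get(low, low)
            new = join(env[var], bound)
            if new != env[var]:
                env[var], changed = new, True
    return env

# x := 1; y := x; y := 2.5  ~>  Int <= x, x <= y, Num <= y
env = minimal_solution([("Int", "x"), ("x", "y"), ("Num", "y")])
print(env)  # minimal solution: {'x': 'Int', 'y': 'Num'}
```

Typability fails exactly when iteration would need a bound above the top of the lattice; computing the fixed point as above yields the minimal solution the abstract distinguishes.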
Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda
2010-01-01
Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
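The inference view of randomness can be sketched with a single, deliberately simple "regular" alternative: a repetition-biased binary chain. The 0.8 repetition probability and the one-model comparison are illustrative assumptions; the paper considers much richer families of regular processes.

```python
import math

# Sketch of randomness as statistical inference: score a binary sequence
# by the log evidence for a fair Bernoulli process over one simple
# repetition-biased "regular" process. The 0.8 repetition probability is
# an invented illustrative parameter.

def randomness_score(seq, p_repeat=0.8):
    log_random = len(seq) * math.log2(0.5)          # fair coin
    log_regular = math.log2(0.5)                    # first symbol
    for prev, cur in zip(seq, seq[1:]):
        log_regular += math.log2(p_repeat if cur == prev else 1 - p_repeat)
    return log_random - log_regular  # > 0: better explained as random

print(randomness_score("HHHHHHHH"))  # negative: better explained as regular
print(randomness_score("HHTHTTHT"))  # positive: better explained as random
```

Eight heads in a row scores negative, matching the intuition in the abstract that such a sequence "does not seem very random".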
Watson, Jane
2007-01-01
Inference, or decision making, is seen in curriculum documents as the final step in a statistical investigation. For a formal statistical enquiry this may be associated with sophisticated tests involving probability distributions. For young students without the mathematical background to perform such tests, it is still possible to draw informal…
Hybrid Optical Inference Machines
1991-09-27
with labels. Now, a set of facts can be generated in the dyadic form "u, R 1,2" ... Eichmann and Caulfield [19] consider the same type of encoding schemes. These architectures are based primarily on optical inner ...
[19] G. Eichmann and H. J. Caulfield, "Optical Learning (Inference) ...
Explanatory Preferences Shape Learning and Inference.
Lombrozo, Tania
2016-10-01
Explanations play an important role in learning and inference. People often learn by seeking explanations, and they assess the viability of hypotheses by considering how well they explain the data. An emerging body of work reveals that both children and adults have strong and systematic intuitions about what constitutes a good explanation, and that these explanatory preferences have a systematic impact on explanation-based processes. In particular, people favor explanations that are simple and broad, with the consequence that engaging in explanation can shape learning and inference by leading people to seek patterns and favor hypotheses that support broad and simple explanations. Given the prevalence of explanation in everyday cognition, understanding explanation is therefore crucial to understanding learning and inference. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inference rule and problem solving
Energy Technology Data Exchange (ETDEWEB)
Goto, S
1982-04-01
Intelligent information processing refers to the prospect of having human intellectual activity carried out on a computer, in which inference, in place of ordinary calculation, is used as the basic operational mechanism of information processing. Many inference rules are derived from syllogisms in formal logic. The problem of programming this inference function is referred to as problem solving. Although inference and problem solving are logically in close relation, the calculation ability of current computers is at a low level for inference. To clarify the relation between inference and computers, nonmonotonic logic has been considered. The paper deals with the above topics. 16 references.
Stochastic processes inference theory
Rao, Malempati M
2014-01-01
This is the revised and enlarged 2nd edition of the authors’ original text, which was intended to be a modest complement to Grenander's fundamental memoir on stochastic processes and related inference theory. The present volume gives a substantial account of regression analysis, both for stochastic processes and measures, and includes recent material on Ridge regression with some unexpected applications, for example in econometrics. The first three chapters can be used for a quarter or semester graduate course on inference on stochastic processes. The remaining chapters provide more advanced material on stochastic analysis suitable for graduate seminars and discussions, leading to dissertation or research work. In general, the book will be of interest to researchers in probability theory, mathematical statistics and electrical and information theory.
Making Type Inference Practical
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Oxhøj, Nicholas; Palsberg, Jens
1992-01-01
We present the implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding. The algorithm significantly improves our previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way. Also, the complexity has been dramatically improved, from exponential time to low polynomial time. The implementation uses the techniques of incremental graph construction and constraint template instantiation to avoid representing intermediate results, doing superfluous work, and recomputing type information. Experiments indicate that the implementation type checks as much as 100 lines per second. This results in a mature product, on which a number of tools can be based, for example a safety tool, an image compression tool, a code optimization tool, and an annotation tool. This may make type inference for object...
Directory of Open Access Journals (Sweden)
João Paulo Monteiro
2001-12-01
Russell's The Problems of Philosophy tries to establish a new theory of induction, at the same time that Hume is there accused of an "irrational scepticism about induction". But a careful analysis of the theory of knowledge explicitly acknowledged by Hume reveals that, contrary to the standard interpretation in the 20th century, possibly influenced by Russell, Hume deals exclusively with causal inference (which he never classifies as "causal induction", although now we are entitled to do so), never with inductive inference in general, namely generalizations about sensible qualities of objects (whether, e.g., "all crows are black" or not is not among Hume's concerns). Russell's theories are thus only false alternatives to Hume's, in 1912 or in his 1948.
Causal inference in econometrics
Kreinovich, Vladik; Sriboonchitta, Songsak
2016-01-01
This book is devoted to the analysis of causal inference which is one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other one, or whether these two phenomena have a common cause. This analysis is the main focus of this volume. To get a good understanding of the causal inference, it is important to have models of economic phenomena which are as accurate as possible. Because of this need, this volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained by using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.
Active inference and learning.
Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O'Doherty, John; Pezzulo, Giovanni
2016-09-01
This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Learning Convex Inference of Marginals
Domke, Justin
2012-01-01
Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach suffers difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined to be the minimization of a convex function, inspired by free energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main ...
Probabilistic inductive inference: a survey
Ambainis, Andris
2001-01-01
Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.
Kinzel, Paul J.
2009-01-01
Fluvial geomorphic data were collected by the United States Geological Survey from July 2005 to June 2008 (a time period within water years 2005 to 2008) to monitor the effects of habitat enhancement activities conducted in the Platte River Whooping Crane Maintenance Trust's Uridil Property, located along the Platte River, Nebraska. The activities involved the removal of vegetation and sand from the tops of high permanent islands and the placement of the sand into the active river channel. This strategy was intended to enhance habitat for migratory water birds by lowering the elevations of the high islands, thereby eliminating a visual obstruction for roosting birds. It was also thought that the bare sand on the lowered island surfaces could serve as potential habitat for nesting water birds. Lastly, the project supplied a local source of sediment to the river to test the hypothesis that this material could contribute to the formation of lower sandbars and potential nesting sites downstream. Topographic surveys on the islands and along river transects were used to quantify the volume of removed sand and track the storage and movement of the introduced sand downstream. Sediment samples were also collected to map the spatial distribution of river bed sediment sizes before and after the management activities. While the project lowered the elevation of high islands, observations of the sand addition indicated the relatively fine-grained sand that was placed in the active river channel was rapidly transported by the flowing water. Topographic measurements made 3 months after the sand addition along transects in the area of sediment addition showed net aggradation over measurements made in 2005. In the year following the sand addition, 2007, elevated river flows from local rain events generally were accompanied by net degradation along transects within the area of sediment addition. In the spring of 2008, a large magnitude flow event of approximately 360 cubic meters per
Nonparametric statistical inference
Gibbons, Jean Dickinson
2010-01-01
Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference. -Eugenia Stoimenova, Journal of Applied Statistics, June 2012 ... one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. ... a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students. -Biometrics, 67, September 2011 This excellently presente
DEFF Research Database (Denmark)
Andersen, Jesper; Lawall, Julia
2010-01-01
A key issue in maintaining Linux device drivers is the need to keep them up to date with respect to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spdiff, that identifies common changes...... developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect...
African Journals Online (AJOL)
denise
chlorophyll concentration, intermediate mixed layer and deep euphotic depth. These relationships .... Prior to the training process, connection weights for each node need to ..... shallow and deeper euphotic depths (30 m), small and ...
Irons, Trevor P.; Hobza, Christopher M.; Steele, Gregory V.; Abraham, Jared D.; Cannia, James C.; Woodward, Duane D.
2012-01-01
Surface nuclear magnetic resonance, a noninvasive geophysical method, measures a signal directly related to the amount of water in the subsurface. This allows for low-cost quantitative estimates of hydraulic parameters. In practice, however, additional factors influence the signal, complicating interpretation. The U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, evaluated whether hydraulic parameters derived from surface nuclear magnetic resonance data could provide valuable input into groundwater models used for evaluating water-management practices. Two calibration sites in Dawson County, Nebraska, were chosen based on previous detailed hydrogeologic and geophysical investigations. At both sites, surface nuclear magnetic resonance data were collected, and derived parameters were compared with results from four constant-discharge aquifer tests previously conducted at those same sites. Additionally, borehole electromagnetic-induction flowmeter data were analyzed as a less-expensive surrogate for traditional aquifer tests. Building on recent work, a novel surface nuclear magnetic resonance modeling and inversion method was developed that incorporates electrical conductivity and effects due to magnetic-field inhomogeneities, both of which can have a substantial impact on the data. After comparing surface nuclear magnetic resonance inversions at the two calibration sites, the nuclear magnetic-resonance-derived parameters were compared with previously performed aquifer tests in the Central Platte Natural Resources District. This comparison served as a blind test for the developed method. The nuclear magnetic-resonance-derived aquifer parameters were in agreement with results of aquifer tests where the environmental noise allowed data collection and the aquifer test zones overlapped with the surface nuclear magnetic resonance testing. In some cases, the previously performed aquifer tests were not designed fully to characterize
Contingency inferences driven by base rates: Valid by sampling
Directory of Open Access Journals (Sweden)
Florian Kutzner
2011-04-01
Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
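The simulation logic described above can be sketched as follows; the population cell probabilities, sample sizes, and decision rule are illustrative assumptions, not the exact settings of the published simulation.

```python
import random

# PC rule on sampled 2x2 data: when the marginal skews of X and Y are
# aligned, infer a positive contingency. Here the population contingency
# is positive by construction, so aligned-skew inferences are correct.
def pc_sign_accuracy(delta, n_samples=2000, n=20, seed=1):
    """Fraction of samples on which the PC rule recovers the (positive)
    population contingency P(1,1) - P(x=1)P(y=1) = delta; marginals 0.7/0.7."""
    rng = random.Random(seed)
    p = {(1, 1): 0.49 + delta, (1, 0): 0.21 - delta,
         (0, 1): 0.21 - delta, (0, 0): 0.09 + delta}
    cells, weights = zip(*p.items())
    hits = 0
    for _ in range(n_samples):
        draws = rng.choices(cells, weights=weights, k=n)
        x_skew = sum(x for x, _ in draws) - n / 2
        y_skew = sum(y for _, y in draws) - n / 2
        if x_skew * y_skew > 0:        # aligned skews
            hits += 1
    return hits / n_samples

print(pc_sign_accuracy(0.1))   # well above the 0.5 chance level
```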
Bayesian methods for hackers probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
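The computation-first flavour of this approach can be glimpsed even without PyMC; below is a minimal conjugate (beta-binomial) update in plain Python, an illustrative stand-in rather than an example from the book.

```python
# Beta-binomial conjugate update: a Beta(alpha, beta) prior on a coin's bias
# plus binomial flip data yields the posterior in closed form, so this toy
# case needs no sampler at all.
def beta_update(alpha, beta, heads, tails):
    return alpha + heads, beta + tails

def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

a, b = beta_update(1, 1, heads=7, tails=3)   # uniform prior, 10 flips
print((a, b), beta_mean(a, b))   # Beta(8, 4); posterior mean 8/12 ≈ 0.667
```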
Dopamine, reward learning, and active inference
Directory of Open Access Journals (Sweden)
Thomas FitzGerald
2015-11-01
Temporal difference learning models propose phasic dopamine signalling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on an hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behaviour. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
Dopamine, reward learning, and active inference.
FitzGerald, Thomas H B; Dolan, Raymond J; Friston, Karl
2015-01-01
Temporal difference learning models propose phasic dopamine signaling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on an hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behavior. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
Feature Inference Learning and Eyetracking
Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.
2009-01-01
Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…
An Inference Language for Imaging
DEFF Research Database (Denmark)
Pedemonte, Stefano; Catana, Ciprian; Van Leemput, Koen
2014-01-01
We introduce iLang, a language and software framework for probabilistic inference. The iLang framework enables the definition of directed and undirected probabilistic graphical models and the automated synthesis of high performance inference algorithms for imaging applications. The iLang framewor...
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahn, Sungsoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)
2017-05-25
Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used to resolve the issue in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables), confirm that the newly proposed algorithms outperform and generalize MF and BP.
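The variational lower bound that mean-field methods provide on the log partition function can be checked directly on a toy model; the couplings and fields below are arbitrary illustrative values, and this naive bound is only the baseline that the gauged schemes improve upon.

```python
import itertools
import math

# Toy pairwise binary model over x in {-1, +1}^3.
J = {(0, 1): 0.5, (1, 2): -0.3, (0, 2): 0.2}   # pairwise couplings
h = [0.1, -0.2, 0.3]                            # unary fields
n = 3

def score(x):
    return sum(h[i] * x[i] for i in range(n)) + \
           sum(Jab * x[a] * x[b] for (a, b), Jab in J.items())

# Exact log partition function by enumeration (feasible only for tiny n).
logZ = math.log(sum(math.exp(score(x))
                    for x in itertools.product([-1, 1], repeat=n)))

# Naive mean field: product distribution with magnetizations m[i] = E[x_i],
# iterated to a fixed point of the tanh update.
m = [0.0] * n
for _ in range(200):
    for i in range(n):
        field = h[i]
        for (a, b), Jab in J.items():
            if a == i:
                field += Jab * m[b]
            elif b == i:
                field += Jab * m[a]
        m[i] = math.tanh(field)

def H(mi):   # entropy (nats) of a +/-1 variable with mean mi
    p = (1 + mi) / 2
    return -sum(t * math.log(t) for t in (p, 1 - p) if t > 0)

mf_bound = sum(h[i] * m[i] for i in range(n)) \
         + sum(Jab * m[a] * m[b] for (a, b), Jab in J.items()) \
         + sum(H(mi) for mi in m)

print(mf_bound, "<=", logZ)   # Jensen: the MF objective lower-bounds log Z
```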
Social Inference Through Technology
Oulasvirta, Antti
Awareness cues are computer-mediated, real-time indicators of people's undertakings, whereabouts, and intentions. Already in the mid-1970s, UNIX users could use commands such as "finger" and "talk" to find out who was online and to chat. The small icons in instant messaging (IM) applications that indicate coconversants' presence in the discussion space are the successors of "finger" output. Similar indicators can be found in online communities, media-sharing services, Internet relay chat (IRC), and location-based messaging applications. But presence and availability indicators are only the tip of the iceberg. Technological progress has enabled richer, more accurate, and more intimate indicators. For example, there are mobile services that allow friends to query and follow each other's locations. Remote monitoring systems developed for health care allow relatives and doctors to assess the wellbeing of homebound patients (see, e.g., Tang and Venables 2000). But users also utilize cues that have not been deliberately designed for this purpose. For example, online gamers pay attention to other characters' behavior to infer what the other players are like "in real life." There is a common denominator underlying these examples: shared activities rely on the technology's representation of the remote person. The other human being is not physically present but present only through a narrow technological channel.
Testing strong interaction theories
International Nuclear Information System (INIS)
Ellis, J.
1979-01-01
The author discusses possible tests of the current theories of the strong interaction, in particular, quantum chromodynamics. High energy e+e- interactions should provide an excellent means of studying the strong force. (W.D.L.)
Inverse Ising Inference Using All the Data
Aurell, Erik; Ekeberg, Magnus
2012-03-01
We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while still computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy and that the use of l1 regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes.
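The core idea, logistic regression of each spin on the others (pseudolikelihood), can be sketched on a toy two-spin Ising model; the model size, sample count, and learning settings are illustrative assumptions, and the paper's l1-regularized, many-node setting is not reproduced here.

```python
import math
import random

# Pseudolikelihood / logistic-regression sketch: for an Ising model,
# P(spin a = +1 | neighbour b) = sigmoid(2 * J * b), so the coupling can be
# recovered by maximizing the conditional log-likelihood over samples.
def sample_two_spin(J, k, rng):
    states = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    weights = [math.exp(J * a * b) for a, b in states]
    return rng.choices(states, weights=weights, k=k)

def fit_coupling(data, lr=0.1, steps=300):
    """Gradient ascent on the conditional log-likelihood of spin 0 given spin 1."""
    J_hat = 0.0
    for _ in range(steps):
        grad = 0.0
        for a, b in data:
            p = 1 / (1 + math.exp(-2 * J_hat * b))   # P(a = +1 | b)
            grad += ((a + 1) / 2 - p) * 2 * b        # d logL / dJ for one sample
        J_hat += lr * grad / len(data)
    return J_hat

rng = random.Random(0)
data = sample_two_spin(0.8, 4000, rng)
J_hat = fit_coupling(data)
print(J_hat)   # close to the true coupling J = 0.8
```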
Optimization methods for logical inference
Chandru, Vijay
2011-01-01
Merging logic and mathematics in deductive inference - an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks . . . it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in
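The book's theme, logical inference posed as an optimization or feasibility problem, can be illustrated with a brute-force 0-1 feasibility check; real systems use the LP/ILP machinery the book develops, and the clause sets below are hypothetical.

```python
import itertools

# Logical inference as 0-1 feasibility. Clauses are lists of nonzero ints:
# +i means variable x_i, -i means NOT x_i.
def satisfiable(clauses, n):
    for bits in itertools.product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def entails(premises, conclusion_lit, n):
    # premises |= c  iff  premises together with NOT c are unsatisfiable
    return not satisfiable(premises + [[-conclusion_lit]], n)

# x1 and (x1 -> x2) entail x2: modus ponens as unsatisfiability.
print(entails([[1], [-1, 2]], 2, n=2))   # True
```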
On quantum statistical inference
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Gill, Richard D.; Jupp, Peter E.
Recent developments in the mathematical foundations of quantum mechanics have brought the theory closer to that of classical probability and statistics. On the other hand, the unique character of quantum physics sets many of the questions addressed apart from those met classically in stochastics....... Furthermore, concurrent advances in experimental techniques and in the theory of quantum computation have led to a strong interest in questions of quantum information, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various...
On principles of inductive inference
Kostecki, Ryszard Paweł
2011-01-01
We propose an intersubjective epistemic approach to foundations of probability theory and statistical inference, based on relative entropy and category theory, and aimed to bypass the mathematical and conceptual problems of existing foundational approaches.
Statistical inference via fiducial methods
Salomé, Diemer
1998-01-01
In this thesis the attention is restricted to inductive reasoning using a mathematical probability model. A statistical procedure prescribes, for every theoretically possible set of data, the inference about the unknown of interest. ... See: Summary
Statistical inference for stochastic processes
National Research Council Canada - National Science Library
Basawa, Ishwar V; Prakasa Rao, B. L. S
1980-01-01
The aim of this monograph is to attempt to reduce the gap between theory and applications in the area of stochastic modelling, by directing the interest of future researchers to the inference aspects...
Krapu, Gary L.; Brandt, David A.; Kinzel, Paul J.; Pearse, Aaron T.
2014-01-01
We conducted a 10-year study (1998–2007) of the Mid-Continent Population (MCP) of sandhill cranes (Grus canadensis) to identify spring-migration corridors, locations of major stopovers, and migration chronology by crane breeding affiliation (western Alaska–Siberia [WA–S], northern Canada–Nunavut [NC–N], west-central Canada–Alaska [WC–A], and east-central Canada–Minnesota [EC–M]). In the Central Platte River Valley (CPRV) of Nebraska, we evaluated factors influencing staging chronology, food habits, fat storage, and habitat use of sandhill cranes. We compared our findings to results from the Platte River Ecology Study conducted during 1978–1980. We determined spring migration corridors used by the breeding affiliations (designated subpopulations for management purposes) by monitoring 169 cranes marked with platform transmitter terminals (PTTs). We also marked and monitored 456 cranes in the CPRV with very high frequency (VHF) transmitters to evaluate length and pattern of stay, habitat use, and movements. An estimated 42% and 58% of cranes staging in the CPRV were greater sandhill cranes (G. c. tabida) and lesser sandhill cranes (G. c. canadensis), and they stayed for an average of 20 and 25 days (2000–2007), respectively. Cranes from the WA–S, NC–N, WC–A, and EC–M affiliations spent an average of 72, 77, 52, and 53 days, respectively, in spring migration of which 28, 23, 24, and 18 days occurred in the CPRV. The majority of the WA–S subpopulation settled in the CPRV apparently because of inadequate habitat to support more birds upstream, although WA–S cranes accounted for >90% of birds staging in the North Platte River Valley. Crane staging duration in the CPRV was negatively correlated with arrival dates; 92% of cranes stayed >7 days. A program of annual mechanical removal of mature stands of woody growth and seedlings that began in the early 1980s primarily in the main channel of the Platte River has allowed distribution of crane
Barlow, J. E.; Burns, I. S.; Guertin, D. P.; Kepner, W. G.; Goodrich, D. C.
2016-12-01
Long-term land-use and land cover change and their associated impacts pose critical challenges to sustaining vital hydrological ecosystem services for future generations. In this study, a methodology to characterize hydrologic impacts from future urban growth through time that was developed and applied on the San Pedro River Basin was expanded and utilized on the South Platte River Basin as well. Future urban growth is represented by housing density maps generated in decadal intervals from 2010 to 2100, produced by the U.S. Environmental Protection Agency (EPA) Integrated Climate and Land-Use Scenarios (ICLUS) project. ICLUS developed future housing density maps by adapting the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) social, economic, and demographic storylines to the conterminous United States. To characterize hydrologic impacts from future growth, the housing density maps were reclassified to National Land Cover Database (NLCD) 2006 land cover classes and used to parameterize the Soil and Water Assessment Tool (SWAT) using the Automated Geospatial Watershed Assessment (AGWA) tool. The objectives of this project were to 1) develop and implement a methodology for adapting the ICLUS data for use in AGWA as an approach to evaluate impacts of development on water-quantity and -quality, 2) present, evaluate, and compare results from scenarios for watersheds in two different geographic and climatic regions, 3) determine watershed specific implications of this type of future land cover change analysis.
McMahon, P.B.; Lull, K.J.; Dennehy, K.F.; Collins, J.A.
1995-01-01
Water-quality studies conducted by the Metro Wastewater Reclamation District have indicated that during low flow in segments of the South Platte River between Denver and Fort Lupton, concentrations of dissolved oxygen are less than minimum concentrations set by the State of Colorado. Low dissolved-oxygen concentrations are observed in two reaches of the river: they are about 3.3 to 6.4 miles and 17 to 25 miles downstream from the Metro Wastewater Reclamation District effluent outfalls. Concentrations of dissolved oxygen recover between these two reaches. Studies conducted by the U.S. Geological Survey have indicated that ground-water discharge to the river may contribute to these low dissolved-oxygen concentrations. As a result, an assessment was made of the quantity and quality of ground-water discharge to the South Platte River from Denver to Fort Lupton. Measurements of surface-water and ground-water discharge and collections of surface water and ground water for water-quality analyses were made from August 1992 through January 1993 and in May and July 1993. The quantity of ground-water discharge to the South Platte River was determined indirectly by mass balance of surface-water inflows and outflows and directly by instantaneous measurements of ground-water discharge across the sediment/water interface in the river channel. The quality of surface water and ground water was determined by sampling and analysis of water from the river and monitoring wells screened in the alluvial aquifer adjacent to the river and by sampling and analysis of water from piezometers screened in sediments underlying the river channel. The ground-water flow system was subdivided into a large-area and a small-area flow system. The precise boundaries of the two flow systems are not known. However, the large-area flow system is considered to incorporate all alluvial sediments in hydrologic connection with the South Platte River. The small-area flow system is considered to incorporate
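The indirect mass-balance estimate described above amounts to a simple flow residual along a reach; the discharge values below are hypothetical, purely to illustrate the arithmetic.

```python
# Flow residual for the indirect mass-balance estimate of ground-water
# discharge along a river reach; all values (m^3/s) are hypothetical.
def groundwater_discharge(q_downstream, q_upstream, inflows, outflows):
    """Ground-water gain = downstream flow - upstream flow
    - tributary/effluent inflows + diversions/outflows."""
    return q_downstream - q_upstream - sum(inflows) + sum(outflows)

q_gw = groundwater_discharge(q_downstream=14.2, q_upstream=11.0,
                             inflows=[1.5, 0.7], outflows=[0.4])
print(round(q_gw, 2))   # 1.4
```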
Wellman, Tristan
2015-01-01
The South Platte River and underlying alluvial aquifer form an important hydrologic resource in northeastern Colorado that provides water to population centers along the Front Range and to agricultural communities across the rural plains. Water is regulated based on seniority of water rights and delivered using a network of administration structures that includes ditches, reservoirs, wells, impacted river sections, and engineered recharge areas. A recent addendum to Colorado water law enacted during 2002-2003 curtailed pumping from thousands of wells that lacked authorized augmentation plans. The restrictions in pumping were hypothesized to increase water storage in the aquifer, causing groundwater to rise near the land surface at some locations. The U.S. Geological Survey (USGS), in cooperation with the Colorado Water Conservation Board and the Colorado Water Institute, completed an assessment of 60 years (yr) of historical groundwater-level records collected from 1953 to 2012 from 1,669 wells. Relations of "high" groundwater levels, defined as depth to water from 0 to 10 feet (ft) below land surface, were compared to precipitation, river discharge, and 36 geographic and administrative attributes to identify natural and human controls in areas with shallow groundwater.
Active inference, communication and hermeneutics.
Friston, Karl J; Frith, Christopher D
2015-07-01
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Abortion: Strong's counterexamples fail
DEFF Research Database (Denmark)
Di Nucci, Ezio
2009-01-01
This paper shows that the counterexamples proposed by Strong in 2008 in the Journal of Medical Ethics to Marquis's argument against abortion fail. Strong's basic idea is that there are cases--for example, terminally ill patients--where killing an adult human being is prima facie seriously morally...
International Nuclear Information System (INIS)
Goldman, M.V.
1984-01-01
After a brief discussion of beam-excited Langmuir turbulence in the solar wind, we explain the criteria for wave-particle, three-wave and strong turbulence interactions. We then present the results of a numerical integration of the Zakharov equations, which describe the strong turbulence saturation of a weak (low-density) high energy, bump-on-tail beam instability. (author)
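For reference, the Zakharov equations mentioned above take a standard dimensionless form (conventions and normalizations vary between authors) coupling the slowly varying Langmuir field envelope $E$ to the low-frequency density perturbation $n$:

```latex
i\,\partial_t E + \nabla^2 E = n\,E, \qquad
\partial_t^2 n - \nabla^2 n = \nabla^2 |E|^2 .
```

The first equation governs the field envelope refracted by density fluctuations; the second governs ion-acoustic dynamics driven by the ponderomotive force of the high-frequency field.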
Direct Evidence for a Dual Process Model of Deductive Inference
Markovits, Henry; Brunet, Marie-Laurence; Thompson, Valerie; Brisson, Janie
2013-01-01
In 2 experiments, we tested a strong version of a dual process theory of conditional inference (cf. Verschueren et al., 2005a, 2005b) that assumes that most reasoners have 2 strategies available, the choice of which is determined by situational variables, cognitive capacity, and metacognitive control. The statistical strategy evaluates inferences…
Optimal inference with suboptimal models: Addiction and active Bayesian inference
Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl
2015-01-01
When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321
Dessi, Roberta; Rustichini, Aldo
2015-01-01
A large literature in psychology, and more recently in economics, has argued that monetary rewards can reduce intrinsic motivation. We investigate whether the negative impact persists when intrinsic motivation is strong, and test this hypothesis experimentally, focusing on the motivation to undertake interesting and challenging tasks that are informative about individual ability. We find that this type of task can generate strong intrinsic motivation that is impervious to the effect of monetary incen...
Bitcoin Meets Strong Consistency
Decker, Christian; Seidel, Jochen; Wattenhofer, Roger
2014-01-01
The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state...
Strong gravity and supersymmetry
International Nuclear Information System (INIS)
Chamseddine, Ali H.; Salam, A.; Strathdee, J.
1977-11-01
A supersymmetric theory is constructed for a strong f plus a weak g graviton, together with their accompanying massive gravitinos, by gauging the graded OSp(2,2,1)×OSp(2,2,1) structure. The mixing term between the f and g fields, which makes the strong graviton massive, can be introduced through a spontaneous symmetry-breaking mechanism, implemented in this note by constructing a non-linear realization of the symmetry group.
Interactive Instruction in Bayesian Inference
DEFF Research Database (Denmark)
Khan, Azam; Breslav, Simon; Hornbæk, Kasper
2018-01-01
An instructional approach is presented to improve human performance in solving Bayesian inference problems. Starting from the original text of the classic Mammography Problem, the textual expression is modified and visualizations are added according to Mayer’s principles of instruction. These principles concern coherence, personalization, signaling, segmenting, multimedia, spatial contiguity, and pretraining. Principles of self-explanation and interactivity are also applied. Four experiments on the Mammography Problem showed that these principles help participants answer the questions, suggesting that an instructional approach to improving human performance in Bayesian inference is a promising direction.
On Maximum Entropy and Inference
Directory of Open Access Journals (Sweden)
Luigi Gresele
2017-11-01
Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.
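For orientation, the maximum-entropy distribution consistent with the empirical averages of a chosen set of observables $\phi^\mu(s)$ is the standard exponential-family form (textbook material, not notation specific to this paper):

```latex
p(s) = \frac{1}{Z(g)}\,\exp\!\Big(\sum_\mu g_\mu\,\phi^\mu(s)\Big),
\qquad
Z(g) = \sum_s \exp\!\Big(\sum_\mu g_\mu\,\phi^\mu(s)\Big).
```

For spin models the $\phi^\mu$ range over products of spins of arbitrary order, and the couplings $g_\mu$ are fixed by matching the model averages $\langle\phi^\mu\rangle$ to the data.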
Eight challenges in phylodynamic inference
Directory of Open Access Journals (Sweden)
Simon D.W. Frost
2015-03-01
The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever-increasing corpus of sequence data.
Problem solving and inference mechanisms
Energy Technology Data Exchange (ETDEWEB)
Furukawa, K; Nakajima, R; Yonezawa, A; Goto, S; Aoyama, A
1982-01-01
The heart of the fifth generation computer will be powerful mechanisms for problem solving and inference. A deduction-oriented language is to be designed, which will form the core of the whole computing system. The language is based on predicate logic with the extended features of structuring facilities, meta structures and relational data base interfaces. Parallel computation mechanisms and specialized hardware architectures are being investigated to make possible efficient realization of the language features. The project includes research into an intelligent programming system, a knowledge representation language and system, and a meta inference system to be built on the core. 30 references.
Ekman, Drew R; Keteles, Kristen; Beihoffer, Jon; Cavallin, Jenna E; Dahlin, Kenneth; Davis, John M; Jastrow, Aaron; Lazorchak, James M; Mills, Marc A; Murphy, Mark; Nguyen, David; Vajda, Alan M; Villeneuve, Daniel L; Winkelman, Dana L; Collette, Timothy W
2018-08-01
Rivers in the arid Western United States face increasing influences from anthropogenic contaminants due to population growth, urbanization, and drought. To better understand and more effectively track the impacts of these contaminants, biologically-based monitoring tools are increasingly being used to complement routine chemical monitoring. This study was initiated to assess the ability of both targeted and untargeted biologically-based monitoring tools to discriminate impacts of two adjacent wastewater treatment plants (WWTPs) on Colorado's South Platte River. A cell-based estrogen assay (in vitro, targeted) determined that water samples collected downstream of the larger of the two WWTPs displayed considerable estrogenic activity in its two separate effluent streams. Hepatic vitellogenin mRNA expression (in vivo, targeted) and NMR-based metabolomic analyses (in vivo, untargeted) from caged male fathead minnows also suggested estrogenic activity downstream of the larger WWTP, but detected significant differences in responses from its two effluent streams. The metabolomic results suggested that these differences were associated with oxidative stress levels. Finally, partial least squares regression was used to explore linkages between the metabolomics responses and the chemical contaminants that were detected at the sites. This analysis, along with univariate statistical approaches, identified significant covariance between the biological endpoints and estrone concentrations, suggesting the importance of this contaminant and recommending increased focus on its presence in the environment. These results underscore the benefits of a combined targeted and untargeted biologically-based monitoring strategy when used alongside contaminant monitoring to more effectively assess ecological impacts of exposures to complex mixtures in surface waters.
Cady, R.E.; Peckenpaugh, J.M.
1985-01-01
RAQSIM, a generalized flow model of a groundwater system using finite-element methods, is documented to explain how it works and to demonstrate that it gives valid results. Three support programs that are used to compute recharge and discharge data required as input to RAQSIM are described. RAQSIM was developed to solve transient, two-dimensional, regional groundwater flow problems with isotropic or anisotropic conductance. The model can also simulate radially symmetric flow to a well and steady-state flow. The mathematical basis, program structure, data input and output procedures, organization of data sets, and program features and options of RAQSIM are discussed. An example, containing listings of data and results and illustrating RAQSIM's capabilities, is discussed in detail. Two test problems also are discussed, comparing RAQSIM's results with analytical procedures. The first support program described, the PET Program, uses solar radiation and other climatic data in the Jensen-Haise method to compute potential evapotranspiration. The second support program, the Soil-Water Program, uses output from the PET Program, soil characteristics, and the ratio of potential to actual evapotranspiration for each crop to compute infiltration, storage, and removal of water from the soil zone. The third program, the Recharge-Discharge Program, uses output from the Soil-Water Program together with other data to compute recharge and discharge from the groundwater flow system. For each support program, a program listing and examples of the data and results for the Twin Platte-Middle Republican study are provided. In addition, a brief discussion of how each program operates and of procedures for running and modifying these programs is presented. (Author's abstract)
Legleiter, Carl J.; Kinzel, Paul J.; Overstreet, Brandon T.
2011-01-01
This study examined the possibility of mapping depth from optical image data in turbid, sediment-laden channels. Analysis of hyperspectral images from the Platte River indicated that depth retrieval in these environments is feasible, but might not be highly accurate. Four methods of calibrating image-derived depth estimates were evaluated. The first involved extracting image spectra at survey point locations throughout the reach. These paired observations of depth and reflectance were subjected to optimal band ratio analysis (OBRA) to relate a spectrally based quantity to flow depth (R² = 0.596). Two other methods were based on OBRA of data from individual cross sections. A fourth strategy used ground-based reflectance measurements to derive an OBRA relation (R² = 0.944) that was then applied to the image. Depth retrieval accuracy was assessed by visually inspecting cross sections and calculating various error metrics. Calibration via field spectroscopy resulted in a shallow bias but provided relative accuracies similar to image-based methods. Reach-aggregated OBRA was marginally superior to calibrations based on individual cross sections, and depth retrieval accuracy varied considerably along each reach. Errors were lower and observed-versus-predicted regression R² values higher for a relatively simple, deeper site than for a shallower, braided reach; errors were 1/3 and 1/2 the mean depth for the two reaches, respectively. Bathymetric maps were coherent and hydraulically reasonable, however, and might be more reliable than implied by numerical metrics. As an example application, linear discriminant analysis was used to produce a series of depth threshold maps for characterizing shallow-water habitat for roosting cranes.
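The OBRA procedure described above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the band layout and reflectance values below are synthetic, whereas the study used hyperspectral imagery):

```python
import numpy as np

def obra(reflectance, depth):
    """Optimal Band Ratio Analysis (sketch): for every ordered band pair,
    regress depth on X = ln(R_i / R_j) and keep the pair with the highest
    coefficient of determination R^2."""
    n_bands = reflectance.shape[1]
    best = (None, -np.inf, None)  # (band pair, R^2, (slope, intercept))
    for i in range(n_bands):
        for j in range(n_bands):
            if i == j:
                continue
            x = np.log(reflectance[:, i] / reflectance[:, j])
            slope, intercept = np.polyfit(x, depth, 1)
            resid = depth - (slope * x + intercept)
            r2 = 1 - np.sum(resid**2) / np.sum((depth - depth.mean())**2)
            if r2 > best[1]:
                best = ((i, j), r2, (slope, intercept))
    return best

# Synthetic demo: depth drives the log-ratio of bands 0 and 1,
# while band 2 is uninformative noise.
rng = np.random.default_rng(0)
depth = rng.uniform(0.2, 2.0, 200)
refl = np.column_stack([
    0.4 * np.exp(-0.5 * depth),          # band 0: attenuated with depth
    np.full_like(depth, 0.3),            # band 1: depth-independent
    rng.uniform(0.2, 0.4, depth.size),   # band 2: noise
])
pair, r2, _ = obra(refl, depth)
print(pair, round(r2, 3))  # the depth-informative pair is selected
```

Because the synthetic log-ratio is exactly linear in depth, the informative pair is recovered with R² near 1; on real imagery the selection is the same but the fit is, of course, far noisier.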
Steele, Gregory V.; Gurdak, Jason J.; Hobza, Christopher M.
2014-01-01
Uncertainty about the effects of land use and climate on water movement in the unsaturated zone and on groundwater recharge rates can lead to uncertainty in water budgets used for groundwater-flow models. To better understand these effects, a cooperative study between the U.S. Geological Survey and the Central Platte Natural Resources District was initiated in 2007 to determine field-based estimates of recharge rates in selected land-use areas of the Central Platte Natural Resources District in Nebraska. Measured total water potential and unsaturated-zone profiles of tritium, chloride, nitrate as nitrogen, and bromide, along with groundwater-age dates, were used to evaluate water movement in the unsaturated zone and groundwater recharge rates in the central Platte River study area. Eight study sites represented an east-west precipitation contrast across the study area—four beneath groundwater-irrigated cropland (sites 2, 5, and 6 were irrigated corn and site 7 was irrigated alfalfa/corn rotation), three beneath rangeland (sites 1, 4, and 8), and one beneath nonirrigated cropland, or dryland (site 3). Measurements of transient vertical gradients in total water potential indicated that periodic wetting fronts reached greater mean maximum depths beneath the irrigated sites than beneath the rangeland sites, in part, because of the presence of greater and constant antecedent moisture. Beneath the rangeland sites, greater temporal variation in antecedent moisture and total water potential existed and was, in part, likely a result of local precipitation and evapotranspiration. Moreover, greater variability was noticed in the total water potential profiles beneath the western sites than the corresponding eastern sites, which was attributed to less mean annual precipitation in the west. The depth of the peak post-bomb tritium concentration or the interface between the pre-bomb/post-bomb tritium, along with a tritium mass balance, within sampled soil profiles were used to
Object-Oriented Type Inference
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Palsberg, Jens
1991-01-01
We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an op...
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees). The paper reviews the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Mixed normal inference on multicointegration
Boswijk, H.P.
2009-01-01
Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ² null distribution. The asymptotic distribution of the...
Statistical inference and Aristotle's Rhetoric.
Macdonald, Ranald R
2004-11-01
Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.
Strongly interacting Fermi gases
Directory of Open Access Journals (Sweden)
Bakr W.
2013-08-01
Strongly interacting gases of ultracold fermions have become an amazingly rich test-bed for many-body theories of fermionic matter. Here we present our recent experiments on these systems. Firstly, we discuss high-precision measurements on the thermodynamics of a strongly interacting Fermi gas across the superfluid transition. The onset of superfluidity is directly observed in the compressibility, the chemical potential, the entropy, and the heat capacity. Our measurements provide benchmarks for current many-body theories on strongly interacting fermions. Secondly, we have studied the evolution of fermion pairing from three to two dimensions in these gases, relating to the physics of layered superconductors. In the presence of p-wave interactions, Fermi gases are predicted to display topological superfluidity carrying Majorana edge states. Two possible avenues in this direction are discussed: our creation and direct observation of spin-orbit coupling in Fermi gases, and the creation of fermionic molecules of ²³Na⁴⁰K that will feature strong dipolar interactions in their absolute ground state.
International Nuclear Information System (INIS)
Marier, D.
1992-01-01
This article presents the results of a financial-rankings survey showing strong economic activity in the independent energy industry. Topics include advisor turnover, overseas banks, and the increase in public offerings. The article identifies the top project-finance investors for new projects and restructurings, and presents rankings for lenders.
Strong Electroweak Symmetry Breaking
Grinstein, Benjamin
2011-01-01
Models of spontaneous breaking of electroweak symmetry by a strong interaction do not have fine tuning/hierarchy problem. They are conceptually elegant and use the only mechanism of spontaneous breaking of a gauge symmetry that is known to occur in nature. The simplest model, minimal technicolor with extended technicolor interactions, is appealing because one can calculate by scaling up from QCD. But it is ruled out on many counts: inappropriately low quark and lepton masses (or excessive FCNC), bad electroweak data fits, light scalar and vector states, etc. However, nature may not choose the minimal model and then we are stuck: except possibly through lattice simulations, we are unable to compute and test the models. In the LHC era it therefore makes sense to abandon specific models (of strong EW breaking) and concentrate on generic features that may indicate discovery. The Technicolor Straw Man is not a model but a parametrized search strategy inspired by a remarkable generic feature of walking technicolor,...
Plasmons in strong superconductors
International Nuclear Information System (INIS)
Baldo, M.; Ducoin, C.
2011-01-01
We present a study of the possible plasmon excitations that can occur in systems where strong superconductivity is present. In these systems the plasmon energy is comparable to or smaller than the pairing gap. As a prototype of these systems we consider the proton component of Neutron Star matter just below the crust when electron screening is not taken into account. For the realistic case we consider in detail the different aspects of the elementary excitations when the proton and electron components are considered within the Random-Phase Approximation generalized to the superfluid case, while the influence of the neutron component is considered only at a qualitative level. Electron screening plays a major role in modifying the proton spectrum and spectral function. At the same time the electron plasmon is strongly modified and damped by the indirect coupling with the superfluid proton component, even at moderately low values of the gap. The excitation spectrum shows the interplay of the different components and their relevance for each excitation mode. The results are relevant for neutrino physics and thermodynamical processes in neutron stars. If electron screening is neglected, the spectral properties of the proton component show some resemblance with the physical situation in high-Tc superconductors, and we briefly discuss similarities and differences in this connection. More generally, the results of the study emphasize the role of the Coulomb interaction in strong superconductors.
Statistical learning and selective inference.
Taylor, Jonathan; Tibshirani, Robert J
2015-06-23
We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
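The "cherry-picking" problem can be made concrete with a small simulation (illustrative only, not the authors' code): select the strongest of 50 pure-noise predictors, then test it as if it had been chosen in advance. The naive test rejects far more often than its nominal 5% level.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n, p, trials = 100, 50, 300
false_positives = 0
for _ in range(trials):
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)  # pure noise: no predictor is truly associated
    # cherry-pick the predictor with the strongest sample correlation
    r_best = max(abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(p))
    t = r_best * sqrt((n - 2) / (1 - r_best**2))
    # naive two-sided p-value (normal approximation), ignoring the selection step
    p_naive = 2 * (1 - 0.5 * (1 + erf(t / sqrt(2))))
    if p_naive < 0.05:
        false_positives += 1
print(false_positives / trials)  # far above the nominal 0.05 level
```

Selective inference methods of the kind the authors describe are precisely what restores validity after such a data-driven search.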
Bayesian inference with ecological applications
Link, William A
2009-01-01
This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject. Examples drawn from ecology and wildlife research. An essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference. Companion website with analyt...
Statistical inference an integrated approach
Migon, Helio S; Louzada, Francisco
2014-01-01
Contents: Introduction (information; the concept of probability; assessing subjective probabilities; an example; linear algebra and probability; notation; outline of the book); Elements of Inference (common statistical models; likelihood-based functions; Bayes theorem; exchangeability; sufficiency and exponential family; parameter elimination); Prior Distribution (entirely subjective specification; specification through functional forms; conjugacy with the exponential family; non-informative priors; hierarchical priors); Estimation (introduction to decision theory; Bayesian point estimation; classical point estimation; empirical Bayes estimation; comparison of estimators; interval estimation; estimation in the Normal model); Approximating Methods (the general problem of inference; optimization techniques; asymptotic theory; other analytical approximations; numerical integration methods; simulation methods); Hypothesis Testing (introduction; classical hypothesis testing; Bayesian hypothesis testing; hypothesis testing and confidence intervals; asymptotic tests); Prediction...
Bayesian inference on proportional elections.
Directory of Open Access Journals (Sweden)
Gabriel Hideki Vatanabe Brunello
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform Bayesian inference on proportional elections, considering the Brazilian system of seat distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the Chamber of Deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the 2010 Brazilian elections for Members of the Legislative Assembly and the Federal Chamber of Deputies. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software.
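A minimal Monte Carlo sketch of this idea (hypothetical numbers; the paper uses the Brazilian seat-distribution rules, which are approximated here by a plain D'Hondt allocation, and the Dirichlet model of vote-share uncertainty is an illustrative choice):

```python
import numpy as np

def dhondt(votes, seats):
    """Allocate `seats` among parties by the D'Hondt highest-averages rule."""
    alloc = np.zeros(len(votes), dtype=int)
    for _ in range(seats):
        quotients = votes / (alloc + 1)
        alloc[np.argmax(quotients)] += 1
    return alloc

def prob_of_seat(mean_shares, concentration, seats, party, n_sims=5000, seed=0):
    """Monte Carlo probability that `party` wins at least one seat, with
    vote-share uncertainty modelled as a Dirichlet distribution around the
    polled shares (an assumption for illustration)."""
    rng = np.random.default_rng(seed)
    alpha = concentration * np.asarray(mean_shares)
    hits = 0
    for _ in range(n_sims):
        shares = rng.dirichlet(alpha)
        if dhondt(shares, seats)[party] >= 1:
            hits += 1
    return hits / n_sims

# Hypothetical 4-party race for 10 seats; party 3 polls around 6%.
p = prob_of_seat([0.45, 0.30, 0.19, 0.06], concentration=400, seats=10, party=3)
print(p)  # small: party 3 sits well below the effective seat threshold
```

The paper's methodology answers exactly this kind of question — the posterior probability of representation — but under the actual Brazilian electoral rules and a fitted model of polling uncertainty.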
Causal inference based on counterfactuals
Directory of Open Access Journals (Sweden)
Höfler M
2005-09-01
Background: The counterfactual or potential-outcome model has become increasingly standard for causal inference in epidemiological and medical studies. Discussion: This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues when estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in health sciences and relates to many statistical procedures. Summary: Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept.
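The counterfactual contrast and adjustment for confounding can be illustrated with a toy simulation (a hypothetical data-generating process for illustration, not from the paper): sicker patients are treated more often, so the crude treated-vs-untreated comparison is biased, while standardization over the confounder recovers the counterfactual risk difference E[Y(1)] − E[Y(0)].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
L = rng.binomial(1, 0.5, n)                   # binary confounder (illness)
A = rng.binomial(1, 0.2 + 0.6 * L)            # sicker patients (L=1) treated more
Y = rng.binomial(1, 0.1 + 0.3 * L + 0.1 * A)  # true causal risk difference = 0.1

crude = Y[A == 1].mean() - Y[A == 0].mean()   # confounded comparison

# Standardization: average the stratum-specific risk differences over the
# marginal distribution of L, mimicking E[Y(1)] - E[Y(0)] under the
# assumption of no unmeasured confounding.
adjusted = sum(
    (Y[(A == 1) & (L == l)].mean() - Y[(A == 0) & (L == l)].mean()) * (L == l).mean()
    for l in (0, 1)
)
print(round(crude, 2), round(adjusted, 2))  # crude ≈ 0.28 is biased; adjusted ≈ 0.10
```

This is the simplest of the adjustment strategies the paper reviews; time-varying exposures and competing risks require correspondingly richer machinery.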
Probability biases as Bayesian inference
Directory of Open Access Journals (Sweden)
Andre; C. R. Martins
2006-11-01
In this article, I will show how several observed biases in human probabilistic reasoning can be partially explained as good heuristics for making inferences in an environment where probabilities have uncertainties associated with them. Previous results show that the weight functions and the observed violations of coalescing and stochastic dominance can be understood from a Bayesian point of view. We will review those results and see that Bayesian methods should also be used as part of the explanation behind other known biases. That means that, although the observed errors are still errors, they can be understood as adaptations to the solution of real-life problems. Heuristics that allow fast evaluations and mimic a Bayesian inference would be an evolutionary advantage, since they would give us an efficient way of making decisions. In that sense, it should be no surprise that humans reason with probability as has been observed.
Statistical inference on residual life
Jeong, Jong-Hyeon
2014-01-01
This is a monograph on the concept of residual life, which is an alternative summary measure of time-to-event data, or survival data. The mean residual life has been used for many years under the name of life expectancy, so it is a natural concept for summarizing survival or reliability data. It is also more interpretable than the popular hazard function, especially for communications between patients and physicians regarding the efficacy of a new drug in the medical field. This book reviews existing statistical methods to infer the residual life distribution. The review and comparison includes existing inference methods for mean and median, or quantile, residual life analysis through medical data examples. The concept of the residual life is also extended to competing risks analysis. The targeted audience includes biostatisticians, graduate students, and PhD (bio)statisticians. Knowledge in survival analysis at an introductory graduate level is advisable prior to reading this book.
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
Nonparametric Bayesian approaches (BNP) play an ever-expanding role in biostatistical inference, from use in proteomics to clinical trials. As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
Statistical inference a short course
Panik, Michael J
2012-01-01
A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author conducts tests on the assumptions of randomness and normality and provides nonparametric methods for when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median while also providing coverage of ratio estimation, randomness, and causal
On Quantum Statistical Inference, II
Barndorff-Nielsen, O. E.; Gill, R. D.; Jupp, P. E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, theoretical developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and inte...
Nonparametric predictive inference in reliability
International Nuclear Information System (INIS)
Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.
2002-01-01
We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on introduction and illustration of NPI in reliability contexts; detailed mathematical justifications are presented elsewhere
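As a rough illustration of the survival-function bounds mentioned in this abstract, NPI for complete (uncensored) data builds on Hill's assumption A_(n): the next observation falls in each of the n+1 intervals created by the n ordered observations with probability 1/(n+1). The sketch below is an assumption of ours for the uncensored case only; function names are illustrative, and the right-censoring machinery discussed in the paper is not reproduced.

```python
import numpy as np

def npi_survival_bounds(data, t):
    """Lower and upper NPI bounds for P(X_{n+1} > t), based on Hill's
    assumption A_(n): the next observation falls into each of the n+1
    intervals formed by the ordered data with probability 1/(n+1).
    Illustrative sketch for complete (uncensored) data only."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    i = np.searchsorted(x, t, side="right")  # observations <= t
    lower = (n - i) / (n + 1)                # mass certainly above t
    upper = (n - i + 1) / (n + 1)            # adds the interval containing t
    return lower, upper

# Four observed lifetimes; bounds for survival past t = 6
print(npi_survival_bounds([2, 5, 7, 11], t=6.0))  # (0.4, 0.6)
```

Note how the bounds differ by exactly 1/(n+1), the probability mass of the single interval containing t, which is the characteristic imprecision of NPI.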
International Nuclear Information System (INIS)
Gorenstein, M. I.; Gazdzicki, M.
2011-01-01
Analysis of fluctuations of hadron production properties in collisions of relativistic particles profits from use of measurable intensive quantities which are independent of system size variations. The first family of such quantities was proposed in 1992; another is introduced in this paper. Furthermore we present a proof of independence of volume fluctuations for quantities from both families within the framework of the grand canonical ensemble. These quantities are referred to as strongly intensive ones. Influence of conservation laws and resonance decays is also discussed.
Strong-coupling approximations
International Nuclear Information System (INIS)
Abbott, R.B.
1984-03-01
Standard path-integral techniques such as instanton calculations give good answers for weak-coupling problems, but become unreliable for strong coupling. Here we consider a method of replacing the original potential by a suitably chosen harmonic oscillator potential. Physically this is motivated by the fact that potential barriers below the level of the ground-state energy of a quantum-mechanical system have little effect. Numerically, results are good, both for quantum-mechanical problems and for massive φ⁴ field theory in 1 + 1 dimensions. 9 references, 6 figures
Variational inference & deep learning : A new synthesis
Kingma, D.P.
2017-01-01
In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.
Continuous Integrated Invariant Inference, Phase I
National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...
Strongly disordered superconductors
International Nuclear Information System (INIS)
Muttalib, K.A.
1982-01-01
We examine some universal effects of strong non-magnetic disorder on the electron-phonon and electron-electron interactions in a superconductor. In particular we explicitly take into account the effect of slow diffusion of electrons in a disordered medium by working in an exact impurity eigenstate representation. We find that the normal diffusion of electrons characterized by a constant diffusion coefficient does not lead to any significant correction to the electron-phonon or the effective electron-electron interactions in a superconductor. We then consider sufficiently strong disorder where Anderson localization of electrons becomes important and determine the effect of localization on the electron-electron interactions. We find that due to localization, the diffusion of electrons becomes anomalous in the sense that the diffusion coefficient becomes scale dependent. This results in an increase in the effective electron-electron interaction with increasing disorder. We propose that this provides a natural explanation for the unusual sensitivity of the transition temperature T_c of the high-T_c superconductors (T_c > 10 K) to damage effects
Dvali, Gia
2009-01-01
We show that whenever a 4-dimensional theory with N particle species emerges as a consistent low energy description of a 3-brane embedded in an asymptotically-flat (4+d)-dimensional space, the holographic scale of high-dimensional gravity sets the strong coupling scale of the 4D theory. This connection persists in the limit in which gravity can be consistently decoupled. We demonstrate this effect for orbifold planes, as well as for the solitonic branes and string theoretic D-branes. In all cases the emergence of a 4D strong coupling scale from bulk holography is a persistent phenomenon. The effect turns out to be insensitive even to such extreme deformations of the brane action that seemingly shield 4D theory from the bulk gravity effects. A well understood example of such deformation is given by large 4D Einstein term in the 3-brane action, which is known to suppress the strength of 5D gravity at short distances and change the 5D Newton's law into the four-dimensional one. Nevertheless, we observe that the ...
Directory of Open Access Journals (Sweden)
Stan Daberkow
2001-01-01
Full Text Available Given the societal concern about groundwater pollution from agricultural sources, public programs have been proposed or implemented to change farmer behavior with respect to nutrient use and management. However, few of these programs designed to change farmer behavior have been evaluated due to the lack of detailed data over an appropriate time frame. The Central Platte Natural Resources District (CPNRD) in Nebraska has identified an intensively cultivated, irrigated area with average groundwater nitrate-nitrogen (N) levels about double the EPA’s safe drinking water standard. The CPNRD implemented a joint education and regulatory N management program in the mid-1980s to reduce groundwater N. This analysis reports N use and management, yield, and groundwater nitrate trends in the CPNRD for nearly 3000 continuous-corn fields from 1989 to 1998, where producers faced limits on the timing of N fertilizer application but no limits on amounts. Groundwater nitrate levels showed modest improvement over the 10 years of this analysis, falling from the 1989–1993 average of 18.9 to 18.1 mg/l during 1994–1998. The availability of N in excess of crop needs was clearly documented by the CPNRD data and was related to optimistic yield goals, irrigation water use above expected levels, and lack of adherence to commercial fertilizer application guidelines. Over the 10-year period of this analysis, producers reported harvesting an annual average of 9729 kg/ha, 1569 kg/ha (14%) below the average yield goal. During 1989–1998, producers reported annually applying an average of 162.5 kg/ha of commercial N fertilizer, 15.7 kg/ha (10%) above the guideline level. Including the N contribution from irrigation water, the potential N contribution to the environment (total N available less estimated crop use) was estimated at 71.7 kg/ha. This is an estimate of the nitrates available for denitrification, volatilization, runoff, future soil N, and leaching to groundwater. On
Variations on Bayesian Prediction and Inference
2016-05-09
inference 2.2.1 Background There are a number of statistical inference problems that are not generally formulated via a full probability model...the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood which can be an obstacle
Adaptive Inference on General Graphical Models
Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur
2012-01-01
Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...
Antonella Del Rosso
2016-01-01
Twenty years of designing, building and testing a number of innovative technologies, with the strong belief that the endeavour would lead to a historic breakthrough. The Bulletin publishes an abstract of the Courier’s interview with Barry Barish, one of the founding fathers of LIGO. The plots show the signals of gravitational waves detected by the twin LIGO observatories at Livingston, Louisiana, and Hanford, Washington. (Image: Caltech/MIT/LIGO Lab) On 11 February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations published a historic paper in which they showed a gravitational signal emitted by the merger of two black holes. These results come after 20 years of hard work by a large collaboration of scientists operating the two LIGO observatories in the US. Barry Barish, Linde Professor of Physics, Emeritus at the California Institute of Technology and former Director of the Global Design Effort for the Internat...
Strongly interacting Higgs bosons
International Nuclear Information System (INIS)
Appelquist, T.; Bernard, C.
1980-01-01
The sensitivity of present-energy weak interactions to a strongly interacting heavy-Higgs-boson sector is discussed. The gauged nonlinear sigma model, which is the limit of the linear model as the Higgs-boson mass goes to infinity, is used to organize and catalogue all possible heavy-Higgs-boson effects. As long as the SU(2)_L × SU(2)_R symmetry of the Higgs sector is preserved, these effects are found to be small, of the order of the square of the gauge coupling times logarithms (but not powers) of the Higgs-boson mass divided by the W mass. We work in the context of a simplified model with gauge group SU(2)_L; the extension to SU(2)_L × U(1) is briefly discussed
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
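The continuum described in this abstract can be sketched with Bayesian linear regression: a Gaussian prior centered on equal ("tallying") weights whose strength λ interpolates between ordinary least squares (λ → 0) and the heuristic (λ → ∞). This is a hedged illustration of the general idea, not the authors' exact formulation; all variable names, the prior mean, and the data are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 5
X = rng.normal(size=(n, d))
w_true = np.array([2.0, 1.0, 0.5, 0.25, 0.1])
y = X @ w_true + rng.normal(scale=0.5, size=n)

# Hypothetical "tallying" prior mean: equal unit weights on every cue.
w0 = np.ones(d)

def posterior_mean(lam):
    """Posterior mean of Bayesian linear regression with prior
    w ~ N(w0, (1/lam) I): w_hat = (X'X + lam I)^{-1} (X'y + lam w0).
    lam -> 0 recovers OLS; lam -> infinity recovers the prior (tallying)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

print(posterior_mean(0.0))   # ordinary least squares
print(posterior_mean(1e9))   # approximately equal weights (tallying)
```

Intermediate values of `lam` give the intermediate models that, per the abstract, tend to perform best: information is down-weighted toward the prior rather than either used fully or discarded.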
Sweller, Naomi; Hayes, Brett K
2010-08-01
Three studies examined how task demands that impact on attention to typical or atypical category features shape the category representations formed through classification learning and inference learning. During training categories were learned via exemplar classification or by inferring missing exemplar features. In the latter condition inferences were made about missing typical features alone (typical feature inference) or about both missing typical and atypical features (mixed feature inference). Classification and mixed feature inference led to the incorporation of typical and atypical features into category representations, with both kinds of features influencing inferences about familiar (Experiments 1 and 2) and novel (Experiment 3) test items. Those in the typical inference condition focused primarily on typical features. Together with formal modelling, these results challenge previous accounts that have characterized inference learning as producing a focus on typical category features. The results show that two different kinds of inference learning are possible and that these are subserved by different kinds of category representations.
Generative inference for cultural evolution.
Kandler, Anne; Powell, Adam
2018-04-05
One of the major challenges in cultural evolution is to understand why and how various forms of social learning are used in human populations, both now and in the past. To date, much of the theoretical work on social learning has been done in isolation of data, and consequently many insights focus on revealing the learning processes or the distributions of cultural variants that are expected to have evolved in human populations. In population genetics, recent methodological advances have allowed a greater understanding of the explicit demographic and/or selection mechanisms that underlie observed allele frequency distributions across the globe, and their change through time. In particular, generative frameworks, often using coalescent-based simulation coupled with approximate Bayesian computation (ABC), have provided robust inferences on the human past, with no reliance on a priori assumptions of equilibrium. Here, we demonstrate the applicability and utility of generative inference approaches to the field of cultural evolution. The framework advocated here uses observed population-level frequency data directly to establish the likely presence or absence of particular hypothesized learning strategies. In this context, we discuss the problem of equifinality and argue that, in the light of sparse cultural data and the multiplicity of possible social learning processes, the exclusion of those processes inconsistent with the observed data might be the most instructive outcome. Finally, we summarize the findings of generative inference approaches applied to a number of case studies. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
sick: The Spectroscopic Inference Crank
Casey, Andrew R.
2016-03-01
There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
Inferring network structure from cascades
Ghonge, Sushrut; Vural, Dervis Can
2017-07-01
Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.
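The model class considered here, where a node's activation probability is a generic function of its degree and its number of active neighbors, can be sketched with a toy discrete-time forward simulation. This illustrates the cascade model only; the paper's inference formulas are not reproduced, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cascade(adj, seed, p_active):
    """Discrete-time cascade on a directed network given by boolean adjacency
    matrix adj (adj[j, i] means edge j -> i).  Each inactive node activates
    with probability p_active(in_degree, n_active_in_neighbors), the generic
    form assumed in the abstract.  Returns activation times (inf = never)."""
    n = adj.shape[0]
    times = np.full(n, np.inf)
    times[seed] = 0
    t = 0
    active = times <= t
    while True:
        t += 1
        newly = []
        for i in range(n):
            if times[i] < np.inf:
                continue
            nbrs = np.where(adj[:, i])[0]   # in-neighbors of i
            k = len(nbrs)
            m = int(active[nbrs].sum())     # currently active in-neighbors
            if m and rng.random() < p_active(k, m):
                newly.append(i)
        if not newly:
            break
        for i in newly:
            times[i] = t
        active = times <= t
    return times

# Diamond network 0 -> {1, 2} -> 3; activation probability = fraction of
# active in-neighbors (deterministic here, since the fractions reach 1).
adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]], dtype=bool)
times = simulate_cascade(adj, seed=0, p_active=lambda k, m: m / max(k, 1))
print(times)
```

The inference problem of the paper runs in the opposite direction: given many such arrival-time vectors but not `adj`, recover the edges.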
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
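For the common exponential-kernel special case, the conditional intensity mentioned in this abstract, λ(t) = μ + Σ_{t_i < t} α e^{−β(t − t_i)}, and its log-likelihood on [0, T] can be sketched as follows. This is a minimal illustration under our own parameterization; the paper's two MCMC approaches are not shown.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum over past events t_i < t
    of alpha * exp(-beta * (t - t_i)).  Parameter names are illustrative."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def hawkes_loglik(events, T, mu, alpha, beta):
    """Log-likelihood on [0, T]:
    sum_i log lambda(t_i) - integral_0^T lambda(t) dt,
    with the integral (compensator) available in closed form for the
    exponential kernel."""
    ll = 0.0
    for i, t in enumerate(events):
        past = events[:i]
        ll += np.log(mu + alpha * np.exp(-beta * (t - past)).sum())
    # compensator: mu*T + (alpha/beta) * sum_i (1 - exp(-beta*(T - t_i)))
    ll -= mu * T + (alpha / beta) * (1.0 - np.exp(-beta * (T - events))).sum()
    return ll

events = np.array([0.5, 1.2, 1.3, 4.0])
print(hawkes_loglik(events, T=5.0, mu=0.8, alpha=0.5, beta=1.5))
```

A likelihood of this form is what a conditional-intensity-based MCMC scheme would evaluate at each proposed parameter value.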
Inference in hybrid Bayesian networks
International Nuclear Information System (INIS)
Langseth, Helge; Nielsen, Thomas D.; Rumi, Rafael; Salmeron, Antonio
2009-01-01
Since the 1980s, Bayesian networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (the so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Strong-interaction nonuniversality
International Nuclear Information System (INIS)
Volkas, R.R.; Foot, R.; He, X.; Joshi, G.C.
1989-01-01
The universal QCD color theory is extended to an SU(3)_1 × SU(3)_2 × SU(3)_3 gauge theory, where quarks of the ith generation transform as triplets under SU(3)_i and singlets under the other two factors. The usual color group is then identified with the diagonal subgroup, which remains exact after symmetry breaking. The gauge bosons associated with the 16 broken generators then form two massive octets under ordinary color. The interactions between quarks and these heavy gluonlike particles are explicitly nonuniversal and thus an exploration of their physical implications allows us to shed light on the fundamental issue of strong-interaction universality. Nonuniversality and weak flavor mixing are shown to generate heavy-gluon-induced flavor-changing neutral currents. The phenomenology of these processes is studied, as they provide the major experimental constraint on the extended theory. Three symmetry-breaking scenarios are presented. The first has color breaking occurring at the weak scale, while the second and third divorce the two scales. The third model has the interesting feature of radiatively induced off-diagonal Kobayashi-Maskawa matrix elements
Wickens, F
Our friend and colleague John Strong was cruelly taken from us by a brain tumour on Monday 31st July, a few days before his 65th birthday. John started his career working with a group from Westfield College, under the leadership of Ted Bellamy. He obtained his PhD and spent the early part of his career on experiments at Rutherford Appleton Laboratory (RAL), but after the early 1970s his research was focussed on experiments in CERN. Over the years he made a number of notable contributions to experiments in CERN: The Omega spectrometer adopted a system John had originally developed for experiments at RAL using vidicon cameras to record the sparks in the spark chambers; He contributed to the success of NA1 and NA7, where he became heavily involved in the electronic trigger systems; He was responsible for the second level trigger system for the ALEPH detector and spent five years leading a team that designed and built the system, which ran for twelve years with only minor interventions. Following ALEPH he tur...
Stirring Strongly Coupled Plasma
Fadafan, Kazem Bitaghsir; Rajagopal, Krishna; Wiedemann, Urs Achim
2009-01-01
We determine the energy it takes to move a test quark along a circle of radius L with angular frequency w through the strongly coupled plasma of N=4 supersymmetric Yang-Mills (SYM) theory. We find that for most values of L and w the energy deposited by stirring the plasma in this way is governed either by the drag force acting on a test quark moving through the plasma in a straight line with speed v=Lw or by the energy radiated by a quark in circular motion in the absence of any plasma, whichever is larger. There is a continuous crossover from the drag-dominated regime to the radiation-dominated regime. In the crossover regime we find evidence for significant destructive interference between energy loss due to drag and that due to radiation as if in vacuum. The rotating quark thus serves as a model system in which the relative strength of, and interplay between, two different mechanisms of parton energy loss is accessible via a controlled classical gravity calculation. We close by speculating on the implicati...
Plasma pressure and anisotropy inferred from the Tsyganenkomagnetic field model
Directory of Open Access Journals (Sweden)
F. Cao
Full Text Available A numerical procedure has been developed to deduce the plasma pressure and anisotropy from the Tsyganenko magnetic field model. The Tsyganenko empirical field model, which is based on vast satellite field data, provides a realistic description of magnetic field configuration in the magnetosphere. When the force balance under the static condition is assumed, the electromagnetic J×B force from the Tsyganenko field model can be used to infer the plasma pressure and anisotropy distributions consistent with the field model. It is found that the J×B force obtained from the Tsyganenko field model is not curl-free. The curl-free part of the J×B force in an empirical field model can be balanced by the gradient of the isotropic pressure, while the nonzero curl of the J×B force can only be associated with the pressure anisotropy. The plasma pressure and anisotropy in the near-Earth plasma sheet are numerically calculated to obtain a static equilibrium consistent with the Tsyganenko field model both in the noon-midnight meridian and in the equatorial plane. The plasma pressure distribution deduced from the Tsyganenko 1989 field model is highly anisotropic and shows this feature early in the substorm growth phase. The pressure anisotropy parameter α_P, defined as α_P = 1 − P_∥/P_⊥, is typically ~0.3 at x ≈ −4.5 R_E and gradually decreases to a small negative value with an increasing tailward distance. The pressure anisotropy from the Tsyganenko 1989 model accounts for 50% of the cross-tail current at maximum and only in a highly localized region near x ≈ −10 R_E. In comparison, the plasma pressure anisotropy inferred from the Tsyganenko 1987 model is much smaller. We also find that the boundary
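The logic of the inference in this abstract can be written compactly. The following is a sketch of the standard magnetostatic force-balance relations, assuming a gyrotropic pressure tensor (notation matches the abstract; the numerical procedure itself is not reproduced):

```latex
% Static force balance with a gyrotropic pressure tensor P:
\nabla \cdot \mathsf{P} = \mathbf{J} \times \mathbf{B},
\qquad
\mathsf{P} = P_\perp \mathsf{I}
           + (P_\parallel - P_\perp)\,\hat{\mathbf{b}}\hat{\mathbf{b}} .

% Isotropic case: P reduces to a scalar p, and since
% \nabla \times \nabla p = 0, balance is possible only if J x B is curl-free:
\nabla p = \mathbf{J} \times \mathbf{B}
\quad \Rightarrow \quad
\nabla \times (\mathbf{J} \times \mathbf{B}) = \mathbf{0} .

% A nonzero curl of J x B therefore requires pressure anisotropy, quantified by
\alpha_P = 1 - \frac{P_\parallel}{P_\perp} .
```

This is why, as stated above, the curl-free part of J×B constrains the isotropic pressure gradient while any nonzero curl can only be attributed to anisotropy.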
FLOODPLAIN, PLATTE COUNTY, MISSOURI USA
Federal Emergency Management Agency, Department of Homeland Security — The Floodplain Mapping/Redelineation study deliverables depict and quantify the flood risks for the study area. The primary risk classifications used are the...
TERRAIN, Platte County, Missouri USA
Federal Emergency Management Agency, Department of Homeland Security — Terrain data, as defined in FEMA Guidelines and Specifications, Appendix N: Data Capture Standards, describes the digital topographic data that was used to create...
Lower complexity bounds for lifted inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2015-01-01
instances of the model. Numerous approaches for such “lifted inference” techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show...... the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established earlier in Jaeger (2000; Jaeger, M. 2000. On the complexity of inference about...... that under the assumption that NETIME≠ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier-, and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference and for knowledge bases not containing...
Statistical inference for financial engineering
Taniguchi, Masanobu; Ogata, Hiroaki; Taniai, Hiroyuki
2014-01-01
This monograph provides the fundamentals of statistical inference for financial engineering and covers some selected methods suitable for analyzing financial time series data. In order to describe the actual financial data, various stochastic processes, e.g. non-Gaussian linear processes, non-linear processes, long-memory processes, locally stationary processes etc. are introduced and their optimal estimation is considered as well. This book also includes several statistical approaches, e.g., discriminant analysis, the empirical likelihood method, control variate method, quantile regression, realized volatility etc., which have been recently developed and are considered to be powerful tools for analyzing the financial data, establishing a new bridge between time series and financial engineering. This book is well suited as a professional reference book on finance, statistics and statistical financial engineering. Readers are expected to have an undergraduate-level knowledge of statistics.
Type inference for correspondence types
DEFF Research Database (Denmark)
Hüttel, Hans; Gordon, Andy; Hansen, Rene Rydhof
2009-01-01
We present a correspondence type/effect system for authenticity in a π-calculus with polarized channels, dependent pair types and effect terms and show how one may, given a process P and an a priori type environment E, generate constraints that are formulae in the Alternating Least Fixed......-Point (ALFP) logic. We then show how a reasonable model of the generated constraints yields a type/effect assignment such that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only...... if it has a finite model. As a consequence, we obtain the result that type/effect inference in our system is polynomial-time decidable....
Causal inference in public health.
Glass, Thomas A; Goodman, Steven N; Hernán, Miguel A; Samet, Jonathan M
2013-01-01
Causal inference has a central role in public health; the determination that an association is causal indicates the possibility for intervention. We review and comment on the long-used guidelines for interpreting evidence as supporting a causal association and contrast them with the potential outcomes framework that encourages thinking in terms of causes that are interventions. We argue that in public health this framework is more suitable, providing an estimate of an action's consequences rather than the less precise notion of a risk factor's causal effect. A variety of modern statistical methods adopt this approach. When an intervention cannot be specified, causal relations can still exist, but how to intervene to change the outcome will be unclear. In application, the often-complex structure of causal processes needs to be acknowledged and appropriate data collected to study them. These newer approaches need to be brought to bear on the increasingly complex public health challenges of our globalized world.
Inference Attacks and Control on Database Structures
Directory of Open Access Journals (Sweden)
Muhamed Turkanovic
2015-02-01
Today’s databases store information with sensitivity levels that range from public to highly sensitive, hence ensuring confidentiality can be highly important, but it also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy in relation to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since such models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.
LAIT: a local ancestry inference toolkit.
Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei
2017-09-06
Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing packages require specifically formatted input files and generate output files in various formats, which is inconvenient in practice. We developed a tool set, the Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats, as well as standardize and summarize inference results, for four popular local ancestry inference software packages: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that it makes it convenient to run multiple local ancestry inference packages. In addition, we evaluated the performance of the supported packages, focusing mainly on inference accuracy and the computational resources used. We provide a toolkit that facilitates the use of local ancestry inference software, especially for users with a limited bioinformatics background.
Forward and backward inference in spatial cognition.
Directory of Open Access Journals (Sweden)
Will D Penny
This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
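The forward-inference step for self-localization, in which a path-integration prediction is optimally combined with a sensory location estimate, amounts to precision-weighted fusion of two Gaussians. A minimal sketch (the numbers are illustrative, and the scalar Kalman form is an assumption rather than the paper's full model):

```python
def fuse(pred_mean, pred_var, obs_mean, obs_var):
    """Precision-weighted fusion of a path-integration prediction with a
    sensory location estimate (scalar Kalman update)."""
    k = pred_var / (pred_var + obs_var)        # Kalman gain
    mean = pred_mean + k * (obs_mean - pred_mean)
    var = (1.0 - k) * pred_var
    return mean, var

# Equally reliable cues: the fused estimate lies halfway between them,
# and its variance is half that of either cue alone.
m, v = fuse(pred_mean=2.0, pred_var=1.0, obs_mean=3.0, obs_var=1.0)
```

The same update, iterated over time, is the forward pass of a Bayes filter; backward inference corresponds to the smoothing pass that refines earlier state estimates once later observations are in hand.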
Generative Inferences Based on Learned Relations
Chen, Dawn; Lu, Hongjing; Holyoak, Keith J.
2017-01-01
A key property of relational representations is their "generativity": From partial descriptions of relations between entities, additional inferences can be drawn about other entities. A major theoretical challenge is to demonstrate how the capacity to make generative inferences could arise as a result of learning relations from…
Inference in models with adaptive learning
Chevillon, G.; Massmann, M.; Mavroeidis, S.
2010-01-01
Identification of structural parameters in models with adaptive learning can be weak, causing standard inference procedures to become unreliable. Learning also induces persistent dynamics, and this makes the distribution of estimators and test statistics non-standard. Valid inference can be
Fiducial inference - A Neyman-Pearson interpretation
Salome, D; VonderLinden, W; Dose,; Fischer, R; Preuss, R
1999-01-01
Fisher's fiducial argument is a tool for deriving inferences in the form of a probability distribution on the parameter space, not based on Bayes's Theorem. Lindley established that in exceptional situations fiducial inferences coincide with posterior distributions; in the other situations fiducial
Uncertainty in prediction and in inference
Hilgevoord, J.; Uffink, J.
1991-01-01
The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in
Causal inference in economics and marketing.
Varian, Hal R
2016-07-05
This is an elementary introduction to causal inference in economics written for readers familiar with machine learning methods. The critical step in any causal analysis is estimating the counterfactual: a prediction of what would have happened in the absence of the treatment. The powerful techniques used in machine learning may be useful for developing better estimates of the counterfactual, potentially improving causal inference.
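The counterfactual logic can be sketched on synthetic data: fit an outcome model on untreated units only, predict what the treated units would have looked like without treatment, and average the difference. The data-generating process and the linear outcome model below are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)                         # pre-treatment covariate
treated = rng.uniform(size=n) < 0.5            # randomized treatment flag
y0 = 2.0 + 1.5 * x + 0.2 * rng.normal(size=n)  # untreated (counterfactual) outcome
y = y0 + 1.0 * treated                         # true causal effect is +1.0

# Fit an outcome model on control units only, then predict the
# counterfactual (untreated) outcome for each treated unit.
Xc = np.column_stack([np.ones((~treated).sum()), x[~treated]])
beta, *_ = np.linalg.lstsq(Xc, y[~treated], rcond=None)
y_cf = beta[0] + beta[1] * x[treated]
ate = float(np.mean(y[treated] - y_cf))        # estimated treatment effect
```

In practice the linear model would be replaced by a flexible machine-learning predictor, which is exactly where the techniques the abstract mentions enter.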
Nonparametric predictive inference in statistical process control
Arts, G.R.J.; Coolen, F.P.A.; Laan, van der P.
2000-01-01
New methods for statistical process control are presented, where the inferences have a nonparametric predictive nature. We consider several problems in process control in terms of uncertainties about future observable random quantities, and we develop inferences for these random quantities based on
The Impact of Disablers on Predictive Inference
Cummins, Denise Dellarosa
2014-01-01
People consider alternative causes when deciding whether a cause is responsible for an effect (diagnostic inference) but appear to neglect them when deciding whether an effect will occur (predictive inference). Five experiments were conducted to test a 2-part explanation of this phenomenon: namely, (a) that people interpret standard predictive…
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark
2006-01-01
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference...
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating...
Extended likelihood inference in reliability
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.
1978-10-01
Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist
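For the exponential model with a gamma prior on the failure rate, the updates described above are conjugate and can be sketched in a few lines (the numbers are illustrative; the paper's extended-likelihood intervals involve a standardization step not shown here):

```python
# Gamma(a, b) prior on an exponential failure rate, updated by
# observed failure counts and total exposure time.
def posterior(a, b, n_failures, total_time):
    """Gamma(a, b) prior x exponential likelihood -> Gamma(a + n, b + T)."""
    return a + n_failures, b + total_time

def next_failure_survival(a, b, t):
    """P(next failure time > t) under the posterior predictive (Lomax)."""
    return (b / (b + t)) ** a

# Prior Gamma(2, 10); observe 3 failures over 25 time units.
a1, b1 = posterior(2.0, 10.0, n_failures=3, total_time=25.0)
s = next_failure_survival(a1, b1, t=5.0)   # survival prob. of next failure time
```

The predictive distribution for the next failure time integrates the exponential likelihood against the gamma posterior, which is the kind of prediction problem (next failure-time, failures in a period) the abstract enumerates.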
Reinforcement learning or active inference?
Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J
2009-07-29
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
Reinforcement learning or active inference?
Directory of Open Access Journals (Sweden)
Karl J Friston
2009-07-01
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
Active inference and epistemic value.
Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni
2015-01-01
We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.
Ancient Biomolecules and Evolutionary Inference.
Cappellini, Enrico; Prohaska, Ana; Racimo, Fernando; Welker, Frido; Pedersen, Mikkel Winther; Allentoft, Morten E; de Barros Damgaard, Peter; Gutenbrunner, Petra; Dunne, Julie; Hammann, Simon; Roffet-Salque, Mélanie; Ilardo, Melissa; Moreno-Mayar, J Víctor; Wang, Yucheng; Sikora, Martin; Vinner, Lasse; Cox, Jürgen; Evershed, Richard P; Willerslev, Eske
2018-04-25
Over the last decade, studies of ancient biomolecules-particularly ancient DNA, proteins, and lipids-have revolutionized our understanding of evolutionary history. Though initially fraught with many challenges, the field now stands on firm foundations. Researchers now successfully retrieve nucleotide and amino acid sequences, as well as lipid signatures, from progressively older samples, originating from geographic areas and depositional environments that, until recently, were regarded as hostile to long-term preservation of biomolecules. Sampling frequencies and the spatial and temporal scope of studies have also increased markedly, and with them the size and quality of the data sets generated. This progress has been made possible by continuous technical innovations in analytical methods, enhanced criteria for the selection of ancient samples, integrated experimental methods, and advanced computational approaches. Here, we discuss the history and current state of ancient biomolecule research, its applications to evolutionary inference, and future directions for this young and exciting field. Expected final online publication date for the Annual Review of Biochemistry Volume 87 is June 20, 2018. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development...... of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed...
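The core SBL iteration for finding a sparse representation in an overcomplete dictionary can be sketched with a real-valued toy problem and an EM-style automatic relevance determination (ARD) update; the thesis treats complex signals and a hierarchical prior representation, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 20
A = rng.normal(size=(n, d))            # overcomplete-style dictionary
w_true = np.zeros(d)
w_true[[2, 7]] = [3.0, -2.0]           # sparse ground truth
y = A @ w_true + 0.1 * rng.normal(size=n)

# SBL/ARD: zero-mean Gaussian prior with per-weight variances gamma,
# updated by EM; variances of irrelevant weights shrink toward zero.
gamma = np.ones(d)
sigma2 = 0.01                          # assumed noise variance
for _ in range(50):
    S = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = S @ A.T @ y / sigma2          # posterior mean of the weights
    gamma = mu**2 + np.diag(S)         # EM update of the prior variances

support = np.flatnonzero(gamma > 0.5)  # surviving (relevant) basis vectors
```

The pruning behavior, where most gamma values collapse and only the truly active dictionary columns survive, is what makes the Bayesian estimators sparse; as the thesis notes, the details differ when complex rather than real variables are inferred.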
EI: A Program for Ecological Inference
Directory of Open Access Journals (Sweden)
Gary King
2004-09-01
The program EI provides a method of inferring individual behavior from aggregate data. It implements the statistical procedures, diagnostics, and graphics from the book A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data (King 1997). Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Ecological inferences are required in political science research when individual-level surveys are unavailable (e.g., local or comparative electoral politics), unreliable (racial politics), insufficient (political geography), or infeasible (political history). They are also required in numerous areas of major significance in public policy (e.g., for applying the Voting Rights Act) and in other academic disciplines ranging from epidemiology and marketing to sociology and quantitative history.
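The starting point of ecological inference is the deterministic bound that aggregate data place on an individual-level rate in each precinct, which EI's statistical model then narrows. A minimal sketch of that method-of-bounds step (the numbers are illustrative):

```python
def deterministic_bounds(x, t):
    """Bounds on beta_x, the unknown rate for a group making up fraction x
    of a precinct whose aggregate rate is t (method of bounds)."""
    lo = max(0.0, (t - (1.0 - x)) / x)
    hi = min(1.0, t / x)
    return lo, hi

# A precinct that is 50% group X with 60% aggregate turnout:
# group X's turnout rate must lie between 20% and 100%.
lo, hi = deterministic_bounds(x=0.5, t=0.6)
```

Aggregating such interval constraints across precincts, and combining them with a statistical model of how the rates vary, is what turns the bounds into usable point estimates.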
Sparse linear models: Variational approximate inference and Bayesian experimental design
International Nuclear Information System (INIS)
Seeger, Matthias W
2009-01-01
A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
Sparse linear models: Variational approximate inference and Bayesian experimental design
Energy Technology Data Exchange (ETDEWEB)
Seeger, Matthias W [Saarland University and Max Planck Institute for Informatics, Campus E1.4, 66123 Saarbruecken (Germany)
2009-12-01
A wide range of problems such as signal reconstruction, denoising, source separation, feature selection, and graphical model search are addressed today by posterior maximization for linear models with sparsity-favouring prior distributions. The Bayesian posterior contains useful information far beyond its mode, which can be used to drive methods for sampling optimization (active learning), feature relevance ranking, or hyperparameter estimation, if only this representation of uncertainty can be approximated in a tractable manner. In this paper, we review recent results for variational sparse inference, and show that they share underlying computational primitives. We discuss how sampling optimization can be implemented as sequential Bayesian experimental design. While there has been tremendous recent activity to develop sparse estimation, little attention has been paid to sparse approximate inference. In this paper, we argue that many problems in practice, such as compressive sensing for real-world image reconstruction, are served much better by proper uncertainty approximations than by ever more aggressive sparse estimation algorithms. Moreover, since some variational inference methods have been given strong convex optimization characterizations recently, theoretical analysis may become possible, promising new insights into nonlinear experimental design.
Inferring the gene network underlying the branching of tomato inflorescence.
Directory of Open Access Journals (Sweden)
Laura Astola
The architecture of tomato inflorescence strongly affects flower production and subsequent crop yield. To understand the genetic activities involved, insight into the underlying network of genes that initiate and control the sympodial growth in the tomato is essential. In this paper, we show how the structure of this network can be derived from available data of the expressions of the involved genes. Our approach starts from employing biological expert knowledge to select the most probable gene candidates behind branching behavior. To find how these genes interact, we develop a stepwise procedure for computational inference of the network structure. Our data consists of expression levels from primary shoot meristems, measured at different developmental stages on three different genotypes of tomato. With the network inferred by our algorithm, we can explain the dynamics corresponding to all three genotypes simultaneously, despite their apparent dissimilarities. We also correctly predict the chronological order of expression peaks for the main hubs in the network. Based on the inferred network, using optimal experimental design criteria, we are able to suggest an informative set of experiments for further investigation of the mechanisms underlying branching behavior.
Bayesian inference of chemical kinetic models from proposed reactions
Galagali, Nikhil
2015-02-01
© 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure-such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
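The model-space exploration can be illustrated with a toy analogue: systematically generated candidate "reactions" act as regressors in a linear-Gaussian model, and candidate subsets are ranked by marginal likelihood. This is a stand-in for the point-mass mixture priors and adaptive MCMC the paper actually uses; all data below are synthetic.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))                   # candidate "reaction" regressors
y = 1.5 * X[:, 0] + 0.1 * rng.normal(size=n)  # only candidate 0 is active

def log_evidence(Xs, y, tau2=1.0, s2=0.01):
    """Log marginal likelihood of a linear-Gaussian model with N(0, tau2)
    priors on the included coefficients and noise variance s2."""
    m = len(y)
    C = s2 * np.eye(m) + tau2 * Xs @ Xs.T
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y) + m * np.log(2 * np.pi))

# Systematically generate all candidate subsets and rank them by evidence.
models = [m for k in range(4) for m in itertools.combinations(range(3), k)]
scores = {m: log_evidence(X[:, list(m)], y) for m in models}
best = max(scores, key=scores.get)
```

The marginal likelihood automatically penalizes superfluous "reactions" (the Occam factor), which is the same mechanism that lets the paper's posterior concentrate on plausible kinetic pathways.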
Statistical inference: an integrated Bayesian/likelihood approach
Aitkin, Murray
2010-01-01
Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing.After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre
Quantum electrodynamics of strong fields
International Nuclear Information System (INIS)
Greiner, W.
1983-01-01
Quantum Electrodynamics of Strong Fields provides a broad survey of the theoretical and experimental work accomplished, presenting papers by a group of international researchers who have made significant contributions to this developing area. Exploring the quantum theory of strong fields, the volume focuses on the phase transition to a charged vacuum in strong electric fields. The contributors also discuss such related topics as QED at short distances, precision tests of QED, nonperturbative QCD and confinement, pion condensation, and strong gravitational fields. In addition, the volume features a historical paper on the roots of quantum field theory in the history of quantum physics by noted researcher Friedrich Hund
Instabilities in strongly coupled plasmas
Kalman, G J
2003-01-01
The conventional Vlasov treatment of beam-plasma instabilities is inappropriate when the plasma is strongly coupled. In the strongly coupled liquid state, the strong correlations between the dust grains fundamentally affect the conditions for instability. In the crystalline state, the inherent anisotropy couples the longitudinal and transverse polarizations, and results in unstable excitations in both polarizations. We summarize analyses of resonant and non-resonant, as well as resistive instabilities. We consider both ion-dust streaming and dust beam-plasma instabilities. Strong coupling, in general, leads to an enhancement of the growth rates. In the crystalline phase, a resonant transverse instability can be excited.
Short proofs of strong normalization
Wojdyga, Aleksander
2008-01-01
This paper presents simple, syntactic strong normalization proofs for the simply-typed lambda-calculus and the polymorphic lambda-calculus (system F) with the full set of logical connectives, and all the permutative reductions. The normalization proofs use translations of terms and types to systems, for which strong normalization property is known.
Inferring Domain Plans in Question-Answering
National Research Council Canada - National Science Library
Pollack, Martha E
1986-01-01
The importance of plan inference in models of conversation has been widely noted in the computational-linguistics literature, and its incorporation in question-answering systems has enabled a range...
Scalable inference for stochastic block models
Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.
2017-01-01
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference
International Nuclear Information System (INIS)
DeSantis, G.N.
1995-01-01
The calculation evaluates the integrity of the safety latch that will hold the strong-back to the pump during lifting. The safety latch will be welded to the strong-back and will latch to a 1.5-in. dia cantilever rod welded to the pump baseplate. The static and dynamic analysis shows that the safety latch will hold the strong-back to the pump if the friction clamps fail and the pump becomes free from the strong-back. Thus, the safety latch will meet the requirements of the Lifting and Rigging Manual for under-the-hook lifting for static loading; it can withstand shock loads from the strong-back falling 0.25 inch
Efficient algorithms for conditional independence inference
Czech Academy of Sciences Publication Activity Database
Bouckaert, R.; Hemmecke, R.; Lindner, S.; Studený, Milan
2010-01-01
Vol. 11, No. 1 (2010), pp. 3453-3479 ISSN 1532-4435 R&D Projects: GA ČR GA201/08/0539; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords: conditional independence inference * linear programming approach Subject RIV: BA - General Mathematics Impact factor: 2.949, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-efficient algorithms for conditional independence inference.pdf
On the criticality of inferred models
Mastromatteo, Iacopo; Marsili, Matteo
2011-10-01
Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
On the criticality of inferred models
International Nuclear Information System (INIS)
Mastromatteo, Iacopo; Marsili, Matteo
2011-01-01
Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
Polynomial Chaos Surrogates for Bayesian Inference
Le Maitre, Olivier
2016-01-06
Bayesian inference is a popular probabilistic method for solving inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on Bayes' rule to update the prior density of the sought field from observations and derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g., by means of a Karhunen-Loève decomposition) to recast the inference problem as the inference of a finite number of coordinates. Although proved effective in many situations, Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task, as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced models or surrogates for the (approximate) determination of the parametrized PDE solution and of hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of posterior sampling by means of polynomial chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making such approaches adaptive to further improve their efficiency.
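The surrogate-accelerated sampling idea can be sketched in a few lines: fit a cheap polynomial surrogate to an "expensive" forward model once, then run Metropolis sampling against the surrogate only. Everything below, the forward map, the noise level sigma, and the standard-normal prior, is an invented toy standing in for the paper's PDE setting, and the polynomial fit is an ordinary least-squares fit rather than a true polynomial chaos expansion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "expensive" forward model (stands in for a PDE solve) and a
# cheap polynomial surrogate, precomputed once so the MCMC loop never
# touches the expensive model again.
def forward(theta):
    return np.sin(theta) + 0.1 * theta**2   # pretend this is costly

nodes = np.linspace(-3.0, 3.0, 9)                   # training designs
coeffs = np.polyfit(nodes, forward(nodes), deg=6)   # surrogate coefficients
surrogate = lambda theta: np.polyval(coeffs, theta)

# Synthetic observation, Gaussian likelihood, standard-normal prior.
theta_true, sigma = 1.0, 0.1
y_obs = forward(theta_true)

def log_post(theta):
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2 - 0.5 * theta**2

# Random-walk Metropolis using only surrogate evaluations.
theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

print(f"posterior mean ≈ {np.mean(samples[5000:]):.2f}")
```

The posterior mass concentrates near theta_true; the point of the construction is that the 20,000 posterior evaluations each cost a polynomial evaluation instead of a model solve.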
A Bayesian Network Schema for Lessening Database Inference
National Research Council Canada - National Science Library
Chang, LiWu; Moskowitz, Ira S
2001-01-01
.... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayesian probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds and strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of Bayesian signal detection and estimation methods and a demonstration with a couple of simplified examples.
Large orders in strong-field QED
Energy Technology Data Exchange (ETDEWEB)
Heinzl, Thomas [School of Mathematics and Statistics, University of Plymouth, Drake Circus, Plymouth PL4 8AA (United Kingdom); Schroeder, Oliver [Science-Computing ag, Hagellocher Weg 73, D-72070 Tuebingen (Germany)
2006-09-15
We address the issue of large-order expansions in strong-field QED. Our approach is based on the one-loop effective action encoded in the associated photon polarization tensor. We concentrate on the simple case of crossed fields aiming at possible applications of high-power lasers to measure vacuum birefringence. A simple next-to-leading order derivative expansion reveals that the indices of refraction increase with frequency. This signals normal dispersion in the small-frequency regime where the derivative expansion makes sense. To gain information beyond that regime we determine the factorial growth of the derivative expansion coefficients evaluating the first 82 orders by means of computer algebra. From this we can infer a nonperturbative imaginary part for the indices of refraction indicating absorption (pair production) as soon as energy and intensity become (super)critical. These results compare favourably with an analytic evaluation of the polarization tensor asymptotics. Kramers-Kronig relations finally allow for a nonperturbative definition of the real parts as well and show that absorption goes hand in hand with anomalous dispersion for sufficiently large frequencies and fields.
Quantum centipedes with strong global constraint
Grange, Pascal
2017-06-01
A centipede made of N quantum walkers on a one-dimensional lattice is considered. The distance between two consecutive legs is either one or two lattice spacings, and a global constraint is imposed: the maximal distance between the first and last leg is N + 1. This is the strongest global constraint compatible with walking. For an initial value of the wave function corresponding to a localized configuration at the origin, the probability law of the first leg of the centipede can be expressed in closed form in terms of Bessel functions. The dispersion relation and the group velocities are worked out exactly. Their maximal group velocity goes to zero when N goes to infinity, which is in contrast with the behaviour of group velocities of quantum centipedes without global constraint, which were recently shown by Krapivsky, Luck and Mallick to give rise to ballistic spreading of extremal wave-front at non-zero velocity in the large-N limit. The corresponding Hamiltonians are implemented numerically, based on a block structure of the space of configurations corresponding to compositions of the integer N. The growth of the maximal group velocity when the strong constraint is gradually relaxed is explored, and observed to be linear in the density of gaps allowed in the configurations. Heuristic arguments are presented to infer that the large-N limit of the globally constrained model can yield finite group velocities provided the allowed number of gaps is a finite fraction of N.
A formal model of interpersonal inference
Directory of Open Access Journals (Sweden)
Michael eMoutoussis
2014-03-01
Full Text Available Introduction: We propose that active Bayesian inference – a general framework for decision-making – can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regard to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: 1. Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to 'mentalising' in the psychological literature, is based upon the outcomes of interpersonal exchanges. 2. We show how some well-known social-psychological phenomena (e.g., self-serving biases) can be explained in terms of active interpersonal inference. 3. Mentalising naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes-optimal framework for modelling intersubject variability in mentalising during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalising is distorted.
Estimating mountain basin-mean precipitation from streamflow using Bayesian inference
Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.
2015-10-01
Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Inferring Mathematical Equations Using Crowdsourcing.
Directory of Open Access Journals (Sweden)
Szymon Wasik
Full Text Available Crowdsourcing, understood as outsourcing work to a large network of people in the form of an open call, has been utilized successfully many times, including a very interesting concept involving the implementation of computer games with the objective of solving a scientific problem by employing users to play a game-so-called crowdsourced serious games. Our main objective was to verify whether such an approach could be successfully applied to the discovery of mathematical equations that explain experimental data gathered during the observation of a given dynamic system. Moreover, we wanted to compare it with an approach based on artificial intelligence that uses symbolic regression to find such formulae automatically. To achieve this, we designed and implemented an Internet game in which players attempt to design a spaceship representing an equation that models the observed system. The game was designed while considering that it should be easy to use for people without strong mathematical backgrounds. Moreover, we tried to make use of the collective intelligence observed in crowdsourced systems by enabling many players to collaborate on a single solution. The idea was tested on several hundred players who played almost 10,000 games, and a user opinion survey was conducted. The results prove that the proposed solution has very high potential. The function generated during weeklong tests was almost as precise as the analytical solution of the model of the system and, up to a certain complexity level of the formulae, it explained data better than the solution generated automatically by Eureqa, the leading software application for the implementation of symbolic regression. Moreover, we observed benefits of using crowdsourcing; the chain of consecutive solutions that led to the best solution was obtained by the continuous collaboration of several players.
Inferring Mathematical Equations Using Crowdsourcing.
Wasik, Szymon; Fratczak, Filip; Krzyskow, Jakub; Wulnikowski, Jaroslaw
2015-01-01
Crowdsourcing, understood as outsourcing work to a large network of people in the form of an open call, has been utilized successfully many times, including a very interesting concept involving the implementation of computer games with the objective of solving a scientific problem by employing users to play a game-so-called crowdsourced serious games. Our main objective was to verify whether such an approach could be successfully applied to the discovery of mathematical equations that explain experimental data gathered during the observation of a given dynamic system. Moreover, we wanted to compare it with an approach based on artificial intelligence that uses symbolic regression to find such formulae automatically. To achieve this, we designed and implemented an Internet game in which players attempt to design a spaceship representing an equation that models the observed system. The game was designed while considering that it should be easy to use for people without strong mathematical backgrounds. Moreover, we tried to make use of the collective intelligence observed in crowdsourced systems by enabling many players to collaborate on a single solution. The idea was tested on several hundred players who played almost 10,000 games, and a user opinion survey was conducted. The results prove that the proposed solution has very high potential. The function generated during weeklong tests was almost as precise as the analytical solution of the model of the system and, up to a certain complexity level of the formulae, it explained data better than the solution generated automatically by Eureqa, the leading software application for the implementation of symbolic regression. Moreover, we observed benefits of using crowdsourcing; the chain of consecutive solutions that led to the best solution was obtained by the continuous collaboration of several players.
Deep Learning for Population Genetic Inference.
Sheehan, Sara; Song, Yun S
2016-03-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
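A heavily simplified version of the likelihood-free pipeline described above can be sketched as follows: simulate data under known parameters, reduce each simulation to summary statistics, and train a small neural network to regress the parameter from the statistics. The simulator (Poisson counts of segregating sites per window) and the two crude statistics below are invented stand-ins, far simpler than the models and feature sets used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulator: per-window counts of segregating sites, Poisson with a mean
# proportional to the parameter theta (a stand-in for scaled mutation rate).
def simulate(theta, n_windows=20):
    s = rng.poisson(10.0 * theta, size=n_windows)
    return np.array([s.mean(), s.std()])     # two crude summary statistics

thetas = rng.uniform(0.5, 2.0, size=2000)    # known "true" parameters
X = np.array([simulate(t) for t in thetas])
X = (X - X.mean(0)) / X.std(0)               # standardize the features
y = thetas

# One-hidden-layer MLP trained by plain full-batch gradient descent on MSE.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    gW2 = H.T @ err[:, None] / len(y); gb2 = np.array([err.mean()])
    dH = err[:, None] @ W2.T * (1 - H**2)    # backprop through tanh
    gW1 = X.T @ dH / len(y); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = float(np.sqrt(np.mean(err**2)))
print(f"train RMSE: {rmse:.3f}")
```

No likelihood is ever evaluated: the network learns the map from statistics back to parameters purely from simulations, which is what makes the approach "likelihood-free".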
Deep Learning for Population Genetic Inference.
Directory of Open Access Journals (Sweden)
Sara Sheehan
2016-03-01
Full Text Available Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme.
Deep Learning for Population Genetic Inference
Sheehan, Sara; Song, Yun S.
2016-01-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908
Inferring Phylogenetic Networks Using PhyloNet.
Wen, Dingqiao; Yu, Yun; Zhu, Jiafan; Nakhleh, Luay
2018-07-01
PhyloNet was released in 2008 as a software package for representing and analyzing phylogenetic networks. At the time of its release, the main functionalities in PhyloNet consisted of measures for comparing network topologies and a single heuristic for reconciling gene trees with a species tree. Since then, PhyloNet has grown significantly. The software package now includes a wide array of methods for inferring phylogenetic networks from data sets of unlinked loci while accounting for both reticulation (e.g., hybridization) and incomplete lineage sorting. In particular, PhyloNet now allows for maximum parsimony, maximum likelihood, and Bayesian inference of phylogenetic networks from gene tree estimates. Furthermore, Bayesian inference directly from sequence data (sequence alignments or biallelic markers) is implemented. Maximum parsimony is based on an extension of the "minimizing deep coalescences" criterion to phylogenetic networks, whereas maximum likelihood and Bayesian inference are based on the multispecies network coalescent. All methods allow for multiple individuals per species. As computing the likelihood of a phylogenetic network is computationally hard, PhyloNet allows for evaluation and inference of networks using a pseudolikelihood measure. PhyloNet summarizes the results of the various analyses and generates phylogenetic networks in the extended Newick format that is readily viewable by existing visualization software.
Goal inferences about robot behavior : goal inferences and human response behaviors
Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.
2014-01-01
This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors cause different response behavior from people.
Cultural effects on the association between election outcomes and face-based trait inferences.
Directory of Open Access Journals (Sweden)
Chujun Lin
Full Text Available How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants' inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the
Cultural effects on the association between election outcomes and face-based trait inferences.
Lin, Chujun; Adolphs, Ralph; Alvarez, R Michael
2017-01-01
How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants' inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the difficult question of
Cultural effects on the association between election outcomes and face-based trait inferences
Adolphs, Ralph; Alvarez, R. Michael
2017-01-01
How competent a politician looks, as assessed in the laboratory, is correlated with whether the politician wins in real elections. This finding has led many to investigate whether the association between candidate appearances and election outcomes transcends cultures. However, these studies have largely focused on European countries and Caucasian candidates. To the best of our knowledge, there are only four cross-cultural studies that have directly investigated how face-based trait inferences correlate with election outcomes across Caucasian and Asian cultures. These prior studies have provided some initial evidence regarding cultural differences, but methodological problems and inconsistent findings have complicated our understanding of how culture mediates the effects of candidate appearances on election outcomes. Additionally, these four past studies have focused on positive traits, with a relative neglect of negative traits, resulting in an incomplete picture of how culture may impact a broader range of trait inferences. To study Caucasian-Asian cultural effects with a more balanced experimental design, and to explore a more complete profile of traits, here we compared how Caucasian and Korean participants’ inferences of positive and negative traits correlated with U.S. and Korean election outcomes. Contrary to previous reports, we found that inferences of competence (made by participants from both cultures) correlated with both U.S. and Korean election outcomes. Inferences of open-mindedness and threat, two traits neglected in previous cross-cultural studies, were correlated with Korean but not U.S. election outcomes. This differential effect was found in trait judgments made by both Caucasian and Korean participants. Interestingly, the faster the participants made face-based trait inferences, the more strongly those inferences were correlated with real election outcomes. These findings provide new insights into cultural effects and the difficult question of
Phylogeny and Divergence Times of Lemurs Inferred with Recent and Ancient Fossils in the Tree.
Herrera, James P; Dávalos, Liliana M
2016-09-01
Paleontological and neontological systematics seek to answer evolutionary questions with different data sets. Phylogenies inferred for combined extant and extinct taxa provide novel insights into the evolutionary history of life. Primates have an extensive, diverse fossil record and molecular data for living and extinct taxa are rapidly becoming available. We used two models to infer the phylogeny and divergence times for living and fossil primates, the tip-dating (TD) and fossilized birth-death process (FBD). We collected new morphological data, especially on the living and extinct endemic lemurs of Madagascar. We combined the morphological data with published DNA sequences to infer near-complete (88% of lemurs) time-calibrated phylogenies. The results suggest that primates originated around the Cretaceous-Tertiary boundary, slightly earlier than indicated by the fossil record and later than previously inferred from molecular data alone. We infer novel relationships among extinct lemurs, and strong support for relationships that were previously unresolved. Dates inferred with TD were significantly older than those inferred with FBD, most likely related to an assumption of a uniform branching process in the TD compared with a birth-death process assumed in the FBD. This is the first study to combine morphological and DNA sequence data from extinct and extant primates to infer evolutionary relationships and divergence times, and our results shed new light on the tempo of lemur evolution and the efficacy of combined phylogenetic analyses. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Using Alien Coins to Test Whether Simple Inference Is Bayesian
Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.
2016-01-01
Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…
International Nuclear Information System (INIS)
Aoki, Ken-ichi
1988-01-01
Existence of a strong coupling phase in QED has been suggested in solutions of the Schwinger-Dyson equation and in Monte Carlo simulations of lattice QED. In this article we recapitulate the previous arguments and formulate the problem in the modern framework of renormalization theory, Wilsonian renormalization. This scheme of renormalization gives the best understanding of the basic structure of a field theory, especially when it has a multi-phase structure. We resolve some misleading arguments in the previous literature. We then set up a strategy to attack the strong coupling phase, if any, and describe a first attempt: a coupled Schwinger-Dyson equation. A possible picture of the strong coupling phase of QED is presented. (author)
Fuzzy logic controller using different inference methods
International Nuclear Information System (INIS)
Liu, Z.; De Keyser, R.
1994-01-01
In this paper the design of fuzzy controllers using different inference methods is introduced. The configuration of the fuzzy controllers includes a general rule base, which is a collection of fuzzy PI or PD rules, a triangular fuzzy data model and a centre-of-gravity defuzzification algorithm. The generalized modus ponens (GMP) is used with the minimum operator of the triangular norm. Under the sup-min inference rule, six fuzzy implication operators are employed to calculate the fuzzy look-up tables for each rule base. The performance is tested in simulated systems with MATLAB/SIMULINK. Results show the effects of using fuzzy controllers with different inference methods when applied to different test processes.
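The pipeline described above (rule base, min/GMP inference, max aggregation, centre-of-gravity defuzzification) can be sketched in a few lines. This is a minimal illustration, not the paper's controller: the single error input, the three-rule PI-style rule base, and the triangular set boundaries are all hypothetical.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c], peak at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical one-input controller: error -> control action.
error_sets  = {"NEG": (-2, -1, 0), "ZERO": (-1, 0, 1), "POS": (0, 1, 2)}
output_sets = {"NEG": (-2, -1, 0), "ZERO": (-1, 0, 1), "POS": (0, 1, 2)}
rules = [("NEG", "NEG"), ("ZERO", "ZERO"), ("POS", "POS")]

def control(e, n=201):
    u = np.linspace(-2, 2, n)            # discretized output universe
    agg = np.zeros(n)
    for ante, cons in rules:
        w = tri(e, *error_sets[ante])    # firing strength (min / GMP)
        mu = np.minimum(w, [tri(v, *output_sets[cons]) for v in u])
        agg = np.maximum(agg, mu)        # max aggregation over rules
    return float(np.sum(u * agg) / np.sum(agg))  # centre of gravity

print(round(control(0.5), 3))  # 0.5
```

Swapping the `np.minimum` clipping for another implication operator is exactly the kind of variation the paper compares across its six operators.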
Uncertainty in prediction and in inference
International Nuclear Information System (INIS)
Hilgevoord, J.; Uffink, J.
1991-01-01
The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in inference can be obtained by means of the so-called statistical distance between probability distributions. When applied to quantum mechanics, this distance leads to a measure of the distinguishability of quantum states, which essentially is the absolute value of the matrix element between the states. The importance of this result to the quantum mechanical uncertainty principle is noted. The second part of the paper provides a derivation of the statistical distance on the basis of the so-called method of support
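One standard realization of such a statistical distance between discrete distributions is the Bhattacharyya angle, arccos of the overlap of the square-rooted probabilities; for quantum states this reduces to the angle whose cosine is the absolute value of the matrix element between them. A small sketch (the specific distance is an assumption; the paper's derivation is via the method of support):

```python
import math

def statistical_distance(p, q):
    """Bhattacharyya angle between two discrete probability distributions."""
    fidelity = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    return math.acos(min(fidelity, 1.0))  # clamp against rounding noise

# Identical distributions are indistinguishable (distance 0)...
print(statistical_distance([0.5, 0.5], [0.5, 0.5]))            # 0.0
# ...disjoint ones are maximally distinguishable (pi/2).
print(round(statistical_distance([1.0, 0.0], [0.0, 1.0]), 4))  # 1.5708
```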
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions in automating grammar generation and in updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from its positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics in improving the grammar description and in avoiding the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, since it has a very high probability of parsing valid sentences.
Examples in parametric inference with R
Dixit, Ulhas Jayram
2016-01-01
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimator are discussed. Chapter 6 discusses Bayes, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory cou...
Grammatical inference algorithms, routines and applications
Wieczorek, Wojciech
2017-01-01
This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.
Statistical inference based on divergence measures
Pardo, Leandro
2005-01-01
The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach.Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data with emphasis on problems in contingency tables and loglinear models using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
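A concrete member of this family is the Cressie-Read power-divergence statistic, which contains Pearson's chi-square and the likelihood-ratio statistic as special cases. A small sketch, with hypothetical observed/expected counts for a three-cell table:

```python
import numpy as np

def power_divergence(obs, exp, lam):
    """Cressie-Read power-divergence statistic.
    lam = 1 gives Pearson's chi-square; lam -> 0 gives G^2 (LR statistic)."""
    obs, exp = np.asarray(obs, float), np.asarray(exp, float)
    if abs(lam) < 1e-12:
        return 2.0 * np.sum(obs * np.log(obs / exp))
    return 2.0 / (lam * (lam + 1)) * np.sum(obs * ((obs / exp) ** lam - 1))

obs = [18, 55, 27]   # hypothetical observed counts
exp = [25, 50, 25]   # expected counts under the null
print(round(power_divergence(obs, exp, 1.0), 3))   # 2.62, Pearson chi-square
print(round(power_divergence(obs, exp, 0.0), 3))   # likelihood-ratio G^2
```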
Strong interactions at high energy
International Nuclear Information System (INIS)
Anselmino, M.
1995-01-01
Spin effects in strong interaction high energy processes are subtle phenomena which involve both short and long distance physics and test perturbative and non perturbative aspects of QCD. Moreover, depending on quantities like interferences between different amplitudes and relative phases, spin observables always test a theory at a fundamental quantum mechanical level; it is then no surprise that spin data are often difficult to accommodate within the existing models. A report is made on the main issues and contributions discussed in the parallel session on "Strong interactions at high energy" at this Conference. copyright 1995 American Institute of Physics
Strong-field dissociation dynamics
International Nuclear Information System (INIS)
DiMauro, L.F.; Yang, Baorui.
1993-01-01
The strong-field dissociation behavior of diatomic molecules is examined under two distinctive physical scenarios. In the first scenario, the dissociation of the isolated hydrogen and deuterium molecular ions is discussed. The dynamics of above-threshold dissociation (ATD) are investigated over a wide range of green and infrared intensities and compared to a dressed-state model. The second situation arises when strong-field neutral dissociation is followed by ionization of the atomic fragments. The study results in a direct measure of the atomic fragment's ac-Stark shift by observing the intensity-dependent shifts in the electron or nuclear fragment kinetic energy. 8 figs., 14 refs
Improved Inference of Heteroscedastic Fixed Effects Models
Directory of Open Access Journals (Sweden)
Afshan Saeed
2016-12-01
Full Text Available Heteroscedasticity is a serious problem that distorts estimation and testing in the panel data model (PDM). Arellano (1987) proposed the White (1980) estimator for the PDM with heteroscedastic errors, but it provides erroneous inference for data sets including high leverage points. In this paper, our attempt is to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel data sets with high leverage points. To draw robust inference for the PDM, our focus is to improve the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used to assess the results.
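The White-style sandwich estimator and its leverage correction can be sketched for a simple cross-sectional regression (the panel/kernel-bootstrap machinery of the paper is beyond a sketch; the data-generating process here is hypothetical). HC3, which divides each squared residual by (1 - h_i)^2, is one standard way to downweight high leverage points:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
# Heteroscedastic errors: noise variance grows with |x|.
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n) * (0.5 + np.abs(X[:, 1]))

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)      # leverages h_i

def hccme(weights):
    """Sandwich estimator with per-observation squared-residual weights."""
    meat = (X * weights[:, None]).T @ X
    return XtX_inv @ meat @ XtX_inv

V_hc0 = hccme(e**2)                  # White (1980)
V_hc3 = hccme(e**2 / (1 - h)**2)     # leverage-corrected variant
print(np.sqrt(np.diag(V_hc0)), np.sqrt(np.diag(V_hc3)))
```

HC3 standard errors are never smaller than HC0's, since each weight is inflated by a factor of at least one.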
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... process, where the germs are the centres and the marks are the associated radii of the discs. We propose to use a recent parametric class of interacting disc process models, where the minimal sufficient statistic depends on various geometric properties of the random set, and the density is specified......-based maximum likelihood inference and the effect of specifying different reference Poisson models....
IMAGINE: Interstellar MAGnetic field INference Engine
Steininger, Theo
2018-03-01
IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.
Inferring causality from noisy time series data
DEFF Research Database (Denmark)
Mønster, Dan; Fusaroli, Riccardo; Tylén, Kristian
2016-01-01
Convergent Cross-Mapping (CCM) has shown high potential to perform causal inference in the absence of models. We assess the strengths and weaknesses of the method by varying coupling strength and noise levels in coupled logistic maps. We find that CCM fails to infer accurate coupling strength...... and even causality direction in synchronized time-series and in the presence of intermediate coupling. We find that the presence of noise deterministically reduces the level of cross-mapping fidelity, while the convergence rate exhibits higher levels of robustness. Finally, we propose that controlled noise...
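The setup in this abstract can be reproduced in miniature: unidirectionally coupled logistic maps, a delay embedding of the driven variable, and simplex-projection cross-mapping whose skill should converge with library size. Map parameters, coupling strength, embedding dimension and library sizes below are hypothetical choices, not the authors':

```python
import numpy as np

# Unidirectionally coupled logistic maps: y drives x, not vice versa.
n = 1000
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    y[t + 1] = y[t] * (3.7 - 3.7 * y[t])                # autonomous driver
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t] - 0.2 * y[t])   # driven variable

def ccm_skill(source, target, lib, E=2):
    """Cross-map `target` from the delay embedding of `source`,
    using the first `lib` points as library (simplex projection)."""
    emb = np.column_stack([source[E - 1:], source[:len(source) - E + 1]])
    tgt = target[E - 1:]
    pred = []
    for i in range(lib, len(emb)):
        d = np.linalg.norm(emb[:lib] - emb[i], axis=1)
        nn = np.argsort(d)[:E + 1]                       # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        w /= w.sum()
        pred.append(w @ tgt[nn])
    return np.corrcoef(pred, tgt[lib:])[0, 1]

# Convergence: skill of recovering the driver y from x's shadow manifold
# should grow with library size when y causally forces x.
print(ccm_skill(x, y, 25), ccm_skill(x, y, 400))
```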
The aggregate site frequency spectrum for comparative population genomic inference.
Xue, Alexander T; Hickerson, Michael J
2015-12-01
Understanding how assemblages of species responded to past climate change is a central goal of comparative phylogeography and comparative population genomics, an endeavour that has increasing potential to integrate with community ecology. New sequencing technology now provides the potential to perform complex demographic inference at unprecedented resolution across assemblages of nonmodel species. To this end, we introduce the aggregate site frequency spectrum (aSFS), an expansion of the site frequency spectrum to use single nucleotide polymorphism (SNP) data sets collected from multiple, co-distributed species for assemblage-level demographic inference. We describe how the aSFS is constructed over an arbitrary number of independent population samples and then demonstrate how the aSFS can differentiate various multispecies demographic histories under a wide range of sampling configurations while allowing effective population sizes and expansion magnitudes to vary independently. We subsequently couple the aSFS with a hierarchical approximate Bayesian computation (hABC) framework to estimate degree of temporal synchronicity in expansion times across taxa, including an empirical demonstration with a data set consisting of five populations of the threespine stickleback (Gasterosteus aculeatus). Corroborating what is generally understood about the recent postglacial origins of these populations, the joint aSFS/hABC analysis strongly suggests that the stickleback data are most consistent with synchronous expansion after the Last Glacial Maximum (posterior probability = 0.99). The aSFS will have general application for multilevel statistical frameworks to test models involving assemblages and/or communities, and as large-scale SNP data from nonmodel species become routine, the aSFS expands the potential for powerful next-generation comparative population genomic inference. © 2015 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
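The core construction is mechanical: compute each species' SNP site frequency spectrum, normalize, then aggregate across species with a within-class sort so taxon labels drop out. A toy sketch under assumed inputs (random 0/1 genotype matrices stand in for real SNP data):

```python
import numpy as np

def sfs(snps):
    """Derived-allele site frequency spectrum.
    `snps`: (sites x haploid samples) 0/1 matrix."""
    n = snps.shape[1]
    counts = snps.sum(axis=1)
    return np.bincount(counts, minlength=n + 1)[1:n]  # drop invariant classes

# Toy aggregate SFS in the spirit of Xue & Hickerson (2015): normalize each
# species' SFS, then sort within each frequency class across species.
rng = np.random.default_rng(0)
species = [rng.integers(0, 2, size=(500, 10)) for _ in range(3)]
spectra = np.array([sfs(s) / max(sfs(s).sum(), 1) for s in species])
asfs = np.sort(spectra, axis=0).ravel()   # summary vector handed to hABC
print(asfs.shape)   # (27,): 3 species x 9 frequency classes
```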
Strong Decomposition of Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jørgensen, Jørgen; Kagan, Abram M.; Pitt, Loren D.
2007-01-01
A random variable X is strongly decomposable if X=Y+Z where Y=Φ(X) and Z=X-Φ(X) are independent non-degenerate random variables (called the components). It is shown that at least one of the components is singular, and we derive a necessary and sufficient condition for strong decomposability...... of a discrete random variable.
Strong coupling electroweak symmetry breaking
International Nuclear Information System (INIS)
Barklow, T.L.; Burdman, G.; Chivukula, R.S.
1997-04-01
The authors review models of electroweak symmetry breaking due to new strong interactions at the TeV energy scale and discuss the prospects for their experimental tests. They emphasize the direct observation of the new interactions through high-energy scattering of vector bosons. They also discuss indirect probes of the new interactions and exotic particles predicted by specific theoretical models
The colours of strong interaction
International Nuclear Information System (INIS)
1995-01-01
The aim of this session is to provide a consistent framework for the different ways of describing the strong interaction. A large part is dedicated to theoretical work, and the latest experimental results obtained at the first electron-proton collider HERA are discussed. (A.C.)
Strong cosmic censorship and the strong curvature singularities
International Nuclear Information System (INIS)
Krolak, A.
1987-01-01
Conditions are given under which any asymptotically simple and empty space-time that has a partial Cauchy surface with an asymptotically simple past is globally hyperbolic. It is shown that this result suggests that the Cauchy horizons of the type occurring in Reissner-Nordström and Kerr space-times are unstable. This in turn gives support for the validity of the strong cosmic censorship hypothesis
Model averaging, optimal inference and habit formation
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2014-06-01
Full Text Available Postulating that the brain performs approximate Bayesian inference generates principled and empirically testable models of neuronal function – the subject of much current interest in neuroscience and related disciplines. Current formulations address inference and learning under some assumed and particular model. In reality, organisms are often faced with an additional challenge – that of determining which model or models of their environment are the best for guiding behaviour. Bayesian model averaging – which says that an agent should weight the predictions of different models according to their evidence – provides a principled way to solve this problem. Importantly, because model evidence is determined by both the accuracy and complexity of the model, optimal inference requires that these be traded off against one another. This means an agent’s behaviour should show an equivalent balance. We hypothesise that Bayesian model averaging plays an important role in cognition, given that it is both optimal and realisable within a plausible neuronal architecture. We outline model averaging and how it might be implemented, and then explore a number of implications for brain and behaviour. In particular, we propose that model averaging can explain a number of apparently suboptimal phenomena within the framework of approximate (bounded) Bayesian inference, focussing particularly upon the relationship between goal-directed and habitual behaviour.
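The averaging rule itself is a one-liner: posterior model probabilities are evidences times priors, renormalized, and predictions are weighted by them. A numerical sketch with two hypothetical models (all numbers invented for illustration):

```python
import numpy as np

evidence = np.array([0.8, 0.2])   # p(data | M1), p(data | M2): already
prior = np.array([0.5, 0.5])      # trades accuracy against complexity
posterior = evidence * prior
posterior /= posterior.sum()      # p(M | data)

predictions = np.array([0.9, 0.3])  # p(outcome | M, data) for each model
bma = posterior @ predictions       # model-averaged p(outcome | data)
print(posterior, bma)               # [0.8 0.2] 0.78
```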
Efficient Bayesian inference for ARFIMA processes
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-03-01
Many geophysical quantities, like atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long-range dependence (LRD). LRD means that these quantities experience non-trivial temporal memory, which potentially enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LRD. In this paper we present a modern and systematic approach to the inference of LRD. Rather than Mandelbrot's fractional Gaussian noise, we use the more flexible Autoregressive Fractional Integrated Moving Average (ARFIMA) model which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LRD, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g. short memory effects) can be integrated over in order to focus on long memory parameters, and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data, with favorable comparison to the standard estimators.
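An ARFIMA(0, d, 0) series can be simulated from the binomial-expansion coefficients of the fractional difference operator; the long-range dependence then shows up as a slowly decaying autocorrelation. A sketch under assumed settings (d = 0.3, truncated MA(∞) representation; not the authors' sampler):

```python
import numpy as np

def arfima_0d0(n, d, rng):
    """Simulate ARFIMA(0, d, 0) via its MA(inf) representation,
    truncated at n lags: x_t = sum_k psi_k * eps_{t-k}."""
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k   # coefficients of (1-B)^(-d)
    eps = rng.normal(size=2 * n)
    return np.convolve(eps, psi, mode="valid")[:n]

rng = np.random.default_rng(2)
x = arfima_0d0(5000, d=0.3, rng=rng)
# LRD signature: the sample ACF stays positive out to long lags.
acf = [np.corrcoef(x[:-k], x[k:])[0, 1] for k in (1, 10, 100)]
print([round(a, 2) for a in acf])
```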
Campbell's and Rubin's Perspectives on Causal Inference
West, Stephen G.; Thoemmes, Felix
2010-01-01
Donald Campbell's approach to causal inference (D. T. Campbell, 1957; W. R. Shadish, T. D. Cook, & D. T. Campbell, 2002) is widely used in psychology and education, whereas Donald Rubin's causal model (P. W. Holland, 1986; D. B. Rubin, 1974, 2005) is widely used in economics, statistics, medicine, and public health. Campbell's approach focuses on…
Bayesian structural inference for hidden processes
Strelioff, Christopher C.; Crutchfield, James P.
2014-04-01
We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well to an out-of-class, infinite-state hidden process.
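For a unifilar presentation, the Shannon entropy rate mentioned above follows directly from the labeled transition probabilities and the stationary state distribution: h = -Σ_i π_i Σ_j T_ij log2 T_ij. A sketch for the golden-mean process, a standard in-class example (the topology here is mine, not taken from the paper):

```python
import numpy as np

# Golden-mean process: from state 0 emit 0 or 1 with prob 1/2;
# from state 1 always return to state 0.
T = np.array([[0.5, 0.5],
              [1.0, 0.0]])

# Stationary distribution: left eigenvector of T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                                   # pi = [2/3, 1/3]

logT = np.log2(np.where(T > 0, T, 1.0))          # log 1 = 0 masks the zeros
h = -np.sum(pi[:, None] * T * logT)
print(round(h, 4))   # 0.6667, i.e. 2/3 bit per symbol
```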
HIERARCHICAL PROBABILISTIC INFERENCE OF COSMIC SHEAR
International Nuclear Information System (INIS)
Schneider, Michael D.; Dawson, William A.; Hogg, David W.; Marshall, Philip J.; Bard, Deborah J.; Meyers, Joshua; Lang, Dustin
2015-01-01
Point estimators for the shearing of galaxy images induced by gravitational lensing involve a complex inverse problem in the presence of noise, pixelization, and model uncertainties. We present a probabilistic forward modeling approach to gravitational lensing inference that has the potential to mitigate the biased inferences in most common point estimators and is practical for upcoming lensing surveys. The first part of our statistical framework requires specification of a likelihood function for the pixel data in an imaging survey given parameterized models for the galaxies in the images. We derive the lensing shear posterior by marginalizing over all intrinsic galaxy properties that contribute to the pixel data (i.e., not limited to galaxy ellipticities) and learn the distributions for the intrinsic galaxy properties via hierarchical inference with a suitably flexible conditional probability distribution specification. We use importance sampling to separate the modeling of small imaging areas from the global shear inference, thereby rendering our algorithm computationally tractable for large surveys. With simple numerical examples we demonstrate the improvements in accuracy from our importance sampling approach, as well as the significance of the conditional distribution specification for the intrinsic galaxy properties when the data are generated from an unknown number of distinct galaxy populations with different morphological characteristics.
Interest, Inferences, and Learning from Texts
Clinton, Virginia; van den Broek, Paul
2012-01-01
Topic interest and learning from texts have been found to be positively associated with each other. However, the reason for this positive association is not well understood. The purpose of this study is to examine a cognitive process, inference generation, that could explain the positive association between interest and learning from texts. In…
Ignorability in Statistical and Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed...
Inverse Ising inference with correlated samples
International Nuclear Information System (INIS)
Obermayer, Benedikt; Levine, Erel
2014-01-01
Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)
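The "popular reweighting schemes" referred to above are usually sequence-similarity reweightings: each alignment row gets weight 1/m, where m is the number of rows within a similarity threshold of it. A minimal sketch (the threshold θ and the toy alignment are hypothetical):

```python
import numpy as np

def sequence_weights(msa, theta=0.2):
    """Down-weight near-duplicate alignment rows: each sequence gets
    weight 1/m, where m counts sequences differing from it in fewer
    than theta * L positions (self included)."""
    n, L = msa.shape
    diff = (msa[:, None, :] != msa[None, :, :]).sum(axis=2)  # pairwise Hamming
    m = (diff < theta * L).sum(axis=1)
    return 1.0 / m

# Three identical sequences and one distinct one:
msa = np.array([[0, 1, 1, 0, 2],
                [0, 1, 1, 0, 2],
                [0, 1, 1, 0, 2],
                [2, 0, 1, 1, 0]])
print(sequence_weights(msa))   # [0.333..., 0.333..., 0.333..., 1.0]
```

The paper's point is that this heuristic only partially removes phylogenetic bias, motivating their rescaling strategy and explicit phylogenetic modeling.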
Evolutionary inference via the Poisson Indel Process.
Bouchard-Côté, Alexandre; Jordan, Michael I
2013-01-22
We address the problem of the joint statistical inference of phylogenetic trees and multiple sequence alignments from unaligned molecular sequences. This problem is generally formulated in terms of string-valued evolutionary processes along the branches of a phylogenetic tree. The classic evolutionary process, the TKF91 model [Thorne JL, Kishino H, Felsenstein J (1991) J Mol Evol 33(2):114-124] is a continuous-time Markov chain model composed of insertion, deletion, and substitution events. Unfortunately, this model gives rise to an intractable computational problem: The computation of the marginal likelihood under the TKF91 model is exponential in the number of taxa. In this work, we present a stochastic process, the Poisson Indel Process (PIP), in which the complexity of this computation is reduced to linear. The Poisson Indel Process is closely related to the TKF91 model, differing only in its treatment of insertions, but it has a global characterization as a Poisson process on the phylogeny. Standard results for Poisson processes allow key computations to be decoupled, which yields the favorable computational profile of inference under the PIP model. We present illustrative experiments in which Bayesian inference under the PIP model is compared with separate inference of phylogenies and alignments.
Culture and Pragmatic Inference in Interpersonal Communication
African Journals Online (AJOL)
cognitive process, and that the human capacity for inference is crucially important ... been noted that research in interpersonal communication is currently pushing the ... communicative actions, the social-cultural world of everyday life is not only ... personal experiences of the authors', as documented over time and recreated ...
Inference and the Introductory Statistics Course
Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross
2011-01-01
This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its…
Statistical Inference on the Canadian Middle Class
Directory of Open Access Journals (Sweden)
Russell Davidson
2018-03-01
Full Text Available Conventional wisdom says that the middle classes in many developed countries have recently suffered losses, in terms of both the share of the total population belonging to the middle class, and also their share in total income. Here, distribution-free methods are developed for inference on these shares, by means of deriving expressions for their asymptotic variances of sample estimates, and the covariance of the estimates. Asymptotic inference can be undertaken based on asymptotic normality. Bootstrap inference can be expected to be more reliable, and appropriate bootstrap procedures are proposed. As an illustration, samples of individual earnings drawn from Canadian census data are used to test various hypotheses about the middle-class shares, and confidence intervals for them are computed. It is found that, for the earlier censuses, sample sizes are large enough for asymptotic and bootstrap inference to be almost identical, but that, in the twenty-first century, the bootstrap fails on account of a strange phenomenon whereby many presumably different incomes in the data are rounded to one and the same value. Another difference between the centuries is the appearance of heavy right-hand tails in the income distributions of both men and women.
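A percentile-bootstrap confidence interval for a middle-class share can be sketched as follows. Both the income-generating distribution and the middle-class definition ([0.75, 1.5] times the median) are hypothetical stand-ins for the census data and definitions used in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
incomes = rng.lognormal(mean=10.5, sigma=0.6, size=2000)

def middle_share(x):
    """Share of the sample with income in [0.75, 1.5] x median."""
    med = np.median(x)
    return np.mean((x >= 0.75 * med) & (x <= 1.5 * med))

# Resample with replacement and read off the 2.5% / 97.5% quantiles.
boots = [middle_share(rng.choice(incomes, size=incomes.size))
         for _ in range(999)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(round(middle_share(incomes), 3), (round(lo, 3), round(hi, 3)))
```

Note the paper's caveat: with heavily rounded incomes, the resamples contain massive ties and this bootstrap can fail.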
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
Cortical information flow during inferences of agency
Dogge, Myrthel; Hofman, Dennis; Boersma, Maria; Dijkerman, H Chris; Aarts, Henk
2014-01-01
Building on the recent finding that agency experiences do not merely rely on sensorimotor information but also on cognitive cues, this exploratory study uses electroencephalographic recordings to examine functional connectivity during agency inference processing in a setting where action and outcome
Quasi-Experimental Designs for Causal Inference
Kim, Yongnam; Steiner, Peter
2016-01-01
When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…
The importance of learning when making inferences
Directory of Open Access Journals (Sweden)
Jorg Rieskamp
2008-03-01
Full Text Available The assumption that people possess a repertoire of strategies to solve the inference problems they face has been made repeatedly. The experimental findings of two previous studies on strategy selection are reexamined from a learning perspective, which argues that people learn to select strategies for making probabilistic inferences. This learning process is modeled with the strategy selection learning (SSL) theory, which assumes that people develop subjective expectancies for the strategies they have. They select strategies proportional to their expectancies, which are updated on the basis of experience. For the study by Newell, Weston, and Shanks (2003), it can be shown that people did not anticipate the success of a strategy from the beginning of the experiment. Instead, the behavior observed at the end of the experiment was the result of a learning process that can be described by the SSL theory. For the second study, by Bröder and Schiffer (2006), the SSL theory is able to provide an explanation for why participants only slowly adapted to new environments in a dynamic inference situation. The reanalysis of the previous studies illustrates the importance of learning for probabilistic inferences.
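The SSL mechanism (expectancies per strategy, selection proportional to expectancy, reinforcement from payoffs) is easy to simulate. Strategy names, initial expectancies, payoff rates and trial count below are all hypothetical, not parameters from the theory's fits:

```python
import random

random.seed(4)
expectancy = {"TTB": 1.0, "WADD": 1.0}   # initial subjective expectancies
payoff = {"TTB": 0.8, "WADD": 0.4}       # hypothetical success rates

choices = []
for trial in range(500):
    # Select a strategy with probability proportional to its expectancy.
    total = sum(expectancy.values())
    r = random.random() * total
    strategy = "TTB" if r < expectancy["TTB"] else "WADD"
    # Reinforce the chosen strategy with the obtained payoff.
    reward = 1.0 if random.random() < payoff[strategy] else 0.0
    expectancy[strategy] += reward
    choices.append(strategy)

# Selection gradually concentrates on the more successful strategy.
print(choices[-100:].count("TTB") / 100)
```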
Colligation, Or the Logical Inference of Interconnection
DEFF Research Database (Denmark)
Falster, Peter
1998-01-01
laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...
Colligation or, The Logical Inference of Interconnection
DEFF Research Database (Denmark)
Franksen, Ole Immanuel; Falster, Peter
2000-01-01
laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...
Inferring motion and location using WLAN RSSI
Muthukrishnan, Kavitha; van der Zwaag, B.J.; Havinga, Paul J.M.; Fuller, R.; Koutsoukos, X.
2009-01-01
We present novel algorithms to infer movement by making use of inherent fluctuations in the received signal strengths from existing WLAN infrastructure. We evaluate the performance of the presented algorithms based on classification metrics such as recall and precision using annotated traces
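A minimal sketch of the idea: movement tends to increase RSSI fluctuation, so a windowed spread statistic with a threshold yields a moving/still label that can be scored with recall and precision against annotations. The window size, threshold, and synthetic trace below are assumptions for illustration, not the paper's algorithms or tuned values.

```python
import statistics

def infer_motion(rssi_trace, window=5, threshold=2.0):
    """Label each non-overlapping window of an RSSI trace (dBm) as
    moving (True) or still (False) from its standard deviation."""
    labels = []
    for i in range(0, len(rssi_trace) - window + 1, window):
        chunk = rssi_trace[i:i + window]
        labels.append(statistics.pstdev(chunk) > threshold)
    return labels

def precision_recall(predicted, actual):
    """Classification metrics over per-window motion labels."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Synthetic trace: three still windows (flat RSSI), then three moving
# windows (strongly fluctuating RSSI).
still = [-60, -60, -61, -60, -60] * 3
moving = [-55, -70, -58, -66, -75] * 3
pred = infer_motion(still + moving)
truth = [False] * 3 + [True] * 3
```

On real traces the threshold would be calibrated per environment, since multipath fading makes RSSI noisy even for a stationary device.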
Strongly Correlated Systems Theoretical Methods
Avella, Adolfo
2012-01-01
The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique proved very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in a as painless as po...
Strongly correlated systems numerical methods
Mancini, Ferdinando
2013-01-01
This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique proved very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wish to get acquainted, in a as painless as possi...
Strongly correlated systems experimental techniques
Mancini, Ferdinando
2015-01-01
The continuous evolution and development of experimental techniques is at the basis of any fundamental achievement in modern physics. Strongly correlated systems (SCS), more than any other, need to be investigated through the greatest variety of experimental techniques in order to unveil and crosscheck the numerous and puzzling anomalous behaviors characterizing them. The study of SCS fostered the improvement of many old experimental techniques, but also the advent of many new ones just invented in order to analyze the complex behaviors of these systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. The volume presents a representative collection of the modern experimental techniques specifically tailored for the analysis of strongly correlated systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognize...
Flavour Democracy in Strong Unification
Abel, Steven; King, Steven
1998-01-01
We show that the fermion mass spectrum may naturally be understood in terms of flavour democratic fixed points in supersymmetric theories which have a large domain of attraction in the presence of "strong unification". Our approach provides an alternative to the approximate Yukawa texture zeroes of the Froggatt-Nielsen mechanism. We discuss a particular model based on a broken gauged $SU(3)_L\\times SU(3)_R$ family symmetry which illustrates our approach.
Zhou, Haotian; Majka, Elizabeth A; Epley, Nicholas
2017-04-01
People use at least two strategies to solve the challenge of understanding another person's mind: inferring that person's perspective by reading his or her behavior (theorization) and getting that person's perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger's emotional reactions toward 50 pictures. They could either infer the stranger's perspective by reading his or her facial expressions or simulate the stranger's perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors' miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people's reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.
Inference of Well-Typings for Logic Programs with Application to Termination Analysis
DEFF Research Database (Denmark)
Bruynooghe, M.; Gallagher, John Patrick; Humbeeck, W. Van
2005-01-01
A method is developed to infer a polymorphic well-typing for a logic program. Our motivation is to improve the automation of termination analysis by deriving types from which norms can automatically be constructed. Previous work on type-based termination analysis used either types declared...... by the user, or automatically generated monomorphic types describing the success set of predicates. The latter types are less precise and result in weaker termination conditions than those obtained from declared types. Our type inference procedure involves solving set constraints generated from the program...... and derives a well-typing in contrast to a success-set approximation. Experiments so far show that our automatically inferred well-typings are close to the declared types and result in termination conditions that are as strong as those obtained with declared types. We describe the method, its implementation...
String dynamics at strong coupling
International Nuclear Information System (INIS)
Hull, C.M.
1996-01-01
The dynamics of superstring, supergravity and M-theories and their compactifications are probed by studying the various perturbation theories that emerge in the strong and weak-coupling limits for various directions in coupling constant space. The results support the picture of an underlying non-perturbative theory that, when expanded perturbatively in different coupling constants, gives different perturbation theories, which can be perturbative superstring theories or superparticle theories. The p-brane spectrum is considered in detail and a criterion found to establish which p-branes govern the strong-coupling dynamics. In many cases there are competing conjectures in the literature, and this analysis decides between them. In other cases, new results are found. The chiral 6-dimensional theory resulting from compactifying the type IIB string on K3 is studied in detail and it is found that certain strong-coupling limits appear to give new theories, some of which hint at the possibility of a 12-dimensional origin. (orig.)
Active inference, sensory attenuation and illusions.
Brown, Harriet; Adams, Rick A; Parees, Isabel; Edwards, Mark; Friston, Karl
2013-11-01
Active inference provides a simple and neurobiologically plausible account of how action and perception are coupled in producing (Bayes) optimal behaviour. This can be seen most easily as minimising prediction error: we can either change our predictions to explain sensory input through perception, or actively change sensory input to fulfil our predictions. In active inference, this action is mediated by classical reflex arcs that minimise proprioceptive prediction error created by descending proprioceptive predictions. However, this creates a conflict between action and perception, in that self-generated movements require predictions to override the sensory evidence that one is not actually moving. Yet ignoring sensory evidence means that externally generated sensations will not be perceived. Conversely, attending to (proprioceptive and somatosensory) sensations enables the detection of externally generated events but precludes generation of actions. This conflict can be resolved by attenuating the precision of sensory evidence during movement or, equivalently, attending away from the consequences of self-made acts. We propose that this Bayes optimal withdrawal of precise sensory evidence during movement is the cause of psychophysical sensory attenuation. Furthermore, it explains the force-matching illusion and reproduces empirical results almost exactly. Finally, if attenuation is removed, the force-matching illusion disappears and false (delusional) inferences about agency emerge. This is important, given the negative correlation between sensory attenuation and delusional beliefs in normal subjects, and the reduction in the magnitude of the illusion in schizophrenia. Active inference therefore links the neuromodulatory optimisation of precision to sensory attenuation and illusory phenomena during the attribution of agency in normal subjects. It also provides a functional account of deficits in syndromes characterised by false inference.
Arnold, L.R.; Mladinich, C.S.; Langer, W.H.; Daniels, J.S.
2010-01-01
Land use in the South Platte River valley between the cities of Brighton and Fort Lupton, Colo., is undergoing change as urban areas expand, and the extent of aggregate mining in the Brighton-Fort Lupton area is increasing as the demand for aggregate grows in response to urban development. To improve understanding of land-use change and the potential effects of land-use change and aggregate mining on groundwater flow, the U.S. Geological Survey, in cooperation with the cities of Brighton and Fort Lupton, analyzed socioeconomic and land-use trends and constructed a numerical groundwater flow model of the South Platte alluvial aquifer in the Brighton-Fort Lupton area. The numerical groundwater flow model was used to simulate (1) steady-state hydrologic effects of predicted land-use conditions in 2020 and 2040, (2) transient cumulative hydrologic effects of the potential extent of reclaimed aggregate pits in 2020 and 2040, (3) transient hydrologic effects of actively dewatered aggregate pits, and (4) effects of different hypothetical pit spacings and configurations on groundwater levels. The SLEUTH (Slope, Land cover, Exclusion, Urbanization, Transportation, and Hillshade) urban-growth modeling program was used to predict the extent of urban area in 2020 and 2040. Wetlands in the Brighton-Fort Lupton area were mapped as part of the study, and mapped wetland locations and areas of riparian herbaceous vegetation previously mapped by the Colorado Division of Wildlife were compared to simulation results to indicate areas where wetlands or riparian herbaceous vegetation might be affected by groundwater-level changes resulting from land-use change or aggregate mining. Analysis of land-use conditions in 1957, 1977, and 2000 indicated that the general distribution of irrigated land and non-irrigated land remained similar from 1957 to 2000, but both land uses decreased as urban area increased. Urban area increased about 165 percent from 1957 to 1977 and about 56 percent from
Okalebo, J. A.; Das Choudhury, S.; Awada, T.; Suyker, A.; LeBauer, D.; Newcomb, M.; Ward, R.
2017-12-01
The Long-term Agroecosystem Research (LTAR) network is a USDA-ARS effort that focuses on conducting research that addresses current and emerging issues in agriculture related to sustainability and profitability of agroecosystems in the face of climate change and population growth. There are 18 sites across the USA covering key agricultural production regions. In Nebraska, a partnership between the University of Nebraska - Lincoln and ARD/USDA resulted in the establishment of the Platte River - High Plains Aquifer LTAR site in 2014. The site conducts research to sustain multiple ecosystem services, focusing specifically on Nebraska's main agronomic production agroecosystems, which comprise abundant corn, soybeans, managed grasslands and beef production. As part of the national LTAR network, PR-HPA participates and contributes near-surface remotely sensed imagery of corn, soybean and grassland canopy phenology to the PhenoCam Network through high-resolution digital cameras. This poster highlights the application, advantages and usefulness of near-surface remotely sensed imagery in agroecosystem studies and management. It demonstrates how both Infrared and Red-Green-Blue imagery may be applied to monitor phenological events as well as crop abiotic stresses. Computer-based algorithms and analytic techniques proved very instrumental in revealing crop phenological changes such as green-up and tasseling in corn. This poster also reports the suitability and applicability of corn-derived computer-based algorithms for evaluating phenological development of sorghum, since both crops have similarities in their phenology, with sorghum panicles being similar to corn tassels. This latter assessment was carried out using a sorghum dataset obtained from the Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform project, Maricopa Agricultural Center, Arizona.
PREFACE: Strongly correlated electron systems
Saxena, Siddharth S.; Littlewood, P. B.
2012-07-01
This special section is dedicated to the Strongly Correlated Electron Systems Conference (SCES) 2011, which was held from 29 August-3 September 2011, in Cambridge, UK. SCES'2011 is dedicated to 100 years of superconductivity and covers a range of topics in the area of strongly correlated systems. The correlated electronic and magnetic materials featured include f-electron based heavy fermion intermetallics and d-electron based transition metal compounds. The selected papers derived from invited presentations seek to deepen our understanding of the rich physical phenomena that arise from correlation effects. The focus is on quantum phase transitions, non-Fermi liquid phenomena, quantum magnetism, unconventional superconductivity and metal-insulator transitions. Both experimental and theoretical work is presented. Based on fundamental advances in the understanding of electronic materials, much of 20th century materials physics was driven by miniaturisation and integration in the electronics industry to the current generation of nanometre scale devices. The achievements of this industry have brought unprecedented advances to society and well-being, and no doubt there is much further to go—note that this progress is founded on investments and studies in the fundamentals of condensed matter physics from more than 50 years ago. Nevertheless, the defining challenges for the 21st century will lie in the discovery in science, and deployment through engineering, of technologies that can deliver the scale needed to have an impact on the sustainability agenda. Thus the big developments in nanotechnology may lie not in the pursuit of yet smaller transistors, but in the design of new structures that can revolutionise the performance of solar cells, batteries, fuel cells, light-weight structural materials, refrigeration, water purification, etc. The science presented in the papers of this special section also highlights the underlying interest in energy-dense materials, which
International Nuclear Information System (INIS)
L'Huillier, A.
2002-01-01
When a high-power laser is focused into a gas of atoms, the electromagnetic field becomes of the same magnitude as the Coulomb field which binds a 1s electron in a hydrogen atom. Three highly non-linear phenomena can then occur: 1) ATI (above-threshold ionization): electrons initially in the ground state absorb a large number of photons, many more than the minimum number required for ionization; 2) multiple ionization: many electrons can be emitted one at a time, in a sequential process, or simultaneously in a mechanism called direct or non-sequential; and 3) high-order harmonic generation (HHG): efficient photon emission in the extreme ultraviolet range, in the form of high-order harmonics of the fundamental laser field, can occur. The theoretical problem consists in solving the time-dependent Schroedinger equation (TDSE) that describes the interaction of a many-electron atom with a laser field. A number of methods have been proposed to solve this problem in the case of a hydrogen atom or a single-active-electron atom in a strong laser field. A large effort is presently being devoted to going beyond the single-active-electron approximation. The understanding of the physics of the interaction between atoms and strong laser fields has been provided by a very simple model called "simple man's theory". A unified view of HHG, ATI, and non-sequential ionization, originating from the simple man's model and the strong field approximation, expressed in terms of electron trajectories or quantum paths, is slowly emerging. (A.C.)
Rydberg atoms in strong fields
International Nuclear Information System (INIS)
Kleppner, D.; Tsimmerman, M.
1985-01-01
Experimental and theoretical achievements in studying Rydberg atoms in external fields are considered. Only static (or quasistatic) fields and "one-electron" atoms, i.e. atoms that are well described by one-electron states, are discussed. Mainly the behaviour of alkali-metal atoms in an electric field is considered. The state of theoretical investigations for the hydrogen atom in a magnetic field is described, but experimental data for alkali-metal atoms are presented as an illustration. Results of the latest experimental and theoretical investigations into the structure of Rydberg atoms in strong fields are presented.
Strong versions of Bell's theorem
International Nuclear Information System (INIS)
Stapp, H.P.
1994-01-01
Technical aspects of a recently constructed strong version of Bell's theorem are discussed. The theorem assumes neither hidden variables nor factorization, and neither determinism nor counterfactual definiteness. It deals directly with logical connections. Hence its relationship with modal logic needs to be described. It is shown that the proof can be embedded in an orthodox modal logic, and hence its compatibility with modal logic assured, but that this embedding weakens the theorem by introducing as added assumptions the conventionalities of the particular modal logic that is adopted. This weakening is avoided in the recent proof by using directly the set-theoretic conditions entailed by the locality assumption
Strongly interacting light dark matter
International Nuclear Information System (INIS)
Bruggisser, Sebastian; Riva, Francesco; Urbano, Alfredo
2016-07-01
In the presence of approximate global symmetries that forbid relevant interactions, strongly coupled light Dark Matter (DM) can appear weakly coupled at small energy and generate a sizable relic abundance. Fundamental principles like unitarity restrict these symmetries to a small class, where the leading interactions are captured by effective operators up to dimension-8. Chiral symmetry, spontaneously broken global symmetries and non-linearly realized supersymmetry are examples of this. Their DM candidates (composite fermions, pseudo-Nambu-Goldstone Bosons and Goldstini) are interesting targets for LHC missing-energy searches.
Weak consistency and strong paraconsistency
Directory of Open Access Journals (Sweden)
Gemma Robles
2009-11-01
In a standard sense, consistency and paraconsistency are understood as, respectively, the absence of any contradiction and the absence of the ECQ (“E contradictione quodlibet”) rule that allows us to conclude any well-formed formula from any contradiction. The aim of this paper is to explain concepts of weak consistency alternative to the standard one, the concepts of paraconsistency related to them, and the concept of strong paraconsistency, all of which have been defined by the author together with José M. Méndez.
Energy Technology Data Exchange (ETDEWEB)
Dowrick, N.J. (Dept. of Physics, Oxford (United Kingdom)); McDougall, N.A. (National Lab. for High Energy Physics, Tsukuba, Ibaraki (Japan))
1992-07-09
We show that two well-known solutions to the strong CP problem, the axion and a massless quark, may be understood in terms of the mechanism recently proposed by Samuel, where long-range interactions between topological charges may be responsible for the removal of CP violation. We explain how the axion and a QCD meson (identified as the η′ if all quarks are massless) suppress fluctuations in global topological charge by almost identical dynamics, although the masses, couplings and relevant length scales are very different. Furthermore, we elucidate the precise origin of the η′ mass. (orig.)
Scalar strong interaction hadron theory
Hoh, Fang Chao
2015-01-01
The scalar strong interaction hadron theory, SSI, is a first-principles, nonlocal theory at the quantum-mechanical level that provides an alternative to low-energy QCD and the Higgs-related part of the standard model. The quark-quark interaction is scalar rather than color-vectorial. A set of equations of motion for mesons and another set for baryons have been constructed. This book provides an account of the present state of a theory supposedly still at its early stage of development. This work will facilitate researchers interested in entering this field and serve as a basis for possible future development of the theory.
Estimation of strong ground motion
International Nuclear Information System (INIS)
Watabe, Makoto
1993-01-01
A fault model has been developed to estimate strong ground motion, taking into consideration the characteristics of the seismic source and the propagation path of seismic waves. There are two different approaches in the model: the first is theoretical, while the second is semi-empirical. Though the latter is more practical than the former for estimating input motions, it requires at least small-event records, the seismic moment of the small event, and the fault model of the large event.
Strong Mechanoluminescence from Oxynitridosilicate Phosphors
Energy Technology Data Exchange (ETDEWEB)
Zhang Lin; Xu Chaonan; Yamada, Hiroshi, E-mail: cn-xu@aist.go.jp [National Institute of Advanced Industrial Science and Technology (AIST), 807-1 Shuku, Tosu, Saga 841-0052 (Japan)
2011-10-29
We successfully developed a novel mechanoluminescence (ML) material with water resistance, the oxynitridosilicate BaSi₂O₂N₂:Eu²⁺. Its crystal structure, photoluminescence (PL) and ML properties were characterized. The ML of BaSi₂O₂N₂:Eu²⁺ is so strong that the blue-green emission can be observed clearly by the naked eye. In addition, it shows superior water resistance: no changes were found in the ML intensities during the water-treatment test.
Effective lagrangian for strong interactions
International Nuclear Information System (INIS)
Jain, P.
1988-01-01
We attempt to construct a realistic phenomenological Lagrangian in order to describe strong interactions. This is in general a very complicated problem and we shall explore its various aspects. We first include the vector mesons by writing down the most general chiral-invariant terms proportional to the Levi-Civita symbol ε_{μναβ}. These terms involve three unknown coefficients, which are calculated by using the experimental results of strong interaction processes. We then calculate the static nucleon properties by finding the solitonic excitations of this model. The results turn out to be, as is also the case for most other vector-pseudoscalar Lagrangians, better than the Skyrme model but still somewhat different from experiment. Another aspect that we shall study is the incorporation of the scale anomaly of QCD into the Skyrme model. We thus introduce a scalar glueball in our Lagrangian. Here we find an interesting result: the effective glue field dynamically forms a bag for the soliton. Depending on the values of the parameters, we get either a deep bag or a shallow bag. However, by including the scalar meson, we find that to get a realistic scalar sector we must have the shallow bag. Finally, we show some intriguing connections between the chiral quark model, in which the nucleon is described as a solitonic excitation, and the ordinary potential-binding quark model.
EDITORIAL: Strongly correlated electron systems
Ronning, Filip; Batista, Cristian
2011-03-01
Strongly correlated electrons is an exciting and diverse field in condensed matter physics. This special issue aims to capture some of that excitement and recent developments in the field. Given that this issue was inspired by the 2010 International Conference on Strongly Correlated Electron Systems (SCES 2010), we briefly give some history in order to place this issue in context. The 2010 International Conference on Strongly Correlated Electron Systems was held in Santa Fe, New Mexico, a reunion of sorts from the 1989 International Conference on the Physics of Highly Correlated Electron Systems that also convened in Santa Fe. SCES 2010—co-chaired by John Sarrao and Joe Thompson—followed the tradition of earlier conferences, in this century, hosted by Buzios (2008), Houston (2007), Vienna (2005), Karlsruhe (2004), Krakow (2002) and Ann Arbor (2001). Every three years since 1997, SCES has joined the International Conference on Magnetism (ICM), held in Recife (2000), Rome (2003), Kyoto (2006) and Karlsruhe (2009). Like its predecessors, SCES 2010 topics included strongly correlated f- and d-electron systems, heavy-fermion behaviors, quantum-phase transitions, non-Fermi liquid phenomena, unconventional superconductivity, and emergent states that arise from electronic correlations. Recent developments from studies of quantum magnetism and cold atoms complemented the traditional subjects and were included in SCES 2010. 2010 celebrated the 400th anniversary of Santa Fe as well as the birth of astronomy. So what's the connection to SCES? The Dutch invention of the first practical telescope and its use by Galileo in 1610 and subsequent years overturned dogma that the sun revolved about the earth. This revolutionary, and at the time heretical, conclusion required innovative combinations of new instrumentation, observation and mathematics. These same combinations are just as important 400 years later and are the foundation of scientific discoveries that were discussed
Strong Selective Adsorption of Polymers.
Ge, Ting; Rubinstein, Michael
2015-06-09
A scaling theory is developed for selective adsorption of polymers induced by the strong binding between specific monomers and complementary surface adsorption sites. By "selective" we mean specific attraction between a subset of all monomers, called "sticky", and a subset of surface sites, called "adsorption sites". We demonstrate that, in addition to the expected dependence on the polymer volume fraction ϕ_bulk in the bulk solution, selective adsorption strongly depends on the ratio between two characteristic length scales: the root-mean-square distance l between neighboring sticky monomers along the polymer, and the average distance d between neighboring surface adsorption sites. The role of the ratio l/d arises from the fact that a polymer needs to deform to enable the spatial commensurability between its sticky monomers and the surface adsorption sites for selective adsorption. We study strong selective adsorption of both telechelic polymers, with two end monomers being sticky, and multisticker polymers, with many sticky monomers between sticky ends. For telechelic polymers, we identify four adsorption regimes at l/d < 1. For l/d > 1, we expect that the adsorption layer at exponentially low ϕ_bulk consists of separated unstretched loops, while as ϕ_bulk increases the layer crosses over to a brush of extended loops with a second layer of weakly overlapping tails. For multisticker chains, in the limit of exponentially low ϕ_bulk, adsorbed polymers are well separated from each other. As l/d increases, the conformation of an individual polymer changes from a single-end-adsorbed "mushroom" to a random walk of loops. For high ϕ_bulk, adsorbed polymers at small l/d are mushrooms that cover all the adsorption sites. At sufficiently large l/d, adsorbed multisticker polymers strongly overlap. We anticipate the formation of a self-similar carpet and, with increasing l/d, a two-layer structure with a brush of loops covered by a self-similar carpet. As l/d exceeds the
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....
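The reference Boolean model mentioned above, a Poisson germ process with disc grains, is straightforward to simulate, which is the usual basis for simulation-based likelihood inference. The sketch below estimates the covered area fraction on the unit torus; the intensity and radius are arbitrary illustrative values, and the estimate can be checked against the closed-form Boolean-model coverage 1 − exp(−λπr²).

```python
import math
import random

def sample_poisson(rng, mean):
    """Poisson sample via Knuth's product method (fine for moderate means)."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

def boolean_disc_coverage(intensity, radius, n_test=5000, seed=1):
    """Monte Carlo coverage fraction of a Boolean disc model on the
    unit torus: germs are Poisson with the given intensity, each grain
    is a disc of fixed radius.  Periodic wrapping avoids edge effects."""
    rng = random.Random(seed)
    germs = [(rng.random(), rng.random())
             for _ in range(sample_poisson(rng, intensity))]
    covered = 0
    for _ in range(n_test):
        x, y = rng.random(), rng.random()
        for gx, gy in germs:
            dx = min(abs(x - gx), 1 - abs(x - gx))  # torus distance
            dy = min(abs(y - gy), 1 - abs(y - gy))
            if dx * dx + dy * dy <= radius * radius:
                covered += 1
                break
    return covered / n_test

estimate = boolean_disc_coverage(intensity=200.0, radius=0.03)
```

A single realization fluctuates around the theoretical coverage; repeated simulation of such models under candidate parameters is what drives the simulation-based maximum likelihood procedure the abstract refers to.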
An Intuitive Dashboard for Bayesian Network Inference
International Nuclear Information System (INIS)
Reddy, Vikas; Farr, Anna Charisse; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K D V
2014-01-01
Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
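The kind of query such a dashboard exposes can be illustrated with exact inference by enumeration on a toy cause-and-effect network. The two-node structure and the probabilities below are invented for illustration and are unrelated to the SMILE library's actual API.

```python
from itertools import product

# Toy network: Rain -> WetGrass, with hypothetical probability tables.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.1, False: 0.9}}

def posterior_rain_given_wet():
    """Exact inference by enumeration: P(Rain=True | WetGrass=True).
    Build the full joint, sum out the evidence, and normalise."""
    joint = {}
    for rain, wet in product([True, False], repeat=2):
        joint[(rain, wet)] = p_rain[rain] * p_wet_given_rain[rain][wet]
    evidence = joint[(True, True)] + joint[(False, True)]
    return joint[(True, True)] / evidence
```

Enumeration is exponential in the number of nodes, which is exactly why large networks need dedicated inference engines behind a simpler user-facing layer.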
The NIFTY way of Bayesian signal inference
International Nuclear Information System (INIS)
Selig, Marco
2014-01-01
We introduce NIFTY, 'Numerical Information Field Theory', a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
The NIFTy way of Bayesian signal inference
Selig, Marco
2014-12-01
We introduce NIFTy, "Numerical Information Field Theory", a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTy can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTy as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.
Bayesianism and inference to the best explanation
Directory of Open Access Journals (Sweden)
Valeriano IRANZO
2008-01-01
Full Text Available Bayesianism and Inference to the best explanation (IBE) are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly I explore several alternatives to include explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” (IBE-Bay) and “frequentist-Bayesianism” (Freq-Bay). After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay: (i) endorses a role for explanatory value in the assessment of scientific hypotheses; (ii) avoids a purely subjectivist reading of prior probabilities; and (iii) fits better than IBE-Bayesianism with two basic facts about science, i.e., the prominent role played by empirical testing and the existence of many scientific theories in the past that failed to fulfil their promises and were subsequently abandoned.
Inferring genetic interactions from comparative fitness data.
Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko
2017-12-20
Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM-family of β-lactamase associated with antibiotic resistance. For all systems, our approach revealed higher order interactions among mutations.
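The simplest case of the epistasis the abstract describes can be illustrated for a two-locus system. The fitness values below are invented for illustration, and the multiplicative null model is one common convention, not necessarily the one used in the paper:

```python
# Hypothetical sketch: detecting pairwise epistasis in a two-locus system.
# Genotype fitness values f[ab] for loci a, b in {0, 1} are illustrative.
f = {"00": 1.00, "01": 1.10, "10": 1.15, "11": 1.05}

# Multiplicative (no-epistasis) expectation for the double mutant is
# f01 * f10 / f00; any deviation from it indicates epistasis.
expected_11 = f["01"] * f["10"] / f["00"]
epsilon = f["11"] - expected_11  # negative => antagonistic interaction

# Sign epistasis: the effect of a mutation at locus A reverses direction
# depending on the background at locus B. Note this is decidable from the
# fitness rank order alone, without the numeric values.
sign_epistasis = (f["10"] > f["00"]) != (f["11"] > f["01"])
```

The last line illustrates the paper's central point: some interaction types are implied by the rank order itself, so imprecise fitness measurements can still support firm conclusions.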
An emergent approach to analogical inference
Thibodeau, Paul H.; Flusberg, Stephen J.; Glick, Jeremy J.; Sternberg, Daniel A.
2013-03-01
In recent years, a growing number of researchers have proposed that analogy is a core component of human cognition. According to the dominant theoretical viewpoint, analogical reasoning requires a specific suite of cognitive machinery, including explicitly coded symbolic representations and a mapping or binding mechanism that operates over these representations. Here we offer an alternative approach: we find that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery. The results also parallel findings from the developmental literature on analogy, demonstrating a shift from an initial reliance on surface feature similarity to the use of relational similarity later in training. Variants of the model allow us to consider and rule out alternative accounts of its performance. We conclude by discussing how these findings can potentially refine our understanding of the processes that are required to perform analogical inference.
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include e.g. outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
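The flavor of the idea can be sketched for mean estimation. The code below is a loose Huber-style reweighting analogue, not the paper's PPR formulation: abnormal points have their influence shrunk, with a regularisation parameter `lam` controlling how aggressively:

```python
import numpy as np

def robust_mean(x, lam=1.0, n_iter=50):
    """Loose analogue of pointwise reinforcement (a Huber-style
    reweighting, not the paper's exact scheme): observations with
    abnormal residuals get their influence shrunk, controlled by lam."""
    x = np.asarray(x, dtype=float)
    mu = np.median(x)
    for _ in range(n_iter):
        resid = np.abs(x - mu)
        w = lam / np.maximum(resid, lam)  # weight < 1 only for abnormal points
        mu = np.sum(w * x) / np.sum(w)
    return mu
```

The final weights double as the "abnormality degree" the abstract mentions: points with small weights are candidate outliers that can be filtered manually.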
Statistical inference from imperfect photon detection
International Nuclear Information System (INIS)
Audenaert, Koenraad M R; Scheel, Stefan
2009-01-01
We consider the statistical properties of photon detection with imperfect detectors that exhibit dark counts and less than unit efficiency, in the context of tomographic reconstruction. In this context, the detectors are used to implement certain positive operator-valued measures (POVMs) that would allow us to reconstruct the quantum state or quantum process under consideration. Here we look at the intermediate step of inferring outcome probabilities from measured outcome frequencies, and show how this inference can be performed in a statistically sound way in the presence of detector imperfections. Merging outcome probabilities for different sets of POVMs into a consistent quantum state picture has been treated elsewhere (Audenaert and Scheel 2009 New J. Phys. 11 023028). Single-photon pulsed measurements as well as continuous wave measurements are covered.
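A minimal version of the inference step can be sketched for an on/off detector. The click model and parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
def infer_photon_probability(f_click, eta=0.6, dark=0.01):
    """Hedged sketch: invert the click model
        p_click = 1 - (1 - dark) * (1 - eta * p)
    to estimate the true photon probability p from an observed click
    frequency, for an assumed efficiency eta and dark-count rate dark."""
    p = (1.0 - (1.0 - f_click) / (1.0 - dark)) / eta
    # Finite-sample frequencies can imply p outside [0, 1]; a statistically
    # sound treatment (as in the paper) handles this properly, here we clip.
    return min(max(p, 0.0), 1.0)
```

Forward-simulating the same model (p = 0.5 gives a click probability of 0.307 with these parameters) and inverting recovers the input, which is the consistency property such an estimator needs.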
An Intuitive Dashboard for Bayesian Network Inference
Reddy, Vikas; Charisse Farr, Anna; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K. D. V.
2014-03-01
Current Bayesian network software packages provide good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing and at times it could be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user-interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++.
Working with sample data exploration and inference
Chaffe-Stengel, Priscilla
2014-01-01
Managers and analysts routinely collect and examine key performance measures to better understand their operations and make good decisions. Being able to render the complexity of operations data into a coherent account of significant events requires an understanding of how to work well with raw data and to make appropriate inferences. Although some statistical techniques for analyzing data and making inferences are sophisticated and require specialized expertise, there are methods that are understandable and applicable by anyone with basic algebra skills and the support of a spreadsheet package. By applying these fundamental methods themselves rather than turning over both the data and the responsibility for analysis and interpretation to an expert, managers will develop a richer understanding and potentially gain better control over their environment. This text is intended to describe these fundamental statistical techniques to managers, data analysts, and students. Statistical analysis of sample data is enh...
Parametric inference for biological sequence analysis.
Pachter, Lior; Sturmfels, Bernd
2004-11-16
One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.
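The sum-product algorithm the article builds on can be shown concretely for a hidden Markov model, where it is the forward recursion. The two-state parameters below are a made-up toy example:

```python
import numpy as np

def forward(obs, pi, A, B):
    """Sum-product (forward) recursion for an HMM: total likelihood of
    the observation sequence, marginalizing over all hidden state paths
    in O(T * K^2) instead of enumerating K^T paths."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return float(alpha.sum())

# Toy two-state chain with illustrative (made-up) parameters.
pi = np.array([0.5, 0.5])                 # initial distribution
A = np.array([[0.9, 0.1], [0.2, 0.8]])    # transition probabilities
B = np.array([[0.7, 0.3], [0.4, 0.6]])    # emission probabilities
likelihood = forward([0, 1], pi, A, B)
```

The polytope propagation algorithm of the article is, roughly, this same recursion with numbers replaced by Newton polytopes, which is what makes the parametric analysis of MAP inference tractable.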
Inferences on Children’s Reading Groups
Directory of Open Access Journals (Sweden)
Javier González García
2009-05-01
Full Text Available This article focuses on the non-literal information of a text, which can be inferred from key elements or clues offered by the text itself. This kind of text is called implicit text or inference, due to the thinking process that it stimulates. The explicit resources that lead to information retrieval are related to others of implicit information, which have increased their relevance. In this study, conducted over two courses, we analyzed how two teachers interpret three stories and how they establish a debate by dividing the class into three student groups. The sample was formed by two classes from two urban public schools of Burgos capital (Spain), and two from public schools of Tampico (Mexico). This allowed us to observe an increasing percentage value of the group focused on text comprehension, and a lesser percentage of the group perceiving comprehension as a secondary objective.
Inferring Genetic Ancestry: Opportunities, Challenges, and Implications
Royal, Charmaine D.; Novembre, John; Fullerton, Stephanie M.; Goldstein, David B.; Long, Jeffrey C.; Bamshad, Michael J.; Clark, Andrew G.
2010-01-01
Increasing public interest in direct-to-consumer (DTC) genetic ancestry testing has been accompanied by growing concern about issues ranging from the personal and societal implications of the testing to the scientific validity of ancestry inference. The very concept of “ancestry” is subject to misunderstanding in both the general and scientific communities. What do we mean by ancestry? How exactly is ancestry measured? How far back can such ancestry be defined and by which genetic tools? How ...
Spatial Inference Based on Geometric Proportional Analogies
Mullally, Emma-Claire; O'Donoghue, Diarmuid P.
2006-01-01
We describe an instance-based reasoning solution to a variety of spatial reasoning problems. The solution centers on identifying an isomorphic mapping between labelled graphs that represent some problem data and a known solution instance. We describe a number of spatial reasoning problems that are solved by generating non-deductive inferences, integrating topology with area (and other) features. We report the accuracy of our algorithm on different categories of spatial reasoning tasks from th...
Inferring ontology graph structures using OWL reasoning
Rodriguez-Garcia, Miguel Angel
2018-01-05
Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph. Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
Role of Speaker Cues in Attention Inference
Jin Joo Lee; Cynthia Breazeal; David DeSteno
2017-01-01
Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in at...
Inferring ontology graph structures using OWL reasoning.
Rodríguez-García, Miguel Ángel; Hoehndorf, Robert
2018-01-05
Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
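The core idea of materializing implied-but-unasserted edges can be sketched in miniature. This toy is not the Onto2Graph algorithm (which uses an OWL reasoner and handles existential axioms); it only shows transitive closure over asserted subclass edges, with invented class names:

```python
# Toy sketch: make implied-but-unasserted subclass relations explicit by
# taking the transitive closure of asserted edges. A real reasoner also
# derives edges from existential axioms (e.g. part-of restrictions).
subclass = {("mitochondrion", "organelle"), ("organelle", "cellular_component")}

def closure(edges):
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

inferred = closure(subclass)
# inferred now also contains ("mitochondrion", "cellular_component"),
# an edge that was implied but never asserted.
```

Downstream analyses such as semantic-similarity computation then operate on the enriched graph rather than the asserted DAG alone.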
Constrained bayesian inference of project performance models
Sunmola, Funlade
2013-01-01
Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...
Using metacognitive cues to infer others' thinking
André Mata; Tiago Almeida
2014-01-01
Three studies tested whether people use cues about the way other people think---for example, whether others respond fast vs. slow---to infer what responses other people might give to reasoning problems. People who solve reasoning problems using deliberative thinking have better insight than intuitive problem-solvers into the responses that other people might give to the same problems. Presumably because deliberative responders think of intuitive responses before they think o...
Thermodynamics of statistical inference by cells.
Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj
2014-10-03
The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.
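The estimation task the paper analyzes can be sketched at its simplest: a cell reading ligand concentration off receptor occupancy. The equilibrium binding model and parameter below are standard textbook assumptions, not the paper's derivation; the paper's contribution is bounding the accuracy of such estimates by energy consumption:

```python
def ml_concentration(bound_fraction, kd=1.0):
    """Illustrative sketch: at equilibrium a receptor is occupied with
    probability p = c / (c + Kd), so a maximum-likelihood estimate of the
    concentration c inverts the binding curve (Kd assumed known)."""
    p = min(max(bound_fraction, 1e-9), 1 - 1e-9)  # guard against p = 0 or 1
    return kd * p / (1 - p)
```

Thermodynamics enters because reducing the variance of `bound_fraction` (e.g. by time-averaging over more binding events) costs free energy, which is what places the fundamental limit on inference accuracy.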
Bootstrap inference when using multiple imputation.
Schomaker, Michael; Heumann, Christian
2018-04-16
Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
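One of the intuitively appealing schemes (bootstrap first, then impute each bootstrap sample) can be sketched as follows. The mean-imputation stand-in and all parameters are illustrative placeholders; a real analysis would use a proper imputation model and the estimator of interest (e.g. the g-formula) instead of the mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_mi_ci(x, n_boot=200, n_imp=5):
    """Sketch of 'bootstrap, then multiply impute': resample incomplete
    rows, impute each bootstrap sample n_imp times with a stochastic
    mean-imputation placeholder, pool within-bootstrap estimates, and
    take percentiles across bootstrap replicates."""
    estimates = []
    for _ in range(n_boot):
        sample = rng.choice(x, size=len(x), replace=True)
        imputed_means = []
        for _ in range(n_imp):
            filled = sample.copy()
            miss = np.isnan(filled)
            obs = filled[~miss]
            # placeholder imputation model: draw from the observed margin
            filled[miss] = rng.normal(obs.mean(), obs.std(), miss.sum())
            imputed_means.append(filled.mean())
        estimates.append(np.mean(imputed_means))
    return np.percentile(estimates, [2.5, 97.5])
```

The paper's point is that seemingly similar orderings of the bootstrap and imputation steps are not all valid, and that performance depends on the number of imputed data sets and the extent of missingness.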
Inferring epidemic network topology from surveillance data.
Directory of Open Access Journals (Sweden)
Xiang Wan
Full Text Available The transmission of infectious diseases can be affected by many or even hidden factors, making it difficult to accurately predict when and where outbreaks may emerge. One approach at the moment is to develop and deploy surveillance systems in an effort to detect outbreaks as early as possible. This enables policy makers to modify and implement strategies for the control of the transmission. The accumulated surveillance data including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model on the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays significant effect on the propagation of infectious diseases.
Role of Speaker Cues in Attention Inference
Directory of Open Access Journals (Sweden)
Jin Joo Lee
2017-10-01
Full Text Available Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues and listener responses (i.e., backchannel feedback. We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.
Cortical information flow during inferences of agency
Directory of Open Access Journals (Sweden)
Myrthel eDogge
2014-08-01
Full Text Available Building on the recent finding that agency experiences do not merely rely on sensorimotor information but also on cognitive cues, this exploratory study uses electroencephalographic recordings to examine functional connectivity during agency inference processing in a setting where action and outcome are independent. Participants completed a computerized task in which they pressed a button followed by one of two color words (red or blue) and rated their experienced agency over producing the color. Before executing the action, a matching or mismatching color word was pre-activated by explicitly instructing participants to produce the color (goal condition) or by briefly presenting the color word (prime condition). In both conditions, experienced agency was higher in matching versus mismatching trials. Furthermore, increased electroencephalography (EEG)-based connectivity strength was observed between parietal and frontal nodes and within the (pre)frontal cortex when color-outcomes matched with goals and participants reported high agency. This pattern of increased connectivity was not identified in trials where outcomes were pre-activated through primes. These results suggest that different connections are involved in the experience and in the loss of agency, as well as in inferences of agency resulting from different types of pre-activation. Moreover, the findings provide novel support for the involvement of a fronto-parietal network in agency inferences.
Phylogenetic Inference of HIV Transmission Clusters
Directory of Open Access Journals (Sweden)
Vlad Novitsky
2017-10-01
Full Text Available Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, it could both inform public-health strategies and be clinically relevant (e.g., drug-resistance testing).
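One common, though simplified, operationalization of a transmission cluster is single-linkage grouping of sequences under a pairwise genetic-distance cutoff. The sketch below uses an invented distance matrix and threshold; real pipelines additionally require phylogenetic (e.g. bootstrap) support for each cluster, as the review emphasizes:

```python
def transmission_clusters(dist, threshold=0.015):
    """Single-linkage clustering under a distance cutoff, via union-find.
    dist: symmetric matrix of pairwise genetic distances; threshold is an
    illustrative cutoff, not a recommended value."""
    n = len(dist)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i][j] < threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return [g for g in groups.values() if len(g) > 1]  # singletons excluded

# Hypothetical distances: sequences 0-1 and 2-3 are each closely related.
dist = [[0.000, 0.010, 0.100, 0.100],
        [0.010, 0.000, 0.100, 0.100],
        [0.100, 0.100, 0.000, 0.005],
        [0.100, 0.100, 0.005, 0.000]]
clusters = transmission_clusters(dist)
```

Note that the excluded singletons correspond to the "non-clustered cases" the review argues can also be informative.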
Causal inference of asynchronous audiovisual speech
Directory of Open Access Journals (Sweden)
John F Magnotti
2013-11-01
Full Text Available During speech perception, humans integrate auditory information from the voice with visual information from the face. This multisensory integration increases perceptual precision, but only if the two cues come from the same talker; this requirement has been largely ignored by current models of speech perception. We describe a generative model of multisensory speech perception that includes this critical step of determining the likelihood that the voice and face information have a common cause. A key feature of the model is that it is based on a principled analysis of how an observer should solve this causal inference problem using the asynchrony between two cues and the reliability of the cues. This allows the model to make predictions about the behavior of subjects performing a synchrony judgment task, predictive power that does not exist in other approaches, such as post hoc fitting of Gaussian curves to behavioral data. We tested the model predictions against the performance of 37 subjects performing a synchrony judgment task viewing audiovisual speech under a variety of manipulations, including varying asynchronies, intelligibility, and visual cue reliability. The causal inference model outperformed the Gaussian model across two experiments, providing a better fit to the behavioral data with fewer parameters. Because the causal inference model is derived from a principled understanding of the task, model parameters are directly interpretable in terms of stimulus and subject properties.
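The causal-inference step can be sketched as a two-hypothesis Bayesian comparison. The Gaussian likelihoods and all parameter values below are illustrative assumptions, not the paper's fitted model:

```python
import math

def p_common_cause(asynchrony_ms, prior=0.5,
                   sigma_common=50.0, sigma_separate=200.0):
    """Hedged sketch of causal inference over audiovisual asynchrony:
    small asynchronies are more likely under a common cause (tight,
    zero-centered likelihood) than under separate causes (broad one).
    All parameters are illustrative, not fitted values."""
    def gauss(x, s):
        return math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2 * math.pi))

    l_common = gauss(asynchrony_ms, sigma_common) * prior
    l_separate = gauss(asynchrony_ms, sigma_separate) * (1 - prior)
    return l_common / (l_common + l_separate)
```

A synchrony judgment then follows by reporting "same cause" when this posterior exceeds 0.5, which is how such a model can be fit to the behavioral data with few, interpretable parameters.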
Functional neuroanatomy of intuitive physical inference.
Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy
2016-08-23
To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.
Elements of Causal Inference: Foundations and Learning Algorithms
DEFF Research Database (Denmark)
Peters, Jonas Martin; Janzing, Dominik; Schölkopf, Bernhard
A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning.
Integrating distributed Bayesian inference and reinforcement learning for sensor management
Grappiolo, C.; Whiteson, S.; Pavlin, G.; Bakker, B.
2009-01-01
This paper introduces a sensor management approach that integrates distributed Bayesian inference (DBI) and reinforcement learning (RL). DBI is implemented using distributed perception networks (DPNs), a multiagent approach to performing efficient inference, while RL is used to automatically
Strong growth for Queensland mining
Energy Technology Data Exchange (ETDEWEB)
1990-10-01
The Queensland mining industry experienced strong growth during 1989-90 as shown in the latest statistics released by the Department of Resource Industries. The total value of Queensland mineral and energy production rose to a new record of $5.1 billion, an increase of 16.5% on 1988-89 production. A major contributing factor was a 20.9 percent increase in the value of coal production. While the quantity of coal produced rose only 1.1 percent, the substantial increase in the value of coal production is attributable to higher coal prices negotiated for export contracts. In Australian dollar terms coal, gold, lead, zinc and crude oil on average experienced higher international prices than in the previous year. Only copper and silver prices declined. 3 tabs.
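The figures quoted for coal imply the price effect directly: if value is quantity times average price, the price movement is the ratio of the two growth factors. A quick check:

```python
# Implied average price movement for Queensland coal, 1989-90:
# value rose 20.9% while quantity rose only 1.1%, so the average
# realized price rose by roughly the ratio of the two growth factors.
value_growth = 1.209
quantity_growth = 1.011
implied_price_growth = value_growth / quantity_growth - 1
# implied_price_growth is about 0.196, i.e. roughly a 19.6% price increase
```

This is consistent with the report's attribution of the value increase to higher export contract prices rather than output growth.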
Strong moduli stabilization and phenomenology
Dudas, Emilian; Mambrini, Yann; Mustafayev, Azar; Olive, Keith A
2013-01-01
We describe the resulting phenomenology of string theory/supergravity models with strong moduli stabilization. The KL model with F-term uplifting is one such example. Models of this type predict universal scalar masses equal to the gravitino mass. In contrast, A-terms receive highly suppressed gravity mediated contributions. Under certain conditions, the same conclusion is valid for gaugino masses, which like A-terms, are then determined by anomalies. In such models, we are forced to relatively large gravitino masses (30-1000 TeV). We compute the low energy spectrum as a function of m_{3/2}. We see that the Higgs mass naturally takes values between 125-130 GeV. The lower limit is obtained from the requirement of chargino masses greater than 104 GeV, while the upper limit is determined by the relic density of dark matter (wino-like).
Strongly interacting W's and Z's
International Nuclear Information System (INIS)
Gaillard, M.K.
1984-01-01
The study focused primarily on the dynamics of a strongly interacting W, Z (SIW) sector, with the aim of sharpening predictions for total W, Z yield and W, Z multiplicities expected from WW fusion for various scenarios. Specific issues raised in the context of the general problem of modeling SIW included the specificity of the technicolor (or, equivalently, QCD) model, whether or not a composite scalar model can be evaded, and whether the standard model necessarily implies an I = J = 0 state (≅ Higgs particle) that is relatively "light" (M ≤ hundreds of TeV). The consensus on the last issue was that existing arguments are inconclusive. While the author will briefly address compositeness and alternatives to the technicolor model, quantitative estimates will of necessity be based on technicolor or an extrapolation of pion data.
Uniquely Strongly Clean Group Rings
Institute of Scientific and Technical Information of China (English)
WANG XIU-LAN
2012-01-01
A ring R is called clean if every element is the sum of an idempotent and a unit, and R is called uniquely strongly clean (USC for short) if every element is uniquely the sum of an idempotent and a unit that commute. In this article, some conditions on a ring R and a group G such that RG is clean are given. It is also shown that if G is a locally finite group, then the group ring RG is USC if and only if R is USC and G is a 2-group. The left uniquely exchange group ring, as a middle ring of the uniquely clean ring and the USC ring, does not possess this property, and neither does the uniquely exchange group ring.
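The definitions above can be checked by brute force in a small commutative example. The sketch below, a hypothetical illustration not taken from the article, tests cleanness and unique cleanness in Z/nZ (where every pair of elements commutes, so the commuting condition is automatic):

```python
from math import gcd

def units(n):
    """Units of Z/nZ: residues coprime to n."""
    return [u for u in range(n) if gcd(u, n) == 1]

def idempotents(n):
    """Idempotents of Z/nZ: e with e*e = e (mod n)."""
    return [e for e in range(n) if (e * e) % n == e]

def clean_decompositions(n, x):
    """All pairs (e, u), e idempotent and u a unit, with e + u = x in Z/nZ."""
    return [(e, u) for e in idempotents(n) for u in units(n)
            if (e + u) % n == x]

def is_clean(n):
    """Every element has at least one clean decomposition."""
    return all(clean_decompositions(n, x) for x in range(n))

def is_uniquely_clean(n):
    """Every element has exactly one clean decomposition."""
    return all(len(clean_decompositions(n, x)) == 1 for x in range(n))
```

For instance, Z/6Z is clean but not uniquely clean (2 = 1 + 1 = 3 + 5), while Z/2Z is uniquely clean.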
Electrophoresis in strong electric fields.
Barany, Sandor
2009-01-01
Two kinds of non-linear electrophoresis (ef) that can be detected in strong electric fields (several hundred V/cm) are considered. The first ("classical" non-linear ef) is due to the interaction of the outer field with field-induced ionic charges in the electric double layer (EDL) under conditions when field-induced variations of electrolyte concentration remain small compared with its equilibrium value. According to the Shilov theory, the non-linear component of the electrophoretic velocity for dielectric particles is proportional to the cubic power of the applied field strength (cubic electrophoresis) and to the second power of the particle radius; it is independent of the zeta-potential but is determined by the surface conductivity of the particles. The second, the so-called "superfast electrophoresis", is connected with the interaction of a strong outer field with a secondary diffuse layer of counterions (space charge) that is induced outside the primary (classical) diffuse EDL by the external field itself because of concentration polarization. The Dukhin-Mishchuk theory of "superfast electrophoresis" predicts a quadratic dependence of the electrophoretic velocity of unipolar (ionically or electronically) conducting particles on the external field gradient and a linear dependence on the particle's size in strong electric fields. These are in sharp contrast to the laws of classical electrophoresis (no dependence of V(ef) on the particle's size and linear dependence on the electric field gradient). A new method to measure the ef velocity of particles in strong electric fields is developed that is based on separating the effects of sedimentation and electrophoresis using video imaging, a new flow cell and short electric pulses. To test the "classical" non-linear electrophoresis, we have measured the ef velocity of non-conducting polystyrene, aluminium-oxide and (semiconductor) graphite particles as well as Saccharomyces cerevisiae yeast cells as a
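The field and size dependences contrasted in the abstract can be summarized schematically (proportionality constants, which involve the zeta-potential or the surface conductivity, are omitted; E is the applied field and a the particle radius):

```latex
\begin{aligned}
V_{\mathrm{ef}}^{\text{classical}} &\propto E,
  && \text{independent of particle radius } a,\\
V_{\mathrm{ef}}^{\text{cubic}} &\propto a^{2}E^{3},
  && \text{Shilov theory, dielectric particles},\\
V_{\mathrm{ef}}^{\text{superfast}} &\propto a\,E^{2},
  && \text{Dukhin--Mishchuk theory, conducting particles}.
\end{aligned}
```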
Bootstrapping phylogenies inferred from rearrangement data
Directory of Open Access Journals (Sweden)
Lin Yu
2012-08-01
Background: Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well-established probabilistic models. Results: We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Conclusions: Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its
Bootstrapping phylogenies inferred from rearrangement data.
Lin, Yu; Rajan, Vaibhav; Moret, Bernard Me
2012-08-29
Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale and its receiver
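For contrast with the rearrangement setting, the classic sequence-based bootstrap that the authors benchmark against can be sketched in a few lines. This is a generic illustration, not code from the paper; the function names are assumptions:

```python
import random

def bootstrap_alignment(alignment, rng=None):
    """Classic phylogenetic bootstrap: resample alignment columns with replacement.

    `alignment` maps taxon name -> sequence string (all of equal length).
    Each resampled column keeps the taxa aligned, so homologous characters
    stay together -- exactly the resampling unit a single-character genome lacks.
    """
    rng = rng or random.Random(0)
    taxa = list(alignment)
    n = len(alignment[taxa[0]])
    cols = [rng.randrange(n) for _ in range(n)]  # draw n columns with replacement
    return {t: "".join(alignment[t][j] for j in cols) for t in taxa}

def split_support(replicate_trees, split):
    """Bootstrap support: fraction of replicate trees containing a given split.

    Each tree is represented as a set of splits (frozensets of taxa).
    """
    return sum(split in tree for tree in replicate_trees) / len(replicate_trees)
```

A tree built from each pseudo-alignment yields one replicate; `split_support` then gives the familiar support percentage for each branch.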
Strong plasma turbulence in the earth's electron foreshock
Robinson, P. A.; Newman, D. L.
1991-01-01
A quantitative model is developed to account for the distribution in magnitude and location of the intense plasma waves observed in the earth's electron foreshock given the observed rms levels of waves. In this model, nonlinear strong-turbulence effects cause solitonlike coherent wave packets to form and decouple from incoherent background beam-excited weak turbulence, after which they convect downstream with the solar wind while collapsing to scales as short as 100 m and fields as high as 2 V/m. The existence of waves with energy densities above the strong-turbulence wave-collapse threshold is inferred from observations from IMP 6 and ISEE 1 and quantitative agreement is found between the predicted distribution of fields in an ensemble of such wave packets and the actual field distribution observed in situ by IMP 6. Predictions for the polarization of plasma waves and the bandwidth of ion-sound waves are also consistent with the observations. It is shown that strong-turbulence effects must be incorporated in any comprehensive theory of the propagation and evolution of electron beams in the foreshock. Previous arguments against the existence of strong turbulence in the foreshock are refuted.
Strong plasma turbulence in the earth's electron foreshock
International Nuclear Information System (INIS)
Robinson, P.A.; Newman, D.L.
1991-01-01
A quantitative model is developed to account for the distribution in magnitude and location of the intense plasma waves observed in the Earth's electron foreshock given the observed rms levels of waves. In this model, nonlinear strong-turbulence effects cause solitonlike coherent wave packets to form and decouple from incoherent background beam-excited weak turbulence, after which they convect downstream with the solar wind while collapsing to scales as short as 100 m and fields as high as 2 V m⁻¹. The existence of waves with energy densities above the strong-turbulence wave-collapse threshold is inferred from observations from IMP 6 and ISEE 1 and quantitative agreement is found between the predicted distribution of fields in an ensemble of such wave packets and the actual field distribution observed in situ by IMP 6. Predictions for the polarization of plasma waves and the bandwidth of ion-sound waves are also consistent with the observations. It is shown that strong-turbulence effects must be incorporated in any comprehensive theory of the propagation and evolution of electron beams in the foreshock. Previous arguments against the existence of strong turbulence in the foreshock are refuted.
Type Inference for Session Types in the Pi-Calculus
DEFF Research Database (Denmark)
Graversen, Eva Fajstrup; Harbo, Jacob Buchreitz; Huttel, Hans
2014-01-01
In this paper we present a direct algorithm for session type inference for the π-calculus. Type inference for session types has previously been achieved by either imposing limitations and restriction on the π-calculus, or by reducing the type inference problem to that for linear types. Our approach...
Reasoning about Informal Statistical Inference: One Statistician's View
Rossman, Allan J.
2008-01-01
This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…
Statistical Inference at Work: Statistical Process Control as an Example
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
Malle, Bertram F; Holbrook, Jess
2012-04-01
People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences, from intentionality and desire to belief to personality, that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.
Hofmann, B
2008-06-01
Are there similarities between scientific and moral inference? This is the key question in this article. It takes as its point of departure an instance of one person's story in the media changing both Norwegian public opinion and a brand-new Norwegian law prohibiting the use of saviour siblings. The case appears to falsify existing norms and to establish new ones. The analysis of this case reveals similarities in the modes of inference in science and morals, inasmuch as (a) a single case functions as a counter-example to an existing rule; (b) there is a common presupposition of stability, similarity and order, which makes it possible to reason from a few cases to a general rule; and (c) this makes it possible to hold things together and retain order. In science, these modes of inference are referred to as falsification, induction and consistency. In morals, they have a variety of other names. Hence, even without abandoning the fact-value divide, there appear to be similarities between inference in science and inference in morals, which may encourage communication across the boundaries between "the two cultures" and which are relevant to medical humanities.
Strong Ideal Convergence in Probabilistic Metric Spaces
Indian Academy of Sciences (India)
In the present paper we introduce the concepts of strongly ideal convergent sequence and strong ideal Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong ideal limit points and the strong ideal cluster points of a sequence in this ...
Strong Statistical Convergence in Probabilistic Metric Spaces
Şençimen, Celaleddin; Pehlivan, Serpil
2008-01-01
In this article, we introduce the concepts of strongly statistically convergent sequence and strong statistically Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong statistical limit points and the strong statistical cluster points of a sequence in this space and investigate the relations between these concepts.
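For orientation, in the plain metric-space setting statistical convergence of a sequence (x_k) to x is defined through natural density; the probabilistic-metric version in the article replaces this metric condition with one phrased via the strong topology:

```latex
x_k \xrightarrow{\ \mathrm{st}\ } x
\quad\Longleftrightarrow\quad
\lim_{n\to\infty}\frac{1}{n}\,
\bigl|\{\, k \le n : d(x_k, x) \ge \varepsilon \,\}\bigr| = 0
\quad\text{for every } \varepsilon > 0 .
```

That is, the indices where x_k stays farther than ε from x form a set of natural density zero, so convergence is allowed to fail on a "sparse" set of indices.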
2006-01-01
Our friend and colleague John Strong was cruelly taken from us by a brain tumour on 31 July, a few days before his 65th birthday. John started his career and obtained his PhD in a group from Westfield College, initially working on experiments at Rutherford Appleton Laboratory (RAL). From the early 1970s onwards, however, his research was focused on experiments in CERN, with several particularly notable contributions. The Omega spectrometer adopted a system John had originally developed for experiments at RAL using vidicon cameras (a type of television camera) to record the sparks in the spark chambers. This highly automated system allowed Omega to be used in a similar way to bubble chambers. He contributed to the success of NA1 and NA7, where he became heavily involved in the electronic trigger systems. In these experiments the Westfield group joined forces with Italian colleagues to measure the form factors of the pion and the kaon, and the lifetime of some of the newly discovered charm particles. Such h...
Remnants of strong tidal interactions
International Nuclear Information System (INIS)
Mcglynn, T.A.
1990-01-01
This paper examines the properties of stellar systems that have recently undergone a strong tidal shock, i.e., a shock which removes a significant fraction of the particles in the system, and where the shocked system has a much smaller mass than the producer of the tidal field. N-body calculations of King models shocked in a variety of ways are performed, and the consequences of the shocks are investigated. The results confirm the prediction of Jaffe for shocked systems. Several models are also run where the tidal forces on the system are constant, simulating a circular orbit around a primary, and the development of tidal radii under these static conditions appears to be a mild process which does not dramatically affect material that is not stripped. The tidal radii are about twice as large as classical formulas would predict. Remnant density profiles are compared with a sample of elliptical galaxies, and the implications of the results for the development of stellar populations and galaxies are considered. 38 refs
Strongly correlated perovskite fuel cells
Zhou, You; Guan, Xiaofei; Zhou, Hua; Ramadoss, Koushik; Adam, Suhare; Liu, Huajun; Lee, Sungsik; Shi, Jian; Tsuchiya, Masaru; Fong, Dillon D.; Ramanathan, Shriram
2016-06-01
Fuel cells convert chemical energy directly into electrical energy with high efficiencies and environmental benefits, as compared with traditional heat engines. Yttria-stabilized zirconia is perhaps the material with the most potential as an electrolyte in solid oxide fuel cells (SOFCs), owing to its stability and near-unity ionic transference number. Although there exist materials with superior ionic conductivity, they are often limited in their ability to suppress electronic leakage when exposed to the reducing environment at the fuel interface. Such electronic leakage reduces fuel cell power output and the associated chemo-mechanical stresses can also lead to catastrophic fracture of electrolyte membranes. Here we depart from traditional electrolyte design that relies on cation substitution to sustain ionic conduction. Instead, we use a perovskite nickelate as an electrolyte with high initial ionic and electronic conductivity. Since many such oxides are also correlated electron systems, we can suppress the electronic conduction through a filling-controlled Mott transition induced by spontaneous hydrogen incorporation. Using such a nickelate as the electrolyte in free-standing membrane geometry, we demonstrate a low-temperature micro-fabricated SOFC with high performance. The ionic conductivity of the nickelate perovskite is comparable to the best-performing solid electrolytes in the same temperature range, with a very low activation energy. The results present a design strategy for high-performance materials exhibiting emergent properties arising from strong electron correlations.
Strong seismic ground motion propagation
International Nuclear Information System (INIS)
Seale, S.; Archuleta, R.; Pecker, A.; Bouchon, M.; Mohammadioun, G.; Murphy, A.; Mohammadioun, B.
1988-10-01
At the McGee Creek, California, site, 3-component strong-motion accelerometers are located at depths of 166 m, 35 m and 0 m. The surface material is glacial moraine, to a depth of 30.5 m, overlying hornfels. Accelerations were recorded from two California earthquakes: Round Valley, M_L 5.8, November 23, 1984, 18:08 UTC, and Chalfant Valley, M_L 6.4, July 21, 1986, 14:42 UTC. By separating out the SH components of acceleration, we were able to determine the orientations of the downhole instruments. By separating out the SV component of acceleration, we were able to determine the approximate angle of incidence of the signal at 166 m. A constant-phase-velocity Haskell-Thomson model was applied to generate synthetic SH seismograms at the surface using the accelerations recorded at 166 m. In the frequency band 0.0-10.0 Hz, we compared the filtered synthetic records to the filtered surface data. The onset of the SH pulse is clearly seen, as are the reflections from the interface at 30.5 m. The synthetic record closely matches the data in amplitude and phase. The fit between the synthetic accelerogram and the data shows that the seismic amplification at the surface is a result of the contrast of the impedances (shear stiffnesses) of the near-surface materials.
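The impedance-contrast amplification described above can be illustrated with the standard normal-incidence SH displacement transmission coefficient. The material values below are rough illustrative guesses for soft moraine over stiff hornfels, not numbers from the record:

```python
def shear_impedance(density, vs):
    """Shear impedance Z = rho * beta (density in kg/m^3, shear speed in m/s)."""
    return density * vs

def sh_transmission(z_from, z_to):
    """Normal-incidence SH displacement transmission coefficient for a wave
    passing from a medium of impedance z_from into one of impedance z_to."""
    return 2.0 * z_from / (z_from + z_to)

# Illustrative (assumed) values: stiff hornfels bedrock under soft glacial moraine.
z_rock = shear_impedance(2700.0, 3000.0)     # ~8.1e6 kg m^-2 s^-1
z_moraine = shear_impedance(2000.0, 500.0)   # ~1.0e6 kg m^-2 s^-1
amplification = sh_transmission(z_rock, z_moraine)
print(amplification)  # greater than 1: the softer layer amplifies ground motion
```

With equal impedances the coefficient is 1 (no amplification); the larger the contrast, the closer it approaches 2, which is the sense in which near-surface stiffness contrast controls surface amplification.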
Improved functional overview of protein complexes using inferred epistatic relationships
LENUS (Irish Health Repository)
Ryan, Colm
2011-05-23
Background: Epistatic Miniarray Profiling (E-MAP) quantifies the net effect on growth rate of disrupting pairs of genes, often producing phenotypes that may be more (negative epistasis) or less (positive epistasis) severe than the phenotype predicted based on single gene disruptions. Epistatic interactions are important for understanding cell biology because they define relationships between individual genes, and between sets of genes involved in biochemical pathways and protein complexes. Each E-MAP screen quantifies the interactions between a logically selected subset of genes (e.g. genes whose products share a common function). Interactions that occur between genes involved in different cellular processes are not as frequently measured, yet these interactions are important for providing an overview of cellular organization. Results: We introduce a method for combining overlapping E-MAP screens and inferring new interactions between them. We use this method to infer with high confidence 2,240 new strongly epistatic interactions and 34,469 weakly epistatic or neutral interactions. We show that the accuracy of the predicted interactions approaches that of replicate experiments and that, like measured interactions, they are enriched for features such as shared biochemical pathways and knockout phenotypes. We constructed an expanded epistasis map for yeast cell protein complexes and show that our new interactions increase the evidence for previously proposed inter-complex connections, and predict many new links. We validated a number of these in the laboratory, including new interactions linking the SWR-C chromatin modifying complex and the nuclear transport apparatus. Conclusion: Overall, our data support a modular model of yeast cell protein network organization and show how prediction methods can considerably extend the information that can be extracted from overlapping E-MAP screens.
Strongly interacting photons and atoms
International Nuclear Information System (INIS)
Alge, W.
1999-05-01
This thesis contains the main results of the research topics I have pursued during my PhD studies at the University of Innsbruck, partly in collaboration with the Institut d'Optique in Orsay, France. It is divided into three parts. The first and largest part discusses the possibility of using strong standing waves as a tool to cool and trap neutral atoms in optical cavities. This is very important in the field of nonlinear optics, where several successful experiments with cold atoms in cavities have been performed recently. A discussion of the optical parametric oscillator in a regime where the nonlinearity dominates the evolution is the topic of the second part. We investigated mainly the statistical properties of the cavity output of the three interacting cavity modes. Very recently a system has been proposed which promises fantastic properties: it should exhibit a giant Kerr nonlinearity with negligible absorption, thus leading to a photonic turnstile device based on cold atoms in a cavity. We have shown that this model suffers from overly simplistic assumptions and have developed several more comprehensive approaches to study the behavior of this system. Apart from the division into three parts of different contents, the thesis is divided into publications, supplements and invisible stuff. The intention of the supplements is to reach researchers who work in related areas and provide them with more detailed information about the concepts and the numerical tools we used. It is written especially for diploma and PhD students to give them a chance to use the third part of our work, which is actually the largest one. It consists of a large number of computer programs we wrote to investigate the behavior of the systems in parameter regions where no hope exists of solving the equations analytically. (author)
Topics in strong Langmuir turbulence
International Nuclear Information System (INIS)
Skoric, M.M.
1981-01-01
This thesis discusses certain aspects of the turbulence of a fully ionised non-isothermal plasma dominated by the Langmuir mode. Some of the basic properties of strongly turbulent plasmas are reviewed. In particular, interest is focused on the state of Langmuir turbulence, that is, the turbulence of a simple externally unmagnetized plasma. The problem of the existence and dynamics of Langmuir collapse is discussed, often met as a non-linear stage of the modulational instability in the framework of the Zakharov equations (i.e. simple time-averaged dynamical equations). Possible macroscopic consequences of such dynamical turbulent models are investigated. In order to study highly non-linear collapse dynamics in its advanced stage, a set of generalized Zakharov equations is derived. Going beyond the original approximation, the author includes the effects of higher electron non-linearities and a breakdown of slow-timescale quasi-neutrality. He investigates how these corrections may influence the collapse stabilisation. Recently, it has been realised that the modulational instability in a Langmuir plasma will be accompanied by the collisionless generation of a slow-timescale magnetic field. Accordingly, a novel physical situation has emerged which is investigated in detail. The stability of monochromatic Langmuir waves in a self-magnetized Langmuir plasma is discussed, and the existence of a novel magneto-modulational instability is shown. The wave collapse dynamics is investigated and a physical interpretation of the basic results is given. The problem of the transient analysis of an interaction of time-dependent electromagnetic pulses with linear cold plasma media is investigated. (Auth.)
Promoting Strong Written Communication Skills
Narayanan, M.
2015-12-01
The reason that an improvement in the quality of technical writing is still needed in the classroom is due to the fact that universities are facing challenging problems not only on the technological front but also on the socio-economic front. The universities are actively responding to the changes that are taking place in the global consumer marketplace. Obviously, there are numerous benefits of promoting strong written communication skills. They can be summarized into the following six categories. First, and perhaps the most important: The University achieves learner satisfaction. The learner has documented verbally that the necessary knowledge has been successfully acquired. This results in learner loyalty that in turn will attract more qualified learners. Second, quality communication lowers the cost per pupil, consequently resulting in increased productivity backed by a stronger economic structure and forecast. Third, quality communications help to improve the cash flow and cash reserves of the university. Fourth, having high quality communication enables the university to justify the need for high costs of tuition and fees. Fifth, better quality in written communication skills results in attracting top-quality learners. This will lead to happier and satisfied learners, not to mention greater prosperity for the university as a whole. Sixth, quality written communication skills result in reduced complaints, thus meaning fewer hours spent on answering or correcting the situation. The University faculty and staff are thus able to devote more time on scholarly activities, meaningful research and productive community service. References Boyer, Ernest L. (1990). Scholarship reconsidered: Priorities of the Professorate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching. Hawkins, P., & Winter, J. (1997). Mastering change: Learning the lessons of the enterprise. London: Department for Education and Employment. Buzzel, Robert D., and Bradley T. Gale. (1987
Nonparametric inference of network structure and dynamics
Peixoto, Tiago P.
The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among
Impact of noise on molecular network inference.
Directory of Open Access Journals (Sweden)
Radhakrishnan Nagarajan
Molecular entities work in concert as a system and mediate phenotypic outcomes and disease states. There has been recent interest in modelling the associations between molecular entities from their observed expression profiles as networks using a battery of algorithms. These networks have proven to be useful abstractions of the underlying pathways and signalling mechanisms. Noise is ubiquitous in molecular data and can have a pronounced effect on the inferred network. Noise can be an outcome of several factors including: inherent stochastic mechanisms at the molecular level, variation in the abundance of molecules, heterogeneity, sensitivity of the biological assay or measurement artefacts prevalent especially in high-throughput settings. The present study investigates the impact of discrepancies in noise variance on pair-wise dependencies, conditional dependencies and constraint-based Bayesian network structure learning algorithms that incorporate conditional independence tests as a part of the learning process. Popular network motifs and fundamental connections, namely (a) common-effect, (b) three-chain, and (c) coherent type-I feed-forward loop (FFL), are investigated. The choice of these elementary networks can be attributed to their prevalence across more complex networks. Analytical expressions elucidating the impact of discrepancies in noise variance on pairwise dependencies and conditional dependencies for special cases of these motifs are presented. Subsequently, the impact of noise on two popular constraint-based Bayesian network structure learning algorithms, Grow-Shrink (GS) and Incremental Association Markov Blanket (IAMB), that implicitly incorporate tests for conditional independence is investigated. Finally, the impact of noise on networks inferred from publicly available single cell molecular expression profiles is investigated. While discrepancies in noise variance are overlooked in routine molecular network inference, the
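The effect studied here can be demonstrated on a synthetic three-chain motif: noise added to the conditioning variable prevents it from screening off the two endpoints, so a conditional-dependence test sees structure that the noiseless motif does not have. This is a generic illustration under assumed Gaussian noise, not the authors' analytical derivation:

```python
import numpy as np

def partial_corr(a, b, given):
    """Partial correlation of a and b controlling for `given`,
    computed from residuals of simple linear regression on `given`."""
    def residual(v):
        slope = np.cov(v, given)[0, 1] / np.var(given)
        return v - slope * given
    ra, rb = residual(a), residual(b)
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)   # three-chain motif: x -> y -> z
z = y + 0.5 * rng.normal(size=n)

clean = partial_corr(x, z, y)                         # y screens x off from z
noisy = partial_corr(x, z, y + rng.normal(size=n))    # measurement noise on y
print(clean, noisy)  # clean is near zero; noisy is clearly nonzero
```

Conditioning on the true y drives the partial correlation to zero, as the chain structure implies; conditioning on a noisy copy of y leaves a substantial residual dependence between x and z.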
Bayesian Estimation and Inference using Stochastic Hardware
Directory of Open Access Journals (Sweden)
Chetan Singh Thakur
2016-03-01
Full Text Available In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
Bayesian Estimation and Inference Using Stochastic Electronics.
Thakur, Chetan Singh; Afshar, Saeed; Wang, Runchun M; Hamilton, Tara J; Tapson, Jonathan; van Schaik, André
2016-01-01
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement, and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the external distractor (noise) probability interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers due to their low noise margin, the effect of high-energy cosmic rays and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
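The Bayesian recursive equation the BEAST tracker solves in hardware can be sketched in ordinary software as a discrete HMM forward filter. The grid size, transition matrix, and sensor accuracy below are illustrative assumptions, not parameters taken from the paper:

```python
import numpy as np

def hmm_filter(transition, emission, observations, prior):
    """Recursive Bayesian state estimate from noisy observations.

    transition[i, j] = P(next = j | current = i)
    emission[j, k]   = P(obs = k | state = j)
    """
    belief = prior.copy()
    estimates = []
    for obs in observations:
        belief = belief @ transition          # predict through the motion model
        belief = belief * emission[:, obs]    # weight by the observation likelihood
        belief /= belief.sum()                # renormalise to a distribution
        estimates.append(int(np.argmax(belief)))
    return estimates

# Target on a 5-cell line; the sensor reports the true cell with probability 0.8.
n = 5
transition = 0.8 * np.eye(n) + 0.1 * (np.eye(n, k=1) + np.eye(n, k=-1))
transition /= transition.sum(axis=1, keepdims=True)
emission = 0.8 * np.eye(n) + 0.05 * (np.ones((n, n)) - np.eye(n))
prior = np.full(n, 1.0 / n)
est = hmm_filter(transition, emission, [2, 2, 3, 3, 3], prior)
print(est)
```

The same predict/update recursion is what the stochastic circuits evaluate online, with the belief encoded as bit streams rather than floating-point vectors.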
Strong Bayesian evidence for the normal neutrino hierarchy
Energy Technology Data Exchange (ETDEWEB)
Simpson, Fergus; Jimenez, Raul; Verde, Licia [ICCUB, University of Barcelona (UB-IEEC), Marti i Franques 1, Barcelona, 08028 (Spain); Pena-Garay, Carlos, E-mail: fergus2@gmail.com, E-mail: raul.jimenez@icc.ub.edu, E-mail: penagaray@gmail.com, E-mail: liciaverde@icc.ub.edu [I2SysBio, CSIC-UVEG, P.O. 22085, Valencia, 46071 (Spain)
2017-06-01
The configuration of the three neutrino masses can take two forms, known as the normal and inverted hierarchies. We compute the Bayesian evidence associated with these two hierarchies. Previous studies found a mild preference for the normal hierarchy, and this was driven by the asymmetric manner in which cosmological data have confined the available parameter space. Here we identify the presence of a second asymmetry, which is imposed by data from neutrino oscillations. By combining constraints on the squared-mass splittings [1] with the limit on the sum of neutrino masses of Σm_ν < 0.13 eV [2], and using a minimally informative prior on the masses, we infer odds of 42:1 in favour of the normal hierarchy, which is classified as 'strong' on the Jeffreys scale. We explore how these odds may evolve in light of higher precision cosmological data, and discuss the implications of this finding with regard to the nature of neutrinos. Finally, the individual masses are inferred to be m_1 = 3.80^{+26.2}_{-3.73} meV, m_2 = 8.8^{+18}_{-1.2} meV, and m_3 = 50.4^{+5.8}_{-1.2} meV (95% credible intervals).
Robust Inference with Multi-way Clustering
A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller
2009-01-01
In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our...
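The one-way sandwich estimator this paper extends, and the two-way inclusion-exclusion combination V(A) + V(B) − V(A∩B) it proposes, can be sketched as follows. The data and cluster labels are simulated for illustration, and finite-sample corrections are omitted:

```python
import numpy as np

def cluster_robust_vcov(X, resid, clusters):
    """One-way cluster-robust (sandwich) variance estimator for OLS."""
    bread = np.linalg.inv(X.T @ X)
    k = X.shape[1]
    meat = np.zeros((k, k))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]  # within-cluster score sum
        meat += np.outer(score, score)
    return bread @ meat @ bread

def two_way_vcov(X, resid, c1, c2):
    """Two-way clustering: V(c1) + V(c2) - V(c1 ∩ c2)."""
    inter = np.array([f"{a}|{b}" for a, b in zip(c1, c2)])  # intersection clusters
    return (cluster_robust_vcov(X, resid, c1)
            + cluster_robust_vcov(X, resid, c2)
            - cluster_robust_vcov(X, resid, inter))

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
firm = rng.integers(0, 10, size=n)   # first clustering dimension
year = rng.integers(0, 5, size=n)    # second clustering dimension
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
V = two_way_vcov(X, resid, firm, year)
print(np.diag(V))  # two-way cluster-robust variances of the two coefficients
```

Subtracting the intersection term prevents double-counting observations that share both cluster memberships; as the paper's framework implies, the same construction carries over to nonlinear estimators by replacing the OLS score with the estimator's own score.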
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.
Abductive Inference using Array-Based Logic
DEFF Research Database (Denmark)
Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert L.
The notion of abduction has found its usage within a wide variety of AI fields. Computing abductive solutions has, however, proven to be highly intractable in logic programming. To avoid this intractability we present a new approach to logic-based abduction; through the geometrical view of data...... employed in array-based logic we embrace abduction in a simple structural operation. We argue that a theory of abduction in this form allows for an implementation which, at runtime, can perform abductive inference quite efficiently on arbitrary rules of logic representing knowledge of finite domains....
Russia needs a strong counterpart
International Nuclear Information System (INIS)
Slovak, K.; Marcan, P.
2008-01-01
In this paper, an interview with the head of OMV, Wolfgang Ruttenstorfer, is published. An extract from the interview follows: Q: There have been attempts to take over MOL for quite a long time. Do you think you can still succeed? Since the beginning we kept saying that this would not happen from one day to the next. It may take two to three years. But we are positive that it is justified. Q: Resistance from MOL and the Hungarian government is strong. We have tried to persuade the Hungarian government. We offered them a split company management. A part of the management would be in Budapest. We would locate the management of the largest division, the refinery, there. And of course only the best could be part of the management. We would not nominate people according to their nationality; it would not matter whether the person was Austrian, Hungarian or Slovak. We want a Central European company, not a Hungarian, Romanian or Slovak company. Q: Would the transaction still be attractive if, because of pressure exercised by Brussels, you had to sell Slovnaft or your refinery in Szazhalombatta? We do not intend to sell any refineries. Q: Rumours are spreading that the Commission may ask you to sell a refinery? We do not want to speculate. Let us wait and see what happens. We do not want to sell refineries. Q: It is said that OMV is coordinating, or at least consulting, its attempts to acquire MOL with Gazprom. There are many rumours in Central Europe. But I can tell you this is not true. We are interested in this merger because we feel the increasing pressure exercised by Kazakhstan and Russia. We, of course, have a good relationship with Gazprom, which we have enjoyed for over forty years. As indeed Slovakia has. Q: A few weeks ago the Austrian daily Wirtschaftsblatt published an article about Gazprom's interest in OMV shares. That is gossip that is more than ten years old. Similarly to the rumours that Gazprom is a shareholder of MOL. There are no negotiations with Gazprom
Quantum Enhanced Inference in Markov Logic Networks.
Wittek, Peter; Gogolin, Christian
2017-04-19
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both the first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal for implementation. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
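The classical baseline the quantum protocols aim to accelerate, Gibbs sampling over the generated Markov network, can be sketched for a small pairwise binary network; the weights below are illustrative, not derived from any particular MLN template:

```python
import numpy as np

def gibbs_sample(weights, biases, n_sweeps, rng):
    """Gibbs sampling for a pairwise binary Markov network:
    P(x) proportional to exp(sum_i b_i x_i + sum_{i<j} W_ij x_i x_j), x_i in {0, 1}."""
    n = len(biases)
    x = rng.integers(0, 2, size=n)
    samples = np.empty((n_sweeps, n), dtype=int)
    for s in range(n_sweeps):
        for i in range(n):
            # The conditional of x_i given the rest depends only on its neighbours.
            field = biases[i] + weights[i] @ x - weights[i, i] * x[i]
            x[i] = rng.random() < 1.0 / (1.0 + np.exp(-field))
        samples[s] = x
    return samples

# Three variables; one strong positive edge couples x0 and x1.
W = np.array([[0.0, 2.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.zeros(3)
samples = gibbs_sample(W, b, 5000, np.random.default_rng(1))
agree = np.mean(samples[500:, 0] == samples[500:, 1])  # discard burn-in
print(agree)
```

In an MLN the weight matrix is produced by grounding the first-order template; the symmetric, repeated structure that grounding creates is exactly what lifting methods, and the quantum state-preparation schemes discussed here, exploit.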
Inferring network topology from complex dynamics
International Nuclear Information System (INIS)
Shandilya, Srinivas Gorur; Timme, Marc
2011-01-01
Inferring the network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method for inferring the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required and the type of dynamics is hardly restricted in any way. In particular, the observed dynamics may be arbitrarily complex; stationary, invariant or transient; synchronous or asynchronous and chaotic or periodic. Presupposing a knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology from observing a time series of state variables only. Robust reconstruction is achieved in any sufficiently long generic observation of the system. We extend our method to simultaneously reconstruct both the entire network topology and all parameters appearing linearly in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of external noise that distorts the original dynamics substantially. The method provides a conceptually new step towards reconstructing a variety of real-world networks, including gene and protein interaction networks and neuronal circuits.
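For the special case of linear dynamics with known functional form, the inverse problem described above reduces to linear least squares on a single observed trajectory. A minimal sketch (the paper's framework covers general ODE systems; the matrix size, step size, and noise-free data here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4
A_true = rng.normal(scale=0.5, size=(n, n))   # coupling matrix to be recovered

# Observe a single trajectory of the linear system dx/dt = A x (Euler steps).
dt, steps = 0.01, 200
X = np.empty((steps, n))
X[0] = rng.normal(size=n)
for t in range(steps - 1):
    X[t + 1] = X[t] + dt * (A_true @ X[t])

# Finite-difference derivatives, then solve dX = X A^T by least squares.
dX = (X[1:] - X[:-1]) / dt
A_est = np.linalg.lstsq(X[:-1], dX, rcond=None)[0].T

err = np.max(np.abs(A_est - A_true))
print(err)  # tiny, because the simulated data exactly obey the assumed model
```

With real, noisy observations the same regression is solved in a robust (e.g. regularised) form, and nonlinear coupling functions enter as additional known regressors, which is the sense in which parameters appearing linearly in the equations of motion remain recoverable.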
Inferring climate sensitivity from volcanic events
Energy Technology Data Exchange (ETDEWEB)
Boer, G.J. [Environment Canada, University of Victoria, Canadian Centre for Climate Modelling and Analysis, Victoria, BC (Canada); Stowasser, M.; Hamilton, K. [University of Hawaii, International Pacific Research Centre, Honolulu, HI (United States)
2007-04-15
The possibility of estimating the equilibrium climate sensitivity of the earth-system from observations following explosive volcanic eruptions is assessed in the context of a perfect model study. Two modern climate models (the CCCma CGCM3 and the NCAR CCSM2) with different equilibrium climate sensitivities are employed in the investigation. The models are perturbed with the same transient volcano-like forcing and the responses analysed to infer climate sensitivities. For volcano-like forcing the global mean surface temperature responses of the two models are very similar, despite their differing equilibrium climate sensitivities, indicating that climate sensitivity cannot be inferred from the temperature record alone even if the forcing is known. Equilibrium climate sensitivities can be reasonably determined only if both the forcing and the change in heat storage in the system are known very accurately. The geographic patterns of clear-sky atmosphere/surface and cloud feedbacks are similar for both the transient volcano-like and near-equilibrium constant forcing simulations showing that, to a considerable extent, the same feedback processes are invoked, and determine the climate sensitivity, in both cases. (orig.)
Facility Activity Inference Using Radiation Networks
Energy Technology Data Exchange (ETDEWEB)
Rao, Nageswara S. [ORNL; Ramirez Aviles, Camila A. [ORNL
2017-11-01
We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility’s ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts or measurements are inherently random with parameters determined by the intensity at the sensor’s location. We utilize the measurements to estimate the intensity at the stack, and use it in a one-sided Sequential Probability Ratio Test (SPRT) to infer on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by utilizing the Poisson distribution of the counts. We quantify the performance improvements of the network detection over individual sensors using the packing number of the intensity space.
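The decision step can be sketched as a one-sided SPRT on Poisson counts; the rates, error targets, and simulated data below are illustrative assumptions, not values from the deployed system:

```python
import math
import numpy as np

def sprt_poisson(counts, lam_off, lam_on, alpha=0.01, beta=0.01):
    """One-sided SPRT: decide between Poisson rates lam_off ('reactor off')
    and lam_on ('reactor on') from a stream of sensor counts."""
    upper = math.log((1 - beta) / alpha)   # crossing declares 'on'
    lower = math.log(beta / (1 - alpha))   # crossing declares 'off'
    llr = 0.0
    for t, k in enumerate(counts, start=1):
        # Log-likelihood ratio increment for one Poisson observation.
        llr += k * math.log(lam_on / lam_off) - (lam_on - lam_off)
        if llr >= upper:
            return "on", t
        if llr <= lower:
            return "off", t
    return "undecided", len(counts)

lam_off, lam_on = 2.0, 5.0             # illustrative background vs. emission rates
rng = np.random.default_rng(3)
counts = rng.poisson(lam_on, size=50)  # simulated counts while the reactor is on
decision, t = sprt_poisson(counts, lam_off, lam_on)
print(decision, t)
```

In the network setting, the per-sensor counts are first fused into an estimate of the stack intensity, and the SPRT is run on that estimate rather than on a single detector's stream.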
Models for inference in dynamic metacommunity systems
Dorazio, Robert M.; Kery, Marc; Royle, J. Andrew; Plattner, Matthias
2010-01-01
A variety of processes are thought to be involved in the formation and dynamics of species assemblages. For example, various metacommunity theories are based on differences in the relative contributions of dispersal of species among local communities and interactions of species within local communities. Interestingly, metacommunity theories continue to be advanced without much empirical validation. Part of the problem is that statistical models used to analyze typical survey data either fail to specify ecological processes with sufficient complexity or they fail to account for errors in detection of species during sampling. In this paper, we describe a statistical modeling framework for the analysis of metacommunity dynamics that is based on the idea of adopting a unified approach, multispecies occupancy modeling, for computing inferences about individual species, local communities of species, or the entire metacommunity of species. This approach accounts for errors in detection of species during sampling and also allows different metacommunity paradigms to be specified in terms of species- and location-specific probabilities of occurrence, extinction, and colonization: all of which are estimable. In addition, this approach can be used to address inference problems that arise in conservation ecology, such as predicting temporal and spatial changes in biodiversity for use in making conservation decisions. To illustrate, we estimate changes in species composition associated with the species-specific phenologies of flight patterns of butterflies in Switzerland for the purpose of estimating regional differences in biodiversity.
Causal inference, probability theory, and graphical insights.
Baker, Stuart G
2013-11-10
Causal inference from observational studies is a fundamental topic in biostatistics. The causal graph literature typically views probability theory as insufficient to express causal concepts in observational studies. In contrast, the view here is that probability theory is a desirable and sufficient basis for many topics in causal inference for the following two reasons. First, probability theory is generally more flexible than causal graphs: Besides explaining such causal graph topics as M-bias (adjusting for a collider) and bias amplification and attenuation (when adjusting for instrumental variable), probability theory is also the foundation of the paired availability design for historical controls, which does not fit into a causal graph framework. Second, probability theory is the basis for insightful graphical displays including the BK-Plot for understanding Simpson's paradox with a binary confounder, the BK2-Plot for understanding bias amplification and attenuation in the presence of an unobserved binary confounder, and the PAD-Plot for understanding the principal stratification component of the paired availability design. Published 2013. This article is a US Government work and is in the public domain in the USA.
Inferring relevance in a changing world
Directory of Open Access Journals (Sweden)
Robert C Wilson
2012-01-01
Full Text Available Reinforcement learning models of human and animal learning usually concentrate on how we learn the relationship between different stimuli or actions and rewards. However, in real world situations stimuli are ill-defined. On the one hand, our immediate environment is extremely multi-dimensional. On the other hand, in every decision-making scenario only a few aspects of the environment are relevant for obtaining reward, while most are irrelevant. Thus a key question is how do we learn these relevant dimensions, that is, how do we learn what to learn about? We investigated this process of representation learning experimentally, using a task in which one stimulus dimension was relevant for determining reward at each point in time. As in real life situations, in our task the relevant dimension can change without warning, adding ever-present uncertainty engendered by a constantly changing environment. We show that human performance on this task is better described by a suboptimal strategy based on selective attention and serial hypothesis testing rather than a normative strategy based on probabilistic inference. From this, we conjecture that the problem of inferring relevance in general scenarios is too computationally demanding for the brain to solve optimally. As a result the brain utilizes approximations, employing these even in simplified scenarios in which optimal representation learning is tractable, such as the one in our experiment.
Automated adaptive inference of phenomenological dynamical models
Daniels, Bryan
Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.
Graphical models for inferring single molecule dynamics
Directory of Open Access Journals (Sweden)
Gonzalez Ruben L
2010-10-01
Full Text Available Abstract Background The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM. The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM with Gaussian observables. A detailed description of smFRET is provided as well. Results The VBEM algorithm returns the model’s evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME, and the latter a description of the model’s parameters learned from the data. ME/VBEM provide several advantages over the more commonly used approach of maximum likelihood (ML optimized by the expectation maximization (EM algorithm, the most important being a natural form of model selection and a well-posed (non-divergent optimization problem. Conclusions The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.
Quantum Enhanced Inference in Markov Logic Networks
Wittek, Peter; Gogolin, Christian
2017-04-01
Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both the first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal for implementation. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
Causal Inference in the Perception of Verticality.
de Winkel, Ksander N; Katliar, Mikhail; Diers, Daniel; Bülthoff, Heinrich H
2018-04-03
The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum; by combining estimates on the upright provided by the visual system and the body's inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of a vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.
Saros, Jasmine E.; Stone, Jeffery R.; Pederson, Gregory T.; Slemmons, Krista; Spanbauer, Trisha; Schliep, Anna; Cahl, Douglas; Williamson, Craig E.; Engstrom, Daniel R.
2015-01-01
Over the 20th century, surface water temperatures have increased in many lake ecosystems around the world, but long-term trends in the vertical thermal structure of lakes remain unclear, despite the strong control that thermal stratification exerts on the biological response of lakes to climate change. Here we used both neo- and paleoecological approaches to develop a fossil-based inference model for lake mixing depths and thereby refine understanding of lake thermal structure change. We focused on three common planktonic diatom taxa, the distributions of which previous research suggests might be affected by mixing depth. Comparative lake surveys and growth rate experiments revealed that these species respond to lake thermal structure when nitrogen is sufficient, with species optima ranging from shallower to deeper mixing depths. The diatom-based mixing depth model was applied to sedimentary diatom profiles extending back to 1750 AD in two lakes with moderate nitrate concentrations but differing climate settings. Thermal reconstructions were consistent with expected changes, with shallower mixing depths inferred for an alpine lake where treeline has advanced, and deeper mixing depths inferred for a boreal lake where wind strength has increased. The inference model developed here provides a new tool to expand and refine understanding of climate-induced changes in lake ecosystems.
Directory of Open Access Journals (Sweden)
Yinyin Yuan
Full Text Available Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power since the number of variables far exceeds the number of the samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
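The partial-correlation step used to set up the network topology can be sketched as follows. This is the undirected part only; the DPC method adds Granger-style directionality through induced interruptions, which this sketch omits, and the chain data and threshold are illustrative:

```python
import numpy as np

def partial_correlations(data):
    """Pairwise partial correlations given all other variables, read off
    the inverse covariance (precision) matrix."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

# Chain x0 -> x1 -> x2: x0 and x2 correlate only through x1, so their
# partial correlation (given x1) should vanish and no edge is drawn.
rng = np.random.default_rng(7)
n = 2000
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
pcor = partial_correlations(np.column_stack([x0, x1, x2]))
edges = np.abs(pcor) > 0.3   # crude threshold, for illustration only
print(edges.astype(int))
```

Testing conditional independence this way scales to many variables because a single matrix inversion yields all pairwise tests at once, which is the efficiency the abstract attributes to the partial-correlation stage.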
Constraint Satisfaction Inference : Non-probabilistic Global Inference for Sequence Labelling
Canisius, S.V.M.; van den Bosch, A.; Daelemans, W.; Basili, R.; Moschitti, A.
2006-01-01
We present a new method for performing sequence labelling based on the idea of using a machine-learning classifier to generate several possible output sequences, and then applying an inference procedure to select the best sequence among those. Most sequence labelling methods following a similar
Mathematical inference and control of molecular networks from perturbation experiments
Mohammed-Rasheed, Mohammed
in order to affect the time evolution of molecular activity in a desirable manner. In this proposal, we address both the inference and control problems of GRNs. In the first part of the thesis, we consider the control problem. We assume that we are given a general topology network structure, whose dynamics follow a discrete-time Markov chain model. We subsequently develop a comprehensive framework for optimal perturbation control of the network. The aim of the perturbation is to drive the network away from undesirable steady-states and to force it to converge to a unique desirable steady-state. The proposed framework does not make any assumptions about the topology of the initial network (e.g., ergodicity, weak and strong connectivity), and is thus applicable to general topology networks. We define the optimal perturbation as the minimum-energy perturbation measured in terms of the Frobenius norm between the initial and perturbed networks. We subsequently demonstrate that there exists at most one optimal perturbation that forces the network into the desirable steady-state. In the event where the optimal perturbation does not exist, we construct a family of sub-optimal perturbations that approximate the optimal solution arbitrarily closely. In the second part of the thesis, we address the inference problem of GRNs from time series data. We model the dynamics of the molecules using a system of ordinary differential equations corrupted by additive white noise. For large-scale networks, we formulate the inference problem as a constrained maximum likelihood estimation problem. We derive the molecular interactions that maximize the likelihood function while constraining the network to be sparse. We further propose a procedure to recover weak interactions based on the Bayesian information criterion. For small-size networks, we investigated the inference of a globally stable 7-gene melanoma genetic regulatory network from genetic perturbation experiments. We considered five
Human brain lesion-deficit inference remapped.
Mah, Yee-Haur; Husain, Masud; Rees, Geraint; Nachev, Parashkev
2014-09-01
Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury-the commonest aetiology in lesion-deficit studies-where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
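The benefit of joint estimation described above can be illustrated with a toy sketch, assuming (purely for illustration) a two-state step-length process with known state labels and parameters shared across individuals; the `simulate_track` helper and all parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_track(n_steps, p_switch=0.1, means=(1.0, 3.0)):
    """Two-state movement process: state 0 = resident, state 1 = transit."""
    states = np.zeros(n_steps, dtype=int)
    for t in range(1, n_steps):
        states[t] = states[t - 1] if rng.random() > p_switch else 1 - states[t - 1]
    steps = rng.exponential(np.take(np.array(means), states))
    return states, steps

# Ten individuals sharing identical movement parameters.
tracks = [simulate_track(200) for _ in range(10)]

# Non-hierarchical analogue: estimate the transit-state mean step length
# separately for each individual (~100 observations each).
per_ind = [steps[states == 1].mean() for states, steps in tracks]
err_ind = np.mean([abs(m - 3.0) for m in per_ind])

# Joint analogue: pool transit-state steps across all individuals (~1000 obs).
pooled = np.concatenate([steps[states == 1] for states, steps in tracks]).mean()
err_joint = abs(pooled - 3.0)

print(err_ind, err_joint)
```

With parameters truly shared, the pooled estimate draws on roughly ten times as much data and is typically closer to the true transit mean of 3.0, which is the intuition behind preferring the joint state-space formulation.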
Meta-learning framework applied in bioinformatics inference system design.
Arredondo, Tomás; Ormazábal, Wladimir
2015-01-01
This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions, which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several different optimisation methods with various different parameters. The obtained inference systems were also contrasted with other standard classification methods, and accurate prediction capabilities were observed.
Active Inference, homeostatic regulation and adaptive behavioural control.
Pezzulo, Giovanni; Rigoli, Francesco; Friston, Karl
2015-11-01
We review a theory of homeostatic regulation and adaptive behavioural control within the Active Inference framework. Our aim is to connect two research streams that are usually considered independently; namely, Active Inference and associative learning theories of animal behaviour. The former uses a probabilistic (Bayesian) formulation of perception and action, while the latter calls on multiple (Pavlovian, habitual, goal-directed) processes for homeostatic and behavioural control. We offer a synthesis of these classical processes and cast them as successive hierarchical contextualisations of sensorimotor constructs, using the generative models that underpin Active Inference. This dissolves any apparent mechanistic distinction between the optimization processes that mediate classical control or learning. Furthermore, we generalize the scope of Active Inference by emphasizing interoceptive inference and homeostatic regulation. The ensuing homeostatic (or allostatic) perspective provides an intuitive explanation for how priors act as drives or goals to enslave action, and emphasises the embodied nature of inference. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Bayesian inference data evaluation and decisions
Harney, Hanns Ludwig
2016-01-01
This new edition offers a comprehensive introduction to the analysis of data using Bayes rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background or the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives and fitting with the multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with man...
Bayesian inference and updating of reliability data
International Nuclear Information System (INIS)
Sabri, Z.A.; Cullingford, M.C.; David, H.T.; Husseiny, A.A.
1980-01-01
A Bayes methodology for inference of reliability values using available but scarce current data is discussed. The method can be used to update failure rates as more information becomes available from field experience, assuming that the performance of a given component (or system) exhibits a nonhomogeneous Poisson process. Bayes' theorem is used to summarize the historical evidence and current component data in the form of a posterior distribution suitable for prediction and for smoothing or interpolation. An example is given. It may be appropriate to apply the methodology developed here to human error data, in which case the exponential model might be used to describe the learning behavior of the operator or maintenance crew personnel.
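For the homogeneous special case, the updating scheme described above reduces to the conjugate Gamma-Poisson rule; a minimal sketch with hypothetical numbers (the paper's nonhomogeneous model and human-error extension are not shown):

```python
# Conjugate Gamma-Poisson update: a homogeneous-Poisson simplification of
# Bayesian failure-rate updating (all numbers below are hypothetical).
def update(a, b, failures, exposure):
    """Gamma(a, b) prior on the failure rate; Poisson likelihood on counts."""
    return a + failures, b + exposure

a, b = 2.0, 1000.0                                # historical evidence: ~2 failures per 1000 h
a, b = update(a, b, failures=3, exposure=500.0)   # new field experience
posterior_mean = a / b                            # smoothed failure-rate estimate per hour
print(posterior_mean)                             # → 5 / 1500 ≈ 0.00333
```

As exposure accumulates, the posterior mean moves from the historical prior toward the observed field rate, which is the smoothing behaviour the abstract describes.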
Automatic inference of indexing rules for MEDLINE
Directory of Open Access Journals (Sweden)
Shooshan Sonya E
2008-11-01
Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
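The inferred rules act as symbolic classifiers over article text; a toy sketch of how such indexing rules could be applied (the rule contents and the `recommend_headings` helper are entirely hypothetical, not MTI's actual rules):

```python
# Hypothetical ILP-style indexing rules: if every trigger term occurs in the
# text, recommend the corresponding MeSH heading.
RULES = [
    ({"tumor"}, "Neoplasms"),
    ({"kidney", "transplantation"}, "Kidney Transplantation"),
]

def recommend_headings(text, rules=RULES):
    words = set(text.lower().split())
    return [heading for triggers, heading in rules if triggers <= words]

print(recommend_headings("Outcomes of kidney transplantation in adults"))
# → ['Kidney Transplantation']
```

Real learned rules combine many more predicates (title terms, journal, existing headings), but the fire-if-all-conditions-hold structure is the same.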
Progression inference for somatic mutations in cancer
Directory of Open Access Journals (Sweden)
Leif E. Peterson
2017-04-01
Computational methods were employed to determine progression inference of genomic alterations in commonly occurring cancers. Using cross-sectional TCGA data, we computed evolutionary trajectories involving selectivity relationships among pairs of gene-specific genomic alterations such as somatic mutations, deletions, amplifications, downregulation, and upregulation among the top 20 driver genes associated with each cancer. Results indicate that the majority of hierarchies involved TP53, PIK3CA, ERBB2, APC, KRAS, EGFR, IDH1, VHL, etc. Research into the order and accumulation of genomic alterations among cancer driver genes will only increase as the costs of next-generation sequencing subside and personalized/precision medicine incorporates whole-genome scans into the diagnosis and treatment of cancer. Keywords: Oncology, Cancer research, Genetics, Computational biology
Inferring Phylogenetic Networks from Gene Order Data
Directory of Open Access Journals (Sweden)
Alexey Anatolievich Morozov
2013-01-01
Existing algorithms allow us to infer phylogenetic networks from sequences (DNA, protein or binary), sets of trees, and distance matrices, but there are no methods to build them using gene order data as an input. Here we describe several methods to build split networks from gene order data, perform simulation studies, and use our methods for analyzing and interpreting different real gene order datasets. All proposed methods are based on intermediate data, which can be generated from the genome structures under study and used as an input for network construction algorithms. Three intermediates are used: a set of jackknife trees, a distance matrix, and a binary encoding. According to simulations and case studies, the best intermediates are jackknife trees and the distance matrix (when used with the Neighbor-Net algorithm). Binary encoding can also be useful, but only when the methods mentioned above cannot be used.
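One gene-order intermediate of the kind described, a pairwise distance matrix, can be sketched with the classical unsigned breakpoint distance; the helper and toy genomes below are illustrative, and the resulting matrix would then feed a network algorithm such as Neighbor-Net:

```python
def breakpoint_distance(g1, g2):
    """Count adjacencies of g1 (unordered gene pairs) that are absent from g2."""
    adj2 = {frozenset(p) for p in zip(g2, g2[1:])}
    return sum(frozenset(p) not in adj2 for p in zip(g1, g1[1:]))

# Hypothetical gene orders for three genomes.
genomes = {"A": [1, 2, 3, 4], "B": [1, 3, 2, 4], "C": [1, 2, 4, 3]}
matrix = {(x, y): breakpoint_distance(gx, gy)
          for x, gx in genomes.items() for y, gy in genomes.items()}
print(matrix[("A", "B")])  # → 2
```

Using unordered pairs makes the distance insensitive to local reversals of a single adjacency, a common convention for unsigned gene-order data.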
Supplier Selection Using Fuzzy Inference System
Directory of Open Access Journals (Sweden)
Hamidreza Kadhodazadeh
2014-01-01
Suppliers are among the most vital parts of the supply chain, and their performance has a significant indirect effect on customer satisfaction. Since customers' expectations of an organization differ, organizations should weigh supplier standards accordingly. Many studies in recent years have addressed this problem using different standards and methods. The purpose of this study is to propose an approach for choosing a supplier in a food manufacturing company considering cost, quality, service, type of relationship, and the organizational structure of the supplier. To evaluate suppliers against these standards, a fuzzy inference system is used. The input data of this system comprise each supplier's score on every standard, obtained by the AHP approach, and the output is the final score of each supplier. Finally, the selected supplier, although not the best in price or quality, achieved good scores across all of the standards.
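A minimal sketch of the scoring stage, assuming triangular membership functions, one rule per criterion, and weighted-average defuzzification; the criterion weights and the [0, 10] AHP score scale are hypothetical, not the paper's actual system:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def supplier_score(cost, quality, service):
    """Crisp final score in [0, 10] from per-criterion AHP scores in [0, 10]."""
    # Degree to which each criterion is 'good' (membership peaks at 10).
    good = [tri(v, 0.0, 10.0, 20.0) for v in (cost, quality, service)]
    weights = [0.5, 0.3, 0.2]  # hypothetical criterion importance weights
    # One rule per criterion ('if criterion is good then score is high'),
    # defuzzified as a weighted average of rule activations.
    return 10.0 * sum(w * g for w, g in zip(weights, good))

print(supplier_score(8.0, 6.0, 9.0))
```

A supplier strong on the lower-weighted criteria can still outrank one that leads only on price, which matches the paper's conclusion about the selected supplier.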
Gene expression inference with deep learning.
Chen, Yifei; Li, Yi; Narayan, Rajiv; Subramanian, Aravind; Xie, Xiaohui
2016-06-15
Large-scale gene expression profiling has been widely used to characterize cellular states in response to various disease conditions, genetic perturbations, etc. Although the cost of whole-genome expression profiles has been dropping steadily, generating a compendium of expression profiling over thousands of samples is still very expensive. Recognizing that gene expressions are often highly correlated, researchers from the NIH LINCS program have developed a cost-effective strategy of profiling only ∼1000 carefully selected landmark genes and relying on computational methods to infer the expression of remaining target genes. However, the computational approach adopted by the LINCS program is currently based on linear regression (LR), limiting its accuracy since it does not capture complex nonlinear relationships between the expressions of genes. We present a deep learning method (abbreviated as D-GEX) to infer the expression of target genes from the expression of landmark genes. We used the microarray-based Gene Expression Omnibus dataset, consisting of 111K expression profiles, to train our model and compare its performance to those from other methods. In terms of mean absolute error averaged across all genes, deep learning significantly outperforms LR with 15.33% relative improvement. A gene-wise comparative analysis shows that deep learning achieves lower error than LR in 99.97% of the target genes. We also tested the performance of our learned model on an independent RNA-Seq-based GTEx dataset, which consists of 2921 expression profiles. Deep learning still outperforms LR with 6.57% relative improvement, and achieves lower error in 81.31% of the target genes. D-GEX is available at https://github.com/uci-cbcl/D-GEX CONTACT: xhx@ics.uci.edu Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
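The gap between a linear mapping and a nonlinear one can be reproduced on synthetic stand-in data; everything below (the dimensions, the tanh generative model, and the tiny one-hidden-layer network with its hyperparameters) is an illustrative assumption, not the D-GEX architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: target-gene expression is a nonlinear function of
# landmark-gene expression (all dimensions and the generative model are made up).
n, n_landmark, n_target = 500, 20, 5
L = rng.normal(size=(n, n_landmark))
W = rng.normal(size=(n_landmark, n_target)) / np.sqrt(n_landmark)
T = np.tanh(L @ W) + 0.05 * rng.normal(size=(n, n_target))

# Linear-regression baseline (the LINCS-style approach): one least-squares map.
coef, *_ = np.linalg.lstsq(L, T, rcond=None)
mae_lr = np.abs(L @ coef - T).mean()

# Tiny one-hidden-layer network trained by full-batch gradient descent.
W1 = 0.2 * rng.normal(size=(n_landmark, 32))
W2 = 0.2 * rng.normal(size=(32, n_target))
lr = 0.02
for _ in range(3000):
    H = np.tanh(L @ W1)                  # hidden activations
    G = 2.0 * (H @ W2 - T) / n           # gradient of MSE w.r.t. the output
    W1 -= lr * (L.T @ ((G @ W2.T) * (1.0 - H ** 2)))
    W2 -= lr * (H.T @ G)
mae_mlp = np.abs(np.tanh(L @ W1) @ W2 - T).mean()
print(mae_lr, mae_mlp)
```

On data generated this way the network typically reaches a lower mean absolute error than the least-squares fit, mirroring the direction of the reported comparison.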
Systematic parameter inference in stochastic mesoscopic modeling
Energy Technology Data Exchange (ETDEWEB)
Lei, Huan; Yang, Xiu [Pacific Northwest National Laboratory, Richland, WA 99352 (United States); Li, Zhen [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States); Karniadakis, George Em, E-mail: george_karniadakis@brown.edu [Division of Applied Mathematics, Brown University, Providence, RI 02912 (United States)
2017-02-01
We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results on sampling points (e.g., individual parameter sets). To alleviate the computational cost to evaluate the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are “sparse”. The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of the simulation samples especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived from the microscopic level in a straightforward way.
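The compressive-sensing step, recovering a sparse coefficient vector from fewer samples than basis terms, can be sketched with plain iterative soft-thresholding (ISTA); the basis matrix, sparsity pattern, and regularisation weight below are all hypothetical stand-ins for a real gPC basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# 40 simulation samples, 100 candidate gPC basis terms, 3 dominant coefficients.
n_samples, n_terms = 40, 100
A = rng.normal(size=(n_samples, n_terms)) / np.sqrt(n_samples)
x_true = np.zeros(n_terms)
x_true[[3, 17, 42]] = [1.5, -2.0, 0.7]
y = A @ x_true

# ISTA: proximal gradient descent for l1-regularised least squares.
x = np.zeros(n_terms)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
lam = 0.01
for _ in range(2000):
    z = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print(np.argsort(-np.abs(x))[:3])  # indices of the dominant recovered terms
```

With an incoherent sampling matrix and only three active terms, the recovered coefficients land close to the planted values despite there being far fewer samples than basis terms, which is the "sparse prior" advantage over standard collocation.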
State-Space Inference and Learning with Gaussian Processes
Turner, R; Deisenroth, MP; Rasmussen, CE
2010-01-01
State-space inference and learning with Gaussian processes (GPs) is an unsolved problem. We propose a new, general methodology for inference and learning in nonlinear state-space models that are described probabilistically by non-parametric GP models. We apply the expectation maximization algorithm to iterate between inference in the latent state-space and learning the parameters of the underlying GP dynamics model. C...
Probabilistic logic networks a comprehensive framework for uncertain inference
Goertzel, Ben; Goertzel, Izabela Freire; Heljakka, Ari
2008-01-01
This comprehensive book describes Probabilistic Logic Networks (PLN), a novel conceptual, mathematical and computational approach to uncertain inference. A broad scope of reasoning types is considered.
Parametric statistical inference basic theory and modern approaches
Zacks, Shelemyahu; Tsokos, C P
1981-01-01
Parametric Statistical Inference: Basic Theory and Modern Approaches presents the developments and modern trends in statistical inference to students who do not have advanced mathematical and statistical preparation. The topics discussed in the book are basic and common to many fields of statistical inference and thus serve as a jumping board for in-depth study. The book is organized into eight chapters. Chapter 1 provides an overview of how the theory of statistical inference is presented in subsequent chapters. Chapter 2 briefly discusses statistical distributions and their properties. Chapt
Multi-Modal Inference in Animacy Perception for Artificial Object
Directory of Open Access Journals (Sweden)
Kohske Takahashi
2011-10-01
Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment would contribute to animacy judgments about them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared at 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving an impression of responding to the flash. The second object responded to bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy compared to the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor in animacy perception and may serve as the basis of multi-modal and cross-modal inference of animacy.
Modeling and inferring cleavage patterns in proliferating epithelia.
Directory of Open Access Journals (Sweden)
Ankit B Patel
2009-06-01
The regulation of cleavage plane orientation is one of the key mechanisms driving epithelial morphogenesis. Still, many aspects of the relationship between local cleavage patterns and tissue-level properties remain poorly understood. Here we develop a topological model that simulates the dynamics of a 2D proliferating epithelium from generation to generation, enabling the exploration of a wide variety of biologically plausible cleavage patterns. We investigate a spectrum of models that incorporate the spatial impact of neighboring cells and the temporal influence of parent cells on the choice of cleavage plane. Our findings show that cleavage patterns generate "signature" equilibrium distributions of polygonal cell shapes. These signatures enable the inference of local cleavage parameters such as neighbor impact, maternal influence, and division symmetry from global observations of the distribution of cell shape. Applying these insights to the proliferating epithelia of five diverse organisms, we find that strong division symmetry and moderate neighbor/maternal influence are required to reproduce the predominance of hexagonal cells and low variability in cell shape seen empirically. Furthermore, we present two distinct cleavage pattern models, one stochastic and one deterministic, that can reproduce the empirical distribution of cell shapes. Although the proliferating epithelia of the five diverse organisms show a highly conserved cell shape distribution, there are multiple plausible cleavage patterns that can generate this distribution, and experimental evidence suggests that indeed plants and fruit flies use distinct division mechanisms.
The origins of probabilistic inference in human infants.
Denison, Stephanie; Xu, Fei
2014-03-01
Reasoning under uncertainty is the bread and butter of everyday life. Many areas of psychology, from cognitive, developmental, social, to clinical, are interested in how individuals make inferences and decisions with incomplete information. The ability to reason under uncertainty necessarily involves probability computations, be they exact calculations or estimations. What are the developmental origins of probabilistic reasoning? Recent work has begun to examine whether infants and toddlers can compute probabilities; however, previous experiments have confounded quantity and probability-in most cases young human learners could have relied on simple comparisons of absolute quantities, as opposed to proportions, to succeed in these tasks. We present four experiments providing evidence that infants younger than 12 months show sensitivity to probabilities based on proportions. Furthermore, infants use this sensitivity to make predictions and fulfill their own desires, providing the first demonstration that even preverbal learners use probabilistic information to navigate the world. These results provide strong evidence for a rich quantitative and statistical reasoning system in infants. Copyright © 2013 Elsevier B.V. All rights reserved.
The evolutionary history of ferns inferred from 25 low-copy nuclear genes.
Rothfels, Carl J; Li, Fay-Wei; Sigel, Erin M; Huiet, Layne; Larsson, Anders; Burge, Dylan O; Ruhsam, Markus; Deyholos, Michael; Soltis, Douglas E; Stewart, C Neal; Shaw, Shane W; Pokorny, Lisa; Chen, Tao; dePamphilis, Claude; DeGironimo, Lisa; Chen, Li; Wei, Xiaofeng; Sun, Xiao; Korall, Petra; Stevenson, Dennis W; Graham, Sean W; Wong, Gane K-S; Pryer, Kathleen M
2015-07-01
• Understanding fern (monilophyte) phylogeny and its evolutionary timescale is critical for broad investigations of the evolution of land plants, and for providing the point of comparison necessary for studying the evolution of the fern sister group, seed plants. Molecular phylogenetic investigations have revolutionized our understanding of fern phylogeny; however, to date, these studies have relied almost exclusively on plastid data.• Here we take a curated phylogenomics approach to infer the first broad fern phylogeny from multiple nuclear loci, by combining broad taxon sampling (73 ferns and 12 outgroup species) with focused character sampling (25 loci comprising 35877 bp), along with rigorous alignment, orthology inference and model selection.• Our phylogeny corroborates some earlier inferences and provides novel insights; in particular, we find strong support for Equisetales as sister to the rest of ferns, Marattiales as sister to leptosporangiate ferns, and Dennstaedtiaceae as sister to the eupolypods. Our divergence-time analyses reveal that divergences among the extant fern orders all occurred prior to ∼200 MYA. Finally, our species-tree inferences are congruent with analyses of concatenated data, but generally with lower support. Those cases where species-tree support values are higher than expected involve relationships that have been supported by smaller plastid datasets, suggesting that deep coalescence may be reducing support from the concatenated nuclear data.• Our study demonstrates the utility of a curated phylogenomics approach to inferring fern phylogeny, and highlights the need to consider underlying data characteristics, along with data quantity, in phylogenetic studies. © 2015 Botanical Society of America, Inc.
DEFF Research Database (Denmark)
Bataillon, Thomas; Duan, Jinjie; Hvilsom, Christina
2015-01-01
of recent gene flow from Western into Eastern chimpanzees. The striking contrast in X-linked vs. autosomal polymorphism and divergence previously reported in Central chimpanzees is also found in Eastern and Western chimpanzees. We show that the direction of selection (DoS) statistic exhibits a strong non......-monotonic relationship with the strength of purifying selection S, making it inappropriate for estimating S. We instead use counts in synonymous vs. non-synonymous frequency classes to infer the distribution of S coefficients acting on non-synonymous mutations in each subspecies. The strength of purifying selection we...... infer is congruent with the differences in effective sizes of each subspecies: Central chimpanzees are undergoing the strongest purifying selection followed by Eastern and Western chimpanzees. Coding indels show stronger selection against indels changing the reading frame than observed in human...
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called “causal independence”). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustration example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we substantially reduce both the runtime and the space required, as a function of the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
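The core operation, convolving count distributions in a balanced, tree-like order, can be sketched with Bernoulli inputs; this is a simple illustration of the idea, not the paper's full inference engine:

```python
import numpy as np

# Each adder input i is present with probability p_i; the distribution of the
# total count is the convolution of the individual Bernoulli distributions.
# Merging distributions pairwise in a balanced order keeps the intermediate
# vectors short, which is the idea behind the probabilistic convolution tree.
def count_distribution(ps):
    dists = [np.array([1.0 - p, p]) for p in ps]
    while len(dists) > 1:  # one level of balanced pairwise merging per pass
        dists = [np.convolve(dists[i], dists[i + 1])
                 if i + 1 < len(dists) else dists[i]
                 for i in range(0, len(dists), 2)]
    return dists[0]

d = count_distribution([0.5, 0.5, 0.5])
print(d)  # → [0.125 0.375 0.375 0.125], the Binomial(3, 0.5) pmf
```

For identical inputs the result is the binomial distribution, and for heterogeneous probabilities it is the Poisson-binomial, computed exactly rather than by enumeration over all input combinations.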
Making inference from wildlife collision data: inferring predator absence from prey strikes
Directory of Open Access Journals (Sweden)
Peter Caley
2017-02-01
Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to enable valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.
Making inference from wildlife collision data: inferring predator absence from prey strikes.
Caley, Peter; Hosack, Geoffrey R; Barry, Simon C
2017-01-01
Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to enable valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.
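The headline calculation admits a compact sketch under a simplifying assumption: if fox and lagomorph strikes were independent Poisson processes, each strike would be a lagomorph with probability 1/(1 + r), where r is the fox-to-lagomorph rate ratio. The ratio used below is hypothetical, chosen only to show the shape of the calculation:

```python
# Probability that n consecutive strikes are all lagomorphs, given rate ratio
# r = lambda_fox / lambda_lag (independent Poisson processes assumed).
def p_no_fox_strikes(r, n_lagomorph_strikes):
    return (1.0 / (1.0 + r)) ** n_lagomorph_strikes

# A hypothetical mainland-like ratio of 0.58 fox strikes per lagomorph strike:
print(round(p_no_fox_strikes(0.58, 15), 4))  # → 0.001
```

Under such a ratio, fifteen lagomorph strikes with zero fox strikes would be a very surprising outcome if foxes were present at mainland-like densities, which is the logic behind rejecting the widespread-population hypothesis.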
DCS Survey Submission for Platte County, MO
Federal Emergency Management Agency, Department of Homeland Security — Survey data includes spatial datasets and data tables necessary to digitally represent data collected in the survey phase of the study. (Source: FEMA Guidelines and...
Directory of Open Access Journals (Sweden)
Thomas H B FitzGerald
2017-05-01
Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects' choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates.
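The behavioural dissociation rests on filtering and sequence inference yielding different posterior beliefs; a brute-force sketch on a tiny two-state hidden Markov model (all probabilities hypothetical) makes the difference concrete:

```python
import numpy as np
from itertools import product

# Two-state hidden Markov model with illustrative parameters.
T_mat = np.array([[0.9, 0.1], [0.1, 0.9]])  # transition probabilities
E = np.array([[0.8, 0.2], [0.2, 0.8]])      # emission probabilities
obs = [0, 0, 1]

def joint(states, obs_seq):
    """Joint probability of one state sequence and the observations."""
    p = 0.5 * E[states[0], obs_seq[0]]
    for s_prev, s, o in zip(states, states[1:], obs_seq[1:]):
        p *= T_mat[s_prev, s] * E[s, o]
    return p

def belief_state0(t, obs_seq):
    """P(state_t = 0 | obs_seq), by summing over all state sequences."""
    probs = {s: joint(s, obs_seq) for s in product([0, 1], repeat=len(obs_seq))}
    total = sum(probs.values())
    return sum(p for s, p in probs.items() if s[t] == 0) / total

filtering = belief_state0(0, obs[:1])  # uses only the first observation
sequence = belief_state0(0, obs)       # optimises beliefs over the whole sequence
print(filtering, sequence)             # the two strategies disagree
```

Because sequence inference revises beliefs about earlier states in the light of later observations, the two strategies assign different probabilities to the same state, which is what makes them dissociable in choice behaviour.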
DEFF Research Database (Denmark)
Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.
2015-01-01
abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale...... through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...
Strong Bisimilarity of Simple Process Algebras
DEFF Research Database (Denmark)
Srba, Jirí
2003-01-01
We study bisimilarity and regularity problems of simple process algebras. In particular, we show PSPACE-hardness of the following problems: (i) strong bisimilarity of Basic Parallel Processes (BPP), (ii) strong bisimilarity of Basic Process Algebra (BPA), (iii) strong regularity of BPP, and (iv) strong regularity of BPA. We also demonstrate NL-hardness of strong regularity problems for the normed subclasses of BPP and BPA. Bisimilarity problems of simple process algebras are introduced in a general framework of process rewrite systems, and a uniform description of the new techniques used
Application of strong phosphoric acid to radiochemistry
International Nuclear Information System (INIS)
Terada, Kikuo
1977-01-01
Not only inorganic and organic compounds but also natural substances, such as accumulations in soil, are completely decomposed and distilled by heating with strong phosphoric acid for 30 to 50 minutes. As applications of strong phosphoric acid to radiochemistry, determination of uranium and boron by use of the solubilization effect of this substance, titration of uranyl ion by use of the iron(II) sulfate contained in this substance, application to tracer experiments, and determination of radioactive ruthenium in environmental samples are reviewed. Strong phosphoric acid is also applied to activation analysis, for example, determination of N in pyrographite with the potassium iodate-strong phosphoric acid method, separation of Os and Ru with the cerium(IV) sulfate-strong phosphoric acid method or the potassium dichromate-strong phosphoric acid method, and analysis of Se, As and Sb in rocks and accumulations with the ammonium bromide-, sodium chloride- and sodium bromide-strong phosphoric acid methods. (Kanao, N.)
Making Inferences in Adulthood: Falling Leaves Mean It's Fall.
Zandi, Taher; Gregory, Monica E.
1988-01-01
Assessed age differences in making inferences from prose. Older adults correctly answered a mean of 10 questions related to implicit information and 8 related to explicit information. Young adults answered a mean of 7 implicit and 12 explicit information questions. In spite of poorer recall of factual details, older subjects made inferences to greater…
Statistical Inference and Patterns of Inequality in the Global North
Moran, Timothy Patrick
2006-01-01
Cross-national inequality trends have historically been a crucial field of inquiry across the social sciences, and new methodological techniques of statistical inference have recently improved the ability to analyze these trends over time. This paper applies Monte Carlo, bootstrap inference methods to the income surveys of the Luxembourg Income…
Causal Effect Inference with Deep Latent-Variable Models
Louizos, C; Shalit, U.; Mooij, J.; Sontag, D.; Zemel, R.; Welling, M.
2017-01-01
Learning individual-level causal effects from observational data, such as inferring the most effective medication for a specific patient, is a problem of growing importance for policy makers. The most important aspect of inferring causal effects from observational data is the handling of
A Comparative Analysis of Fuzzy Inference Engines in Context of ...
African Journals Online (AJOL)
Fuzzy inference engine has found successful applications in a wide variety of fields, such as automatic control, data classification, decision analysis, expert systems, time series prediction, robotics, pattern recognition, etc. This paper presents a comparative analysis of three fuzzy inference engines, max-product, max-min ...
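The difference between the max-min and max-product inference engines compared in the paper can be sketched in a few lines. The rule firing strengths and output membership values below are invented for illustration; only the two composition operators themselves come from standard fuzzy inference:

```python
import numpy as np

def compose(rule_strengths, consequents, mode="max-min"):
    """Combine fired rules into one output fuzzy set.

    rule_strengths: firing degree of each rule (shape R)
    consequents:    membership of each rule's consequent over the
                    output universe (shape R x N)
    """
    if mode == "max-min":
        # Clip each consequent at its rule's firing strength
        clipped = np.minimum(rule_strengths[:, None], consequents)
    elif mode == "max-product":
        # Scale each consequent by its rule's firing strength
        clipped = rule_strengths[:, None] * consequents
    else:
        raise ValueError(mode)
    return clipped.max(axis=0)  # max-aggregation across rules

# Toy example: two fired rules over a 5-point output universe
strengths = np.array([0.8, 0.4])
consequents = np.array([[0.0, 0.5, 1.0, 0.5, 0.0],
                        [0.0, 0.0, 0.5, 1.0, 0.5]])

out_min = compose(strengths, consequents, "max-min")
out_prod = compose(strengths, consequents, "max-product")
```

Max-min clips the consequent sets (flattening their peaks), while max-product rescales them and preserves their shape; the two engines can therefore defuzzify to different crisp outputs from the same rule base.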
General Purpose Probabilistic Programming Platform with Effective Stochastic Inference
2018-04-01
[Front matter extracted from the report: reference list, list of acronyms, and list of figures, including "The problem of inferring curves from data while simultaneously choosing the ..." (Figure 1), "... (bottom path) as the inverse problem to computer graphics (top path)" and "An illustration of generative probabilistic graphics for 3D ..." (Figure 18).] Building these systems involves simultaneously developing mathematical models, inference algorithms and optimized software implementations. Small changes
A Comparative Analysis of Fuzzy Inference Engines in Context of ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
Fuzzy inference engine is an important part of reasoning systems capable of extracting correct conclusions from ... is known as the inference, or rule definition portion, of fuzzy .... minimal set of decision rules based on input- ... The study uses Mamdani FIS and Sugeno FIS models ... control of induction motor drive. [18] study.
Deontic Introduction: A Theory of Inference from Is to Ought
Elqayam, Shira; Thompson, Valerie A.; Wilkinson, Meredith R.; Evans, Jonathan St. B. T.; Over, David E.
2015-01-01
Humans have a unique ability to generate novel norms. Faced with the knowledge that there are hungry children in Somalia, we easily and naturally infer that we ought to donate to famine relief charities. Although a contentious and lively issue in metaethics, such inference from "is" to "ought" has not been systematically…
Causal inference in survival analysis using pseudo-observations
DEFF Research Database (Denmark)
Andersen, Per K; Syriopoulou, Elisavet; Parner, Erik T
2017-01-01
Causal inference for non-censored response variables, such as binary or quantitative outcomes, is often based on either (1) direct standardization ('G-formula') or (2) inverse probability of treatment assignment weights ('propensity score'). To do causal inference in survival analysis, one needs ...
Bayesian inference of radiation belt loss timescales.
Camporeale, E.; Chandorkar, M.
2017-12-01
Electron fluxes in the Earth's radiation belts are routinely studied using the classical quasi-linear radial diffusion model. Although this simplified linear equation has proven to be an indispensable tool in understanding the dynamics of the radiation belt, it requires specification of quantities such as the diffusion coefficient and electron loss timescales that are never directly measured. Researchers have so far assumed a priori parameterisations for radiation belt quantities and derived the best fit using satellite data. The state of the art in this domain lacks a coherent formulation of this problem in a probabilistic framework. We present some recent progress that we have made in performing Bayesian inference of radial diffusion parameters. We achieve this by making extensive use of the theory connecting Gaussian Processes and linear partial differential equations, and performing Markov Chain Monte Carlo sampling of radial diffusion parameters. These results are important for understanding the role and the propagation of uncertainties in radiation belt simulations and, eventually, for providing a probabilistic forecast of energetic electron fluxes in a Space Weather context.
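As a toy illustration of the Markov Chain Monte Carlo approach described above, the sketch below infers a single loss timescale from synthetic exponential-decay data with random-walk Metropolis sampling. The decay model, noise level, and flat prior are stand-ins for illustration, not the authors' actual radial diffusion setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "flux" decay with loss timescale tau (arbitrary units)
tau_true, sigma = 3.0, 0.05
t = np.linspace(0.0, 10.0, 50)
flux = np.exp(-t / tau_true) + rng.normal(0.0, sigma, t.size)

def log_post(tau):
    """Gaussian log-likelihood with a flat prior on tau > 0."""
    if tau <= 0:
        return -np.inf
    resid = flux - np.exp(-t / tau)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis sampling of tau
samples, tau = [], 1.0
lp = log_post(tau)
for _ in range(5000):
    prop = tau + rng.normal(0.0, 0.2)       # symmetric proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject step
        tau, lp = prop, lp_prop
    samples.append(tau)

post = np.array(samples[1000:])  # discard burn-in
```

The posterior sample concentrates near the true timescale; the same accept/reject machinery scales up to vectors of diffusion parameters, which is the setting of the abstract.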
Scalable inference for stochastic block models
Peng, Chengbin
2017-12-08
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
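A minimal sketch of community recovery under a planted stochastic block model. It uses a simple spectral split as a cheap stand-in for the paper's multi-stage maximum-likelihood and message-passing algorithms; the block sizes and edge probabilities below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Planted two-block SBM: 30 + 30 nodes, dense within blocks, sparse across
n, p_in, p_out = 60, 0.5, 0.05
labels = np.repeat([0, 1], n // 2)
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric adjacency, no self-loops

# Spectral recovery: the sign pattern of the eigenvector belonging to the
# second-largest eigenvalue of A separates the two planted blocks
vals, vecs = np.linalg.eigh(A)           # eigenvalues in ascending order
guess = (vecs[:, -2] > 0).astype(int)

# Accuracy up to a global label swap
accuracy = max(np.mean(guess == labels), np.mean(guess != labels))
```

For graphs at the scale quoted in the abstract (millions of edges), the dense eigendecomposition here would be replaced by the linear-time likelihood recovery and parallel message passing the paper proposes.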
Probabilistic learning and inference in schizophrenia
Averbeck, Bruno B.; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S.
2010-01-01
Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behavior often called jumping to conclusions. The underlying basis for this behaviour remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping to conclusions behaviour, and a stochastic sequence learning task, with a similar decision theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses, while receiving noisy feedback on their choices. We fit a Bayesian decision making model to the sequence task and compared model parameters to the choice behavior in the beads task in both patients and healthy subjects. We found that patients did show a jumping to conclusions style; and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the likelihood of patients selecting early because they have a low threshold for making decisions, and that they make choices on the basis of relatively little evidence. PMID:20810252
Aesthetic quality inference for online fashion shopping
Chen, Ming; Allebach, Jan
2014-03-01
On-line fashion communities in which participants post photos of personal fashion items for viewing and possible purchase by others are becoming increasingly popular. Generally, these photos are taken by individuals who have no training in photography with low-cost mobile phone cameras. It is desired that photos of the products have high aesthetic quality to improve the users' online shopping experience. In this work, we design features for aesthetic quality inference in the context of online fashion shopping. Psychophysical experiments are conducted to construct a database of the photos' aesthetic evaluation, specifically for photos from an online fashion shopping website. We then extract both generic low-level features and high-level image attributes to represent the aesthetic quality. Using a support vector machine framework, we train a predictor of the aesthetic quality rating based on the feature vector. Experimental results validate the efficacy of our approach. Metadata such as the product type are also used to further improve the result.
Information-Theoretic Inference of Common Ancestors
Directory of Open Access Journals (Sweden)
Bastian Steudel
2015-04-01
A directed acyclic graph (DAG) partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version.
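The intuition behind the result can be illustrated numerically: when two observed variables share a hidden common ancestor but no direct edge, their mutual information is strictly positive, and stronger dependence points to a more informative ancestor. The binary generative model and noise level below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden common ancestor Z; X and Y are noisy copies with no edge X-Y
n, eps = 100_000, 0.1
Z = rng.integers(0, 2, n)
X = Z ^ (rng.uniform(size=n) < eps)   # flip each bit with prob. eps
Y = Z ^ (rng.uniform(size=n) < eps)

def mutual_information(a, b):
    """Plug-in estimate of I(a;b) in bits for binary sequences."""
    joint = np.zeros((2, 2))
    for i in range(2):
        for j in range(2):
            joint[i, j] = np.mean((a == i) & (b == j))
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / np.outer(pa, pb)[mask]))

mi = mutual_information(X, Y)  # positive: the dependence must be routed via Z
```

With independent flip probability 0.1, the analytic value is roughly 0.32 bits; the inequality in the paper turns exactly this kind of observed dependence into a statement about unobserved common ancestors.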
Probabilistic learning and inference in schizophrenia.
Averbeck, Bruno B; Evans, Simon; Chouhan, Viraj; Bristow, Eleanor; Shergill, Sukhwinder S
2011-04-01
Patients with schizophrenia make decisions on the basis of less evidence when required to collect information to make an inference, a behavior often called jumping to conclusions. The underlying basis for this behavior remains controversial. We examined the cognitive processes underpinning this finding by testing subjects on the beads task, which has been used previously to elicit jumping to conclusions behavior, and a stochastic sequence learning task, with a similar decision theoretic structure. During the sequence learning task, subjects had to learn a sequence of button presses, while receiving noisy feedback on their choices. We fit a Bayesian decision making model to the sequence task and compared model parameters to the choice behavior in the beads task in both patients and healthy subjects. We found that patients did show a jumping to conclusions style; and those who picked early in the beads task tended to learn less from positive feedback in the sequence task. This favours the likelihood of patients selecting early because they have a low threshold for making decisions, and that they make choices on the basis of relatively little evidence. Published by Elsevier B.V.
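An ideal-observer sketch of the beads task described above: Bayesian posterior updating over two jars, with the number of draws before a decision controlled by a threshold, so that a lower threshold reproduces the "jumping to conclusions" style. The jar proportion and threshold values are illustrative assumptions, not the study's fitted parameters:

```python
def posterior_jar_a(draws, p=0.85):
    """Posterior that beads come from jar A (which holds fraction p of
    colour 1), given draws coded 1 (A's majority colour) or 0, with
    equal prior probability on the two jars."""
    n1 = sum(draws)
    like_a = p**n1 * (1 - p)**(len(draws) - n1)
    like_b = (1 - p)**n1 * p**(len(draws) - n1)
    return like_a / (like_a + like_b)

def draws_to_decision(draws, threshold=0.95):
    """Number of beads viewed before either jar's posterior crosses the
    decision threshold; a low threshold means deciding on less evidence."""
    for k in range(1, len(draws) + 1):
        q = posterior_jar_a(draws[:k])
        if q >= threshold or q <= 1 - threshold:
            return k
    return len(draws)

cautious = draws_to_decision([1, 1, 1, 1], threshold=0.95)  # waits longer
hasty = draws_to_decision([1, 1, 1, 1], threshold=0.75)     # decides at once
```

In this framing, the patients' early picks correspond to a lowered threshold rather than a different updating rule, which is the interpretation the abstract favours.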
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
Inferring gene networks from discrete expression data
Zhang, L.
2013-07-18
The modeling of gene networks from transcriptional expression data is an important tool in biomedical research to reveal signaling pathways and to identify treatment targets. Current gene network modeling is primarily based on the use of Gaussian graphical models applied to continuous data, which give a closed-form marginal likelihood. In this paper, we extend network modeling to discrete data, specifically data from serial analysis of gene expression, and RNA-sequencing experiments, both of which generate counts of mRNA transcripts in cell samples. We propose a generalized linear model to fit the discrete gene expression data and assume that the log ratios of the mean expression levels follow a Gaussian distribution. We restrict the gene network structures to decomposable graphs and derive the graphs by selecting the covariance matrix of the Gaussian distribution with the hyper-inverse Wishart priors. Furthermore, we incorporate prior network models based on gene ontology information, which avails existing biological information on the genes of interest. We conduct simulation studies to examine the performance of our discrete graphical model and apply the method to two real datasets for gene network inference. © The Author 2013. Published by Oxford University Press. All rights reserved.
Bayesian Inference of a Multivariate Regression Model
Directory of Open Access Journals (Sweden)
Marick S. Sinay
2014-01-01
We explore Bayesian inference of a multivariate linear regression model with use of a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior, multivariate normal distribution for the regression coefficients and inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation amongst the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.
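The matrix-logarithm parameterization described above has a useful property that can be sketched directly: a Gaussian draw on the unique elements of log Σ is unconstrained, yet always maps back to a valid covariance matrix. This numpy-only sketch shows only that mapping, not the paper's Metropolis-Hastings-within-Gibbs sampler:

```python
import numpy as np

def sym_expm(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(vals)) @ vecs.T

def cov_from_log(theta, d):
    """Map the d*(d+1)/2 unique elements of log(Sigma) to a covariance.

    Any real-valued theta yields a symmetric positive definite matrix,
    so a multivariate normal prior on theta never leaves the space of
    legitimate covariance matrices."""
    M = np.zeros((d, d))
    iu = np.triu_indices(d)
    M[iu] = theta
    M = M + M.T - np.diag(np.diag(M))   # symmetrize the matrix logarithm
    return sym_expm(M)

rng = np.random.default_rng(0)
d = 3
theta = rng.normal(size=d * (d + 1) // 2)  # one draw from a normal prior
Sigma = cov_from_log(theta, d)
eigs = np.linalg.eigvalsh(Sigma)           # all positive by construction
```

This is why the prior is "flexible": location and spread on theta translate into beliefs about both variances and correlations without any positivity constraint to enforce.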
Logical inference techniques for loop parallelization
Oancea, Cosmin E.; Rauchwerger, Lawrence
2012-01-01
This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
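The independence condition S = Ø can be illustrated with a toy dynamic check over per-iteration read/write index sets. This sketches only the run-time fallback of the hybrid analysis, with invented example loops; the USR language and the predicate factorization F(S) evaluated statically first are not reproduced:

```python
def independent(writes, reads):
    """Dynamic check of the independence condition S = Ø for a loop:
    iteration i must not write any array index that a different
    iteration reads or writes."""
    for i, w in enumerate(writes):
        for j, (w2, r2) in enumerate(zip(writes, reads)):
            if i != j and (w & w2 or w & r2):
                return False  # cross-iteration dependence found
    return True

# a[2*i] = a[2*i] + 1 : each iteration touches a private index -> parallel
ok = independent(writes=[{0}, {2}, {4}], reads=[{0}, {2}, {4}])

# a[i] = a[i+1] : iteration i reads what iteration i+1 writes -> sequential
bad = independent(writes=[{0}, {1}, {2}], reads=[{1}, {2}, {3}])
```

The paper's contribution is to avoid paying this quadratic run-time cost whenever possible, by factorizing F(S) into cheaper sufficient conditions evaluated in order of estimated complexity.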
BAYESIAN INFERENCE OF CMB GRAVITATIONAL LENSING
Energy Technology Data Exchange (ETDEWEB)
Anderes, Ethan [Department of Statistics, University of California, Davis, CA 95616 (United States); Wandelt, Benjamin D.; Lavaux, Guilhem [Sorbonne Universités, UPMC Univ Paris 06 and CNRS, UMR7095, Institut d’Astrophysique de Paris, F-75014, Paris (France)
2015-08-01
The Planck satellite, along with several ground-based telescopes, has mapped the cosmic microwave background (CMB) at sufficient resolution and signal-to-noise so as to allow a detection of the subtle distortions due to the gravitational influence of the intervening matter distribution. A natural modeling approach is to write a Bayesian hierarchical model for the lensed CMB in terms of the unlensed CMB and the lensing potential. So far there has been no feasible algorithm for inferring the posterior distribution of the lensing potential from the lensed CMB map. We propose a solution that allows efficient Markov Chain Monte Carlo sampling from the joint posterior of the lensing potential and the unlensed CMB map using the Hamiltonian Monte Carlo technique. The main conceptual step in the solution is a re-parameterization of CMB lensing in terms of the lensed CMB and the “inverse lensing” potential. We demonstrate a fast implementation on simulated data, including noise and a sky cut, that uses a further acceleration based on a very mild approximation of the inverse lensing potential. We find that the resulting Markov Chain has short correlation lengths and excellent convergence properties, making it promising for applications to high-resolution CMB data sets in the future.
Virtual reality and consciousness inference in dreaming.
Hobson, J Allan; Hong, Charles C-H; Friston, Karl J
2014-01-01
This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that - through experience-dependent plasticity - becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep - and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain's generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis - evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research.
Inferring human mobility using communication patterns
Palchykov, Vasyl; Mitrović, Marija; Jo, Hang-Hyun; Saramäki, Jari; Pan, Raj Kumar
2014-08-01
Understanding the patterns of mobility of individuals is crucial for a number of reasons, from city planning to disaster management. There are two common ways of quantifying the amount of travel between locations: by direct observations that often involve privacy issues, e.g., tracking mobile phone locations, or by estimations from models. Typically, such models build on accurate knowledge of the population size at each location. However, when this information is not readily available, their applicability is rather limited. As mobile phones are ubiquitous, our aim is to investigate if mobility patterns can be inferred from aggregated mobile phone call data alone. Using data released by Orange for Ivory Coast, we show that human mobility is well predicted by a simple model based on the frequency of mobile phone calls between two locations and their geographical distance. We argue that the strength of the model comes from directly incorporating the social dimension of mobility. Furthermore, as only aggregated call data is required, the model helps to avoid potential privacy problems.
Inference-based procedural modeling of solids
Biggers, Keith
2011-11-01
As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.
Multiple sequence alignment accuracy and phylogenetic inference.
Ogden, T Heath; Rosenberg, Michael S
2006-04-01
Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.
Phylogenetic inference with weighted codon evolutionary distances.
Criscuolo, Alexis; Michel, Christian J
2009-04-01
We develop a new approach to estimate a matrix of pairwise evolutionary distances from a codon-based alignment based on a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. Then these three distance matrices are weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a unique distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices that have a significantly better treelikeness compared to those obtained by standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to induce a fast evolutionary rate. Simulation results show that fast distance-based tree reconstruction algorithms on distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not better, than those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed and shows that our codon evolutionary distances allow building a phylogenetic tree which is similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and also significantly improved compared to standard nucleotide evolutionary distance estimates.
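The weighting step described above can be sketched as follows: three per-position distance matrices are combined with weights reflecting each codon position's estimated evolutionary rate. The toy matrices and weights are invented, and the paper's actual estimator of per-position rates is not reproduced:

```python
import numpy as np

def weighted_codon_distance(D1, D2, D3, weights):
    """Weighted average of the three codon-position distance matrices
    into a single evolutionary distance matrix."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the position weights
    return w[0] * D1 + w[1] * D2 + w[2] * D3

# Toy 3-taxon matrices; the third position evolves fastest (noisiest)
D1 = np.array([[0.0, 0.10, 0.12], [0.10, 0.0, 0.08], [0.12, 0.08, 0.0]])
D2 = np.array([[0.0, 0.09, 0.11], [0.09, 0.0, 0.07], [0.11, 0.07, 0.0]])
D3 = np.array([[0.0, 0.40, 0.45], [0.40, 0.0, 0.38], [0.45, 0.38, 0.0]])

# Down-weight the saturated third position, as in the paper's
# alternative weighting that suppresses its noise contribution
D = weighted_codon_distance(D1, D2, D3, weights=[1.0, 1.0, 0.2])
```

The combined matrix stays symmetric with a zero diagonal, so it can be fed directly to fast distance-based tree reconstruction algorithms such as neighbor joining.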
Primate diversification inferred from phylogenies and fossils.
Herrera, James P
2017-12-01
Biodiversity arises from the balance between speciation and extinction. Fossils record the origins and disappearance of organisms, and the branching patterns of molecular phylogenies allow estimation of speciation and extinction rates, but the patterns of diversification are frequently incongruent between these two data sources. I tested two hypotheses about the diversification of primates based on ∼600 fossil species and 90% complete phylogenies of living species: (1) diversification rates increased through time; (2) a significant extinction event occurred in the Oligocene. Consistent with the first hypothesis, analyses of phylogenies supported increasing speciation rates and negligible extinction rates. In contrast, fossils showed that while speciation rates increased, speciation and extinction rates tended to be nearly equal, resulting in zero net diversification. Partially supporting the second hypothesis, the fossil data recorded a clear pattern of diversity decline in the Oligocene, although diversification rates were near zero. The phylogeny supported increased extinction ∼34 Ma, but also elevated extinction ∼10 Ma, coinciding with diversity declines in some fossil clades. The results demonstrated that estimates of speciation and extinction ignoring fossils are insufficient to infer diversification and information on extinct lineages should be incorporated into phylogenetic analyses. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Logical inference techniques for loop parallelization
Oancea, Cosmin E.
2012-01-01
This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially-high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from PERFECTCLUB and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
Inferring Molecular Processes Heterogeneity from Transcriptional Data.
Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna
2017-01-01
RNA microarrays and RNA-seq are nowadays standard technologies to study the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information referring to up- and downregulation of genes is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviour in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability in specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate most important networks from transcriptomic data and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs.
Quantum-Like Representation of Non-Bayesian Inference
Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.
2013-01-01
This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. Several experimental studies have produced statistical data that cannot be described by classical probability theory, and the process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented classical Bayesian inference in a natural way within the framework of quantum mechanics. Using this representation, in this paper we discuss the non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of usual Bayesian inference.
Statistical causal inferences and their applications in public health research
Wu, Pan; Chen, Ding-Geng
2016-01-01
This book compiles and presents new developments in statistical causal inference. The accompanying data and computer programs are publicly available so readers may replicate the model development and data analysis presented in each chapter. In this way, methodology is taught so that readers may implement it directly. The book brings together experts engaged in causal inference research to present and discuss recent issues in causal inference methodological development. This is also a timely look at causal inference applied to scenarios that range from clinical trials to mediation and public health research more broadly. In an academic setting, this book will serve as a reference and guide to a course in causal inference at the graduate level (Master's or Doctorate). It is particularly relevant for students pursuing degrees in Statistics, Biostatistics and Computational Biology. Researchers and data analysts in public health and biomedical research will also find this book to be an important reference.
Human Inferences about Sequences: A Minimal Transition Probability Model.
Directory of Open Access Journals (Sweden)
Florent Meyniel
2016-12-01
Full Text Available The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations include explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
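A minimal sketch of such a near-optimal inference device, under the assumption that the model's single free parameter is an exponential forgetting rate `omega` on transition counts (the paper's exact update rule may differ):

```python
import math

def infer_transitions(seq, omega=0.1, prior=1.0):
    """Leaky counting of transitions a->b: after every observation the
    counts decay by (1 - omega), so the estimated transition matrix can
    track time-varying statistics. Returns the per-observation surprise,
    -log2 of the predictive probability of the observed transition."""
    symbols = sorted(set(seq))
    n = {a: {b: prior for b in symbols} for a in symbols}
    surprise = []
    for prev, cur in zip(seq, seq[1:]):
        p = n[prev][cur] / sum(n[prev].values())   # predictive probability
        surprise.append(-math.log2(p))
        for a in symbols:                          # forgetting (decay)
            for b in symbols:
                n[a][b] *= (1.0 - omega)
        n[prev][cur] += 1.0                        # count the transition
    return surprise

s = infer_transitions("AB" * 20)
# a fully alternating sequence becomes predictable, so surprise decreases
```

For a perfectly alternating sequence the predicted transition probability rises toward 1 and surprise falls; an unexpected repetition would spike it, which is the asymmetry between repetitions and alternations the model exploits.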
Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset
2017-01-06
In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow on four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engine, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there are a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims of the research manuscript. Even though protein
Explosion source strong ground motions in the Mississippi embayment
Langston, C.A.; Bodin, P.; Powell, C.; Withers, M.; Horton, S.; Mooney, W.
2006-01-01
Two strong-motion arrays were deployed for the October 2002 Embayment Seismic Excitation Experiment to study the spatial variation of strong ground motions in the deep, unconsolidated sediments of the Mississippi embayment because there are no comparable strong-motion data from natural earthquakes in the area. Each linear array consisted of eight three-component K2 accelerographs spaced 15 m apart situated 1.2 and 2.5 km from 2268-kg and 1134-kg borehole explosion sources, respectively. The array data show distinct body-wave and surface-wave arrivals that propagate within the thick, unconsolidated sedimentary column, the high-velocity basement rocks, and small-scale structure near the surface. Time-domain coherence of body-wave and surface-wave arrivals is computed for acceleration, velocity, and displacement time windows. Coherence is high for relatively low-frequency vertical-component Rayleigh waves and high-frequency P waves propagating across the array. Prominent high-frequency PS conversions seen on radial components, a proxy for the direct S wave from earthquake sources, lose coherence quickly over the 105-m length of the array. Transverse component signals are least coherent for any ground motion and appear to be highly scattered. Horizontal phase velocity is computed by using the ratio of particle velocity to estimates of the strain based on a plane-wave-propagation model. The resulting time-dependent phase-velocity map is a useful way to infer the propagation mechanisms of individual seismic phases and time windows of three-component waveforms. Displacement gradient analysis is a complementary technique for processing general spatial-array data to obtain horizontal slowness information.
Strong Stationary Duality for Diffusion Processes
Fill, James Allen; Lyzinski, Vince
2014-01-01
We develop the theory of strong stationary duality for diffusion processes on compact intervals. We analytically derive the generator and boundary behavior of the dual process and recover a central tenet of the classical Markov chain theory in the diffusion setting by linking the separation distance in the primal diffusion to the absorption time in the dual diffusion. We also exhibit our strong stationary dual as the natural limiting process of the strong stationary dual sequence of a well ch...
Strongly correlating liquids and their isomorphs
Pedersen, Ulf R.; Gnan, Nicoletta; Bailey, Nicholas P.; Schröder, Thomas B.; Dyre, Jeppe C.
2010-01-01
This paper summarizes the properties of strongly correlating liquids, i.e., liquids with strong correlations between virial and potential energy equilibrium fluctuations at constant volume. We proceed to focus on the experimental predictions for strongly correlating glass-forming liquids. These predictions include i) density scaling, ii) isochronal superposition, iii) that there is a single function from which all frequency-dependent viscoelastic response functions may be calculated, iv) that...
Atom collisions in a strong electromagnetic field
International Nuclear Information System (INIS)
Smirnov, V.S.; Chaplik, A.V.
1976-01-01
It is shown that the long-range part of the interatomic interaction is considerably altered in a strong electromagnetic field. Instead of the van der Waals law, the potential asymptote can best be described by a dipole-dipole R^-3 law. Impact broadening and the line shift in a strong nonresonant field are calculated. The possibility of bound states of two atoms being formed in a strong light field is discussed.
Inferring climate variability from skewed proxy records
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of drawing conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted, and
Vertically Integrated Seismological Analysis II : Inference
Arora, N. S.; Russell, S.; Sudderth, E.
2009-12-01
Methods for automatically associating detected waveform features with hypothesized seismic events, and localizing those events, are a critical component of efforts to verify the Comprehensive Test Ban Treaty (CTBT). As outlined in our companion abstract, we have developed a hierarchical model which views detection, association, and localization as an integrated probabilistic inference problem. In this abstract, we provide more details on the Markov chain Monte Carlo (MCMC) methods used to solve this inference task. MCMC generates samples from a posterior distribution π(x) over possible worlds x by defining a Markov chain whose states are the worlds x, and whose stationary distribution is π(x). In the Metropolis-Hastings (M-H) method, transitions in the Markov chain are constructed in two steps. First, given the current state x, a candidate next state x′ is generated from a proposal distribution q(x′ | x), which may be (more or less) arbitrary. Second, the transition to x′ is not automatic, but occurs with an acceptance probability α(x′ | x) = min(1, π(x′) q(x | x′) / (π(x) q(x′ | x))). The seismic event model outlined in our companion abstract is quite similar to those used in multitarget tracking, for which MCMC has proved very effective. In this model, each world x is defined by a collection of events, a list of properties characterizing those events (times, locations, magnitudes, and types), and the association of each event to a set of observed detections. The target distribution π(x) = P(x | y), the posterior distribution over worlds x given the observed waveform data y at all stations. Proposal distributions then implement several types of moves between worlds. For example, birth moves create new events; death moves delete existing events; split moves partition the detections for an event into two new events; merge moves combine event pairs; swap moves modify the properties and associations for pairs of events. Importantly, the rules for
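The M-H acceptance rule is compact enough to sketch generically (in log space for numerical stability; a scalar toy target stands in for the event-hypothesis worlds, since birth/death/split/merge moves require the full seismic model):

```python
import math
import random

def metropolis_hastings(log_pi, propose, log_q, x0, n_steps, seed=0):
    """Generic Metropolis-Hastings: propose x' ~ q(x' | x), accept with
    probability min(1, pi(x') q(x | x') / (pi(x) q(x' | x)))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        xp = propose(rng, x)
        log_alpha = (log_pi(xp) + log_q(x, xp)) - (log_pi(x) + log_q(xp, x))
        if math.log(rng.random()) < min(0.0, log_alpha):
            x = xp                                  # accept the move
        samples.append(x)
    return samples

# Toy target: standard normal, with a symmetric random-walk proposal.
log_pi = lambda x: -0.5 * x * x                     # log density up to a constant
propose = lambda rng, x: x + rng.gauss(0.0, 1.0)
log_q = lambda a, b: 0.0                            # symmetric: q terms cancel
samples = metropolis_hastings(log_pi, propose, log_q, 0.0, 5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With a symmetric proposal the q ratio drops out and the rule reduces to plain Metropolis; the chain's sample mean and variance approach 0 and 1 for this target.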
International Nuclear Information System (INIS)
Dashti, Imad
2003-01-01
This paper uses a Bayesian stochastic frontier model to obtain confidence intervals on firm efficiency measures of electric utilities rather than the point estimates reported in most previous studies. Results reveal that the stochastic frontier model yields imprecise measures of firm efficiency. However, the application produces much more precise inference on pairwise efficiency comparisons of firms due to a sometimes strong positive covariance of efficiency measures across firms. In addition, we examine the sensitivity to functional form by repeating the analysis for Cobb-Douglas, translog and Fourier frontiers, with and without imposing monotonicity and concavity.
Inferring tie strength from online directed behavior.
Directory of Open Access Journals (Sweden)
Jason J Jones
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user's friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions).
Network inference via adaptive optimal design
Directory of Open Access Journals (Sweden)
Stigter Johannes D
2012-09-01
Background: Current research in network reverse engineering for genetic or metabolic networks very often does not include a proper experimental and/or input design. In this paper we address this issue in more detail and suggest a method that includes an iterative design of experiments based on the most recent data that become available. The presented approach allows a reliable reconstruction of the network and addresses an important issue, i.e., the analysis and the propagation of uncertainties as they exist in both the data and in our own knowledge. These two types of uncertainties have their immediate ramifications for the uncertainties in the parameter estimates and, hence, are taken into account from the very beginning of our experimental design. Findings: The method is demonstrated for two small networks that include a genetic network for mRNA synthesis and degradation and an oscillatory network describing a molecular network underlying adenosine 3’-5’ cyclic monophosphate (cAMP), as observed in populations of Dictyostelium cells. In both cases a substantial reduction in parameter uncertainty was observed. Extension to larger scale networks is possible but needs a more rigorous parameter estimation algorithm that includes sparsity as a constraint in the optimization procedure. Conclusion: We conclude that a careful experiment design very often (but not always) pays off in terms of reliability in the inferred network topology. For large scale networks a better parameter estimation algorithm is required that includes sparsity as an additional constraint. These algorithms are available in the literature and can also be used in an adaptive optimal design setting as demonstrated in this paper.
On the Hardness of Topology Inference
Acharya, H. B.; Gouda, M. G.
Many systems require information about the topology of networks on the Internet, for purposes like management, efficiency, testing of new protocols and so on. However, ISPs usually do not share the actual topology maps with outsiders; thus, in order to obtain the topology of a network on the Internet, a system must reconstruct it from publicly observable data. The standard method employs traceroute to obtain paths between nodes; next, a topology is generated such that the observed paths occur in the graph. However, traceroute has the problem that some routers refuse to reveal their addresses, and appear as anonymous nodes in traces. Previous research on the problem of topology inference with anonymous nodes has demonstrated that it is at best NP-complete. In this paper, we improve upon this result. In our previous research, we showed that in the special case where nodes may be anonymous in some traces but not in all traces (so all node identifiers are known), there exist trace sets that are generable from multiple topologies. This paper extends our theory of network tracing to the general case (with strictly anonymous nodes), and shows that the problem of computing the network that generated a trace set, given the trace set, has no general solution. The weak version of the problem, which allows an algorithm to output a "small" set of networks, any one of which is the correct one, is also not solvable. Any algorithm guaranteed to output the correct topology outputs at least an exponential number of networks. Our results are surprisingly robust: they hold even when the network is known to have exactly two anonymous nodes, and every node as well as every edge in the network is guaranteed to occur in some trace. On the basis of this result, we suggest that exact reconstruction of network topology requires more powerful tools than traceroute.
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample into V equal-size sub-samples, and use this partitioning to define V splits in an estimation sample (one of the V subsamples) and corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V-sample specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
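The splitting scheme can be sketched as follows (illustrative helper names; the paper's estimators and central limit theorem are not reproduced): each parameter-generating sample decides which parameter to target, the corresponding estimation sample estimates it, and the V results are averaged.

```python
def data_adaptive_estimate(data, V, define_param, estimate):
    """Sample-split data adaptive target parameter: average, over V splits,
    of the parameter defined on the complement (parameter-generating
    sample) and estimated on the held-out fold (estimation sample)."""
    folds = [data[v::V] for v in range(V)]
    results = []
    for v in range(V):
        generating = [row for u in range(V) if u != v for row in folds[u]]
        target = define_param(generating)     # data-adaptive choice of target
        results.append(estimate(folds[v], target))
    return sum(results) / V

# Example: adaptively pick the coordinate with the larger mean, then
# estimate that coordinate's mean on the held-out estimation samples.
data = [(float(i), float(i + 10)) for i in range(10)]
pick = lambda s: max((0, 1), key=lambda j: sum(r[j] for r in s) / len(s))
mean_of = lambda s, j: sum(r[j] for r in s) / len(s)
theta = data_adaptive_estimate(data, 5, pick, mean_of)  # -> 14.5
```

Because the target is chosen on data disjoint from the data used to estimate it, the adaptive choice does not bias the estimate, which is the point of the split.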
Inferring modules from human protein interactome classes
Directory of Open Access Journals (Sweden)
Chaurasia Gautam
2010-07-01
Background: The integration of protein-protein interaction networks derived from high-throughput screening approaches and complementary sources is a key topic in systems biology. Although integration of protein interaction data is conventionally performed, the effects of this procedure on the results of network analyses have not been examined yet. In particular, in order to optimize the fusion of heterogeneous interaction datasets, it is crucial to consider not only their degree of coverage and accuracy, but also their mutual dependencies and additional salient features. Results: We examined this issue based on the analysis of modules detected by network clustering methods applied to both integrated and individual (disaggregated) data sources, which we call interactome classes. Due to class diversity, we deal with variable dependencies of data features arising from structural specificities and biases, but also from possible overlaps. Since highly connected regions of the human interactome may point to potential protein complexes, we have focused on the concept of modularity, and elucidated the detection power of module extraction algorithms by independent validations based on GO, MIPS and KEGG. From the combination of protein interactions with gene expressions, a confidence scoring scheme is proposed, followed by further GO-based classification into permanent and transient modules. Conclusions: Disaggregated interactomes are shown to be informative for inferring modularity, thus contributing to an effective integrative analysis. Validation of the extracted modules by multiple annotation allows for the assessment of confidence measures assigned to the modules in a protein pathway context. Notably, the proposed multilayer confidence scheme can be used for network calibration by enabling a transition from unweighted to weighted interactomes based on biological evidence.
Inference of Cancer-specific Gene Regulatory Networks Using Soft Computing Rules
Directory of Open Access Journals (Sweden)
Xiaosheng Wang
2010-03-01
Perturbations of gene regulatory networks are essentially responsible for oncogenesis. Therefore, inferring the gene regulatory networks is a key step to overcoming cancer. In this work, we propose a method for inferring directed gene regulatory networks based on soft computing rules, which can identify important cause-effect regulatory relations of gene expression. First, we identify important genes associated with a specific cancer (colon cancer) using a supervised learning approach. Next, we reconstruct the gene regulatory networks by inferring the regulatory relations among the identified genes, and their regulated relations by other genes within the genome. We obtain two meaningful findings. One is that upregulated genes are regulated by more genes than downregulated ones, while downregulated genes regulate more genes than upregulated ones. The other one is that tumor suppressors suppress tumor activators and activate other tumor suppressors strongly, while tumor activators activate other tumor activators and suppress tumor suppressors weakly, indicating the robustness of biological systems. These findings provide valuable insights into the pathogenesis of cancer.
García, Miguel A; Costea, Mihai; Kuzmina, Maria; Stefanović, Saša
2014-04-01
The parasitic genus Cuscuta, containing some 200 species circumscribed traditionally in three subgenera, is nearly cosmopolitan, occurring in a wide range of habitats and hosts. Previous molecular studies, on subgenera Grammica and Cuscuta, delimited major clades within these groups. However, the sequences used were unalignable among subgenera, preventing the phylogenetic comparison across the genus. We conducted a broad phylogenetic study using rbcL and nrLSU sequences covering the morphological, physiological, and geographical diversity of Cuscuta. We used parsimony methods to reconstruct ancestral states for taxonomically important characters. Biogeographical inferences were obtained using statistical and Bayesian approaches. Four well-supported major clades are resolved. Two of them correspond to subgenera Monogynella and Grammica. Subgenus Cuscuta is paraphyletic, with section Pachystigma sister to subgenus Grammica. Previously described cases of strongly supported discordance between plastid and nuclear phylogenies, interpreted as reticulation events, are confirmed here and three new cases are detected. Dehiscent fruits and globose stigmas are inferred as ancestral character states, whereas the ancestral style number is ambiguous. Biogeographical reconstructions suggest an Old World origin for the genus and subsequent spread to the Americas as a consequence of one long-distance dispersal. Hybridization may play an important yet underestimated role in the evolution of Cuscuta. Our results disagree with scenarios of evolution (polarity) previously proposed for several taxonomically important morphological characters, and with their usage and significance. While several cases of long-distance dispersal are inferred, vicariance or dispersal to adjacent areas emerges as the dominant biogeographical pattern.
Indexing the Environmental Quality Performance Based on A Fuzzy Inference Approach
Iswari, Lizda
2018-03-01
Environmental performance is strongly tied to the quality of human life. In Indonesia, this performance is quantified through the Environmental Quality Index (EQI), which consists of three indicators: a river quality index, an air quality index, and coverage of land cover. Currently, the data processing for this instrument is done by averaging and weighting each index to represent the EQI at the provincial level. However, we found EQI interpretations that may contain uncertainties and cover a range of circumstances that are less appropriately handled under a common statistical approach. In this research, we aim to manage the indicators of the EQI with a more intuitive computation technique and make some inferences related to the environmental performance of 33 provinces in Indonesia. Research was conducted in the three stages of a Mamdani Fuzzy Inference System (MAFIS): fuzzification, data inference, and defuzzification. Data input consists of 10 environmental parameters and the output is an index of Environmental Quality Performance (EQP). The approach was applied to the 2015 environmental condition data set and the results were quantified on a scale of 0 to 100: 10 provinces at good performance with an EQP above 80, dominated by provinces in the eastern part of Indonesia; 22 provinces with an EQP between 50 and 80; and one province in Java Island with an EQP below 20. This research shows that environmental quality performance can be quantified without eliminating the natures of the data set and is simultaneously able to show environmental behavior along with its spatial pattern distribution.
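The three MAFIS stages (fuzzification, inference, defuzzification) can be sketched for a toy rule base (the membership functions, the two rules, and the reduction to three inputs are invented for illustration; the paper's system uses 10 parameters and its own rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani_eqp(river, air, land):
    """Toy Mamdani FIS on three 0-100 indicators: fuzzify inputs, fire
    rules by min, aggregate consequents by max, defuzzify by centroid."""
    poor = lambda x: tri(x, -1, 0, 60)     # fuzzification: 'poor' grade
    good = lambda x: tri(x, 40, 100, 101)  # fuzzification: 'good' grade
    # Rule 1: IF all indicators good THEN EQP high (min = AND)
    fire_high = min(good(river), good(air), good(land))
    # Rule 2: IF any indicator poor THEN EQP low (max = OR)
    fire_low = max(poor(river), poor(air), poor(land))
    out_low = lambda y: tri(y, -1, 0, 60)     # output set: EQP low
    out_high = lambda y: tri(y, 40, 100, 101)  # output set: EQP high
    # Aggregate clipped consequents and take the centroid over 0..100.
    num = den = 0.0
    for y in range(101):
        mu = max(min(fire_high, out_high(y)), min(fire_low, out_low(y)))
        num += y * mu
        den += mu
    return num / den if den else 50.0
```

A province with uniformly good indicators defuzzifies well above 50, a uniformly poor one well below, without any averaging or weighting step.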
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to the scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the demand for a sound framework for dealing with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis on morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Horn, Sebastian S; Ruggeri, Azzurra; Pachur, Thorsten
2016-09-01
Judgments about objects in the world are often based on probabilistic information (or cues). A frugal judgment strategy that utilizes memory (i.e., the ability to discriminate between known and unknown objects) as a cue for inference is the recognition heuristic (RH). The usefulness of the RH depends on the structure of the environment, particularly the predictive power (validity) of recognition. Little is known about developmental differences in use of the RH. In this study, the authors examined (a) to what extent children and adolescents recruit the RH when making judgments, and (b) around what age adaptive use of the RH emerges. Primary schoolchildren (M = 9 years), younger adolescents (M = 12 years), and older adolescents (M = 17 years) made comparative judgments in task environments with either high or low recognition validity. Reliance on the RH was measured with a hierarchical multinomial model. Results indicated that primary schoolchildren already made systematic use of the RH. However, only older adolescents adaptively adjusted their strategy use between environments and were better able to discriminate between situations in which the RH led to correct versus incorrect inferences. These findings suggest that the use of simple heuristics does not progress unidirectionally across development but strongly depends on the task environment, in line with the perspective of ecological rationality. Moreover, adaptive heuristic inference seems to require experience and a developed base of domain knowledge. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
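The RH decision rule itself is only a few lines (a schematic sketch; studies such as this one model RH use probabilistically, e.g., with the hierarchical multinomial model mentioned above, rather than as a deterministic rule):

```python
def recognition_heuristic(a, b, recognized, knowledge=None):
    """If exactly one of two objects is recognized, infer that the
    recognized one scores higher on the criterion. Otherwise fall back:
    consult further knowledge for two recognized objects, or guess."""
    ra, rb = a in recognized, b in recognized
    if ra != rb:
        return a if ra else b            # RH applies
    if ra and rb and knowledge is not None:
        return max(a, b, key=knowledge)  # knowledge-based fallback
    return None                          # guessing case

# Toy city-size comparison with a hypothetical recognition set.
recognized = {"Berlin", "Munich"}
print(recognition_heuristic("Berlin", "Chemnitz", recognized))  # prints Berlin
```

The heuristic is only adaptive when recognition validity is high, i.e., when recognition actually correlates with the criterion, which is exactly the environmental property the study manipulates.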
Inference of cancer-specific gene regulatory networks using soft computing rules.
Wang, Xiaosheng; Gotoh, Osamu
2010-03-24
Perturbations of gene regulatory networks are essentially responsible for oncogenesis. Therefore, inferring the gene regulatory networks is a key step to overcoming cancer. In this work, we propose a method for inferring directed gene regulatory networks based on soft computing rules, which can identify important cause-effect regulatory relations of gene expression. First, we identify important genes associated with a specific cancer (colon cancer) using a supervised learning approach. Next, we reconstruct the gene regulatory networks by inferring the regulatory relations among the identified genes, and their regulated relations by other genes within the genome. We obtain two meaningful findings. One is that upregulated genes are regulated by more genes than downregulated ones, while downregulated genes regulate more genes than upregulated ones. The other one is that tumor suppressors suppress tumor activators and activate other tumor suppressors strongly, while tumor activators activate other tumor activators and suppress tumor suppressors weakly, indicating the robustness of biological systems. These findings provide valuable insights into the pathogenesis of cancer.
Ensemble stacking mitigates biases in inference of synaptic connectivity.
Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N
2018-01-01
A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
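The core of the ensemble idea, combining several inference measures as a learned linear combination, can be sketched as follows. This is an illustrative stand-in, not the paper's exact stacking procedure: each measure is z-scored so the weights are comparable, and the weights are fit by ridge-regularized least squares against ground-truth connectivity labels from simulation.

```python
import numpy as np

def zscore(x):
    # Standardize each inference measure so weights are comparable.
    return (x - x.mean()) / x.std()

def stack_scores(score_matrix, labels, ridge=1e-3):
    """Learn linear stacking weights for several connectivity-inference
    measures (columns of score_matrix, one row per candidate edge)
    against ground-truth labels, via ridge-regularized least squares.
    A sketch of the ensemble idea, not the paper's exact procedure."""
    X = np.column_stack([zscore(c) for c in score_matrix.T])
    X = np.column_stack([np.ones(len(X)), X])  # bias term
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ labels)
    return w, X @ w
```

Weights learned on one simulated network can then score candidate edges in another, which is the sense in which the weightings generalize across datasets.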
Causal inference in biology networks with integrated belief propagation.
Chang, Rui; Karr, Jonathan R; Schadt, Eric E
2015-01-01
Inferring causal relationships among molecular and higher order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks in which the forms of the possible functional interactions are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network with a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot.
Accounting for the Effect of Earth's Rotation in Magnetotelluric Inference
Riegert, D. L.; Thomson, D. J.
2017-12-01
The study of geomagnetism has been documented as far back as 1722, when the watchmaker G. Graham constructed a more sensitive compass and showed that geomagnetic direction varies in an irregular daily pattern. Increased interest in geomagnetism began at the end of the 19th century (Lamb, Schuster, Chapman, and Price). The Magnetotelluric Method was first introduced in the 1950's (Cagniard and Tikhonov) and, at its core, is simply a regression problem. The result of this method is a transfer function estimate which describes the earth's response to magnetic field variations. This estimate can then be used to infer the earth's subsurface structure, useful for applications such as natural resource exploration. The statistical problem of estimating a transfer function between geomagnetic and induced current measurements has evolved since the 1950's due to a variety of problems: non-stationarity, outliers, and violation of Gaussian assumptions. To address some of these issues, robust regression methods (Chave and Thomson, 2004) and the remote reference method (Gamble, 1979) have been proposed and used. The current method seems to provide reasonable estimates, but still requires a large amount of data. Using the multitaper method of spectral analysis (Thomson, 1982), taking long (greater than 4 months) blocks of geomagnetic data, and concentrating on frequencies below 1000 microhertz to avoid ultraviolet effects, one finds that: 1) the cross-spectra are dominated by many offset frequencies, including plus and minus 1 and 2 cycles per day; 2) the coherence at these offset frequencies is often stronger than at zero offset; 3) there are strong couplings from the "quasi two-day" cycle; 4) frequencies are usually not symmetric; 5) the spectra are dominated by the normal modes of the Sun. This talk will discuss the method of incorporating these observations into the transfer function estimation model, some of the difficulties that arose, their
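The regression at the core of the magnetotelluric method estimates a frequency-domain transfer function between the magnetic and induced electric field recordings. A minimal single-channel sketch is below, using plain Welch-style segment averaging of cross- and auto-spectra; the talk's multitaper, robust-regression, and remote-reference refinements are deliberately not reproduced.

```python
import numpy as np

def transfer_function_estimate(h_field, e_field, fs=1.0, nseg=8):
    """Least-squares transfer function at each frequency:
    Z(f) = S_he(f) / S_hh(f), with cross- and auto-spectra averaged
    over data segments (a plain Welch-style sketch of the idea)."""
    n = len(h_field) // nseg
    S_hh = np.zeros(n // 2 + 1)
    S_he = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        # Window each segment identically before transforming.
        h = np.fft.rfft(h_field[k * n:(k + 1) * n] * np.hanning(n))
        e = np.fft.rfft(e_field[k * n:(k + 1) * n] * np.hanning(n))
        S_hh += (h * np.conj(h)).real
        S_he += np.conj(h) * e
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, S_he / S_hh
```

If the electric field were exactly twice the magnetic input, the estimate recovers Z(f) = 2 at every frequency; real data require the robust and multitaper machinery the talk describes.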
On the Strong Direct Summand Conjecture
McCullough, Jason
2009-01-01
In this thesis, our aim is to study the Vanishing of Maps of Tor Conjecture of Hochster and Huneke. We mainly focus on an equivalent characterization called the Strong Direct Summand Conjecture, due to N. Ranganathan. Our results are separated into three chapters. In Chapter 3, we prove special cases of the Strong Direct Summand Conjecture in…
Physics challenges in the strong interactions
International Nuclear Information System (INIS)
Ellis, S.D.
1992-01-01
The study of strong interactions is now a mature field in which scientists know that the correct underlying theory is QCD. Here, an overview of the challenges to be faced in the area of the strong interactions during the 1990's is presented. As an illustrative example, special attention is given to the analysis of jets as studied at hadron colliders.
Theoretical studies of strongly correlated fermions
Energy Technology Data Exchange (ETDEWEB)
Logan, D [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)
1997-04-01
Strongly correlated fermions are investigated. An understanding of strongly correlated fermions underpins a diverse range of phenomena such as metal-insulator transitions, high-temperature superconductivity, magnetic impurity problems and the properties of heavy-fermion systems, in all of which local moments play an important role. (author).
The strong reflecting property and Harrington's Principle
Cheng, Yong
2015-01-01
In this paper we characterize the strong reflecting property for $L$-cardinals for all $\omega_n$, characterize Harrington's Principle $HP(L)$ and its generalization, and discuss the relationship between the strong reflecting property for $L$-cardinals and Harrington's Principle $HP(L)$.
Strong Nash Equilibria and the Potential Maximizer
van Megen, F.J.C.; Facchini, G.; Borm, P.E.M.; Tijs, S.H.
1996-01-01
A class of noncooperative games characterized by a 'congestion effect' is studied, in which there exists a strong Nash equilibrium, and the set of Nash equilibria, the set of strong Nash equilibria and the set of strategy profiles maximizing the potential function coincide. The structure of the class
Large N baryons, strong coupling theory, quarks
International Nuclear Information System (INIS)
Sakita, B.
1984-01-01
It is shown that in QCD the large N limit is the same as the static strong coupling limit. By using the static strong coupling techniques some of the results of large N baryons are derived. The results are consistent with the large N SU(6) static quark model. (author)
The lambda sigma calculus and strong normalization
DEFF Research Database (Denmark)
Schack-Nielsen, Anders; Schürmann, Carsten
Explicit substitution calculi can be classified into several distinct categories depending on whether they are confluent, meta-confluent, strong normalization preserving, strongly normalizing, simulating, fully compositional, and/or local. In this paper we present a variant of the λσ-calculus, ...
Optimization of strong and weak coordinates
Swart, M.; Bickelhaupt, F.M.
2006-01-01
We present a new scheme for the geometry optimization of equilibrium and transition state structures that can be used for both strong and weak coordinates. We use a screening function that depends on atom-pair distances to differentiate strong coordinates from weak coordinates. This differentiation
78 FR 15710 - Strong Sensitizer Guidance
2013-03-12
... the supplemental definition of ``strong sensitizer'' found at 16 CFR 1500.3(c)(5). The Commission is proposing to revise the supplemental definition of ``strong sensitizer'' due to advancements in the science...'' definition, assist manufacturers in understanding how CPSC staff would assess whether a substance and/or...
Tarlowski, Andrzej
2018-01-01
There is a lively debate concerning the role of conceptual and perceptual information in young children's inductive inferences. While most studies focus on the role of basic level categories in induction the present research contributes to the debate by asking whether children's inductions are guided by ontological constraints. Two studies use a novel inductive paradigm to test whether young children have an expectation that all animals share internal commonalities that do not extend to perceptually similar inanimates. The results show that children make category-consistent responses when asked to project an internal feature from an animal to either a dissimilar animal or a similar toy replica. However, the children do not have a universal preference for category-consistent responses in an analogous task involving vehicles and vehicle toy replicas. The results also show the role of context and individual factors in inferences. Children's early reliance on ontological commitments in induction cannot be explained by perceptual similarity or by children's sensitivity to the authenticity of objects.
Bayesian inference of substrate properties from film behavior
International Nuclear Information System (INIS)
Aggarwal, R; Demkowicz, M J; Marzouk, Y M
2015-01-01
We demonstrate that by observing the behavior of a film deposited on a substrate, certain features of the substrate may be inferred with quantified uncertainty using Bayesian methods. We carry out this demonstration on an illustrative film/substrate model where the substrate is a Gaussian random field and the film is a two-component mixture that obeys the Cahn–Hilliard equation. We construct a stochastic reduced order model to describe the film/substrate interaction and use it to infer substrate properties from film behavior. This quantitative inference strategy may be adapted to other film/substrate systems. (paper)
Comparative Study of Inference Methods for Bayesian Nonnegative Matrix Factorisation
DEFF Research Database (Denmark)
Brouwer, Thomas; Frellsen, Jes; Liò, Pietro
2017-01-01
In this paper, we study the trade-offs of different inference approaches for Bayesian matrix factorisation methods, which are commonly used for predicting missing values, and for finding patterns in the data. In particular, we consider Bayesian nonnegative variants of matrix factorisation and tri-factorisation, and compare non-probabilistic inference, Gibbs sampling, variational Bayesian inference, and a maximum-a-posteriori approach. The variational approach is new for the Bayesian nonnegative models. We compare their convergence, and robustness to noise and sparsity of the data, on both synthetic and real...
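As a point of reference for what is being compared, the non-probabilistic baseline can be sketched with the classic Lee-Seung multiplicative updates. This is illustrative only: the Bayesian variants studied in the paper place priors on the factors and infer them by Gibbs sampling or variational updates rather than by these point-estimate iterations.

```python
import numpy as np

def nmf_multiplicative(R, k, iters=500, eps=1e-9, seed=0):
    """Non-probabilistic NMF baseline: Lee-Seung multiplicative
    updates minimizing ||R - UV||_F^2 with nonnegative factors.
    R is (n, m); returns U (n, k) and V (k, m)."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.random((n, k)) + eps
    V = rng.random((k, m)) + eps
    for _ in range(iters):
        # Updates preserve nonnegativity by construction.
        V *= (U.T @ R) / (U.T @ U @ V + eps)
        U *= (R @ V.T) / (U @ V @ V.T + eps)
    return U, V
```

Missing-value prediction then amounts to reading entries of the reconstruction U @ V at unobserved positions.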
Working memory supports inference learning just like classification learning.
Craig, Stewart; Lewandowsky, Stephan
2013-08-01
Recent research has found a positive relationship between people's working memory capacity (WMC) and their speed of category learning. To date, only classification-learning tasks have been considered, in which people learn to assign category labels to objects. It is unknown whether learning to make inferences about category features might also be related to WMC. We report data from a study in which 119 participants undertook classification learning and inference learning, and completed a series of WMC tasks. Working memory capacity was positively related to people's classification and inference learning performance.
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
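The two-stage idea, replace the expensive model with a cheap surrogate, then sample the posterior over the uncertain input, can be sketched in one dimension. Everything here is a toy stand-in (a quadratic model, a polynomial surrogate, a random-walk Metropolis sampler), not the talk's ocean-model setup or its spectral-projection machinery.

```python
import numpy as np

def fit_polynomial_surrogate(theta_samples, model_outputs, degree=2):
    """Polynomial regression surrogate for a scalar model response as a
    function of one uncertain parameter (toy analogue of non-intrusive
    spectral projection / regularized regression)."""
    A = np.vander(theta_samples, degree + 1)
    coef, *_ = np.linalg.lstsq(A, model_outputs, rcond=None)
    return lambda t: np.vander(np.atleast_1d(t), degree + 1) @ coef

def metropolis(surrogate, obs, sigma, n_steps=5000, step=0.2, seed=1):
    """Random-walk Metropolis on the surrogate likelihood: samples the
    uncertain input given one noisy observation of the model output."""
    rng = np.random.default_rng(seed)
    def logpost(t):
        return -0.5 * ((obs - surrogate(t)[0]) / sigma) ** 2
    t = 0.0
    lp = logpost(t)
    out = []
    for _ in range(n_steps):
        t_new = t + step * rng.normal()
        lp_new = logpost(t_new)
        if np.log(rng.random()) < lp_new - lp:
            t, lp = t_new, lp_new
        out.append(t)
    return np.array(out)
```

Because every likelihood evaluation hits the surrogate rather than the full model, thousands of MCMC steps cost essentially nothing; the adjoint-based alternative mentioned in the talk would instead optimize the same misfit.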
Ensemble stacking mitigates biases in inference of synaptic connectivity
Directory of Open Access Journals (Sweden)
Brendan Chambers
2018-03-01
Full Text Available A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches. Mapping the routing of spikes through local circuitry is crucial for understanding neocortical computation. Under appropriate experimental conditions, these maps can be used to infer likely patterns of synaptic recruitment, linking activity to underlying anatomical connections. Such inferences help to reveal the synaptic implementation of population dynamics and computation. We compare a number of standard functional measures to infer underlying connectivity. We find that regularization impacts measures
Brain Imaging, Forward Inference, and Theories of Reasoning
Heit, Evan
2015-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities. PMID:25620926
Fast and scalable inference of multi-sample cancer lineages.
Popic, Victoria
2015-05-06
Somatic variants can be used as lineage markers for the phylogenetic reconstruction of cancer evolution. Since somatic phylogenetics is complicated by sample heterogeneity, novel specialized tree-building methods are required for cancer phylogeny reconstruction. We present LICHeE (Lineage Inference for Cancer Heterogeneity and Evolution), a novel method that automates the phylogenetic inference of cancer progression from multiple somatic samples. LICHeE uses variant allele frequencies of somatic single nucleotide variants obtained by deep sequencing to reconstruct multi-sample cell lineage trees and infer the subclonal composition of the samples. LICHeE is open source and available at http://viq854.github.io/lichee .
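The central signal such VAF-based methods exploit is a containment constraint: a mutation can only be ancestral to another if its variant allele frequency is at least as high in every sample. The toy check below illustrates that necessary condition (the tolerance value is illustrative, and this is a simplification of the constraint network LICHeE actually builds):

```python
def could_be_ancestor(vaf_parent, vaf_child, tol=0.02):
    """Necessary condition for lineage ordering: the candidate
    ancestor's variant allele frequency must dominate the candidate
    descendant's in every sample, up to a noise tolerance."""
    return all(p + tol >= c for p, c in zip(vaf_parent, vaf_child))
```

Applying this test pairwise across mutation clusters yields a partial order from which candidate lineage trees can be enumerated.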
Fast and scalable inference of multi-sample cancer lineages.
Popic, Victoria; Salari, Raheleh; Hajirasouliha, Iman; Kashef-Haghighi, Dorna; West, Robert B; Batzoglou, Serafim
2015-01-01
Somatic variants can be used as lineage markers for the phylogenetic reconstruction of cancer evolution. Since somatic phylogenetics is complicated by sample heterogeneity, novel specialized tree-building methods are required for cancer phylogeny reconstruction. We present LICHeE (Lineage Inference for Cancer Heterogeneity and Evolution), a novel method that automates the phylogenetic inference of cancer progression from multiple somatic samples. LICHeE uses variant allele frequencies of somatic single nucleotide variants obtained by deep sequencing to reconstruct multi-sample cell lineage trees and infer the subclonal composition of the samples. LICHeE is open source and available at http://viq854.github.io/lichee .
International Conference on Trends and Perspectives in Linear Statistical Inference
Rosen, Dietrich
2018-01-01
This volume features selected contributions on a variety of topics related to linear statistical inference. The peer-reviewed papers from the International Conference on Trends and Perspectives in Linear Statistical Inference (LinStat 2016) held in Istanbul, Turkey, 22-25 August 2016, cover topics in both theoretical and applied statistics, such as linear models, high-dimensional statistics, computational statistics, the design of experiments, and multivariate analysis. The book is intended for statisticians, Ph.D. students, and professionals who are interested in statistical inference.
Brain imaging, forward inference, and theories of reasoning.
Heit, Evan
2014-01-01
This review focuses on the issue of how neuroimaging studies address theoretical accounts of reasoning, through the lens of the method of forward inference (Henson, 2005, 2006). After theories of deductive and inductive reasoning are briefly presented, the method of forward inference for distinguishing between psychological theories based on brain imaging evidence is critically reviewed. Brain imaging studies of reasoning, comparing deductive and inductive arguments, comparing meaningful versus non-meaningful material, investigating hemispheric localization, and comparing conditional and relational arguments, are assessed in light of the method of forward inference. Finally, conclusions are drawn with regard to future research opportunities.
Seismic switch for strong motion measurement
Harben, P.E.; Rodgers, P.W.; Ewert, D.W.
1995-05-30
A seismic switching device is described that has an input signal from an existing microseismic station seismometer and a signal from a strong motion measuring instrument. The seismic switch monitors the signal level of the strong motion instrument and passes the seismometer signal to the station data telemetry and recording systems. When the strong motion instrument signal level exceeds a user set threshold level, the seismometer signal is switched out and the strong motion signal is passed to the telemetry system. The amount of time the strong motion signal is passed before switching back to the seismometer signal is user controlled between 1 and 15 seconds. If the threshold level is exceeded during a switch time period, the length of time is extended from that instant by one user set time period. 11 figs.
Hierarchical mark-recapture models: a framework for inference about demographic processes
Link, W.A.; Barker, R.J.
2004-01-01
The development of sophisticated mark-recapture models over the last four decades has provided fundamental tools for the study of wildlife populations, allowing reliable inference about population sizes and demographic rates based on clearly formulated models for the sampling processes. Mark-recapture models are now routinely described by large numbers of parameters. These large models provide the next challenge to wildlife modelers: the extraction of signal from noise in large collections of parameters. Pattern among parameters can be described by strong, deterministic relations (as in ultrastructural models) but is more flexibly and credibly modeled using weaker, stochastic relations. Trend in survival rates is not likely to be manifest by a sequence of values falling precisely on a given parametric curve; rather, if we could somehow know the true values, we might anticipate a regression relation between parameters and explanatory variables, in which true value equals signal plus noise. Hierarchical models provide a useful framework for inference about collections of related parameters. Instead of regarding parameters as fixed but unknown quantities, we regard them as realizations of stochastic processes governed by hyperparameters. Inference about demographic processes is based on investigation of these hyperparameters. We advocate the Bayesian paradigm as a natural, mathematically and scientifically sound basis for inference about hierarchical models. We describe analysis of capture-recapture data from an open population based on hierarchical extensions of the Cormack-Jolly-Seber model. In addition to recaptures of marked animals, we model first captures of animals and losses on capture, and are thus able to estimate survival probabilities w (i.e., the complement of death or permanent emigration) and per capita growth rates f (i.e., the sum of recruitment and immigration rates). Covariation in these rates, a feature of demographic interest, is explicitly
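The "signal plus noise" view of related parameters has a simple computational consequence: each cohort's raw estimate is pulled toward a shared mean, more strongly when the cohort has little data. The sketch below is a deliberately simplified, non-Bayesian caricature of that shrinkage (`kappa` plays the role of a prior sample size and is purely illustrative, not a quantity from the paper):

```python
def shrink_survival_rates(successes, trials, kappa=20.0):
    """Shrink per-cohort survival-rate estimates toward the pooled
    mean. Each cohort's weight on its own data grows with its sample
    size n, via w = n / (n + kappa)."""
    grand = sum(successes) / sum(trials)
    out = []
    for s, n in zip(successes, trials):
        w = n / (n + kappa)
        out.append(w * (s / n) + (1 - w) * grand)
    return out
```

In the full hierarchical treatment the analogue of `kappa` is itself inferred from the data as a hyperparameter, which is where the Bayesian machinery advocated in the paper comes in.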
Color inference in visual communication: the meaning of colors in recycling.
Schloss, Karen B; Lessard, Laurent; Walmsley, Charlotte S; Foley, Kathleen
2018-01-01
People interpret abstract meanings from colors, which makes color a useful perceptual feature for visual communication. This process is complicated, however, because there is seldom a one-to-one correspondence between colors and meanings. One color can be associated with many different concepts (one-to-many mapping) and many colors can be associated with the same concept (many-to-one mapping). We propose that to interpret color-coding systems, people perform assignment inference to determine how colors map onto concepts. We studied assignment inference in the domain of recycling. Participants saw images of colored but unlabeled bins and were asked to indicate which bins they would use to discard different kinds of recyclables and trash. In Experiment 1, we tested two hypotheses for how people perform assignment inference. The local assignment hypothesis predicts that people simply match objects with their most strongly associated color. The global assignment hypothesis predicts that people also account for the association strengths between all other objects and colors within the scope of the color-coding system. Participants discarded objects in bins that optimized the color-object associations of the entire set, which is consistent with the global assignment hypothesis. This sometimes resulted in discarding objects in bins whose colors were weakly associated with the object, even when there was a stronger associated option available. In Experiment 2, we tested different methods for encoding color-coding systems and found that people were better at assignment inference when color sets simultaneously maximized the association strength between assigned color-object pairings while minimizing associations between unassigned pairings. Our study provides an approach for designing intuitive color-coding systems that facilitate communication through visual media such as graphs, maps, signs, and artifacts.
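The contrast between the two hypotheses is easy to make concrete. In the hypothetical association matrix below, both objects are most strongly associated with color 0: local assignment maps both to it, while global assignment chooses the one-to-one mapping that maximizes the total association strength across the whole set (brute force over permutations, which is feasible for small palettes like those in the experiments).

```python
from itertools import permutations

def local_assignment(assoc):
    """Each object independently takes its most associated color."""
    return [max(range(len(row)), key=row.__getitem__) for row in assoc]

def global_assignment(assoc):
    """One-to-one color assignment maximizing total association
    strength (assumes a square object-by-color matrix)."""
    n = len(assoc)
    best = max(permutations(range(n)),
               key=lambda p: sum(assoc[i][p[i]] for i in range(n)))
    return list(best)
```

Note that the global solution can assign object 1 a color it is only weakly associated with, mirroring the behavior participants showed when a stronger option had already been "claimed" by another object.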
Models and Inference for Multivariate Spatial Extremes
Vettori, Sabrina
2017-12-07
The development of flexible and interpretable statistical methods is necessary in order to provide appropriate risk assessment measures for extreme events and natural disasters. In this thesis, we address this challenge by contributing to the developing research field of Extreme-Value Theory. We initially study the performance of existing parametric and non-parametric estimators of extremal dependence for multivariate maxima. As the dimensionality increases, non-parametric estimators are more flexible than parametric methods but present some loss in efficiency that we quantify under various scenarios. We introduce a statistical tool which imposes the required shape constraints on non-parametric estimators in high dimensions, significantly improving their performance. Furthermore, by embedding the tree-based max-stable nested logistic distribution in the Bayesian framework, we develop a statistical algorithm that identifies the most likely tree structures representing the data's extremal dependence using the reversible jump Markov chain Monte Carlo method. A mixture of these trees is then used for uncertainty assessment in prediction through Bayesian model averaging. The computational complexity of full likelihood inference is significantly decreased by deriving a recursive formula for the nested logistic model likelihood. The algorithm performance is verified through simulation experiments which also compare different likelihood procedures. Finally, we extend the nested logistic representation to the spatial framework in order to jointly model multivariate variables collected across a spatial region. This situation emerges often in environmental applications but is not often considered in the current literature. Simulation experiments show that the new class of multivariate max-stable processes is able to detect both the cross and inner spatial dependence of a number of extreme variables at a relatively low computational cost, thanks to its Bayesian hierarchical
Inferring species interactions through joint mark–recapture analysis
Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.
2018-01-01
Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the most rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two‐species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long‐term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. Effects of interspecific interactions on survival and capture probability were strongly
Haber, Noah; Smith, Emily R; Moscoe, Ellen; Andrews, Kathryn; Audy, Robin; Bell, Winnie; Brennan, Alana T; Breskin, Alexander; Kane, Jeremy C; Karra, Mahesh; McClure, Elizabeth S; Suarez, Elizabeth A
2018-01-01
The pathway from evidence generation to consumption contains many steps which can lead to overstatement or misinformation. The proliferation of internet-based health news may encourage selection of media and academic research articles that overstate strength of causal inference. We investigated the state of causal inference in health research as it appears at the end of the pathway, at the point of social media consumption. We screened the NewsWhip Insights database for the most shared media articles on Facebook and Twitter reporting about peer-reviewed academic studies associating an exposure with a health outcome in 2015, extracting the 50 most-shared academic articles and media articles covering them. We designed and utilized a review tool to systematically assess and summarize studies' strength of causal inference, including generalizability, potential confounders, and methods used. These were then compared with the strength of causal language used to describe results in both academic and media articles. Two randomly assigned independent reviewers and one arbitrating reviewer from a pool of 21 reviewers assessed each article. We accepted the most shared 64 media articles pertaining to 50 academic articles for review, representing 68% of Facebook and 45% of Twitter shares in 2015. Thirty-four percent of academic studies and 48% of media articles used language that reviewers considered too strong for their strength of causal inference. Seventy percent of academic studies were considered low or very low strength of inference, with only 6% considered high or very high strength of causal inference. The most severe issues with academic studies' causal inference were reported to be omitted confounding variables and generalizability. Fifty-eight percent of media articles were found to have inaccurately reported the question, results, intervention, or population of the academic study. We find a large disparity between the strength of language as presented to the
Johnson, C.R.; Brennan, Robert
1960-01-01
saturation because the ground water, as it percolates southeastward beneath the area, moves out of the Tertiary and into the Quaternary deposits without apparent hindrance. The water that enters the area as underflow from the west is augmented within the area by water that infiltrates from the land surface. The principal sources of irrigating water are precipitation, seepage from canals and reservoirs, and applied irrigation water. Except for the water withdrawn through wells or discharged by natural processes where valleys have been cut into the zone of saturation, ground water leaves the area as underflow into the Platte River valley on the north, the Blue River drainage basin on the east, or the Republican River valley on the south. Part of the water used for irrigation and watering livestock and all the water used in rural and urban homes, in public buildings, and for industrial purposes is obtained from wells. To date (1952) there is no indication that the supply of ground water is being depleted faster than it is being replenished; instead, studies indicate that greater quantities can be withdrawn without causing an excessive decline of the water table. An increase of ground-water withdrawals to a sustainable maximum, however, will be possible only if the points of withdrawal are scattered fairly uniformly. It is estimated that annual withdrawals per township should not exceed 2,100 acre-feet where infiltrating precipitation is the only source of recharge, or 3,000 acre-feet where other sources of recharge are significant. Although perennial withdrawals of this amount could be sustained indefinitely, they would cause some lowering of the water table and eventually a decrease in the amount of water discharged from the area by natural means. The ground water is of the calcium bicarbonate type. In much of the area it is hard or very hard, and in places it contains excessive amounts of iron. In all other respects the water is chemically suitable for domesti
Indirect Inference for Stochastic Differential Equations Based on Moment Expansions
Ballesio, Marco; Tempone, Raul; Vilanova, Pedro
2016-01-01
We provide an indirect inference method to estimate the parameters of time-homogeneous scalar diffusion and jump diffusion processes. We obtain a system of ODEs that approximates the time evolution of the first two moments of the process
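The moment-matching idea behind this kind of indirect inference can be illustrated with a toy sketch, not the paper's actual estimator. Assume an Ornstein-Uhlenbeck process dX = -theta*X dt + sigma dW, whose first two moments obey closed ODEs with known solutions; the hypothetical parameter values, grid, and sample sizes below are illustrative only:

```python
import math, random

def simulate_ou(theta, sigma, x0, T, n_steps, n_paths, seed=0):
    """Euler-Maruyama simulation of dX = -theta*X dt + sigma dW."""
    rng = random.Random(seed)
    dt = T / n_steps
    finals = []
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            x += -theta * x * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        finals.append(x)
    return finals

def moment_odes(theta, sigma, x0, T):
    """Closed-form solutions of the moment ODEs m' = -theta*m, v' = -2*theta*v + sigma^2."""
    m = x0 * math.exp(-theta * T)
    v = sigma**2 / (2 * theta) * (1 - math.exp(-2 * theta * T))
    return m, v

# "Observed" data generated from hypothetical true parameters.
true_theta, true_sigma, x0, T = 1.2, 0.5, 1.0, 2.0
xs = simulate_ou(true_theta, true_sigma, x0, T, 200, 4000)
m_obs = sum(xs) / len(xs)
v_obs = sum((x - m_obs) ** 2 for x in xs) / len(xs)

# Indirect inference step: pick the parameters whose moment ODEs best match the data.
best = min(
    ((th / 10, sg / 10) for th in range(5, 25) for sg in range(2, 10)),
    key=lambda p: sum((a - b) ** 2
                      for a, b in zip(moment_odes(p[0], p[1], x0, T), (m_obs, v_obs))),
)
print("estimated (theta, sigma):", best)
```

A grid search keeps the sketch dependency-free; any scalar optimizer over the same moment-mismatch objective would do.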
ESPRIT: Exercise Sensing and Pose Recovery Inference Tool, Phase I
National Aeronautics and Space Administration — We propose to develop ESPRIT: an Exercise Sensing and Pose Recovery Inference Tool, in support of NASA's effort in developing crew exercise technologies for...
Automated Flight Safety Inference Engine (AFSIE) System, Phase I
National Aeronautics and Space Administration — We propose to develop an innovative Autonomous Flight Safety Inference Engine (AFSIE) system to autonomously and reliably terminate the flight of an errant launch...
Classification versus inference learning contrasted with real-world categories.
Jones, Erin L; Ross, Brian H
2011-07-01
Categories are learned and used in a variety of ways, but the research focus has been on classification learning. Recent work contrasting classification with inference learning of categories found important later differences in category performance. However, theoretical accounts differ on whether this is due to an inherent difference between the tasks or to the implementation decisions. The inherent-difference explanation argues that inference learners focus on the internal structure of the categories--what each category is like--while classification learners focus on diagnostic information to predict category membership. In two experiments, using real-world categories and controlling for earlier methodological differences, inference learners learned more about what each category was like than did classification learners, as evidenced by higher performance on a novel classification test. These results suggest that there is an inherent difference between learning new categories by classifying an item versus inferring a feature.
Efficient Exact Inference With Loss Augmented Objective in Structured Learning.
Bauer, Alexander; Nakajima, Shinichi; Muller, Klaus-Robert
2016-08-19
Structural support vector machine (SVM) is an elegant approach for building complex and accurate models with structured outputs. However, its applicability relies on the availability of efficient inference algorithms--the state-of-the-art training algorithms repeatedly perform inference to compute a subgradient or to find the most violating configuration. In this paper, we propose an exact inference algorithm for maximizing nondecomposable objectives due to a special type of high-order potential having a decomposable internal structure. As an important application, our method covers the loss augmented inference, which enables the slack and margin scaling formulations of structural SVM with a variety of dissimilarity measures, e.g., Hamming loss, precision and recall, Fβ-loss, intersection over union, and many other functions that can be efficiently computed from the contingency table. We demonstrate the advantages of our approach in natural language parsing and sequence segmentation applications.
BagReg: Protein inference through machine learning.
Zhao, Can; Liu, Dao; Teng, Ben; He, Zengyou
2015-08-01
Protein inference from the identified peptides is of primary importance in shotgun proteomics. The target of protein inference is to identify whether each candidate protein is truly present in the sample. To date, many computational methods have been proposed to solve this problem. However, there is still no method that can fully utilize the information hidden in the input data. In this article, we propose a learning-based method named BagReg for protein inference. The method first extracts five features from the input data, and then chooses each feature in turn as the class feature to separately build models that predict the presence probabilities of proteins. Finally, the weak results from the five prediction models are aggregated to obtain the final result. We test our method on six publicly available data sets. The experimental results show that our method is superior to the state-of-the-art protein inference algorithms.
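The aggregation step described above (several weak per-feature predictors averaged into one presence score) can be sketched schematically. This is not BagReg's actual learner; the "weak model" here is just a min-max normalisation, and the feature names and values are invented for illustration:

```python
def weak_model(values):
    """Toy per-feature 'model': min-max normalise a feature into a pseudo-probability."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.5 for v in values]

def bagreg_like(feature_matrix):
    """Aggregate the per-feature weak predictions by averaging (the bagging step)."""
    n_proteins = len(feature_matrix[0])
    weak = [weak_model(f) for f in feature_matrix]
    return [sum(w[i] for w in weak) / len(weak) for i in range(n_proteins)]

# Five hypothetical features (peptide counts, coverage, scores) for four proteins.
features = [
    [9, 1, 7, 2],           # matched peptides
    [12, 2, 8, 1],          # spectral count
    [0.8, 0.1, 0.6, 0.2],   # sequence coverage
    [5, 0, 4, 1],           # unique peptides
    [0.9, 0.2, 0.7, 0.3],   # best peptide score
]
scores = bagreg_like(features)
present = [s > 0.5 for s in scores]
print(scores, present)
```

Averaging several weak scores damps the noise of any single feature, which is the intuition behind ensemble aggregation in general.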
The Human Cochlear Mechanical Nonlinearity Inferred via Psychometric Functions
Directory of Open Access Journals (Sweden)
Nizami Lance
2013-12-01
Extension of the model of Schairer and colleagues results in credible cochlear nonlinearities in man, suggesting that forward-masking provides a non-invasive way to infer the human mechanical cochlear nonlinearity.
A general Bayes Weibull inference model for accelerated life testing
International Nuclear Information System (INIS)
Dorp, J. Rene van; Mazzuchi, Thomas A.
2005-01-01
This article presents the development of a general Bayes inference model for accelerated life testing. The failure times at a constant stress level are assumed to belong to a Weibull distribution, but the specification of strict adherence to a parametric time-transformation function is not required. Rather, prior information is used to indirectly define a multivariate prior distribution for the scale parameters at the various stress levels and the common shape parameter. Using the approach, Bayes point estimates as well as probability statements for use-stress (and accelerated) life parameters may be inferred from a host of testing scenarios. The inference procedure accommodates both the interval data sampling strategy and type I censored sampling strategy for the collection of ALT test data. The inference procedure uses the well-known MCMC (Markov Chain Monte Carlo) methods to derive posterior approximations. The approach is illustrated with an example
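The MCMC machinery the article relies on can be sketched for the simplest related problem: a random-walk Metropolis sampler for the shape and scale of a single Weibull sample, with a flat prior on the log-parameters. This is only a toy posterior approximation, not the article's multivariate ALT model; all numbers are hypothetical:

```python
import math, random

def weibull_loglik(data, shape, scale):
    """Log-likelihood of i.i.d. Weibull(shape, scale) failure times."""
    return sum(
        math.log(shape / scale) + (shape - 1) * math.log(t / scale) - (t / scale) ** shape
        for t in data
    )

rng = random.Random(1)
true_shape, true_scale = 2.0, 100.0
# Synthetic failure times via inverse-CDF sampling.
data = [true_scale * (-math.log(1.0 - rng.random())) ** (1 / true_shape) for _ in range(300)]

# Random-walk Metropolis on (log shape, log scale); flat prior on the logs.
log_k, log_s = 0.0, math.log(sum(data) / len(data))
cur = weibull_loglik(data, math.exp(log_k), math.exp(log_s))
samples = []
for i in range(6000):
    pk, ps = log_k + rng.gauss(0, 0.08), log_s + rng.gauss(0, 0.08)
    prop = weibull_loglik(data, math.exp(pk), math.exp(ps))
    if math.log(1.0 - rng.random()) < prop - cur:  # Metropolis accept step
        log_k, log_s, cur = pk, ps, prop
    if i >= 2000:  # discard burn-in
        samples.append((math.exp(log_k), math.exp(log_s)))

post_shape = sum(s[0] for s in samples) / len(samples)
post_scale = sum(s[1] for s in samples) / len(samples)
print("posterior means:", round(post_shape, 2), round(post_scale, 1))
```

Posterior point estimates are then simple averages over the retained chain, which is how MCMC-based Bayes point estimates are typically formed.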
Inference method using bayesian network for diagnosis of pulmonary nodules
International Nuclear Information System (INIS)
Kawagishi, Masami; Iizuka, Yoshio; Yamamoto, Hiroyuki; Yakami, Masahiro; Kubo, Takeshi; Fujimoto, Koji; Togashi, Kaori
2010-01-01
This report describes the improvements of a naive Bayes model that infers the diagnosis of pulmonary nodules in chest CT images based on the findings obtained when a radiologist interprets the CT images. We have previously introduced an inference model using a naive Bayes classifier and have reported its clinical value based on evaluation using clinical data. In the present report, we introduce the following improvements to the original inference model: the selection of findings based on correlations and the generation of a model using only these findings, and the introduction of classifiers that integrate several simple classifiers, each of which is specialized for a specific diagnosis. These improvements were found to increase the inference accuracy by 10.4% (p<.01) as compared to the original model in 100 cases (222 nodules) based on leave-one-out evaluation. (author)
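A minimal naive Bayes classifier over binary radiologist findings can be sketched as follows. The findings, diagnoses, and training cases are entirely hypothetical; this shows only the generic counting-plus-Laplace-smoothing scheme, not the authors' model:

```python
from collections import defaultdict

def train_naive_bayes(cases):
    """cases: list of (findings dict, diagnosis). Returns class counts and likelihood counts."""
    counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for findings, dx in cases:
        counts[dx] += 1
        for name, value in findings.items():
            feat_counts[dx][(name, value)] += 1
    return counts, feat_counts, sum(counts.values())

def posterior(model, findings):
    """Posterior over diagnoses, assuming conditionally independent findings."""
    counts, feat_counts, total = model
    scores = {}
    for dx, n in counts.items():
        p = n / total
        for name, value in findings.items():
            # Laplace smoothing over the two values of each binary finding.
            p *= (feat_counts[dx][(name, value)] + 1) / (n + 2)
        scores[dx] = p
    z = sum(scores.values())
    return {dx: s / z for dx, s in scores.items()}

# Hypothetical training cases: findings a radiologist might record for a nodule.
cases = [
    ({"spiculated": 1, "calcified": 0}, "malignant"),
    ({"spiculated": 1, "calcified": 0}, "malignant"),
    ({"spiculated": 0, "calcified": 1}, "benign"),
    ({"spiculated": 0, "calcified": 1}, "benign"),
    ({"spiculated": 0, "calcified": 0}, "benign"),
]
model = train_naive_bayes(cases)
post = posterior(model, {"spiculated": 1, "calcified": 0})
print(post)
```

The report's improvements (correlation-based finding selection, per-diagnosis specialized classifiers) would sit on top of this basic scheme.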
Bayesian inference of chemical kinetic models from proposed reactions
Galagali, Nikhil; Marzouk, Youssef M.
2015-01-01
Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model
Inference of beliefs and emotions in patients with Alzheimer's disease.
Zaitchik, Deborah; Koff, Elissa; Brownell, Hiram; Winner, Ellen; Albert, Marilyn
2006-01-01
The present study compared 20 patients with mild to moderate Alzheimer's disease with 20 older controls (ages 69-94 years) on their ability to make inferences about emotions and beliefs in others. Six tasks tested their ability to make 1st-order and 2nd-order inferences as well as to offer explanations and moral evaluations of human action by appeal to emotions and beliefs. Results showed that the ability to infer emotions and beliefs in 1st-order tasks remains largely intact in patients with mild to moderate Alzheimer's. Patients were able to use mental states in the prediction, explanation, and moral evaluation of behavior. Impairment on 2nd-order tasks involving inference of mental states was equivalent to impairment on control tasks, suggesting that patients' difficulty is secondary to their cognitive impairments.
Efficient design and inference in distributed Bayesian networks: an overview
de Oude, P.; Groen, F.C.A.; Pavlin, G.; Bezhanishvili, N.; Löbner, S.; Schwabe, K.; Spada, L.
2011-01-01
This paper discusses an approach to distributed Bayesian modeling and inference, which is relevant for an important class of contemporary real world situation assessment applications. By explicitly considering the locality of causal relations, the presented approach (i) supports coherent distributed
SDG multiple fault diagnosis by real-time inverse inference
International Nuclear Information System (INIS)
Zhang Zhaoqian; Wu Chongguang; Zhang Beike; Xia Tao; Li Anfeng
2005-01-01
In the past 20 years, the signed directed graph (SDG), one of the qualitative simulation technologies, has been widely applied in the field of chemical fault diagnosis. However, many earlier researchers assumed a single fault origin, which leads to combinatorial explosion and has limited the application of SDG to real processes. This is mainly because most earlier researchers used the forward inference engine of commercial expert system software to carry out inverse diagnostic inference on the SDG model, which violates the internal principle of the diagnosis mechanism. In this paper, we present a new SDG multiple-fault diagnosis method based on real-time inverse inference: a genuine multiple-fault diagnosis method whose inference engine uses an inverse mechanism. Finally, we give the example of a 65 t/h furnace diagnosis system to demonstrate its applicability and efficiency.
SDG multiple fault diagnosis by real-time inverse inference
Energy Technology Data Exchange (ETDEWEB)
Zhang Zhaoqian; Wu Chongguang; Zhang Beike; Xia Tao; Li Anfeng
2005-02-01
In the past 20 years, the signed directed graph (SDG), one of the qualitative simulation technologies, has been widely applied in the field of chemical fault diagnosis. However, many earlier researchers assumed a single fault origin, which leads to combinatorial explosion and has limited the application of SDG to real processes. This is mainly because most earlier researchers used the forward inference engine of commercial expert system software to carry out inverse diagnostic inference on the SDG model, which violates the internal principle of the diagnosis mechanism. In this paper, we present a new SDG multiple-fault diagnosis method based on real-time inverse inference: a genuine multiple-fault diagnosis method whose inference engine uses an inverse mechanism. Finally, we give the example of a 65 t/h furnace diagnosis system to demonstrate its applicability and efficiency.
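The multiple-fault idea in the SDG records above can be sketched as a search over root-fault sets checked by forward propagation; the abstracts do not give their actual inverse engine, so this is a simplified stand-in, and the process graph (pump, heater, valve signs) is invented for illustration:

```python
from itertools import combinations

# Hypothetical signed directed graph for a small process unit: edge (u, v, sign)
# means a deviation of u drives v in the same (+1) or opposite (-1) direction.
edges = [("pump_fail", "flow", -1), ("flow", "level", +1),
         ("heater_fault", "temp", +1), ("temp", "pressure", +1)]
root_candidates = ["pump_fail", "heater_fault"]

def propagate(roots):
    """Forward-propagate root faults (each with deviation +1) through the SDG."""
    state = {r: +1 for r in roots}
    changed = True
    while changed:
        changed = False
        for u, v, sign in edges:
            if u in state and v not in state:
                state[v] = state[u] * sign
                changed = True
    return state

def inverse_diagnose(observed):
    """Smallest root-fault set whose propagation matches every observed deviation.
    Searching by increasing set size drops the single-fault assumption."""
    for size in range(1, len(root_candidates) + 1):
        for combo in combinations(root_candidates, size):
            state = propagate(combo)
            if all(state.get(n) == d for n, d in observed.items()):
                return set(combo)
    return None

print(inverse_diagnose({"flow": -1, "level": -1, "temp": +1, "pressure": +1}))
```

A single-fault engine would fail on these observations, since no one root explains both the low flow and the high temperature.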
Bayesian Information Criterion as an Alternative way of Statistical Inference
Directory of Open Access Journals (Sweden)
Nadejda Yu. Gubanova
2012-05-01
The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. A comparison of ANOVA and BIC results for a psychological experiment is discussed.
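The model comparison the article describes can be sketched for a toy testing problem. Using BIC = k ln n - 2 ln L-hat (lower is better), compare a null model (mean fixed at 0, known unit variance, no free parameters) against an alternative with the mean estimated from the data; the data values are hypothetical:

```python
import math

def gauss_loglik(data, mu, sigma=1.0):
    """Gaussian log-likelihood with known sigma."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)
               for x in data)

def bic(loglik, k, n):
    """BIC = k*ln(n) - 2*ln(L); lower values indicate the preferred model."""
    return k * math.log(n) - 2 * loglik

# Hypothetical sample whose true mean is clearly nonzero.
data = [0.9, 1.4, 0.3, 1.1, 0.8, 1.6, 0.7, 1.2, 0.5, 1.0]
n = len(data)
mu_hat = sum(data) / n

bic_null = bic(gauss_loglik(data, 0.0), k=0, n=n)    # H0: mean = 0
bic_alt = bic(gauss_loglik(data, mu_hat), k=1, n=n)  # H1: mean estimated
print("BIC null:", round(bic_null, 2), "BIC alt:", round(bic_alt, 2))
```

Unlike an NHST p-value, the BIC difference directly penalises the extra parameter and can also favour the null, which is the point of using it as an alternative inference tool.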
Strong and strategic conformity understanding by 3- and 5-year-old children.
Cordonier, Laurent; Nettles, Theresa; Rochat, Philippe
2017-12-18
'Strong conformity' corresponds to the public endorsement of majority opinions that are in blatant contradiction to one's own correct perceptual judgements of the situation. We tested strong conformity inference by 3- and 5-year-old children using a third-person perspective paradigm. Results show that at neither age do children spontaneously expect that an ostracized third-party individual who wants to affiliate with the majority group will show strong conformity. However, when questioned as to what the ostracized individual should do to befriend others, from 5 years of age children explicitly demonstrate that they construe strong conformity as a strategic means of social affiliation. Additional data suggest that strong and strategic conformity understanding from an observer's third-person perspective is linked to the passing of the language-mediated false belief theory of mind task, an index of children's emerging 'meta' ability to construe the mental state of others. Statement of contribution What is already known on this subject? 'Strong conformity' corresponds to the public endorsement of majority opinions that are in blatant contradiction to one's own correct perceptual judgements of the situation. Asch's (1956, Psychological Monographs: General and Applied, 70, 1) classic demonstration of strong conformity with adults has been replicated with preschool children: 3- to 4-year-olds manifest signs of strong conformity by reversing their correct perceptual judgements about thirty to forty per cent of the time to fit contradictory statements held unanimously by other individuals (Corriveau & Harris, 2010, Developmental Psychology, 46, 437; Corriveau et al., 2013, Journal of Cognition and Culture, 13, 367; Haun & Tomasello, 2011, Child Development, 82, 1759). As for adults, strong conformity does not obliterate children's own private, accurate knowledge of the situation. It is in essence a public expression to fit the group and alleviate social dissonance
Dual field theory of strong interactions
International Nuclear Information System (INIS)
Akers, D.
1987-01-01
A dual field theory of strong interactions is derived from a Lagrangian of the Yang-Mills and Higgs fields. The existence of a magnetic monopole of mass 2397 MeV and Dirac charge g = (137/2)e is incorporated into the theory. Unification of the strong, weak, and electromagnetic forces is shown to converge at the mass of the intermediate vector boson W^±. The coupling constants of the strong and weak interactions are derived in terms of the fine-structure constant α = 1/137
Strong and superstrong pulsed magnetic fields generation
Shneerson, German A; Krivosheev, Sergey I
2014-01-01
Strong pulsed magnetic fields are important for several fields in physics and engineering, such as power generation and accelerator facilities. Basic aspects of the technique for generating strong and superstrong pulsed magnetic fields are given, including the physics and hydrodynamics of the conductors interacting with the field, as well as an account of the significant progress in the generation of strong magnetic fields using the magnetic accumulation technique. Results of computer simulations and a survey of available field technology complete the volume.
Semi-strong split domination in graphs
Directory of Open Access Journals (Sweden)
Anwar Alwardi
2014-06-01
Given a graph $G = (V,E)$, a dominating set $D \subseteq V$ is called a semi-strong split dominating set of $G$ if $|V \setminus D| \geq 1$ and the maximum degree of the subgraph induced by $V \setminus D$ is 1. The minimum cardinality of a semi-strong split dominating set (SSSDS) of $G$ is the semi-strong split domination number of $G$, denoted $\gamma_{sss}(G)$. In this work, we introduce the concept and prove several results regarding it.
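The definition above is easy to turn into a checker; the path graph used as an example is an illustrative choice, not taken from the paper:

```python
def is_sssd(vertices, adj, D):
    """Check whether D is a semi-strong split dominating set: D dominates V,
    V \\ D is nonempty, and the subgraph induced by V \\ D has maximum degree 1."""
    D = set(D)
    rest = set(vertices) - D
    if not rest:
        return False
    dominating = all(v in D or (adj[v] & D) for v in vertices)
    if not dominating:
        return False
    return max(len(adj[v] & rest) for v in rest) == 1

# Path P4: 1 - 2 - 3 - 4 (a hypothetical small example).
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
vertices = [1, 2, 3, 4]
print(is_sssd(vertices, adj, {1, 4}))  # both endpoints dominate, G[{2,3}] is one edge
print(is_sssd(vertices, adj, {2, 3}))  # G[{1,4}] has no edges, so max degree is 0
```

For P4 this gives gamma_sss at most 2, realised by the set {1, 4}.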
Statistical inferences for bearings life using sudden death test
Directory of Open Access Journals (Sweden)
Morariu Cristin-Olimpiu
2017-01-01
In this paper we propose a calculation method for estimating reliability indicators and a complete statistical inference for the three-parameter Weibull distribution of bearing life. Using experimental values for the durability of bearings tested on stands by sudden death tests involves a number of particularities in the maximum likelihood estimation and in carrying out the statistical inference. The paper details these features and also provides an example calculation.
Inference in "poor" languages
Energy Technology Data Exchange (ETDEWEB)
Petrov, S. [Oak Ridge National Lab., TN (United States)]
1996-12-31
Languages with a solvable implication problem but without complete and consistent systems of inference rules ("poor" languages) are considered. The problem of existence of a finite, complete, and consistent inference rule system for a "poor" language is stated independently of the language or the rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
Inference of a Nonlinear Stochastic Model of the Cardiorespiratory Interaction
Smelyanskiy, V. N.; Luchinsky, D. G.; Stefanovska, A.; McClintock, P. V.
2005-03-01
We reconstruct a nonlinear stochastic model of the cardiorespiratory interaction in terms of a set of polynomial basis functions representing the nonlinear force governing system oscillations. The strength and direction of coupling and noise intensity are simultaneously inferred from a univariate blood pressure signal. Our new inference technique does not require extensive global optimization, and it is applicable to a wide range of complex dynamical systems subject to noise.
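Inferring a polynomial force and the noise intensity from a single noisy signal can be sketched with ordinary least squares, which is far simpler than the paper's inference technique but shows the same ingredients (polynomial basis, drift coefficients, noise estimate). The system, basis {x, x^3}, and all parameter values are hypothetical:

```python
import math, random

# Simulate a univariate signal from a hypothetical nonlinear stochastic system
#   dx = (a1*x + a3*x^3) dt + sigma dW
rng = random.Random(3)
a1, a3, sigma, dt, n = -1.0, -0.5, 0.7, 0.01, 200_000
xs = [0.0]
for _ in range(n):
    x = xs[-1]
    xs.append(x + (a1 * x + a3 * x**3) * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1))

# Least-squares fit of (x_k, dx_k/dt) over the basis {x, x^3}: normal equations
# for the 2-parameter drift.
S11 = sum(x * x for x in xs[:-1])
S13 = sum(x**4 for x in xs[:-1])
S33 = sum(x**6 for x in xs[:-1])
ys = [(xs[k + 1] - xs[k]) / dt for k in range(n)]
b1 = sum(x * y for x, y in zip(xs, ys))
b3 = sum(x**3 * y for x, y in zip(xs, ys))
det = S11 * S33 - S13 * S13
a1_hat = (b1 * S33 - b3 * S13) / det
a3_hat = (b3 * S11 - b1 * S13) / det

# Noise intensity from the residual increments.
res2 = sum((xs[k + 1] - xs[k] - (a1_hat * xs[k] + a3_hat * xs[k] ** 3) * dt) ** 2
           for k in range(n))
sigma_hat = math.sqrt(res2 / (n * dt))
print(round(a1_hat, 2), round(a3_hat, 2), round(sigma_hat, 3))
```

Because the residuals of this regression are exactly the independent noise increments, the drift and noise intensity can be estimated simultaneously from the one signal, mirroring the univariate setting of the abstract.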
Towards Bayesian Inference of the Fast-Ion Distribution Function
DEFF Research Database (Denmark)
Stagner, L.; Heidbrink, W.W.; Salewski, Mirko
2012-01-01
sensitivity of the measurements are incorporated into Bayesian likelihood probabilities, while prior probabilities enforce physical constraints. As an initial step, this poster uses Bayesian statistics to infer the DIII-D electron density profile from multiple diagnostic measurements. Likelihood functions....... However, when theory and experiment disagree (for one or more diagnostics), it is unclear how to proceed. Bayesian statistics provides a framework to infer the DF, quantify errors, and reconcile discrepant diagnostic measurements. Diagnostic errors and ``weight functions" that describe the phase space...