Generic Patch Inference
DEFF Research Database (Denmark)
Andersen, Jesper; Lawall, Julia Laetitia
2008-01-01
A key issue in maintaining Linux device drivers is the need to update drivers in response to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spfind, that identifies common changes made...... developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect...
Strong Inference in Mathematical Modeling: A Method for Robust Science in the Twenty-First Century.
Ganusov, Vitaly V
2016-01-01
While there are many opinions on what mathematical modeling in biology is, in essence, modeling is a mathematical tool, like a microscope, which allows consequences to logically follow from a set of assumptions. Only when this tool is applied appropriately, as a microscope is used to look at small items, can it help us understand the importance of specific mechanisms/assumptions in biological processes. Mathematical modeling can be less useful or even misleading if used inappropriately, for example, when a microscope is used to study stars. According to some philosophers (Oreskes et al., 1994), the best use of mathematical models is not when a model is used to confirm a hypothesis but rather when a model reveals an inconsistency between the model (defined by a specific set of assumptions) and the data. Following the principle of strong inference for experimental sciences proposed by Platt (1964), I suggest "strong inference in mathematical modeling" as an effective and robust way of using mathematical modeling to understand the mechanisms driving the dynamics of biological systems. The major steps of strong inference in mathematical modeling are (1) to develop multiple alternative models for the phenomenon in question; (2) to compare the models with available experimental data and to determine which of the models are not consistent with the data; (3) to determine the reasons why the rejected models failed to explain the data; and (4) to suggest experiments that would allow one to discriminate between the remaining alternative models. The use of strong inference is likely to make the predictions of mathematical models more robust, and it should be strongly encouraged in mathematical modeling-based publications in the twenty-first century.
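As a sketch of what steps (1) and (2) look like in practice, the snippet below fits two hypothetical alternative growth models to the same synthetic data and scores them with AIC; the models, data, and numbers are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Step (1): multiple alternative models for the same phenomenon.
def exponential(t, n0, r):
    return n0 * np.exp(r * t)

def logistic(t, n0, r, k):
    return k / (1.0 + (k / n0 - 1.0) * np.exp(-r * t))

# Invented "experimental" data: saturating growth plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 25)
y = logistic(t, 5.0, 1.0, 100.0) + rng.normal(0, 3, t.size)

def aic(y, yhat, n_params):
    # AIC for least-squares fits with Gaussian errors.
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * n_params

# Step (2): confront every model with the data.
for model, p0 in [(exponential, (5.0, 0.5)), (logistic, (5.0, 0.5, 80.0))]:
    popt, _ = curve_fit(model, t, y, p0=p0, maxfev=10000)
    print(f"{model.__name__}: AIC = {aic(y, model(t, *popt), len(popt)):.1f}")

# Steps (3)-(4): examine why the rejected model fails (here, systematic
# residuals near saturation) and propose measurements in that regime.
```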
The Freedom to Design Nature: Kant's Strong Ought→Can Inference in 21st Century Perspective
Directory of Open Access Journals (Sweden)
Edward Eugene Kleist
2006-01-01
Full Text Available Kant's attempts to formulate a conception of the harmony of nature and freedom have two logical presuppositions. The first presupposition is the separation of ought and is, which provides a logical formulation of the separation of freedom and nature. Kant might well have settled on the view that the separation between nature and freedom cannot be bridged. Why did Kant attempt to overcome said separation? The second presupposition of Kant's project to bridge nature and freedom involves an ought→can inference, stating that moral obligation implies the possibility of its fulfillment. There are at least two ways this inference can be understood. There is a weak sense of the inference, stating that no one is obliged to do the impossible. There is also a very strong sense of the inference, stating that if a moral obligation is found to obtain, it must then be possible to fulfill it. Kant interprets the ought→can inference in this strong sense as well as in the weak sense. Nature, the law-governed totality of what exists, must be understood as able to provide a suitable field for moral realization. The isomorphism between the lawfulness of nature and that of moral freedom animates Kant's account of moral judgment, and will provide the main focus of the current investigation. Kant conceives of nature and freedom as twin kingdoms, thus providing a theoretical model validating this ought→can inference. The weaker sense of this ought→can inference does justice to moral judgment without requiring the awesome task of bridging nature and freedom. Why, then, should we maintain the strong ought→can inference in our post-Kantian situation? I suggest that Kant's insistence on the strong ought→can inference may yield an ethical approach to the ever more powerful ways in which human beings technologically transform nature, including human nature itself.
Population genetics inference for longitudinally-sampled mutants under strong selection.
Lacerda, Miguel; Seoighe, Cathal
2014-11-01
Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.
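A minimal sketch of the discrete Wright-Fisher likelihood that the paper treats as the gold standard, with a grid-based maximum-likelihood estimate of the selection coefficient s. The population size, sampling scheme, and data below are invented for illustration; the paper's contribution is a faster approximation to this mutant-frequency distribution.

```python
import numpy as np
from scipy.stats import binom

N = 200   # assumed effective population size (small, so the dense
          # transition matrix stays cheap to build)
s = 0.1   # true selection coefficient used to simulate data

def transition_matrix(N, s):
    # One Wright-Fisher generation: binomial sampling around the
    # post-selection expected mutant frequency (haploid selection).
    i = np.arange(N + 1)
    p = i / N
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    return binom.pmf(i[None, :], N, p_sel[:, None])  # row = from-state

def log_likelihood(counts, times, N, s):
    # counts[k] = mutant copies observed at generation times[k]
    P = transition_matrix(N, s)
    ll = 0.0
    for k in range(len(counts) - 1):
        Pg = np.linalg.matrix_power(P, times[k + 1] - times[k])
        ll += np.log(Pg[counts[k], counts[k + 1]] + 1e-300)
    return ll

# Simulate a trajectory observed every 10 generations.
rng = np.random.default_rng(1)
times, counts, x = [0], [20], 20
for g in range(1, 51):
    p = x / N
    p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
    x = rng.binomial(N, p_sel)
    if g % 10 == 0:
        times.append(g)
        counts.append(x)

# Grid-based MLE of s under the exact discrete model.
grid = np.linspace(0.0, 0.3, 61)
lls = [log_likelihood(counts, times, N, s_try) for s_try in grid]
print("ML estimate of s:", grid[int(np.argmax(lls))])
```

The matrix-power step is what becomes infeasible for large N, which is exactly the regime where the paper's alternative approximation is recommended.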
Photo Gallery for South Platte Watershed
South Platte Watershed from the Headwaters to the Denver Metropolitan Area (Colorado) of the Urban Waters Federal Partnership (UWFP) reconnects urban communities with their waterways by improving coordination among federal agencies and collaborating
Natural Gas Storage Facilities, US, 2010, Platts
U.S. Environmental Protection Agency — The Platts Natural Gas Storage Facilities geospatial data layer contains points that represent locations of facilities used for natural gas storage in the United...
Probing the Small-scale Structure in Strongly Lensed Systems via Transdimensional Inference
Daylan, Tansu; Cyr-Racine, Francis-Yan; Diaz Rivero, Ana; Dvorkin, Cora; Finkbeiner, Douglas P.
2018-02-01
Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a pipeline to model strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps—for instance, when constraining the subhalo mass function. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent in a traditional catalog. The implemented software, Probabilistic Cataloger (PCAT), is made publicly available at https://github.com/tdaylan/pcat.
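The core idea, sampling across models of different dimensionality, can be illustrated with a deliberately stripped-down toy (not PCAT itself): the only unknown is the number K of equal-flux sources behind a single Poisson photon count, and Metropolis-Hastings birth/death moves hop between values of K. All numbers below are invented.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)

F = 3.0           # assumed flux (expected counts) per model source
background = 1.0  # assumed background counts
data = 27         # observed photon count (invented)
lam_prior = 5.0   # Poisson prior mean on the number of sources K

def log_post(K):
    # Unnormalized posterior over the model dimension K alone.
    return poisson.logpmf(K, lam_prior) + poisson.logpmf(data, K * F + background)

# Metropolis-Hastings with symmetric birth/death proposals on K.
K, samples = 5, []
for _ in range(20000):
    K_new = K + rng.choice([-1, 1])
    if K_new >= 0 and np.log(rng.random()) < log_post(K_new) - log_post(K):
        K = K_new
    samples.append(K)

samples = np.array(samples[2000:])   # discard burn-in
print("posterior mean K:", samples.mean().round(2))
print("P(K = k):", np.bincount(samples) / samples.size)
```

In this toy the per-source parameters are fixed, so no reversible-jump Jacobian terms arise; real probabilistic cataloging also samples each source's position and flux within every dimensionality.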
DEFF Research Database (Denmark)
Møller, Jesper
2010-01-01
Chapter 9: This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation-free procedures as well as more comprehensive methods based on a maximum likelihood or Bayesian approach combined with Markov chain Monte Carlo...... (MCMC) techniques. Due to space limitations the focus is on spatial point processes....
Middle Platte subbasin ecological response assessment (ERA)
Energy Technology Data Exchange (ETDEWEB)
Schweiger, E.W.; Sefton, D.; Downing, M.
1995-12-31
The Platte River and its alluvial aquifer in Nebraska are a national ecologic, economic, and social treasure. The Platte's most distinctive role is its vital link in the Central Flyway, furnishing a critical stopover point during the annual migration of 500,000 sandhill cranes and 9 million waterfowl. Not only is this stopover critical to the birds for reaching their destination and maintaining reproduction and population stability; the arrival of 80% of the crane population has also become a significant economic resource to this agricultural community (birdwatching ecotourists inject a substantial financial stimulant into the economy). Yet the functioning of the Middle Platte Watershed is being threatened by modification of the hydrological regime, physical disturbance, and pollution stresses. The objective of the Middle Platte ERA is to provide a scientific basis for making resource decisions on potential land and water management options in the basin. ERA work focuses on habitats identified as being of value and in need of protection. Important compositional and structural characteristics were determined for each habitat type, and then measurements indicating the overall integrity of system functioning were identified. A search for existing datasets was conducted. All quantifiable datasets were used in a synoptic modeling protocol, which produced a relative evaluation of the impact of ecosystem-level perturbation on multiple assessment endpoints. The model describes how perturbations affect distinct units relative to their effects in other units within the landscape. Synoptic modeling will produce an ERA that delineates areas of high risk or value and facilitates allocation of conservation efforts to areas that are most critical in maintaining the functions of the Middle Platte River Watershed.
DEFF Research Database (Denmark)
Møller, Jesper
(This text, written by Jesper Møller, Aalborg University, is submitted for the collection 'Stochastic Geometry: Highlights, Interactions and New Perspectives', edited by Wilfrid S. Kendall and Ilya Molchanov, to be published by Clarendon Press, Oxford, and planned to appear as Section 4.1 with the title 'Inference'.) This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation-free procedures as well as more comprehensive methods using Markov chain Monte Carlo (MCMC) simulations. Due to space limitations the focus...
Semmane, Fethi; Campillo, Michel; Cotton, Fabrice
2005-01-01
The Boumerdes earthquake occurred on a fault whose precise location, offshore the Algerian coast, was unknown. Geodetic data are used to determine the absolute position of the fault. The fault might emerge at about 15 km offshore. Accelerograms are used to infer the space-time history of the rupture using a two-step inversion in the spectral domain. The observed strong motion records agree with the synthetics for the fault location inferred from geodetic data. The fault plane ruptured for about 18 seconds. The slip distribution on the fault indicates one asperity northwest of the hypocenter with maximum slip amplitude about 3 m. This asperity is probably responsible for most of the damage. Another asperity with slightly smaller slip amplitude is located southeast of the hypocenter. The rupture stops its westward propagation close to the Thenia fault, a structure almost perpendicular to the main fault.
Directory of Open Access Journals (Sweden)
Eitan Adam Pechenick
Full Text Available It is tempting to treat frequency trends from the Google Books data sets as indicators of the "true" popularity of various words and phrases. Doing so allows us to draw quantitatively strong conclusions about the evolution of cultural perception of a given topic, such as time or gender. However, the Google Books corpus suffers from a number of limitations which make it an obscure mask of cultural popularity. A primary issue is that the corpus is in effect a library, containing one of each book. A single, prolific author is thereby able to noticeably insert new phrases into the Google Books lexicon, whether the author is widely read or not. With this understood, the Google Books corpus remains an important data set to be considered more lexicon-like than text-like. Here, we show that a distinct problematic feature arises from the inclusion of scientific texts, which have become an increasingly substantive portion of the corpus throughout the 1900s. The result is a surge of phrases typical to academic articles but less common in general, such as references to time in the form of citations. We use information theoretic methods to highlight these dynamics by examining and comparing major contributions via a divergence measure of English data sets between decades in the period 1800-2000. We find that only the English Fiction data set from the second version of the corpus is not heavily affected by professional texts. Overall, our findings call into question the vast majority of existing claims drawn from the Google Books corpus, and point to the need to fully characterize the dynamics of the corpus before using these data sets to draw broad conclusions about cultural and linguistic evolution.
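The decade-to-decade comparison described above rests on an information-theoretic divergence between word-frequency distributions; the sketch below uses the Jensen-Shannon divergence as a stand-in (the authors use a related divergence measure), with an invented five-word vocabulary.

```python
import numpy as np

def jsd(p, q):
    # Jensen-Shannon divergence between two discrete distributions (bits).
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Invented word counts for two decades over a shared vocabulary.
vocab = ["the", "science", "phlogiston", "(1987)", "love"]
decade_1890s = [5000, 40, 30, 0, 120]
decade_1990s = [5000, 400, 1, 90, 100]   # citation-like tokens surge

print("JSD(1890s, 1990s) =", round(jsd(decade_1890s, decade_1990s), 4), "bits")
# Ranking per-word contributions to the divergence flags which tokens
# (e.g., the citation-like "(1987)") drive the shift between decades.
```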
Semmane, F.; Campillo, M.; Cotton, F.
2004-12-01
The Boumerdes earthquake occurred on a fault whose precise location, offshore the Algerian coast, was unknown. Geodetic data consist of GPS measurements, levelling points, and coastal uplifts. They are first used to determine the absolute position of the fault. We performed a series of inversions assuming different positions and chose the model giving the smallest misfit. According to this analysis, the fault emerges at about 15 km offshore. Accelerograms are then used to infer the space-time history of rupture on the fault plane using a two-step inversion in the spectral domain. The observed strong motion records are in good agreement with the synthetics for the fault location inferred from geodetic data. The fault plane ruptured for about 16 seconds. The slip distribution on the fault indicates one asperity northwest of the hypocenter with a maximum slip amplitude larger than 2.5 m. Another asperity with slightly smaller slip amplitude is located southeast of the hypocenter. The rupture seems to stop its westward propagation when it encounters the Thenia fault, a structure almost perpendicular to the main fault. We computed the spatial distribution of ground motion predicted by this fault model and compared it with the observed damage.
Kass, Mason A.; Bloss, Benjamin R.; Irons, Trevor P.; Cannia, James C.; Abraham, Jared D.
2014-01-01
This report is a release of digital data and associated survey descriptions from a series of magnetic resonance soundings (MRS, also known as surface nuclear magnetic resonance) that was conducted during October and November of 2012 in areas of western Nebraska as part of a cooperative hydrologic study by the North Platte Natural Resource District (NRD), South Platte NRD, Twin Platte NRD, the Nebraska Environmental Trust, and the U.S. Geological Survey (USGS). The objective of the study was to delineate the base-of-aquifer and refine the understanding of the hydrologic properties in the aquifer system. The MRS technique non-invasively measures water content in the subsurface, which makes it a useful tool for hydrologic investigations in the near surface (up to depths of approximately 150 meters). In total, 14 MRS production-level soundings were acquired by the USGS over an area of approximately 10,600 square kilometers. The data are presented here in digital format, along with acquisition information, survey and site descriptions, and metadata.
South Platte River Basin - Colorado, Nebraska, and Wyoming
Dennehy, Kevin F.; Litke, David W.; Tate, Cathy M.; Heiny, Janet S.
1993-01-01
The South Platte River Basin was one of 20 study units selected in 1991 for investigation under the U.S. Geological Survey's National Water-Quality Assessment (NAWQA) program. One of the initial tasks undertaken by the study unit team was to review the environmental setting of the basin and assemble ancillary data on natural and anthropogenic factors in the basin. The physical, chemical, and biological quality of the water in the South Platte River Basin is explicitly tied to its environmental setting. The resulting water quality is the product of the natural conditions and human factors that make up the environmental setting of the basin. This description of the environmental setting of the South Platte River Basin and its implications for water quality will help guide the design of the South Platte NAWQA study. Natural conditions such as physiography, climate, geology, and soils affect the ambient water quality, while anthropogenic factors such as water use, population, land use, and water-management practices can have a pronounced effect on water quality in the basin. The relative effects of mining, urban, and agricultural land- and water-uses on water-quality constituents are not well understood. The interrelation of the surface-water and ground-water systems and the chemical and biological processes that affect the transport of constituents needs to be addressed. Interactions between biological communities and the water resources also should be considered. The NAWQA program and the South Platte River Basin study will provide information to minimize existing knowledge gaps, so that we may better understand the effect these natural conditions and human factors have on the water-quality conditions in the basin, now and in the future.
Summary of Bed-Sediment Measurements Along the Platte River, Nebraska, 1931-2009
Kinzel, P.J.; Runge, J.T.
2010-01-01
Rivers are conduits for water and sediment supplied from upstream sources. The sizes of the sediments that make up a river bed typically decrease in a downstream direction because of natural sorting. However, other factors can affect the caliber of bed sediment, including changes in upstream water-resource development, land use, and climate that alter the watershed yield of water or sediment. Bed sediments provide both a geologic and stratigraphic record of past fluvial processes and a basis for quantifying current sediment-transport relations. The objective of this fact sheet is to describe and compare longitudinal measurements of bed-sediment sizes made along the Platte River, Nebraska, from 1931 to 2009. The Platte River begins at the junction of the North Platte and South Platte Rivers near North Platte, Nebr., and flows east for approximately 500 kilometers before joining the Missouri River at Plattsmouth, Nebr. The confluence of the Loup River with the Platte River serves to divide the middle (or central) Platte River (the Platte River upstream from the confluence with the Loup River) from the lower Platte River (the Platte River downstream from the confluence with the Loup River). The Platte River provides water for a variety of needs, including irrigation, infiltration to public water-supply wells, power generation, recreation, and wildlife habitat. The Platte River Basin includes habitat for four federally listed species: the whooping crane (Grus americana), interior least tern (Sterna antillarum), piping plover (Charadrius melodus), and pallid sturgeon (Scaphirhynchus albus). A habitat recovery program for the federally listed species in the Platte River was initiated in 2007. One strategy identified by the recovery program to manage and enhance habitat is the manipulation of streamflow. Understanding the longitudinal and temporal changes in the size gradation of the bed sediment will help to explain the effects of past flow regimes and anticipated
Directory of Open Access Journals (Sweden)
Z. T. Guo
2009-02-01
Full Text Available We correlate the China loess and Antarctic ice records to address the inter-hemispheric climate link over the past 800 ka. The results show a broad coupling between Asian and Antarctic climates at the glacial-interglacial scale. However, a number of decoupled aspects are revealed, among which marine isotope stage (MIS) 13 exhibits a strong anomaly compared with the other interglacials. It is characterized by unusually positive benthic oxygen (δ18O) and carbon isotope (δ13C) values in the world oceans, cooler Antarctic temperature, lower summer sea surface temperature in the South Atlantic, and lower CO2 and CH4 concentrations, but by extremely strong Asian, Indian, and African summer monsoons, the weakest Asian winter monsoon, and the lowest Asian dust and iron fluxes. Pervasive warm conditions were also evidenced by the records from northern high-latitude regions. These consistently indicate a warmer Northern Hemisphere and a cooler Southern Hemisphere, and hence a strong asymmetry of hemispheric climates during MIS-13. Similar anomalies of lesser extent also occurred during MIS-11 and MIS-5e. Thus, MIS-13 provides a case in which the Northern Hemisphere experienced a substantial warming under relatively low concentrations of greenhouse gases. It suggests that the global climate system possesses a natural variability that is not predictable from the simple response of northern summer insolation and atmospheric CO2 changes. During MIS-13, both hemispheres responded in different ways, leading to anomalous continental, marine, and atmospheric conditions at the global scale. The correlations also suggest that the marine δ18O record is not always a reliable indicator of northern ice-volume changes, and that the asymmetry of hemispheric climates is one of the prominent factors controlling the strength of the Asian, Indian, and African monsoon circulations, most likely through modulating the position of
Kamphuis, W.; Houttuin, K.
2007-01-01
In this report, we introduce a newly developed task environment for experimental team research: the Planning Task for Teams (PLATT). PLATT is a scenario-based, computerized, complex planning task for three-person teams. PLATT has been designed to enable experimental laboratory research on
Peterson, Steven M.; Flynn, Amanda T.; Vrabel, Joseph; Ryter, Derek W.
2015-08-12
The North Platte Natural Resources District (NPNRD) has been actively collecting data and studying groundwater resources because of concerns about the future availability of the highly inter-connected surface-water and groundwater resources. This report, prepared by the U.S. Geological Survey in cooperation with the North Platte Natural Resources District, describes a groundwater-flow model of the North Platte River valley from Bridgeport, Nebraska, extending west to 6 miles into Wyoming. The model was built to improve the understanding of the interaction of surface-water and groundwater resources, and as an optimization tool, the model is able to analyze the effects of water-management options on the simulated stream base flow of the North Platte River. The groundwater system and related sources and sinks of water were simulated using a Newton formulation of the U.S. Geological Survey modular three-dimensional groundwater model, referred to as MODFLOW–NWT, which provided an improved ability to solve nonlinear unconfined aquifer simulations with wetting and drying of cells. Using previously published aquifer-base-altitude contours in conjunction with newer test-hole and geophysical data, a new base-of-aquifer altitude map was generated because of the strong effect of the aquifer-base topography on groundwater-flow direction and magnitude. The largest inflow to groundwater is recharge originating from water leaking from canals, which is much larger than recharge originating from infiltration of precipitation. The largest component of groundwater discharge from the study area is to the North Platte River and its tributaries, with smaller amounts of discharge to evapotranspiration and groundwater withdrawals for irrigation. Recharge from infiltration of precipitation was estimated with a daily soil-water-balance model. Annual recharge from canal seepage was estimated using available records from the Bureau of Reclamation and then modified with canal
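The recharge-from-precipitation component mentioned above is conceptually a daily bucket model: precipitation fills a root-zone store, evapotranspiration drains it, and overflow becomes recharge. The sketch below is a toy under assumed parameters, not the published USGS soil-water-balance code.

```python
import numpy as np

def daily_recharge(precip, pet, capacity=150.0, s0=75.0):
    """Toy daily soil-water balance (all values in mm).

    Water fills the root-zone store up to `capacity`; actual ET removes
    water while the store lasts; anything above capacity becomes recharge.
    """
    s, recharge = s0, np.zeros_like(precip)
    for t in range(precip.size):
        s += precip[t]
        s -= min(pet[t], s)          # actual ET limited by storage
        if s > capacity:             # overflow percolates past the root zone
            recharge[t] = s - capacity
            s = capacity
    return recharge

rng = np.random.default_rng(3)
precip = rng.gamma(0.3, 10.0, 365)   # invented daily precipitation
pet = np.full(365, 3.0)              # invented daily potential ET
print("annual recharge (mm):", daily_recharge(precip, pet).sum().round(1))
```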
Social-ecological resilience and law in the Platte River Basin
Birge, Hannah E.; Allen, Craig R.; Craig, Robin; Garmestani, Ahjond S.; Hamm, Joseph A.; Babbitt, Christina; Nemec, Kristine T.; Schlager, Edella
2014-01-01
Efficiency and resistance to rapid change are hallmarks of both the judicial and legislative branches of the United States government. These defining characteristics, while bringing stability and predictability, pose challenges when it comes to managing dynamic natural systems. As our understanding of ecosystems improves, we must devise ways to account for the non-linearities and uncertainties rife in complex social-ecological systems. This paper takes an in-depth look at the Platte River basin over time to explore how the system's resilience—the capacity to absorb disturbance without losing defining structures and functions—responds to human-driven change. Beginning with pre-European settlement, the paper explores how water laws, policies, and infrastructure influenced the region's ecology and society. While much of the post-European development in the Platte River basin came at a high ecological cost to the system, the recent tri-state and federal collaborative Platte River Recovery and Implementation Program is a first step towards flexible and adaptive management of the social-ecological system. Using the Platte River basin as an example, we make the case that inherent flexibility and adaptability are vital for the next iteration of natural resources management policies affecting stressed basins. We argue that this can be accomplished by nesting policy in a resilience framework, which we describe and attempt to operationalize for use across systems and at different levels of jurisdiction. As our current natural resources policies fail under the weight of looming global change, unprecedented demand for natural resources, and shifting land use, the need for a new generation of adaptive, flexible natural resources governance emerges. Here we offer a prescription for just that, rooted in the social, ecological, and political realities of the Platte River basin.
Smith, B.D.; Abraham, J.D.; Cannia, J.C.; Minsley, B.J.; Deszcz-Pan, M.; Ball, L.B.
2010-01-01
This report is a release of digital data from a helicopter electromagnetic and magnetic survey that was conducted during June 2009 in areas of western Nebraska as part of a joint hydrologic study by the North Platte Natural Resource District (NRD), South Platte NRD, and U.S. Geological Survey (USGS). Flight lines for the survey totaled 937 line kilometers (582 line miles). The objective of the contracted survey, conducted by Fugro Airborne, Ltd., is to improve the understanding of the relation between surface-water and groundwater systems critical to developing groundwater models used in management programs for water resources. A unique aspect of the survey is the flight line layout. One set of flight lines was flown in a zig-zag pattern extending along the length of the previously collected airborne data. The success of this survey design depended on a well-understood regional hydrogeologic framework and model developed by the Cooperative Hydrologic Study of the Platte River Basin and the airborne geophysical data collected in 2008. Resistivity variations along lines could be related to this framework. In addition to these lines, more traditional surveys, consisting of parallel flight lines separated by about 400 meters, were carried out for three blocks in the North Platte NRD, the South Platte NRD, and in the area of Crescent Lakes. These surveys helped to establish the spatial variations of the resistivity of hydrostratigraphic units. An additional survey was flown over the Crescent Lake area. The objective of this survey, funded by the USGS Office of Groundwater, was to map shallow hydrogeologic features of the southwestern part of the Sand Hills that contain a mix of fresh to saline lakes.
O'Shaughnessy, Richard; Gerosa, Davide; Wysocki, Daniel
2017-07-07
The inferred parameters of the binary black hole GW151226 are consistent with nonzero spin for the most massive black hole, misaligned from the binary's orbital angular momentum. If the black holes formed through isolated binary evolution from an initially aligned binary star, this misalignment would then arise from a natal kick imparted to the first-born black hole at its birth during stellar collapse. We use simple kinematic arguments to constrain the characteristic magnitude of this kick, and find that a natal kick v_k ≳ 50 km/s must be imparted to the black hole at birth to produce misalignments consistent with GW151226. Such large natal kicks exceed those adopted by default in most of the current supernova and binary evolution models.
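The flavor of such a kinematic argument can be reproduced with a toy Monte Carlo: an isotropic kick added to the pre-kick relative orbital velocity tilts the orbital angular momentum, and small kicks cannot produce large misalignments. The orbital velocity and tilt threshold below are assumed for illustration and are not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(4)

def tilt_angles(vk_mag, v_orb=200.0, n=100_000):
    # Isotropic kick directions of fixed magnitude vk_mag (km/s).
    u = rng.normal(size=(n, 3))
    vk = vk_mag * u / np.linalg.norm(u, axis=1, keepdims=True)
    # Pre-kick geometry: separation along x, relative velocity v_orb along y,
    # so the angular momentum L points along z.  After the kick,
    # L' ~ x_hat x (v + vk) = (0, -vk_z, v_orb + vk_y).
    ly, lz = -vk[:, 2], v_orb + vk[:, 1]
    return np.degrees(np.arccos(lz / np.hypot(ly, lz)))

for vk in [10.0, 50.0, 200.0]:
    frac = np.mean(tilt_angles(vk) > 25.0)
    print(f"v_k = {vk:5.0f} km/s -> P(tilt > 25 deg) = {frac:.3f}")
```

With the assumed 200 km/s orbital velocity, kicks much smaller than the orbital speed can never tilt the plane past the threshold, which is the qualitative point of the paper's lower bound.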
Larson, D.L.; Galatowitsch, S.M.; Larson, J.L.
2011-01-01
Phragmites australis (common reed) is known to have occurred along the Platte River historically, but recent rapid increases in both distribution and density have begun to impact habitat for migrating sandhill cranes and nesting piping plovers and least terns. Invasiveness in Phragmites has been associated with the incursion of a European genotype (haplotype M) in other areas; determining the genotype of Phragmites along the central Platte River has implications for proper management of the river system. In 2008 we sampled Phragmites patches along the central Platte River from Lexington to Chapman, NE, stratified by bridge segments, to determine the current distribution of haplotype E (native) and haplotype M genotypes. In addition, we did a retrospective analysis of historical Phragmites collections from the central Platte watershed (1902-2006) at the Bessey Herbarium. Fresh tissue from the 2008 survey and dried tissue from the herbarium specimens were classified as haplotype M or E using the restriction fragment length polymorphism procedure. The European haplotype was predominant in the 2008 samples: only 14 Phragmites shoots were identified as native haplotype E; 224 were non-native haplotype M. The retrospective analysis revealed primarily native haplotype individuals. Only collections made in Lancaster County, near Lincoln, NE, were haplotype M, and the earliest of these was collected in 1973. © 2011 Center for Great Plains Studies, University of Nebraska-Lincoln.
Woodward, Brenda K.
2008-01-01
The central Platte River is a dynamic, braided, sand-bed river located near Grand Island, Nebraska. An understanding of the Platte River channel characteristics, hydrologic flow patterns, and geomorphic conditions is important for the operation and management of water resources by the City of Grand Island. The north channel of the Platte River flows within 1 mile of the municipal well field, and its surface-water flow recharges the underlying aquifer, which serves as a water source for the city. Recharge from the north channel helps minimize the flow of contaminated ground water from the north of the channel towards the well field. In recent years the river channels have experienced no-flow conditions for extended periods during the summer and fall seasons, and it has been observed that no-flow conditions in the north channel often persist after streamflow has returned to the other three channels. This potentially allows more contaminated ground water to move toward the municipal well field each year, and has caused resource managers to ask whether human disturbances or natural geomorphic change have contributed to the increased frequency of no-flow conditions in the north channel. Analyses of aerial photography, channel surveys, Light Detection and Ranging data, discharge measurements, and historical land surveys were used to understand the past and present dynamics of the four channels of the Platte River near Grand Island and to detect changes with time. Results indicate that some minor changes have occurred in the channels. Changes in bed elevation, channel location, and width were minimal when compared using historical information. Changes in discharge distribution among channels indicate that low- and no-flow conditions in the north channel may be attributed to the small changes in channel characteristics or small elevation differences, along with recent reductions in total streamflow within the Platte River near Grand Island, or to factors not measured in
Geological report on water conditions at Platt National Park, Oklahoma
Gould, Charles Newton; Schoff, Stuart Leeson
1939-01-01
Platt National Park, located in southern Oklahoma, containing 842 acres, was established by Acts of Congress in 1902, 1904, and 1906. The reason for the setting aside of this area was the presence in the area of some 30 'mineral' springs, the water from which contains sulphur, bromide, salt, and other minerals, which are believed to possess medicinal qualities. For many generations the sulphur springs of the Chickasaw Nation had been known for their reputed healing qualities. It had long been the custom for families to come from considerable distances on horseback and in wagons and camp near the springs, in order to drink the water. In course of time a primitive town, known as Sulphur Springs, grew up near a group of springs known since as Pavilion Springs at the mouth of Sulphur Creek, now known as Travertine Creek. This town was still in existence at the time of my first visit to the locality in July, 1901. At this time, in company with Joseph A. Taff, of the United States Geological Survey, I spent a week riding over the country making a preliminary survey looking toward the setting aside of the area for a National Park. After the establishment of the National Park, the old town of Sulphur Springs was abandoned, and when the present boundaries of the park had been established the present town of Sulphur, now county seat of Murray County, grew up. In July 1906, on request of Superintendent Joseph F. Swords, I visited the park and made an examination of the various springs and submitted a report, dated August 15, 1906, to Secretary of the Interior E.A. Hitchcock. Copies of this report are on file in the Regional Office and at Platt National Park. In this report I set forth the approximate amount of flow of the various springs, the character of the water in each, and the conditions of the springs as of that date. I also made certain recommendations regarding proposed improvements of each spring. In this report I say: 'In the town of Sulphur, four wells have been
Gu, Yingxin; Wylie, Bruce K.; Bliss, Norman B.
2013-01-01
This study assessed and described a relationship between satellite-derived growing season averaged Normalized Difference Vegetation Index (NDVI) and annual productivity for grasslands within the Greater Platte River Basin (GPRB) of the United States. We compared growing season averaged NDVI (GSN) with Soil Survey Geographic (SSURGO) database rangeland productivity and flux tower Gross Primary Productivity (GPP) for grassland areas. The GSN was calculated for each of nine years (2000–2008) using the 7-day composite 250-m eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. Strong correlations exist between the nine-year mean GSN (MGSN) and SSURGO annual productivity for grasslands (R2 = 0.74 for approximately 8000 pixels randomly selected from eight homogeneous regions within the GPRB; R2 = 0.96 for the 14 cluster-averaged points). Results also reveal a strong correlation between GSN and flux tower growing season averaged GPP (R2 = 0.71). Finally, we developed an empirical equation to estimate grassland productivity based on the MGSN. Spatially explicit estimates of grassland productivity over the GPRB were generated, which improved the regional consistency of SSURGO grassland productivity data and can help scientists and land managers to better understand the actual biophysical and ecological characteristics of grassland systems in the GPRB. This final estimated grassland production map can also be used as an input for biogeochemical, ecological, and climate change models.
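A minimal sketch of the kind of empirical fit described, regressing productivity on growing-season NDVI by ordinary least squares; all numbers are invented, and the paper's actual equation comes from SSURGO and flux-tower data.

```python
import numpy as np

# Invented paired samples: mean growing-season NDVI (MGSN) vs. annual
# grassland productivity (kg/ha), standing in for SSURGO-derived values.
rng = np.random.default_rng(5)
mgsn = rng.uniform(0.25, 0.65, 200)
prod = 5500 * mgsn - 600 + rng.normal(0, 150, mgsn.size)

# Ordinary least squares: productivity ~ a * MGSN + b.
a, b = np.polyfit(mgsn, prod, 1)
pred = a * mgsn + b
r2 = 1 - np.sum((prod - pred) ** 2) / np.sum((prod - prod.mean()) ** 2)
print(f"productivity ~ {a:.0f} * MGSN + {b:.0f},  R^2 = {r2:.2f}")
```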
Wang, Weiran; Qiao, Yu; Pan, Wenshi; Yao, Meng
2015-01-01
Many Asian colobine monkey species are suffering from habitat destruction and population size decline. There is a great need to understand their genetic diversity, population structure, and demographic history for effective species conservation. The white-headed langur (Trachypithecus leucocephalus) is a Critically Endangered colobine species endemic to the limestone karst forests in southwestern China. We analyzed the mitochondrial DNA (mtDNA) control region sequences of 390 fecal samples from 40 social groups across the main distribution areas, which represented one-third of the total extant population. Only nine haplotypes and 10 polymorphic sites were identified, indicating remarkably low genetic diversity in the species. Using a subset of 77 samples from different individuals, we evaluated genetic variation, population structure, and population demographic history. We found very low values of haplotype diversity (h = 0.570 ± 0.056) and nucleotide diversity (π = 0.00323 ± 0.00044) in the hypervariable region I (HVRI) of the mtDNA control region. The distribution of haplotypes displayed a marked geographical pattern, with one population (Chongzuo, CZ) showing a complete lack of genetic diversity (having only one haplotype), whereas the other population (Fusui, FS) had all nine haplotypes. We detected strong population genetic structure among habitat patches (ΦST = 0.375, P < 0.001), and demographic analyses suggested a historically stable population size with modest population expansion in the last 2,000 years. Our results indicate different genetic diversity and possibly distinct population histories for different local populations, and suggest that CZ and FS should be considered as one evolutionarily significant unit (ESU) and two management units (MUs) pending further investigation using nuclear markers.
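The two diversity statistics reported, haplotype diversity h and nucleotide diversity π, have standard estimators (Nei 1987): h = n/(n-1) (1 - Σ p_i²) over haplotype frequencies p_i, and π is the mean pairwise difference per site. A sketch with an invented toy alignment:

```python
import numpy as np
from itertools import combinations

# Invented aligned control-region sequences, one per sampled individual.
seqs = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC", "ACGTTCGTAC",
        "ACGTACGTAC", "ACGTTCGTAC"]
n, L = len(seqs), len(seqs[0])

# Haplotype diversity: h = n/(n-1) * (1 - sum_i p_i^2).
_, counts = np.unique(seqs, return_counts=True)
p = counts / n
h = n / (n - 1) * (1 - np.sum(p ** 2))

# Nucleotide diversity: mean pairwise differences per site.
pi = np.mean([sum(a != b for a, b in zip(s1, s2)) / L
              for s1, s2 in combinations(seqs, 2)])

print(f"h = {h:.3f}, pi = {pi:.4f}")
```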
Multimodel inference and adaptive management
Rehme, S.E.; Powell, L.A.; Allen, Craig R.
2011-01-01
Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
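In this literature, multimodel inference typically means Akaike weights, w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2) with Δ_i = AIC_i - min AIC; a flat weight profile is exactly the "weak inference" the review describes. A minimal sketch with invented AIC values:

```python
import numpy as np

def akaike_weights(aic):
    # w_i = exp(-Delta_i/2) / sum_j exp(-Delta_j/2), Delta_i = AIC_i - min AIC
    delta = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

strong = akaike_weights([210.0, 224.5, 231.2])   # one model dominates
weak = akaike_weights([210.0, 210.8, 211.5])     # models nearly tied
print("strong inference:", strong.round(3))      # weight piles on one model
print("weak inference:  ", weak.round(3))        # weights spread out
```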
Farnsworth, Jason M; Baasch, David M; Smith, Chadwin B; Werbylo, Kevin L
2017-05-01
Investigations of the breeding ecology of the interior least tern (Sternula antillarum athalassos) and piping plover (Charadrius melodus) in the Platte River basin in Nebraska, USA, have embraced the idea that these species are physiologically adapted to begin nesting concurrent with the cessation of spring floods. Low use and productivity on contemporary Platte River sandbars have been attributed to anthropogenically driven changes in basin hydrology and channel morphology or to unusually late annual runoff events. We examined distributions of least tern and piping plover nest initiation dates in relation to the hydrology of the historical central Platte River (CPR) and contemporary CPR and lower Platte River (LPR). We also developed an emergent sandbar habitat model to evaluate the potential for reproductive success given observed hydrology, stage-discharge relationships, and sandbar height distributions. We found the timing of the late-spring rise to be spatially and temporally consistent, typically occurring in mid-June. However, piping plover nest initiation peaks in May and least tern nest initiation peaks in early June, both of which occur before the late-spring rise. In neither case does there appear to be an adaptation to begin nesting concurrent with the cessation of spring floods. As a consequence, there are many years when no successful reproduction is possible because emergent sandbar habitat is inundated after most nests have been initiated, and there is little potential for successful renesting. The frequency of nest inundation, in turn, severely limits the potential for maintenance of stable species subpopulations on Platte River sandbars. Why then did these species expand into and persist in a basin where the hydrology is not ideally suited to their reproductive ecology? We hypothesize that the availability and use of alternative off-channel nesting habitats, like sandpits, may allow for the maintenance of stable species subpopulations in the Platte River
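The emergent sandbar habitat model described above can be caricatured as an inundation check: a nest succeeds only if river stage stays below its sandbar's crest through the nesting window. Everything numeric below is invented; the paper's model uses observed hydrology and stage-discharge relationships.

```python
import numpy as np

rng = np.random.default_rng(6)

def nesting_success(stage, bar_heights, init_day=135, fledge_days=50):
    """Fraction of nests that stay emergent through a nesting window.

    stage:       daily river stage (m) for one season
    bar_heights: sandbar crest heights (m, same datum), one nest each
    Nests initiated around mid-May (day ~135) fail if the stage tops
    the sandbar at any point before chicks fledge.
    """
    window = stage[init_day:init_day + fledge_days]
    return np.mean(bar_heights > window.max())

# Invented hydrograph with a mid-June rise, matching the abstract's timing.
days = np.arange(365)
stage = (1.0 + 0.6 * np.exp(-0.5 * ((days - 165) / 12.0) ** 2)
         + rng.normal(0, 0.05, 365))
bars = rng.uniform(1.0, 1.8, 100)    # invented sandbar height distribution
print("fraction of nests surviving:", nesting_success(stage, bars))
```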
Groundwater Quality and Nitrogen Use Efficiency in Nebraska's Central Platte River Valley.
Ferguson, Richard B
2015-03-01
Groundwater nitrate contamination has been an issue in the Platte River Valley of Nebraska since the 1960s, with groundwater nitrate-N concentrations frequently in excess of 10 mg L−1. This article summarizes education and regulatory efforts to reduce the environmental impact of irrigated crop production in the Platte River Valley. In 1988, a Groundwater Management Area (GWMA) was implemented in the Central Platte Natural Resources District to encourage adoption of improved management practices. Since 1988, there have been steady declines in average groundwater nitrate-N concentrations of about 0.15 mg NO3-N L−1 yr−1 in much of the GWMA (from 19 to 15 mg NO3-N L−1). However, N use efficiency (NUE) (partial factor productivity for N [PFP]) has increased very little from 1988 to 2012 (60-65 kg grain kg−1 N), whereas statewide PFP increased from 49 to 67 kg grain kg−1 N in the same period. Although growers are encouraged to credit N from sources besides fertilizer (e.g., soil residual, legumes, irrigation water, and manure), confidence in and use of credits tended to decrease as credits became larger; there was a tendency toward an average N rate regardless of credit-based recommendations. This information, coupled with data from other studies, suggests that much of the decline in groundwater nitrate can be attributed to improved irrigation management, especially conversion from furrow to sprinkler irrigation, and to a lesser extent to improved timing of N application. The development and adoption of improved N management practices, such as fertigation, controlled-release N formulations, and use of crop canopy sensors for in-season N application, may be required for further significant NUE gains in these irrigated systems. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
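Partial factor productivity for N, the NUE measure quoted above, is simply grain yield divided by fertilizer N applied; a short computation with invented field values makes the units explicit.

```python
# Partial factor productivity for nitrogen: kg grain per kg fertilizer N.
grain_yield_kg_ha = 12000.0   # invented maize yield
n_applied_kg_ha = 190.0       # invented fertilizer N rate
pfp = grain_yield_kg_ha / n_applied_kg_ha
print(f"PFP = {pfp:.0f} kg grain per kg N")   # ~63, in the 60-65 range cited
```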
Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.
1995-01-01
The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is
Mapping grasslands suitable for cellulosic biofuels in the Greater Platte River Basin, United States
Wylie, Bruce K.; Gu, Yingxin
2012-01-01
Biofuels are an important component in the development of alternative energy supplies, which is needed to achieve national energy independence and security in the United States. The most common biofuel product today in the United States is corn-based ethanol; however, its development is limited because of concerns about global food shortages, livestock and food price increases, and water demand increases for irrigation and ethanol production. Corn-based ethanol also potentially contributes to soil erosion, and pesticides and fertilizers affect water quality. Studies indicate that future potential production of cellulosic ethanol is likely to be much greater than grain- or starch-based ethanol. As a result, economics and policy incentives could, in the near future, encourage expansion of cellulosic biofuels production from grasses, forest woody biomass, and agricultural and municipal wastes. If production expands, cultivation of cellulosic feedstock crops, such as switchgrass (Panicum virgatum L.) and miscanthus (Miscanthus species), is expected to increase dramatically. The main objective of this study is to identify grasslands in the Great Plains that are potentially suitable for cellulosic feedstock (such as switchgrass) production. Producing ethanol from noncropland holdings (such as grassland) will minimize the effects of biofuel developments on global food supplies. Our pilot study area is the Greater Platte River Basin, which includes a broad range of plant productivity from semiarid grasslands in the west to the fertile corn belt in the east. The Greater Platte River Basin was the subject of related U.S. Geological Survey (USGS) integrated research projects.
Sprague, Lori A.
2002-01-01
reservoirs acted as a sink for both nitrogen and phosphorus; the percentage of the total mass (initial storage plus inflows) trapped in the reservoirs during the study period ranged from 49 to 88 percent for nitrogen and from 20 to 86 percent for phosphorus. The nutrient loading, morphology, and operation of the five reservoirs differed, however, leading to several important differences in nutrient dynamics among the reservoirs. Mean nutrient concentrations during the study period decreased in a downstream direction from Riverside Reservoir to Julesburg Reservoir because concentrations in the source water, the South Platte River, decreased downstream as a result of increased distance from wastewater loading upstream from Kersey, Colorado, and the replacement of diverted river water with more dilute ground-water return flow. North Sterling was an exception to this decrease; the strong stratification and resulting anoxia that developed in the reservoir led to nutrient release from the bottom sediments that offset the decrease in external nutrient loading. Variations in nutrient loading also contributed to differences in the nutrient limiting algal growth in the reservoirs, as indicated by mass nitrogen:phosphorus ratios. In Riverside and Jackson Reservoirs, nitrogen became the potential limiting nutrient by midsummer as biological activity depleted the available supply of nitrogen while the high initial phosphorus load was recycled. Prewitt, North Sterling, and Julesburg Reservoirs, with lower initial loadings of phosphorus, were phosphorus-limited throughout the study period, with additional colimitation of nitrogen as biological uptake reduced nitrogen concentrations to near or below laboratory detection limits. The percentage of the total nitrogen and phosphorus mass lost through outflow and trapped in the reservoir due to processes such as biological uptake and sedimentation varied between reservoirs. Generally, reservoirs with short residence times such as North Ste
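Two quantities in this abstract are simple mass-balance arithmetic: the trapped percentage of the total nutrient mass, and the mass N:P ratio used to judge the limiting nutrient. The sketch below uses invented masses and concentrations, and the ~7:1 mass-ratio threshold is a common rule of thumb, not a value from the report.

```python
def trapped_percent(initial_storage, inflow, outflow):
    # Share of the total mass (initial storage + inflows) retained in the
    # reservoir rather than lost through the outflow.
    total = initial_storage + inflow
    return 100.0 * (total - outflow) / total

n_trapped = trapped_percent(initial_storage=40.0, inflow=160.0, outflow=55.0)
print(f"nitrogen trapped: {n_trapped:.0f}%")

n_conc, p_conc = 2.1, 0.4          # invented mg/L in the water column
ratio = n_conc / p_conc
limiting = "N" if ratio < 7.0 else "P"
print(f"mass N:P = {ratio:.1f} -> potentially {limiting}-limited")
```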
Channel and island change in the lower Platte River, Eastern Nebraska, USA: 1855-2005
Joeckel, R. M.; Henebry, G. M.
2008-12-01
The lower Platte River has undergone considerable change in channel and bar characteristics since the mid-1850s in four 20-25 km-long study stretches. The same net effect of historical channel shrinkage that was detected upstream from Grand Island, Nebraska, can also be detected in the lower river, but differences in the behaviors of study stretches upstream and downstream from major tributaries are striking. The least relative decrease occurred downstream from the Loup River confluence, and the stretch downstream from the Elkhorn River confluence actually showed an increase in channel area during the 1940s. Bank erosion was also greater downstream of the tributaries between ca. 1860 and 1938/1941, particularly in stretch RG, which showed more lateral migration. The cumulative island area and the ratio of island area to channel area relative to the 1938/1941 baseline data showed comparatively great fluctuations in median island size in both downstream stretches. The erratic behavior of island size distributions over time indicates that large islands were accreted to the banks at different times, and that some small, newly stabilized islands were episodically "flushed" out of the system. In the upstream stretches, the stabilization of mobile bars to create new, small islands had a more consistent impact over time. Decreases in channel area through the abandonment of large, long-lived anabranches and through in-place narrowing resulting from island accretion were more prominent in these upstream stretches. Across all of the study area, channel area appears to be stabilizing gradually as the rate of decrease lessens. This trend began earliest in stretch RG in the late 1950s and was accompanied by shifts in the size distributions of stabilized islands in that stretch into the 1960s. Elsewhere, even in the easternmost study stretch, stabilization was occurring by the late 1960s, the same time frame documented by investigations of the Platte system upstream of the study area. Comprehensive
A Concept for a Long Term Hydrologic Observatory in the South Platte River Basin
Ramirez, J. A.
2004-12-01
The intersection between: (1) the Rocky Mountains and developments occurring in high altitude fragile environments; (2) the metropolitan areas emerging at the interface of the mountains and the plains; (3) the irrigation occurring along rivers as they break from the mountains and snake across the Great Plains; and (4) the grasslands and the dryland farming that covers the vast amount of the Great Plains, represents a dynamic, complex, highly integrated ecosystem, stretching from Montana and North Dakota to New Mexico and Texas. This swath of land, and the rivers that cross it (headwaters of the Missouri, the Yellowstone, the North Platte, the South Platte, the Arkansas, the Cimarron, the Red, and the Pecos Rivers), represent a significant percentage of the landmass of the United States. Within this large area, besides tremendous increases in population in metropolitan areas, there are new energy developments, old hard rock mining concerns, new recreation developments, irrigation farms selling water to meet urban demands, new in-stream flow programs, struggling rural areas, and continued "mining" of ground water. The corresponding impacts are creating endangered and threatened species conflicts which require new knowledge to fully understand the measures needed to mitigate harmful ecosystem conditions. Within the Rocky Mountain/Great Plains interface, water is limiting and land is plentiful, presenting natural resource managers with a number of unique problems which demand a scale of integrated science not achieved in the past. For example, water is imported into a number of the streams flowing east from the Rocky Mountains. Nitrogen is deposited in pristine watersheds that rise up high in the Rocky Mountains. Cities capture spring runoff in reservoirs to use at a steady rate over the entire year, putting water into river systems normally moving low flows in the winter. Irrigation of both urban landscapes and farm fields may be at a scale that impacts climate
Schaepe, Nathaniel J.; Soenksen, Philip J.; Rus, David L.
2014-01-01
The lower Platte River, Nebraska, provides drinking water, irrigation water, and in-stream flows for recreation, wildlife habitat, and vital habitats for several threatened and endangered species. The U.S. Geological Survey (USGS), in cooperation with the Lower Platte River Corridor Alliance (LPRCA) developed site-specific regression models for water-quality constituents at four sites (Shell Creek near Columbus, Nebraska [USGS site 06795500]; Elkhorn River at Waterloo, Nebr. [USGS site 06800500]; Salt Creek near Ashland, Nebr. [USGS site 06805000]; and Platte River at Louisville, Nebr. [USGS site 06805500]) in the lower Platte River corridor. The models were developed by relating continuously monitored water-quality properties (surrogate measurements) to discrete water-quality samples. These models enable existing web-based software to provide near-real-time estimates of stream-specific constituent concentrations to support natural resources management decisions. Since 2007, USGS, in cooperation with the LPRCA, has continuously monitored four water-quality properties seasonally within the lower Platte River corridor: specific conductance, water temperature, dissolved oxygen, and turbidity. During 2007 through 2011, the USGS and the Nebraska Department of Environmental Quality collected and analyzed discrete water-quality samples for nutrients, major ions, pesticides, suspended sediment, and bacteria. These datasets were used to develop the regression models. This report documents the collection of these various water-quality datasets and the development of the site-specific regression models. Regression models were developed for all four monitored sites. Constituent models for Shell Creek included nitrate plus nitrite, total phosphorus, orthophosphate, atrazine, acetochlor, suspended sediment, and Escherichia coli (E. coli) bacteria. Regression models that were developed for the Elkhorn River included nitrate plus nitrite, total Kjeldahl nitrogen, total phosphorus
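A minimal sketch of the surrogate-regression idea: regress the log of a discretely sampled constituent on continuously monitored properties, so the monitors can then report near-real-time concentration estimates. The predictors, coefficients, and data below are invented; the report documents the actual site-specific models.

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented paired observations at one site: continuous monitor readings
# at the moments discrete samples were collected.
n = 60
turbidity = rng.lognormal(3.0, 0.8, n)      # FNU
spec_cond = rng.normal(700.0, 80.0, n)      # uS/cm
log_ecoli = (0.9 * np.log(turbidity) - 0.002 * spec_cond + 3.0
             + rng.normal(0, 0.3, n))       # log(CFU/100 mL)

# Fit log(E. coli) ~ b0 + b1*log(turbidity) + b2*spec_cond by least squares.
X = np.column_stack([np.ones(n), np.log(turbidity), spec_cond])
beta, *_ = np.linalg.lstsq(X, log_ecoli, rcond=None)

# Near-real-time estimate from a new pair of monitor readings.
new = np.array([1.0, np.log(45.0), 650.0])
print("estimated E. coli (CFU/100 mL):", round(float(np.exp(new @ beta)), 0))
```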
Alexander, Jason S.; Schultze, Devin M.; Zelt, Ronald B.
2013-01-01
The lower Platte River corridor provides important habitats for two State- and federally listed bird species: the interior least tern (terns; Sternula antillarum athallassos) and the piping plover (plovers; Charadrius melodus). However, many of the natural morphological and hydrological characteristics of the Platte River have been altered substantially by water development, channelization, hydropower operations, and invasive vegetation encroachment, which have decreased the abundance of high-quality nesting and foraging habitat for terns and plovers. The lower Platte River (LPR), defined as 103 miles (mi) of the Platte River between its confluence with the Loup River and its confluence with the Missouri River, has narrowed since the late-19th and early-20th centuries, yet it partially retains many geomorphologic and hydrologic characteristics important to terns and plovers. These birds nest on the sandbars in the river and along shorelines at sand- and gravel-pit lakes in the adjacent valley. The need to balance continued economic, infrastructure, and resource development with the conservation of important physical and aquatic habitat resources requires increased understanding of the physical and biological dynamics of the lower Platte River. Spatially and temporally rich datasets for emergent sandbar habitats are necessary to quantify emergent sandbar dynamics relative to hypothesized controls and stressors. In cooperation with the Lower Platte South Natural Resources District, the U.S. Geological Survey initiated a pilot study of emergent sandbar dynamics along a 22-mi segment of the LPR downstream from its confluence with Salt Creek, near Ashland, Nebraska. The purposes of the study were to: (1) develop methods to rapidly assess sandbar geometries and locations in a wide, sand-bed river, and (2) apply and validate the method to assess emergent sandbar dynamics over three seasons in 2011. An examination of the height of sandbars relative to the local stage of
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Gu, Yingxin; Wylie, Bruce K.; Zhang, Li; Gilmanov, Tagir G.
2012-01-01
This study evaluates the carbon fluxes and trends and examines the environmental sustainability (e.g., carbon budget, source or sink) of the potential biofuel feedstock sites identified in the Greater Platte River Basin (GPRB). A 9-year (2000–2008) time series of net ecosystem production (NEP), a measure of net carbon absorption or emission by ecosystems, was used to assess the historical trends and budgets of carbon flux for grasslands in the GPRB. The spatially averaged annual NEP (ANEP) for grassland areas that are possibly suitable for biofuel expansion (productive grasslands) was 71–169 g C m−2 year−1 during 2000–2008, indicating a carbon sink (more carbon is absorbed than released) in these areas. The spatially averaged ANEP for areas not suitable for biofuel feedstock development (less productive or degraded grasslands) was −47 to 69 g C m−2 year−1 during 2000–2008, showing a weak carbon source or a weak carbon sink (carbon emitted is nearly equal to carbon absorbed). The 9-year pre-harvest cumulative ANEP was 1166 g C m−2 for the suitable areas (a strong carbon sink) and 200 g C m−2 for the non-suitable areas (a weak carbon sink). Results demonstrate and confirm that our method of dynamic modeling of ecosystem performance can successfully identify areas desirable and sustainable for future biofuel feedstock development. This study provides useful information for land managers and decision makers to make optimal land use decisions regarding biofuel feedstock development and sustainability.
Radionuclide observables for the Platte underground nuclear explosive test on 14 April 1962
Energy Technology Data Exchange (ETDEWEB)
Burnett, Jonathan L.; Milbrath, Brian D.
2016-11-01
Past nuclear weapons tests provide invaluable information for understanding the radionuclide observables and data quality objectives expected during an On-site Inspection (OSI) for the Comprehensive Nuclear-Test-Ban Treaty (CTBT). These radioactive signatures are complex and subject to spatial and temporal variability. The Platte Underground Nuclear Test on 14 April 1962 provides extensive environmental monitoring data that can be modelled and used to assess an OSI. The 1.6 kT test is especially useful as it released the highest amounts of recorded activity during Operation Nougat at the Nevada Test Site – now known as the Nevada National Security Site (NNSS). It has been estimated that 0.36% of the activity was released, and dispersed in a northerly direction. The deposition ranged from 1 × 10⁻¹¹ to 1 × 10⁻⁹ of the atmospheric release (per m²), and has been used to evaluate a hypothetical OSI at 1 week to 2 years post-detonation. Radioactive decay reduces the activity of the 17 OSI relevant radionuclides by 99.7%, such that detection throughout the inspection is only achievable close to the explosion where deposition was highest.
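The 99.7% figure quoted above is the outcome of per-nuclide exponential decay summed over the inventory. A minimal sketch of that arithmetic follows; the nuclide list and initial-activity weights are hypothetical, not the study's source term.

```python
# Sketch of the decay arithmetic behind the statement above: each nuclide's
# activity falls as A(t) = A0 * exp(-ln(2) * t / t_half). The nuclides and
# initial-activity fractions below are illustrative assumptions.
import math

half_lives_days = {"I-131": 8.02, "Ba-140": 12.75, "Zr-95": 64.0,
                   "Cs-137": 30.1 * 365.25}
initial_fractions = {"I-131": 0.4, "Ba-140": 0.3, "Zr-95": 0.2, "Cs-137": 0.1}

def remaining_fraction(t_days):
    return sum(f * math.exp(-math.log(2) * t_days / half_lives_days[n])
               for n, f in initial_fractions.items())

for t in (7, 365, 730):  # 1 week to 2 years post-detonation
    print(f"t = {t:4d} d: {remaining_fraction(t):.4f} of initial activity remains")
```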
Morris, D.A.; Babcock, H.M.; Langford, R.H.
1960-01-01
Platte County, Wyo., has an area of 2,114 square miles and, in 1950, had a population of 7,925; it lies within parts of two major physiographic provinces, the northern extension of the Southern Rocky Mountains and the northwestern part of the Great Plains. The Laramie Range and related structures lie along the western margin of the county and constitute the eastern limit of the Rocky Mountain Front Range. The High Plains section of the Great Plains province extends eastward from the Laramie Range over the remainder of the county. The original surface of the High Plains has been deeply eroded, and in the northeastern part of the county it is broken by the broad uplifted structural platform of the Hartville Hills. The North Platte River and its tributaries have entrenched their channels as much as 1,000 feet into the plains, leaving wide, very flat intervalley areas that are interrupted by a few isolated buttes and outlying ridges. Well-defined terraces, locally called the Wheatland Flats, have been formed in central Platte County. The climate is semiarid, the average annual precipitation being about 15 inches. Farming and stockraising are the principal occupations in the county. Most of the rocks exposed in the county are of Tertiary and Quaternary age, although rocks as old as Precambrian crop out locally. The Arikaree and Brule formations and younger deposits, including Tertiary(?) deposits (undifferentiated) and terrace, flood-plain, and other alluvial deposits, underlie more than two-thirds of the county. Mesozoic, Paleozoic, and Precambrian rocks crop out in the other third and underlie the younger rocks at great depths elsewhere. Small supplies of ground water adequate for domestic and stock use can be obtained from shallow wells in the Casper, Hartville, Cloverly, Brule, and Arikaree formations and in the terrace and flood-plain deposits. Small to moderate amounts of ground water can be obtained from the 'Converse sand' of the Hartville formation. Several
Smith, Bruce D.; Abraham, Jared D.; Cannia, James C.; Hill, Patricia
2009-01-01
This report is a release of digital data from a helicopter electromagnetic and magnetic survey that was conducted during June 2008 in areas of western Nebraska as part of a joint hydrologic study by the North Platte Natural Resource District, South Platte Natural Resource District, and U.S. Geological Survey. The objective of the contracted survey, conducted by Fugro Airborne, Ltd., was to improve the understanding of the relationship between surface water and groundwater systems critical to developing groundwater models used in management programs for water resources. The survey covered 1,375 line km (854 line mi). A unique aspect of this survey is the flight line layout. One set of flight lines was flown paralleling each side of the east-west trending North Platte River and Lodgepole Creek. The survey also included widely separated (10 km) perpendicular north-south lines. The success of this survey design depended on a well-understood regional hydrogeologic framework and model developed by the Cooperative Hydrologic Study of the Platte River Basin. Resistivity variations along lines could be related to this framework. In addition to these lines, more traditional surveys consisting of parallel flight lines separated by about 270 m were carried out for one block in each of the drainages. These surveys helped to establish the spatial variations of the resistivity of hydrostratigraphic units. The electromagnetic equipment consisted of six different coil-pair orientations that measured resistivity at separate frequencies from about 400 Hz to about 140,000 Hz. The electromagnetic data along flight lines were converted to electrical resistivity. The resulting line data were converted to geo-referenced grids and maps which are included with this report. In addition to the electromagnetic data, total field magnetic data and digital elevation data were collected. Data released in this report consist of data along flight lines, digital grids, and digital maps of the
Qi, Xin-Shuai; Yuan, Na; Comes, Hans Peter; Sakaguchi, Shota; Qiu, Ying-Xiong
2014-03-04
In East Asia, an increasing number of studies on temperate forest tree species find evidence for migration and gene exchange across the East China Sea (ECS) land bridge up until the last glacial maximum (LGM). However, it is less clear when and how lineages diverged in this region, whether in full isolation or in the face of post-divergence gene flow. Here, we investigate the effects of Quaternary changes in climate and sea level on the evolutionary and demographic history of Platycrater arguta, a rare temperate understorey shrub with disjunct distributions in East China (var. sinensis) and South Japan (var. arguta). Molecular data were obtained from 14 P. arguta populations to infer current patterns of molecular structure and diversity in relation to past (Last Interglacial and Last Glacial Maximum) and present distributions based on ecological niche modelling (ENM). A coalescent-based isolation-with-migration (IM) model was used to estimate lineage divergence times and population demographic parameters. Combining information from nuclear/chloroplast sequence data with nuclear microsatellites, our IM analyses identify the two varieties as genetically distinct units that evolved in strict allopatry since the mid-Pleistocene, c. 0.89 (0.51-1.2) Ma. Together with Bayesian Skyline Plots, our data further suggest that both lineages experienced post-divergence demographic growth, followed by refugial isolation, divergence, and in the case of var. arguta post-glacial admixture. However, past species distribution modelling indicates that the species' overall distribution has not greatly changed over the last glacial cycles. Our findings highlight the important influence of ancient sea-level changes on the diversification of East Asia's temperate flora. Implicitly, they challenge the notion of general temperate forest expansion across the ECS land bridge, demonstrating instead its 'filter' effect owing to an unsuitable environment for certain species and their biological
The annual Sandhill crane (Grus canadensis) migration through Nebraska is thought to be a major source of fecal pollution to the Platte River, but of unknown human health risk. To better understand potential risks, the presence of Campylobacter species and fecal bacteria were exa...
2015-03-01
Rehabilitation Program, Lower Platte South Natural Resources District, Antelope Creek, Lincoln, Lancaster County, Nebraska. March 2015. In accordance with the National Environmental
2015-03-01
Rehabilitation Program, Lower Platte South Natural Resources District, Salt Creek, Lincoln, Lancaster County, Nebraska. March 2015. In accordance with the National Environmental Policy Act and
Howard, Daniel M.; Wylie, Bruce K.; Tieszen, Larry L.
2012-01-01
With an ever-expanding population, potential climate variability and an increasing demand for agriculture-based alternative fuels, accurate agricultural land-cover classification for specific crops and their spatial distributions are becoming critical to researchers, policymakers, land managers and farmers. It is important to ensure the sustainability of these and other land uses and to quantify the net impacts that certain management practices have on the environment. Although other quality crop classification products are often available, temporal and spatial coverage gaps can create complications for certain regional or time-specific applications. Our goal was to develop a model capable of classifying major crops in the Greater Platte River Basin (GPRB) for the post-2000 era to supplement existing crop classification products. This study identifies annual spatial distributions and area totals of corn, soybeans, wheat and other crops across the GPRB from 2000 to 2009. We developed a regression tree classification model based on 2.5 million training data points derived from the National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL) in relation to a variety of other relevant input environmental variables. The primary input variables included the weekly 250 m US Geological Survey Earth Observing System Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index, average long-term growing season temperature, average long-term growing season precipitation and yearly start of growing season. An overall model accuracy rating of 78% was achieved for a test sample of roughly 215,000 independent points that were withheld from model training. Ten 250 m resolution annual crop classification maps were produced and evaluated for the GPRB region, one for each year from 2000 to 2009. In addition to the model accuracy assessment, our validation focused on spatial distribution and county-level crop area totals in comparison with the
Density estimation in tiger populations: combining information for strong inference
Gopalaswamy, Arjun M.; Royle, J. Andrew; Delampady, Mohan; Nichols, James D.; Karanth, K. Ullas; Macdonald, David W.
2012-01-01
A productive way forward in studies of animal populations is to efficiently make use of all the information available, either as raw data or as published sources, on critical parameters of interest. In this study, we demonstrate two approaches to the use of multiple sources of information on a parameter of fundamental interest to ecologists: animal density. The first approach produces estimates simultaneously from two different sources of data. The second approach was developed for situations in which initial data collection and analysis are followed up by subsequent data collection and prior knowledge is updated with new data using a stepwise process. Both approaches are used to estimate density of a rare and elusive predator, the tiger, by combining photographic and fecal DNA spatial capture–recapture data. The model, which combined information, provided the most precise estimate of density (8.5 ± 1.95 tigers/100 km² [posterior mean ± SD]) relative to a model that utilized only one data source (photographic, 12.02 ± 3.02 tigers/100 km² and fecal DNA, 6.65 ± 2.37 tigers/100 km²). Our study demonstrates that, by accounting for multiple sources of available information, estimates of animal density can be significantly improved.
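The precision gain reported above can be illustrated, though not reproduced exactly, by inverse-variance pooling of the two single-source estimates; the paper itself fits a joint Bayesian spatial capture-recapture model rather than pooling posterior summaries.

```python
# Illustration of why combining data sources tightens the density estimate.
# Inverse-variance pooling is a simplification of the paper's joint model,
# applied to the single-source posterior summaries quoted in the abstract.
photo = (12.02, 3.02)   # (mean, SD), tigers/100 km^2, photographic data
scat = (6.65, 2.37)     # fecal DNA data

w_photo, w_scat = 1 / photo[1] ** 2, 1 / scat[1] ** 2
mean = (w_photo * photo[0] + w_scat * scat[0]) / (w_photo + w_scat)
sd = (w_photo + w_scat) ** -0.5

print(f"pooled estimate: {mean:.2f} +/- {sd:.2f} tigers/100 km^2")
# ~8.7 +/- 1.9, close to the joint model's reported 8.5 +/- 1.95
```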
DEFF Research Database (Denmark)
Andersen, Jesper
2009-01-01
Collateral evolution is the problem of updating several library-using programs in response to API changes in the used library. In this dissertation we address the issue of understanding collateral evolutions by automatically inferring a high-level specification of the changes evident in a given set...... specifications inferred by spdiff in Linux are shown. We find that the inferred specifications concisely capture the actual collateral evolution performed in the examples....
Burton, Bethany L.; Johnson, Michaela R.; Vrabel, Joseph; Imig, Brian H.; Payne, Jason; Tompkins, Ryan E.
2009-01-01
Because water resources in portions of the North Platte River basin have been designated as over-appropriated by the State of Nebraska Department of Natural Resources (DNR), the North Platte Natural Resources District (NPNRD), in cooperation with the DNR, is developing an Integrated Management Plan (IMP) for groundwater and surface water in the NPNRD. As part of the IMP, a three-dimensional numerical finite difference groundwater-flow model is being developed to evaluate the effectiveness of using leakage of water from selected irrigation canal systems to manage groundwater recharge. To determine the relative leakage potential of the upper 8 m of the selected irrigation canals within the North Platte River valley in western Nebraska and eastern Wyoming, the U.S. Geological Survey performed a land-based capacitively coupled (CC) resistivity survey along nearly 630 km of 13 canals and 2 laterals in 2004 and from 2007 to 2009. These 13 canals were selected from the 27 irrigation canals in the North Platte valley due to their location, size, irrigated area, and relation to the active North Platte valley flood plain and related paleochannels and terrace deposits where most of the saturated thickness in the alluvium exists. The resistivity data were then compared to continuous cores at 62 test holes down to a maximum depth of 8 m. Borehole electrical conductivity (EC) measurements at 36 of those test holes were made to correlate resistivity values with grain sizes in order to determine potential vertical leakage along the canals as recharge to the underlying alluvial aquifer. The data acquired in 2004, as well as the 25 test hole cores from 2004, are presented elsewhere. These data were reprocessed using the same updated processing and inversion algorithms used on the 2007 through 2009 datasets, providing a consistent and complete dataset for all collection periods. Thirty-seven test hole cores and borehole electrical conductivity measurements were acquired based on the 2008
Energy Technology Data Exchange (ETDEWEB)
Petrov, S.
1996-10-01
Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
Bayesian statistical inference
Directory of Open Access Journals (Sweden)
Bruno De Finetti
2017-04-01
Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian statistical inference is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.
Geometric statistical inference
International Nuclear Information System (INIS)
Periwal, Vipul
1999-01-01
A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined
Bailer-Jones, Coryn A. L.
2017-04-01
Preface; 1. Probability basics; 2. Estimation and uncertainty; 3. Statistical models and inference; 4. Linear models, least squares, and maximum likelihood; 5. Parameter estimation: single parameter; 6. Parameter estimation: multiple parameters; 7. Approximating distributions; 8. Monte Carlo methods for inference; 9. Parameter estimation: Markov chain Monte Carlo; 10. Frequentist hypothesis testing; 11. Model comparison; 12. Dealing with more complicated problems; References; Index.
Nagao, Makoto
1990-01-01
Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in the artificial intellig
Logical inference and evaluation
International Nuclear Information System (INIS)
Perey, F.G.
1981-01-01
Most methodologies of evaluation currently used are based upon the theory of statistical inference. It is generally perceived that this theory is not capable of dealing satisfactorily with what are called systematic errors. Theories of logical inference should be capable of treating all of the information available, including that not involving frequency data. A theory of logical inference is presented as an extension of deductive logic via the concept of plausibility and the application of group theory. Some conclusions, based upon the application of this theory to evaluation of data, are also given
Probability and Statistical Inference
Prosper, Harrison B.
2006-01-01
These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.
Introductory statistical inference
Mukhopadhyay, Nitis
2014-01-01
This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques. Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist
Gu, Yingxin; Wylie, Bruce K.; Howard, Daniel M.; Phuyal, Khem P.; Ji, Lei
2013-01-01
In this study, we developed a new approach that adjusted normalized difference vegetation index (NDVI) pixel values that were near saturation to better characterize the cropland performance (CP) in the Greater Platte River Basin (GPRB), USA. The relationship between NDVI and the ratio vegetation index (RVI) at high NDVI values was investigated, and an empirical equation for estimating saturation-adjusted NDVI (NDVIsat_adjust) based on RVI was developed. A 10-year (2000–2009) NDVIsat_adjust data set was developed using 250-m 7-day composite historical eMODIS (expedited Moderate Resolution Imaging Spectroradiometer) NDVI data. The growing season averaged NDVI (GSN), which is a proxy for ecosystem performance, was estimated and long-term NDVI non-saturation- and saturation-adjusted cropland performance (CPnon_sat_adjust, CPsat_adjust) maps were produced over the GPRB. The final CP maps were validated using National Agricultural Statistics Service (NASS) crop yield data. The relationship between CPsat_adjust and the NASS average corn yield data (r = 0.78, 113 samples) is stronger than the relationship between CPnon_sat_adjust and the NASS average corn yield data (r = 0.67, 113 samples), indicating that the new CPsat_adjust map reduces the NDVI saturation effects and is in good agreement with the corn yield ground observations. Results demonstrate that the NDVI saturation adjustment approach improves the quality of the original GSN map and better depicts the actual vegetation conditions of the GPRB cropland systems.
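The abstract does not give the fitted adjustment equation, but the NDVI-RVI relationship it exploits follows directly from the index definitions: with NDVI = (NIR − R)/(NIR + R) and RVI = NIR/R, one has NDVI = (RVI − 1)/(RVI + 1) exactly. The sketch below only demonstrates this identity and why RVI retains dynamic range where NDVI saturates; the study's empirical coefficients are not reproduced here.

```python
# Identity underlying the saturation adjustment: NDVI = (RVI-1)/(RVI+1).
# RVI keeps increasing where NDVI compresses toward 1, which is why an
# RVI-based empirical fit can "de-saturate" high NDVI values.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def rvi(nir, red):
    return nir / red

for nir, red in [(0.5, 0.10), (0.5, 0.05), (0.5, 0.02)]:
    n, r = ndvi(nir, red), rvi(nir, red)
    assert abs(n - (r - 1) / (r + 1)) < 1e-12   # identity check
    print(f"NDVI = {n:.3f}   RVI = {r:5.1f}")
# NDVI moves 0.667 -> 0.818 -> 0.923 while RVI moves 5 -> 10 -> 25:
# RVI still separates dense-canopy pixels that NDVI lumps together.
```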
Gu, Yingxin; Wylie, Bruce K.; Boyte, Stephen; Phuyal, Khem P.
2014-01-01
This study projects future (e.g., 2050 and 2099) grassland productivities in the Greater Platte River Basin (GPRB) using ecosystem performance (EP, a surrogate for measuring ecosystem productivity) models and future climate projections. The EP models developed from a previous study were based on the satellite vegetation index, site geophysical and biophysical features, and weather and climate drivers. The future climate data used in this study were derived from the National Center for Atmospheric Research Community Climate System Model 3.0 ‘SRES A1B’ (a ‘middle’ emissions path). The main objective of this study is to assess the future sustainability of the potential biofuel feedstock areas identified in a previous study. Results show that the potential biofuel feedstock areas (the more mesic eastern part of the GPRB) will remain productive (i.e., aboveground grassland biomass productivity >2750 kg ha−1 year−1) with a slight increasing trend in the future. The spatially averaged EPs for these areas are 3519, 3432, 3557, 3605, 3752, and 3583 kg ha−1 year−1 for current site potential (2000–2008 average), 2020, 2030, 2040, 2050, and 2099, respectively. Therefore, the identified potential biofuel feedstock areas will likely continue to be sustainable for future biofuel development. On the other hand, grasslands identified as having no biofuel potential in the drier western part of the GPRB would be expected to stay unproductive in the future (spatially averaged EPs are 1822, 1691, 1896, 2306, 1994, and 2169 kg ha−1 year−1 for site potential, 2020, 2030, 2040, 2050, and 2099). These areas should continue to be unsuitable for biofuel feedstock development in the future. These future grassland productivity estimation maps can help land managers to understand and adapt to the expected changes in future EP in the GPRB and to assess the future sustainability and feasibility of potential biofuel feedstock areas.
Dietsch, Benjamin J.; Godberson, Julie A.; Steele, Gregory V.
2009-01-01
The Nebraska Department of Natural Resources approved instream-flow appropriations on the Platte River to maintain fish communities, whooping crane roost habitat, and wet meadows used by several wild bird species. In the lower Platte River region, the Nebraska Game and Parks Commission owns an appropriation filed to maintain streamflow for fish communities between the Platte River confluence with the Elkhorn River and the mouth of the Platte River. Because Elkhorn River flow is an integral part of the flow in the reach addressed by this appropriation, the Upper Elkhorn and Lower Elkhorn Natural Resources Districts are involved in overall management of anthropogenic effects on the availability of surface water for instream requirements. The Physical Habitat Simulation System (PHABSIM) and other estimation methodologies were used previously to determine instream requirements for Platte River biota, which led to the filing of five water appropriations applications with the Nebraska Department of Natural Resources in 1993 by the Nebraska Game and Parks Commission. One of these requested instream-flow appropriations of 3,700 cubic feet per second was for the reach from the Elkhorn River to the mouth of the Platte River. Four appropriations were granted with modifications in 1998, by the Nebraska Department of Natural Resources. Daily streamflow data for the periods of record were summarized for 17 streamflow-gaging stations in Nebraska to evaluate streamflow characteristics, including low-flow intervals for consecutive durations of 1, 3, 7, 14, 30, 60, and 183 days. Temporal trends in selected streamflow statistics were not adjusted for variability in precipitation. Results indicated significant positive temporal trends in annual flow for the period of record at eight streamflow-gaging stations - Platte River near Duncan (06774000), Platte River at North Bend (06796000), Elkhorn River at Neligh (06798500), Logan Creek near Uehling (06799500), Maple Creek near Nickerson
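The low-flow statistics named above, for consecutive durations of 1 to 183 days, are conventionally computed as the minimum of an N-day moving average of daily discharge. A minimal sketch follows, using synthetic data rather than USGS gage records.

```python
# Sketch of the N-day low-flow computation behind statistics like the
# 7-day low flow: take an N-day moving average of daily discharge, then
# its minimum over the year. The discharge series here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
daily_cfs = 3700 + 800 * np.sin(np.linspace(0, 2 * np.pi, 365)) \
            + rng.normal(0, 150, 365)

def n_day_low_flow(q, n):
    moving_avg = np.convolve(q, np.ones(n) / n, mode="valid")
    return moving_avg.min()

for n in (1, 3, 7, 14, 30, 60, 183):
    print(f"{n:3d}-day low flow: {n_day_low_flow(daily_cfs, n):7.0f} ft^3/s")
```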
Subjective randomness as statistical inference.
Griffiths, Thomas L; Daniels, Dylan; Austerweil, Joseph L; Tenenbaum, Joshua B
2018-06-01
Some events seem more random than others. For example, when tossing a coin, a sequence of eight heads in a row does not seem very random. Where do these intuitions about randomness come from? We argue that subjective randomness can be understood as the result of a statistical inference assessing the evidence that an event provides for having been produced by a random generating process. We show how this account provides a link to previous work relating randomness to algorithmic complexity, in which random events are those that cannot be described by short computer programs. Algorithmic complexity is both incomputable and too general to capture the regularities that people can recognize, but viewing randomness as statistical inference provides two paths to addressing these problems: considering regularities generated by simpler computing machines, and restricting the set of probability distributions that characterize regularity. Building on previous work exploring these different routes to a more restricted notion of randomness, we define strong quantitative models of human randomness judgments that apply not just to binary sequences - which have been the focus of much of the previous work on subjective randomness - but also to binary matrices and spatial clustering. Copyright © 2018 Elsevier Inc. All rights reserved.
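The paper's central move, scoring an event by the evidence it provides for a random generating process, can be sketched as a likelihood-ratio computation. The "repeat the last symbol" regular generator and its 0.9 repetition probability below are illustrative assumptions, not the models fitted in the paper.

```python
# A minimal version of "randomness as statistical inference": compute the
# log-likelihood ratio of a binary sequence under a fair-coin generator
# versus a simple regular generator that tends to repeat the last symbol.
import math

def log_lr_random(seq, p_repeat=0.9):
    ll_random = len(seq) * math.log(0.5)
    ll_regular = math.log(0.5)  # first symbol
    for prev, cur in zip(seq, seq[1:]):
        ll_regular += math.log(p_repeat if cur == prev else 1 - p_repeat)
    return ll_random - ll_regular  # > 0 favors "random"

print(log_lr_random("HTHHTHTT"))   # alternation-heavy: looks random
print(log_lr_random("HHHHHHHH"))   # eight heads: strong evidence against random
```

Under this toy model, eight heads in a row yields a strongly negative score, matching the intuition the abstract opens with.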
Making Type Inference Practical
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Oxhøj, Nicholas; Palsberg, Jens
1992-01-01
We present the implementation of a type inference algorithm for untyped object-oriented programs with inheritance, assignments, and late binding. The algorithm significantly improves our previous one, presented at OOPSLA'91, since it can handle collection classes, such as List, in a useful way. A......-oriented languages practical....
Type Inference with Inequalities
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff
1991-01-01
of (monotonic) inequalities on the types of variables and expressions. A general result about systems of inequalities over semilattices yields a solvable form. We distinguish between deciding typability (the existence of solutions) and type inference (the computation of a minimal solution). In our case, both...
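The minimal-solution computation described above can be made concrete: monotone inequalities over a finite semilattice always admit a least solution, found by starting every variable at the bottom element and iterating to a fixpoint. The three-point lattice and constraint set below are toy assumptions, not the paper's type system.

```python
# Sketch of computing the minimal solution of monotone type inequalities
# over a finite semilattice, here the chain Int <= Num <= Any.
RANK = {"Int": 0, "Num": 1, "Any": 2}

def join(a, b):  # least upper bound in the chain
    return a if RANK[a] >= RANK[b] else b

# Inequalities: constant <= variable, or variable <= variable
constraints = [("Int", "x"), ("x", "y"), ("Num", "y"), ("y", "z")]
types = {v: "Int" for v in ("x", "y", "z")}  # start at the bottom element

changed = True
while changed:  # iterate to the least fixpoint
    changed = False
    for lo, hi in constraints:
        bound = types.get(lo, lo)          # a variable's type, or a constant
        new = join(types[hi], bound)
        if new != types[hi]:
            types[hi], changed = new, True

print(types)   # {'x': 'Int', 'y': 'Num', 'z': 'Num'}: the minimal solution
```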
Watson, Jane
2007-01-01
Inference, or decision making, is seen in curriculum documents as the final step in a statistical investigation. For a formal statistical enquiry this may be associated with sophisticated tests involving probability distributions. For young students without the mathematical background to perform such tests, it is still possible to draw informal…
Daberkow, S; Taylor, H; Gollehon, N; Moravek, M
2001-11-21
Given the societal concern about groundwater pollution from agricultural sources, public programs have been proposed or implemented to change farmer behavior with respect to nutrient use and management. However, few of these programs designed to change farmer behavior have been evaluated due to the lack of detailed data over an appropriate time frame. The Central Platte Natural Resources District (CPNRD) in Nebraska has identified an intensively cultivated, irrigated area with average groundwater nitrate-nitrogen (N) levels about double the EPA's safe drinking water standard. The CPNRD implemented a joint education and regulatory N management program in the mid-1980s to reduce groundwater N. This analysis reports N use and management, yield, and groundwater nitrate trends in the CPNRD for nearly 3000 continuous-corn fields from 1989 to 1998, where producers faced limits on the timing of N fertilizer application but no limits on amounts. Groundwater nitrate levels showed modest improvement over the 10 years of this analysis, falling from the 1989-1993 average of 18.9 to 18.1 mg/l during 1994-1998. The availability of N in excess of crop needs was clearly documented by the CPNRD data and was related to optimistic yield goals, irrigation water use above expected levels, and lack of adherence to commercial fertilizer application guidelines. Over the 10-year period of this analysis, producers reported harvesting an annual average of 9729 kg/ha, 1569 kg/ha (14%) below the average yield goal. During 1989-1998, producers reported annually applying an average of 162.5 kg/ha of commercial N fertilizer, 15.7 kg/ha (10%) above the guideline level. Including the N contribution from irrigation water, the potential N contribution to the environment (total N available less estimated crop use) was estimated at 71.7 kg/ha. This is an estimate of the nitrates available for denitrification, volatilization, runoff, future soil N, and leaching to groundwater. On average, between 1989
Computationally efficient Bayesian inference for inverse problems.
Energy Technology Data Exchange (ETDEWEB)
Marzouk, Youssef M.; Najm, Habib N.; Rahn, Larry A.
2007-10-01
Bayesian statistics provides a foundation for inference from noisy and incomplete data, a natural mechanism for regularization in the form of prior information, and a quantitative assessment of uncertainty in the inferred results. Inverse problems - representing indirect estimation of model parameters, inputs, or structural components - can be fruitfully cast in this framework. Complex and computationally intensive forward models arising in physical applications, however, can render a Bayesian approach prohibitive. This difficulty is compounded by high-dimensional model spaces, as when the unknown is a spatiotemporal field. We present new algorithmic developments for Bayesian inference in this context, showing strong connections with the forward propagation of uncertainty. In particular, we introduce a stochastic spectral formulation that dramatically accelerates the Bayesian solution of inverse problems via rapid evaluation of a surrogate posterior. We also explore dimensionality reduction for the inference of spatiotemporal fields, using truncated spectral representations of Gaussian process priors. These new approaches are demonstrated on scalar transport problems arising in contaminant source inversion and in the inference of inhomogeneous material or transport properties. We also present a Bayesian framework for parameter estimation in stochastic models, where intrinsic stochasticity may be intermingled with observational noise. Evaluation of a likelihood function may not be analytically tractable in these cases, and thus several alternative Markov chain Monte Carlo (MCMC) schemes, operating on the product space of the observations and the parameters, are introduced.
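The pattern described, replacing an expensive forward model with a cheap surrogate inside Markov chain Monte Carlo, can be sketched in a few lines. This is a schematic stand-in using an ordinary polynomial fit and a one-parameter problem, not the paper's stochastic spectral (polynomial chaos) construction.

```python
# Minimal sketch of surrogate-accelerated Bayesian inversion: fit a cheap
# polynomial surrogate to an "expensive" forward model, then run Metropolis
# sampling against the surrogate posterior. All models here are hypothetical.
import numpy as np

rng = np.random.default_rng(1)

def forward(theta):                 # stand-in for an expensive forward model
    return np.sin(theta) + 0.3 * theta

# Offline: evaluate the forward model a few times and fit a surrogate
nodes = np.linspace(-2, 2, 9)
coef = np.polyfit(nodes, forward(nodes), 5)

def surrogate(theta):
    return np.polyval(coef, theta)

y_obs, sigma = forward(0.8) + 0.05, 0.1    # synthetic datum, noise SD

def log_post(theta):                # Gaussian likelihood times N(0,1) prior
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2 - 0.5 * theta ** 2

theta, samples = 0.0, []
for _ in range(20000):              # random-walk Metropolis on the surrogate
    prop = theta + 0.3 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

print(f"posterior mean ~ {np.mean(samples[2000:]):.2f} (truth 0.8)")
```

Every MCMC step evaluates only the surrogate, which is the source of the acceleration the abstract describes; the paper's contribution is constructing that surrogate spectrally with error control.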
Irony as Inferred Contradiction
Directory of Open Access Journals (Sweden)
Laura Alba-Juez
2014-12-01
Full Text Available “If we acknowledge the existence of an Irony Principle, we should also acknowledge another ‘higher-order principle’ which has the opposite effect. While irony is an apparently friendly way of being offensive (mock politeness), the type of verbal behaviour known as ‘banter’ is an offensive way of being friendly (mock impoliteness).” Geoffrey Leech, Principles of Pragmatics (1983: 144). In this work I present some theoretical considerations about what I consider to be a permanent and ever-present feature of verbal irony, namely, inferred contradiction, which has to be distinguished from plain, direct (non-inferred) contradiction as well as from indirect negation, for a contradiction which is directly expressed cannot be interpreted as ironical (since it lacks a crucial component: inference), and an indirect negation may or may not be ironic (depending on the situation), and thus cannot be considered a permanent feature of the phenomenon. In spite of the fact that many scholars have proposed different theories in order to capture the essence of this intricate and complex phenomenon, not all of them have managed to find a feature or characteristic that applies to or is found in all possible occurrences of irony. I briefly discuss the tenets of some of the best-known of these theories, namely the Classical theories (Socrates, Cicero, Quintilian), the Echoic-Mention Theory (later Echoic Theory), the Echoic Reminder Theory, the Pretence Theory and the Relevant Inappropriateness Theory, trying to show that in all the types of irony emerging from these proposals (e.g. echoic irony, pretence irony, etc.) it can be observed that the irony is triggered by inferred contradiction. The one theory that, according to my view and knowledge, seems to capture its whole essence to date is Attardo's (2000) Relevant Inappropriateness Theory, to whose proposal I adhere, but I argue at the same time that inferred contradiction is another feature of irony (which
Causal inference in econometrics
Kreinovich, Vladik; Sriboonchitta, Songsak
2016-01-01
This book is devoted to the analysis of causal inference which is one of the most difficult tasks in data analysis: when two phenomena are observed to be related, it is often difficult to decide whether one of them causally influences the other one, or whether these two phenomena have a common cause. This analysis is the main focus of this volume. To get a good understanding of the causal inference, it is important to have models of economic phenomena which are as accurate as possible. Because of this need, this volume also contains papers that use non-traditional economic models, such as fuzzy models and models obtained by using neural networks and data mining techniques. It also contains papers that apply different econometric models to analyze real-life economic dependencies.
Stochastic processes inference theory
Rao, Malempati M
2014-01-01
This is the revised and enlarged 2nd edition of the authors’ original text, which was intended to be a modest complement to Grenander's fundamental memoir on stochastic processes and related inference theory. The present volume gives a substantial account of regression analysis, both for stochastic processes and measures, and includes recent material on Ridge regression with some unexpected applications, for example in econometrics. The first three chapters can be used for a quarter or semester graduate course on inference on stochastic processes. The remaining chapters provide more advanced material on stochastic analysis suitable for graduate seminars and discussions, leading to dissertation or research work. In general, the book will be of interest to researchers in probability theory, mathematical statistics and electrical and information theory.
Multiple Instance Fuzzy Inference
2015-12-02
and learn the fuzzy inference system's parameters [24, 25]. In this latter technique, supervised and unsupervised learning algorithms are devised to...algorithm (unsupervised learning) can be used to identify local contexts of the input space, and a linear classifier (supervised learning) can be used...instance level (patch-level) labels and would require the image to be correctly segmented and labeled prior to learning. Figure 1.1: Example of an image
2018-02-15
whether unsupervised (such as clustering) or supervised (such as Naive Bayes). We observed the following advantages:...section, we explain our research in relation to DARPA's Probabilistic Programming for Advancing Machine Learning (PPAML) program and other approaches...develop machine-learning applications by combining probabilistic models and inference techniques. On one hand, a probabilistic model is a mathematical...
Active inference and learning.
Friston, Karl; FitzGerald, Thomas; Rigoli, Francesco; Schwartenbeck, Philipp; O'Doherty, John; Pezzulo, Giovanni
2016-09-01
This paper offers an active inference account of choice behaviour and learning. It focuses on the distinction between goal-directed and habitual behaviour and how they contextualise each other. We show that habits emerge naturally (and autodidactically) from sequential policy optimisation when agents are equipped with state-action policies. In active inference, behaviour has explorative (epistemic) and exploitative (pragmatic) aspects that are sensitive to ambiguity and risk respectively, where epistemic (ambiguity-resolving) behaviour enables pragmatic (reward-seeking) behaviour and the subsequent emergence of habits. Although goal-directed and habitual policies are usually associated with model-based and model-free schemes, we find the more important distinction is between belief-free and belief-based schemes. The underlying (variational) belief updating provides a comprehensive (if metaphorical) process theory for several phenomena, including the transfer of dopamine responses, reversal learning, habit formation and devaluation. Finally, we show that active inference reduces to a classical (Bellman) scheme, in the absence of ambiguity. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Continuous Integrated Invariant Inference Project
National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...
Zuellig, Robert E.; Heinold, Brian D.; Kondratieff, Boris C.; Ruiter, David E.
2012-01-01
The U.S. Geological Survey, in cooperation with the C.P. Gillette Museum of Arthropod Diversity (Colorado State University, Fort Collins, Colorado), compiled collection record data to document the historical and present-day occurrence of mayfly, stonefly, and caddisfly species in the South Platte River Basin. Data were compiled from records collected between 1873 and 2010 to identify where regional knowledge about species occurrence in the basin is lacking and to encourage future researchers to locate additional populations of these poorly understood but very important organisms. This report provides a description of how data were compiled, a map of approximate collection locations, a listing of the most recent collection records from unique locations, general remarks for each species, a species list with selected summary information, and distribution maps of species collection records.
African Journals Online (AJOL)
denise
dependent values of chlorophyll concentration (Sathyendranath et al. 1995). This subsurface chlorophyll structure needs to be extrapolated spatially and temporally for the estimation of integrated pigment content and primary production (Sathyen-
Nonparametric statistical inference
Gibbons, Jean Dickinson
2010-01-01
Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente
Nanotechnology and statistical inference
Vesely, Sara; Vesely, Leonardo; Vesely, Alessandro
2017-08-01
We discuss some problems that arise when applying statistical inference to data with the aim of disclosing new functionalities. A predictive model analyzes the data taken from experiments on a specific material to assess the likelihood that another product, with similar structure and properties, will exhibit the same functionality. It doesn't have much predictive power if variability occurs as a consequence of a specific, non-linear behavior. We exemplify our discussion on some experiments with biased dice.
DEFF Research Database (Denmark)
Andersen, Jesper; Lawall, Julia
2010-01-01
A key issue in maintaining Linux device drivers is the need to keep them up to date with respect to evolutions in Linux internal libraries. Currently, there is little tool support for performing and documenting such changes. In this paper we present a tool, spdiff, that identifies common changes...... developers can use it to extract an abstract representation of the set of changes that others have made. Our experiments on recent changes in Linux show that the inferred generic patches are more concise than the corresponding patches found in commits to the Linux source tree while being safe with respect...
Directory of Open Access Journals (Sweden)
Kevin H. Knuth
2012-06-01
Full Text Available We present a simple and clear foundation for finite inference that unites and significantly extends the approaches of Kolmogorov and Cox. Our approach is based on quantifying lattices of logical statements in a way that satisfies general lattice symmetries. With other applications such as measure theory in mind, our derivations assume minimal symmetries, relying on neither negation nor continuity nor differentiability. Each relevant symmetry corresponds to an axiom of quantification, and these axioms are used to derive a unique set of quantifying rules that form the familiar probability calculus. We also derive a unique quantification of divergence, entropy and information.
Irons, Trevor P.; Hobza, Christopher M.; Steele, Gregory V.; Abraham, Jared D.; Cannia, James C.; Woodward, Duane D.
2012-01-01
Surface nuclear magnetic resonance, a noninvasive geophysical method, measures a signal directly related to the amount of water in the subsurface. This allows for low-cost quantitative estimates of hydraulic parameters. In practice, however, additional factors influence the signal, complicating interpretation. The U.S. Geological Survey, in cooperation with the Central Platte Natural Resources District, evaluated whether hydraulic parameters derived from surface nuclear magnetic resonance data could provide valuable input into groundwater models used for evaluating water-management practices. Two calibration sites in Dawson County, Nebraska, were chosen based on previous detailed hydrogeologic and geophysical investigations. At both sites, surface nuclear magnetic resonance data were collected, and derived parameters were compared with results from four constant-discharge aquifer tests previously conducted at those same sites. Additionally, borehole electromagnetic-induction flowmeter data were analyzed as a less-expensive surrogate for traditional aquifer tests. Building on recent work, a novel surface nuclear magnetic resonance modeling and inversion method was developed that incorporates electrical conductivity and effects due to magnetic-field inhomogeneities, both of which can have a substantial impact on the data. After comparing surface nuclear magnetic resonance inversions at the two calibration sites, the nuclear magnetic-resonance-derived parameters were compared with previously performed aquifer tests in the Central Platte Natural Resources District. This comparison served as a blind test for the developed method. The nuclear magnetic-resonance-derived aquifer parameters were in agreement with results of aquifer tests where the environmental noise allowed data collection and the aquifer test zones overlapped with the surface nuclear magnetic resonance testing. In some cases, the previously performed aquifer tests were not designed fully to characterize
Lu, Jingrang; Ryu, Hodon; Vogel, Jason; Santo Domingo, Jorge; Ashbolt, Nicholas J
2013-06-01
The risk to human health of the annual sandhill crane (Grus canadensis) migration through Nebraska, which is thought to be a major source of fecal pollution of the central Platte River, is unknown. To better understand potential risks, the presence of Campylobacter species and three fecal indicator bacterial groups (Enterococcus spp., Escherichia coli, and Bacteroidetes) was assayed by PCR from crane excreta and water samples collected during their stopover at the Platte River, Nebraska, in 2010. Genus-specific PCR assays and sequence analyses identified Campylobacter jejuni as the predominant Campylobacter species in sandhill crane excreta. Campylobacter spp. were detected in 48% of crane excreta, 24% of water samples, and 11% of sediment samples. The estimated densities of Enterococcus spp. were highest in excreta samples (mean, 4.6 × 10⁸ cell equivalents [CE]/g), while water samples contained higher levels of Bacteroidetes (mean, 5.1 × 10⁵ CE/100 ml). Enterococcus spp., E. coli, and Campylobacter spp. were significantly increased in river water and sediments during the crane migration period, with Enterococcus sp. densities (~3.3 × 10⁵ CE/g) 2 to 4 orders of magnitude higher than those of Bacteroidetes (4.9 × 10³ CE/g), E. coli (2.2 × 10³ CE/g), and Campylobacter spp. (37 CE/g). Sequencing data for the 16S rRNA gene and Campylobacter species-specific PCR assays indicated that C. jejuni was the major Campylobacter species present in water, sediments, and crane excreta. Overall, migration appeared to result in a significant, but temporary, change in water quality in spring, when there may be a C. jejuni health hazard associated with water and crops visited by the migrating birds.
Bayesian methods for hackers probabilistic programming and Bayesian inference
Davidson-Pilon, Cameron
2016-01-01
Bayesian methods of inference are deeply natural and extremely powerful. However, most discussions of Bayesian inference rely on intensely complex mathematical analyses and artificial examples, making it inaccessible to anyone without a strong mathematical background. Now, though, Cameron Davidson-Pilon introduces Bayesian inference from a computational perspective, bridging theory to practice–freeing you to get results using computing power. Bayesian Methods for Hackers illuminates Bayesian inference through probabilistic programming with the powerful PyMC language and the closely related Python tools NumPy, SciPy, and Matplotlib. Using this approach, you can reach effective solutions in small increments, without extensive mathematical intervention. Davidson-Pilon begins by introducing the concepts underlying Bayesian inference, comparing it with other techniques and guiding you through building and training your first Bayesian model. Next, he introduces PyMC through a series of detailed examples a...
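A minimal example in the computational style the book teaches, shown here with the PyMC3-era API; the data and priors are illustrative inventions, not an example taken from the book.

```python
# Infer a coin's bias from observed flips by probabilistic programming,
# in the spirit of the book's approach (PyMC3-era API shown; the data and
# the uniform prior are illustrative assumptions).
import numpy as np
import pymc3 as pm

flips = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])  # observed heads/tails

with pm.Model():
    p = pm.Beta("p", alpha=1, beta=1)              # uniform prior on the bias
    pm.Bernoulli("obs", p=p, observed=flips)       # likelihood of the flips
    trace = pm.sample(2000, tune=1000, chains=2)   # MCMC posterior samples

print("posterior mean of p:", trace["p"].mean())
```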
Contingency inferences driven by base rates: Valid by sampling
Directory of Open Access Journals (Sweden)
Florian Kutzner
2011-04-01
Full Text Available Fiedler et al. (2009) reviewed evidence for the utilization of a contingency inference strategy termed pseudocontingencies (PCs). In PCs, the more frequent levels (and, by implication, the less frequent levels) are assumed to be associated. PCs have been obtained using a wide range of task settings and dependent measures. Yet, the readiness with which decision makers rely on PCs is poorly understood. A computer simulation explored two potential sources of subjective validity of PCs. First, PCs are shown to perform above chance level when the task is to infer the sign of moderate to strong population contingencies from a sample of observations. Second, contingency inferences based on PCs and inferences based on cell frequencies are shown to partially agree across samples. Intriguingly, this criterion and convergent validity are by-products of random sampling error, highlighting the inductive nature of contingency inferences.
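The simulation logic is easy to reconstruct in outline: draw samples from a skewed 2x2 population, let the PC strategy pair the more frequent row level with the more frequent column level, and score that inference against the sign of the true contingency. The cell probabilities and sample size below are illustrative assumptions, not the authors' exact design.

```python
# Outline of a pseudocontingency (PC) simulation of the kind described:
# sample 2x2 observations, infer "frequent row level goes with frequent
# column level", and score it against the true contingency sign.
import numpy as np

rng = np.random.default_rng(42)
# Population with skewed base rates and a positive contingency
p = np.array([[0.50, 0.15],    # rows: A1/A2, cols: B1/B2
              [0.15, 0.20]])

hits, trials = 0, 10000
for _ in range(trials):
    counts = rng.multinomial(20, p.ravel()).reshape(2, 2)
    row_freq = counts.sum(axis=1)   # base rates of A1 vs A2
    col_freq = counts.sum(axis=0)   # base rates of B1 vs B2
    # PC inference: the more frequent levels are assumed associated
    pc_positive = (row_freq[0] >= row_freq[1]) == (col_freq[0] >= col_freq[1])
    hits += pc_positive             # true contingency here is positive

print(f"PC strategy matches the true sign in {hits / trials:.0%} of samples")
```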
Inferring horizontal gene transfer.
Directory of Open Access Journals (Sweden)
Matt Ravenhall
2015-05-01
Full Text Available Horizontal or Lateral Gene Transfer (HGT or LGT) is the transmission of portions of genomic DNA between organisms through a process decoupled from vertical inheritance. In the presence of HGT events, different fragments of the genome are the result of different evolutionary histories. This can therefore complicate the investigations of evolutionary relatedness of lineages and species. Also, as HGT can bring into genomes radically different genotypes from distant lineages, or even new genes bearing new functions, it is a major source of phenotypic innovation and a mechanism of niche adaptation. For example, of particular relevance to human health is the lateral transfer of antibiotic resistance and pathogenicity determinants, leading to the emergence of pathogenic lineages. Computational identification of HGT events relies upon the investigation of sequence composition or evolutionary history of genes. Sequence composition-based ("parametric") methods search for deviations from the genomic average, whereas evolutionary history-based ("phylogenetic") approaches identify genes whose evolutionary history significantly differs from that of the host species. The evaluation and benchmarking of HGT inference methods typically rely upon simulated genomes, for which the true history is known. On real data, different methods tend to infer different HGT events, and as a result it can be difficult to ascertain all but simple and clear-cut HGT events.
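A parametric method in the sense above can be sketched as a compositional scan: flag windows whose GC content deviates strongly from the genome-wide average. The sequence below is synthetic, and real tools also use codon usage, k-mer spectra, and phylogenetic checks.

```python
# Minimal "parametric" HGT scan: flag genomic windows whose GC content
# deviates strongly from the genome-wide average. Synthetic sequence only.
import numpy as np

rng = np.random.default_rng(7)
genome = "".join(rng.choice(list("ACGT"), p=[0.3, 0.2, 0.2, 0.3], size=50000))
# Splice in a GC-rich segment mimicking a horizontally acquired region
genome = genome[:20000] + "".join(
    rng.choice(list("ACGT"), p=[0.1, 0.4, 0.4, 0.1], size=2000)) + genome[22000:]

win = 1000
gc = np.array([sum(c in "GC" for c in genome[i:i + win]) / win
               for i in range(0, len(genome) - win + 1, win)])
z = (gc - gc.mean()) / gc.std()

for i, zi in enumerate(z):
    if abs(zi) > 3:   # crude threshold; deviant windows are HGT candidates
        print(f"window {i * win}-{i * win + win}: GC={gc[i]:.2f}, z={zi:+.1f}")
```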
Statistical inferences in phylogeography
DEFF Research Database (Denmark)
Nielsen, Rasmus; Beaumont, Mark A
2009-01-01
can randomly lead to multiple different genealogies. Likewise, the same gene trees can arise under different demographic models. This problem has led to the emergence of many statistical methods for making phylogeographic inferences. A popular phylogeographic approach based on nested clade analysis...... is challenged by the fact that a certain amount of the interpretation of the data is left to the subjective choices of the user, and it has been argued that the method performs poorly in simulation studies. More rigorous statistical methods based on coalescence theory have been developed. However, these methods...... may also be challenged by computational problems or poor model choice. In this review, we will describe the development of statistical methods in phylogeographic analysis, and discuss some of the challenges facing these methods....
Admissibility of logical inference rules
Rybakov, VV
1997-01-01
The aim of this book is to present the fundamental theoretical results concerning inference rules in deductive formal systems. Primary attention is focused on: admissible or permissible inference rules; the derivability of admissible inference rules; the structural completeness of logics; and the bases for admissible and valid inference rules. There is particular emphasis on propositional non-standard logics (primarily superintuitionistic and modal logics), but general logical consequence relations and classical first-order theories are also considered. The book is basically self-contained and
Dopamine, reward learning, and active inference
Directory of Open Access Journals (Sweden)
Thomas eFitzgerald
2015-11-01
Full Text Available Temporal difference learning models propose that phasic dopamine signalling encodes reward prediction errors that drive learning. This is supported by studies where optogenetic stimulation of dopamine neurons can stand in lieu of actual reward. Nevertheless, a large body of data also shows that dopamine is not necessary for learning, and that dopamine depletion primarily affects task performance. We offer a resolution to this paradox based on the hypothesis that dopamine encodes the precision of beliefs about alternative actions, and thus controls the outcome-sensitivity of behaviour. We extend an active inference scheme for solving Markov decision processes to include learning, and show that simulated dopamine dynamics strongly resemble those actually observed during instrumental conditioning. Furthermore, simulated dopamine depletion impairs performance but spares learning, while simulated excitation of dopamine neurons drives reward learning, through aberrant inference about outcome states. Our formal approach provides a novel and parsimonious reconciliation of apparently divergent experimental findings.
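The precision idea can be caricatured in a few lines: below, a simulated agent learns action values by a simple delta rule (learning intact), while a dopamine-like precision parameter gamma scales how deterministically beliefs are turned into action (performance). This is a conceptual sketch, not the paper's MDP scheme; the bandit task, learning rate, and gamma values are invented for illustration.

```python
import numpy as np

def run_agent(gamma, trials=2000, alpha=0.1, p_reward=(0.8, 0.2), seed=0):
    """Two-armed bandit: delta-rule value learning + softmax action selection.

    gamma plays the role of precision: it scales the outcome-sensitivity of
    choice without touching the learning rule itself.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(2)
    earned = 0.0
    for _ in range(trials):
        p = np.exp(gamma * q) / np.exp(gamma * q).sum()   # precision-weighted policy
        a = rng.choice(2, p=p)
        r = float(rng.random() < p_reward[a])
        q[a] += alpha * (r - q[a])                        # learning unaffected by gamma
        earned += r
    return q, earned / trials

for gamma in (0.1, 1.0, 10.0):   # 'depleted' -> normal -> 'excited' precision
    q, rate = run_agent(gamma)
    print(f"gamma={gamma:>4}: learned values={np.round(q, 2)}, reward rate={rate:.2f}")
```

With low gamma the agent still learns roughly correct values but earns little reward, mirroring the depletion result; high gamma sharpens performance.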
Testing strong interaction theories
International Nuclear Information System (INIS)
Ellis, J.
1979-01-01
The author discusses possible tests of the current theories of the strong interaction, in particular, quantum chromodynamics. High energy e+e− interactions should provide an excellent means of studying the strong force. (W.D.L.)
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahn, Sungsoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)
2017-05-25
Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used to resolve the issue in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables), confirm that the newly proposed algorithms outperform and generalize MF and BP.
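For orientation, the sketch below computes the quantity at stake for a small Ising-type model: the exact log partition function by brute-force enumeration, and the naive mean-field lower bound that G-MF is designed to improve. The gauge transformations themselves are not implemented here; the model size and couplings are arbitrary choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 8
J = np.triu(rng.normal(scale=0.5, size=(n, n)), 1)   # random pairwise couplings
h = rng.normal(scale=0.5, size=n)                    # random fields

def energy(x):
    return -(x @ J @ x + h @ x)

# exact log partition function by enumeration (feasible only for tiny models)
states = np.array(list(itertools.product([-1, 1], repeat=n)))
logZ = np.log(np.sum(np.exp([-energy(s) for s in states])))

# naive mean-field: maximize E_q[-E] + H(q) over a fully factorized q
m = np.zeros(n)                       # mean parameters m_i = E_q[x_i]
for _ in range(200):                  # coordinate-ascent fixed-point updates
    for i in range(n):
        m[i] = np.tanh(h[i] + (J[i] + J[:, i]) @ m)
mm = np.clip(m, -1 + 1e-12, 1 - 1e-12)
q1 = (1 + mm) / 2
entropy = -(q1 * np.log(q1) + (1 - q1) * np.log(1 - q1)).sum()
mf_bound = (mm @ J @ mm + h @ mm) + entropy

print(f"exact logZ     = {logZ:.4f}")
print(f"MF lower bound = {mf_bound:.4f}")   # always <= exact logZ
```

The gap between the two numbers is exactly what gauge-optimized variants of MF and BP aim to shrink while preserving the lower-bound guarantee.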
Multistability and perceptual inference.
Gershman, Samuel J; Vul, Edward; Tenenbaum, Joshua B
2012-01-01
Ambiguous images present a challenge to the visual system: How can uncertainty about the causes of visual inputs be represented when there are multiple equally plausible causes? A Bayesian ideal observer should represent uncertainty in the form of a posterior probability distribution over causes. However, in many real-world situations, computing this distribution is intractable and requires some form of approximation. We argue that the visual system approximates the posterior over underlying causes with a set of samples and that this approximation strategy produces perceptual multistability--stochastic alternation between percepts in consciousness. Under our analysis, multistability arises from a dynamic sample-generating process that explores the posterior through stochastic diffusion, implementing a rational form of approximate Bayesian inference known as Markov chain Monte Carlo (MCMC). We examine in detail the most extensively studied form of multistability, binocular rivalry, showing how a variety of experimental phenomena--gamma-like stochastic switching, patchy percepts, fusion, and traveling waves--can be understood in terms of MCMC sampling over simple graphical models of the underlying perceptual tasks. We conjecture that the stochastic nature of spiking neurons may lend itself to implementing sample-based posterior approximations in the brain.
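The mechanics of the proposed approximation can be seen in a minimal sketch (the bimodal target and random-walk proposal below are invented for illustration and are not the paper's rivalry model): a Metropolis sampler dwells near one mode of a two-mode posterior, then stochastically switches, qualitatively like alternation between percepts.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # two 'percepts' = two well-separated modes of an unnormalized posterior
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

x, chain = 3.0, []
for _ in range(20_000):
    prop = x + rng.normal(scale=2.0)              # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                   # Metropolis accept
    chain.append(x)

percept = np.sign(chain)                           # which mode the sampler occupies
switch = percept[1:] != percept[:-1]
dwells = np.diff(np.flatnonzero(np.r_[True, switch]))
print(f"mode switches: {switch.sum()}, mean dwell time: {dwells.mean():.0f} steps")
print(f"time in each mode: {np.mean(percept > 0):.2f} vs {np.mean(percept < 0):.2f}")
```

The run-length statistics of such chains are the kind of quantity the paper compares against empirically observed dominance durations.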
An Inference Language for Imaging
DEFF Research Database (Denmark)
Pedemonte, Stefano; Catana, Ciprian; Van Leemput, Koen
2014-01-01
We introduce iLang, a language and software framework for probabilistic inference. The iLang framework enables the definition of directed and undirected probabilistic graphical models and the automated synthesis of high performance inference algorithms for imaging applications. The iLang framework...... is composed of a set of language primitives and of an inference engine based on a message-passing system that integrates cutting-edge computational tools, including proximal algorithms and high performance Hamiltonian Markov Chain Monte Carlo techniques. A set of domain-specific highly optimized GPU...
Optimization methods for logical inference
Chandru, Vijay
2011-01-01
Merging logic and mathematics in deductive inference--an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks. . . it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in
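To make the sauerkraut-with-chopsticks point concrete: a clause like (x1 OR NOT x2 OR x3) becomes the linear inequality x1 + (1 - x2) + x3 >= 1 over 0-1 variables, and inference by refutation becomes an infeasibility check. The brute-force search below stands in for a real integer-programming solver; the example formula is invented.

```python
from itertools import product

# CNF clauses as lists of signed literals: 1-based index, negative = negated.
# Encoding: clause [1, -2, 3] <-> x1 + (1 - x2) + x3 >= 1 over x in {0, 1}.
def feasible(clauses, n):
    """Return True iff some 0-1 point satisfies every clause-inequality."""
    for x in product([0, 1], repeat=n):
        if all(sum(x[l - 1] if l > 0 else 1 - x[-l - 1] for l in c) >= 1
               for c in clauses):
            return True
    return False

# Does {x1 -> x2, x2 -> x3, x1} entail x3?  Add the negated goal, test infeasibility.
kb = [[-1, 2], [-2, 3], [1]]
entails = not feasible(kb + [[-3]], n=3)
print("KB entails x3:", entails)   # True: KB plus NOT x3 has no feasible 0-1 point
```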
On quantum statistical inference
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Gill, Richard D.; Jupp, Peter E.
Recent developments in the mathematical foundations of quantum mechanics have brought the theory closer to that of classical probability and statistics. On the other hand, the unique character of quantum physics sets many of the questions addressed apart from those met classically in stochastics....... Furthermore, concurrent advances in experimental techniques and in the theory of quantum computation have led to a strong interest in questions of quantum information, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various...
Wellman, Tristan
2015-01-01
The South Platte River and underlying alluvial aquifer form an important hydrologic resource in northeastern Colorado that provides water to population centers along the Front Range and to agricultural communities across the rural plains. Water is regulated based on seniority of water rights and delivered using a network of administration structures that includes ditches, reservoirs, wells, impacted river sections, and engineered recharge areas. A recent addendum to Colorado water law enacted during 2002-2003 curtailed pumping from thousands of wells that lacked authorized augmentation plans. The restrictions in pumping were hypothesized to increase water storage in the aquifer, causing groundwater to rise near the land surface at some locations. The U.S. Geological Survey (USGS), in cooperation with the Colorado Water Conservation Board and the Colorado Water Institute, completed an assessment of 60 years (yr) of historical groundwater-level records collected from 1953 to 2012 from 1,669 wells. Relations of "high" groundwater levels, defined as depth to water from 0 to 10 feet (ft) below land surface, were compared to precipitation, river discharge, and 36 geographic and administrative attributes to identify natural and human controls in areas with shallow groundwater.
Barlow, J. E.; Burns, I. S.; Guertin, D. P.; Kepner, W. G.; Goodrich, D. C.
2016-12-01
Long-term land-use and land cover change and their associated impacts pose critical challenges to sustaining vital hydrological ecosystem services for future generations. In this study, a methodology to characterize hydrologic impacts from future urban growth through time, previously developed and applied on the San Pedro River Basin, was expanded and applied to the South Platte River Basin. Future urban growth is represented by housing density maps generated in decadal intervals from 2010 to 2100, produced by the U.S. Environmental Protection Agency (EPA) Integrated Climate and Land-Use Scenarios (ICLUS) project. ICLUS developed future housing density maps by adapting the Intergovernmental Panel on Climate Change (IPCC) Special Report on Emissions Scenarios (SRES) social, economic, and demographic storylines to the conterminous United States. To characterize hydrologic impacts from future growth, the housing density maps were reclassified to National Land Cover Database (NLCD) 2006 land cover classes and used to parameterize the Soil and Water Assessment Tool (SWAT) using the Automated Geospatial Watershed Assessment (AGWA) tool. The objectives of this project were to 1) develop and implement a methodology for adapting the ICLUS data for use in AGWA as an approach to evaluate impacts of development on water quantity and quality, 2) present, evaluate, and compare results from scenarios for watersheds in two different geographic and climatic regions, and 3) determine watershed-specific implications of this type of future land cover change analysis.
Krapu, Gary L.; Brandt, David A.; Kinzel, Paul J.; Pearse, Aaron T.
2014-01-01
We conducted a 10-year study (1998–2007) of the Mid-Continent Population (MCP) of sandhill cranes (Grus canadensis) to identify spring-migration corridors, locations of major stopovers, and migration chronology by crane breeding affiliation (western Alaska–Siberia [WA–S], northern Canada–Nunavut [NC–N], west-central Canada–Alaska [WC–A], and east-central Canada–Minnesota [EC–M]). In the Central Platte River Valley (CPRV) of Nebraska, we evaluated factors influencing staging chronology, food habits, fat storage, and habitat use of sandhill cranes. We compared our findings to results from the Platte River Ecology Study conducted during 1978–1980. We determined spring migration corridors used by the breeding affiliations (designated subpopulations for management purposes) by monitoring 169 cranes marked with platform transmitter terminals (PTTs). We also marked and monitored 456 cranes in the CPRV with very high frequency (VHF) transmitters to evaluate length and pattern of stay, habitat use, and movements. An estimated 42% and 58% of cranes staging in the CPRV were greater sandhill cranes (G. c. tabida) and lesser sandhill cranes (G. c. canadensis), and they stayed for an average of 20 and 25 days (2000–2007), respectively. Cranes from the WA–S, NC–N, WC–A, and EC–M affiliations spent an average of 72, 77, 52, and 53 days, respectively, in spring migration of which 28, 23, 24, and 18 days occurred in the CPRV. The majority of the WA–S subpopulation settled in the CPRV apparently because of inadequate habitat to support more birds upstream, although WA–S cranes accounted for >90% of birds staging in the North Platte River Valley. Crane staging duration in the CPRV was negatively correlated with arrival dates; 92% of cranes stayed >7 days. A program of annual mechanical removal of mature stands of woody growth and seedlings that began in the early 1980s primarily in the main channel of the Platte River has allowed distribution of crane
Statistical inference via fiducial methods
Salomé, Diemer
1998-01-01
In this thesis the attention is restricted to inductive reasoning using a mathematical probability model. A statistical procedure prescribes, for every theoretically possible set of data, the inference about the unknown of interest. ... See: Summary
On principles of inductive inference
Kostecki, Ryszard Paweł
2011-01-01
We propose an intersubjective epistemic approach to the foundations of probability theory and statistical inference, based on relative entropy and category theory, aimed at bypassing the mathematical and conceptual problems of existing foundational approaches.
Statistical inference for stochastic processes
National Research Council Canada - National Science Library
Basawa, Ishwar V; Prakasa Rao, B. L. S
1980-01-01
The aim of this monograph is to attempt to reduce the gap between theory and applications in the area of stochastic modelling, by directing the interest of future researchers to the inference aspects...
Bayesian Inference: with ecological applications
Link, William A.; Barker, Richard J.
2010-01-01
This text provides a mathematically rigorous yet accessible and engaging introduction to Bayesian inference with relevant examples that will be of interest to biologists working in the fields of ecology, wildlife management and environmental studies as well as students in advanced undergraduate statistics. This text opens the door to Bayesian inference, taking advantage of modern computational efficiencies and easily accessible software to evaluate complex hierarchical models.
Active inference, communication and hermeneutics.
Friston, Karl J; Frith, Christopher D
2015-07-01
Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
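A cartoon of the 'same hymn sheet' claim (purely illustrative: real active inference minimizes variational free energy over generative models, not the squared prediction error used here): two agents alternately emit a noisy signal from an internal parameter and each updates that parameter to suppress its error in predicting the other, until their 'models' converge.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.0, 8.0])   # each agent's internal model parameter
lr, noise = 0.2, 0.3

for step in range(60):
    speaker, listener = step % 2, (step + 1) % 2
    signal = theta[speaker] + rng.normal(scale=noise)   # noisy utterance
    err = signal - theta[listener]                      # listener's prediction error
    theta[listener] += lr * err                         # update model to suppress it
    if step % 10 == 0:
        print(f"step {step:2d}: models = {np.round(theta, 2)}, |error| = {abs(err):.2f}")

# the difference settles near zero, limited only by the utterance noise
print("converged difference:", round(abs(theta[0] - theta[1]), 2))
```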
Neural Correlates of Bridging Inferences and Coherence Processing
Kim, Sung-il; Yoon, Misun; Kim, Wonsik; Lee, Sunyoung; Kang, Eunjoo
2012-01-01
We explored the neural correlates of bridging inferences and coherence processing during story comprehension using Positron Emission Tomography (PET). Ten healthy right-handed volunteers were visually presented with three types of stories (Strong Coherence, Weak Coherence, and Control), each consisting of three sentences. The causal connectedness among…
Direct Evidence for a Dual Process Model of Deductive Inference
Markovits, Henry; Brunet, Marie-Laurence; Thompson, Valerie; Brisson, Janie
2013-01-01
In 2 experiments, we tested a strong version of a dual process theory of conditional inference (cf. Verschueren et al., 2005a, 2005b) that assumes that most reasoners have 2 strategies available, the choice of which is determined by situational variables, cognitive capacity, and metacognitive control. The statistical strategy evaluates inferences…
Strongly Correlated Topological Insulators
2016-02-03
In the past year, the grant was used for work in the field of topological phases, with emphasis on finding... surface of topological insulators. In the past 3 years, we have started a new direction, that of fractional topological insulators. These are materials... in which a topologically nontrivial quasi-flat band is fractionally filled and then subject to strong interactions.
Isenberg, James
2017-01-01
The Hawking-Penrose theorems tell us that solutions of Einstein's equations are generally singular, in the sense of the incompleteness of causal geodesics (the paths of physical observers). These singularities might be marked by the blowup of curvature and therefore crushing tidal forces, or by the breakdown of physical determinism. Penrose has conjectured (in his "Strong Cosmic Censorship Conjecture") that it is generically unbounded curvature that causes singularities, rather than causal breakdown. The verification that "AVTD behavior" (marked by the domination of time derivatives over space derivatives) is generically present in a family of solutions has proven to be a useful tool for studying model versions of Strong Cosmic Censorship in that family. I discuss some of the history of Strong Cosmic Censorship, review what is known about AVTD behavior and Strong Cosmic Censorship in families of solutions defined by varying degrees of isometry, and describe recent results which we believe will extend this knowledge and provide new support for Strong Cosmic Censorship. I also comment on some of the recent work on "Weak Null Singularities", and how this relates to Strong Cosmic Censorship.
Espinoza, Benjamin; Gartside, Paul; Kovan-Bakan, Merve; Mamatelashvili, Ana
2012-01-01
A space is 'n-strong arc connected' (n-sac) if for any n points in the space there is an arc in the space visiting them in order. A space is omega-strong arc connected (omega-sac) if it is n-sac for all n. We study these properties in finite graphs, regular continua, and rational continua. There are no 4-sac graphs, but there are 3-sac graphs and graphs which are 2-sac but not 3-sac. For every n there is an n-sac regular continuum, but no regular continuum is omega-sac. There is an omega-sac ...
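Restated in symbols (the notation here is my own, not taken from the paper):

```latex
% X is n-sac:
\forall\, x_1,\dots,x_n \in X \ \ \exists\, \text{an arc } \alpha\colon [0,1]\to X
\ \text{ and } \ 0 \le t_1 \le \dots \le t_n \le 1
\ \text{ with } \ \alpha(t_i) = x_i \ \text{ for } i = 1,\dots,n;
% X is omega-sac iff X is n-sac for every n >= 1.
```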
Abortion: Strong's counterexamples fail
DEFF Research Database (Denmark)
Di Nucci, Ezio
2009-01-01
This paper shows that the counterexamples proposed by Strong in 2008 in the Journal of Medical Ethics to Marquis's argument against abortion fail. Strong's basic idea is that there are cases--for example, terminally ill patients--where killing an adult human being is prima facie seriously morally......'s scenarios have some valuable future or admitted that killing them is not seriously morally wrong. Finally, if "valuable future" is interpreted as referring to objective standards, one ends up with implausible and unpalatable moral claims....
Locative inferences in medical texts.
Mayer, P S; Bailey, G H; Mayer, R J; Hillis, A; Dvoracek, J E
1987-06-01
Medical research relies on epidemiological studies conducted on a large set of clinical records that have been collected from physicians recording individual patient observations. These clinical records are recorded for the purpose of individual care of the patient with little consideration for their use by a biostatistician interested in studying a disease over a large population. Natural language processing of clinical records for epidemiological studies must deal with temporal, locative, and conceptual issues. This makes text understanding and data extraction of clinical records an excellent area for applied research. While much has been done in making temporal or conceptual inferences in medical texts, parallel work in locative inferences has not been done. This paper examines the locative inferences as well as the integration of temporal, locative, and conceptual issues in the clinical record understanding domain by presenting an application that utilizes two key concepts in its parsing strategy--a knowledge-based parsing strategy and a minimal lexicon.
Quadratic inference functions in marginal models for longitudinal data.
Song, Peter X-K; Jiang, Zhichang; Park, Eunjoo; Qu, Annie
2009-12-20
The quadratic inference function (QIF) is a new statistical methodology developed for estimation and inference in longitudinal data analysis using marginal models. This method is an alternative to the popular generalized estimating equations approach, and it has several useful properties such as robustness, a goodness-of-fit test and model selection. This paper presents an introductory review of the QIF, with a strong emphasis on its applications. In particular, a recently developed SAS macro, QIF, is illustrated in this paper to obtain numerical results.
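A bare-bones illustration of the QIF machinery (a sketch under simplifying assumptions of my own: identity link, balanced data, and two standard basis matrices for an exchangeable working correlation; the actual method, and the SAS macro, handle far more):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, p = 200, 4, 2
beta_true = np.array([1.0, -0.5])
X = rng.normal(size=(N, T, p))
R = 0.5 * np.ones((T, T)) + 0.5 * np.eye(T)    # exchangeable within-subject correlation
y = np.einsum('ntp,p->nt', X, beta_true) + rng.multivariate_normal(np.zeros(T), R, size=N)

# basis matrices whose span approximates the inverse working correlation
M = [np.eye(T), np.ones((T, T)) - np.eye(T)]

def subject_scores(beta):
    r = y - np.einsum('ntp,p->nt', X, beta)    # residuals, shape (N, T)
    # per-subject extended score: stack X_i' M_k r_i for each basis matrix
    return np.concatenate(
        [np.einsum('ntp,ts,ns->np', X, Mk, r) for Mk in M], axis=1)   # (N, 2p)

def Q(beta):
    g = subject_scores(beta)
    gbar = g.mean(axis=0)
    C = g.T @ g / N                            # sample covariance of the scores
    return N * gbar @ np.linalg.pinv(C) @ gbar # the quadratic inference function

fit = minimize(Q, np.zeros(p), method='Nelder-Mead')
print("beta_hat:", np.round(fit.x, 3), " Q at optimum:", round(fit.fun, 2))
# Q at the optimum doubles as a goodness-of-fit statistic in the QIF framework.
```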
International Nuclear Information System (INIS)
Marier, D.
1992-01-01
This article presents the results of a financial rankings survey, which shows strong economic activity in the independent energy industry. The topics of the article include advisor turnover, overseas banks, and the increase in public offerings. The article identifies the top project finance investors for new projects and restructurings, and rankings for lenders
Object-Oriented Type Inference
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Palsberg, Jens
1991-01-01
We present a new approach to inferring types in untyped object-oriented programs with inheritance, assignments, and late binding. It guarantees that all messages are understood, annotates the program with type information, allows polymorphic methods, and can be used as the basis of an optimizing......
Eight challenges in phylodynamic inference
Directory of Open Access Journals (Sweden)
Simon D.W. Frost
2015-03-01
Full Text Available The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever increasing corpus of sequence data.
Statistical Inference for Fractional Diffusion Processes
Rao, B L S Prakasa
2010-01-01
Statistical Inference for Fractional Diffusion Processes looks at statistical inference for stochastic processes modeled by stochastic differential equations driven by fractional Brownian motion. Related topics, such as sequential inference, nonparametric inference and parametric estimation, are also discussed. The book deals with fractional diffusion processes (FDP) in relation to statistical inference for stochastic processes. The book's main focus is on parametric and nonparametric inference problems for fractional diffusion processes when a complete path of t
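As a flavor of what inference for such processes involves, the sketch below simulates fractional Brownian motion exactly (via a Cholesky factor of the increment covariance) and recovers the Hurst parameter H from the scaling law Var[B(t+k) - B(t)] proportional to k^(2H). The estimator and settings are textbook choices of mine, not the book's specific procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

def fbm(n, H):
    """Exact fractional Brownian motion on {1..n} via Cholesky of the fGn covariance."""
    k = np.arange(n)
    # autocovariance of fractional Gaussian noise (the increments of fBm)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.cumsum(np.linalg.cholesky(cov) @ rng.normal(size=n))

def estimate_H(path, lags=(1, 2, 4, 8, 16)):
    """Regress log increment variance on log lag: the slope equals 2H."""
    v = [np.var(path[k:] - path[:-k]) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2

path = fbm(2000, H=0.7)
print(f"estimated H = {estimate_H(path):.3f}   (true H = 0.7)")
```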
Steele, Gregory V.; Gurdak, Jason J.; Hobza, Christopher M.
2014-01-01
Uncertainty about the effects of land use and climate on water movement in the unsaturated zone and on groundwater recharge rates can lead to uncertainty in water budgets used for groundwater-flow models. To better understand these effects, a cooperative study between the U.S. Geological Survey and the Central Platte Natural Resources District was initiated in 2007 to determine field-based estimates of recharge rates in selected land-use areas of the Central Platte Natural Resources District in Nebraska. Measured total water potential and unsaturated-zone profiles of tritium, chloride, nitrate as nitrogen, and bromide, along with groundwater-age dates, were used to evaluate water movement in the unsaturated zone and groundwater recharge rates in the central Platte River study area. Eight study sites represented an east-west precipitation contrast across the study area—four beneath groundwater-irrigated cropland (sites 2, 5, and 6 were irrigated corn and site 7 was irrigated alfalfa/corn rotation), three beneath rangeland (sites 1, 4, and 8), and one beneath nonirrigated cropland, or dryland (site 3). Measurements of transient vertical gradients in total water potential indicated that periodic wetting fronts reached greater mean maximum depths beneath the irrigated sites than beneath the rangeland sites, in part, because of the presence of greater and constant antecedent moisture. Beneath the rangeland sites, greater temporal variation in antecedent moisture and total water potential existed and was, in part, likely a result of local precipitation and evapotranspiration. Moreover, greater variability was noticed in the total water potential profiles beneath the western sites than the corresponding eastern sites, which was attributed to less mean annual precipitation in the west. The depth of the peak post-bomb tritium concentration or the interface between the pre-bomb/post-bomb tritium, along with a tritium mass balance, within sampled soil profiles were used to
Strong Electroweak Symmetry Breaking
Grinstein, Benjamin
2011-01-01
Models of spontaneous breaking of electroweak symmetry by a strong interaction do not have a fine-tuning/hierarchy problem. They are conceptually elegant and use the only mechanism of spontaneous breaking of a gauge symmetry that is known to occur in nature. The simplest model, minimal technicolor with extended technicolor interactions, is appealing because one can calculate by scaling up from QCD. But it is ruled out on many counts: inappropriately low quark and lepton masses (or excessive FCNC), bad electroweak data fits, light scalar and vector states, etc. However, nature may not choose the minimal model and then we are stuck: except possibly through lattice simulations, we are unable to compute and test the models. In the LHC era it therefore makes sense to abandon specific models (of strong EW breaking) and concentrate on generic features that may indicate discovery. The Technicolor Straw Man is not a model but a parametrized search strategy inspired by a remarkable generic feature of walking technicolor,...
Inference Optimization using Relational Algebra
Evers, S.; Fokkinga, M.M.; Apers, Peter M.G.
Exact inference procedures in Bayesian networks can be expressed using relational algebra; this provides a common ground for optimizations from the AI and database communities. Specifically, the ability to accommodate sparse representations of probability distributions opens up the way to optimize
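The correspondence can be shown in miniature (a dependency-free sketch of the general idea, not the paper's system): a factor is a relation with a probability column, factor product is a natural join, and marginalizing a variable is a group-by-sum. Variable elimination then chains these two operators.

```python
from collections import defaultdict

# A factor = (variables, table) where table maps assignment tuples to probabilities.
def join(f, g):
    """Factor product as a natural join on shared variables."""
    fv, ft = f; gv, gt = g
    shared = [v for v in fv if v in gv]
    out_vars = fv + [v for v in gv if v not in fv]
    out = {}
    for fa, fp in ft.items():
        for ga, gp in gt.items():
            if all(fa[fv.index(v)] == ga[gv.index(v)] for v in shared):
                row = fa + tuple(ga[gv.index(v)] for v in gv if v not in fv)
                out[row] = fp * gp
    return (out_vars, out)

def sum_out(f, var):
    """Marginalization as a group-by-sum over the remaining columns."""
    fv, ft = f
    i = fv.index(var)
    grouped = defaultdict(float)
    for a, p in ft.items():
        grouped[a[:i] + a[i + 1:]] += p
    return (fv[:i] + fv[i + 1:], dict(grouped))

# Tiny chain A -> B with P(A) and P(B|A); query P(B) by join then sum-out.
pA = (['A'], {(0,): 0.6, (1,): 0.4})
pBgA = (['A', 'B'], {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
print(sum_out(join(pA, pBgA), 'A'))   # (['B'], {(0,): 0.62, (1,): 0.38})
```

Seen this way, a database engine's join-ordering and sparse-storage machinery applies directly to the elimination order and zero-suppressed tables of exact inference.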
Mixed normal inference on multicointegration
Boswijk, H.P.
2009-01-01
Asymptotic likelihood analysis of cointegration in I(2) models, see Johansen (1997, 2006), Boswijk (2000) and Paruolo (2000), has shown that inference on most parameters is mixed normal, implying hypothesis test statistics with an asymptotic χ2 null distribution. The asymptotic distribution of the
Statistical inference on variance components
Verdooren, L.R.
1988-01-01
In several sciences but especially in animal and plant breeding, the general mixed model with fixed and random effects plays a great role. Statistical inference on variance components means tests of hypotheses about variance components, constructing confidence intervals for them, estimating them,
Plasmons in strong superconductors
International Nuclear Information System (INIS)
Baldo, M.; Ducoin, C.
2011-01-01
We present a study of the possible plasmon excitations that can occur in systems where strong superconductivity is present. In these systems the plasmon energy is comparable to or smaller than the pairing gap. As a prototype of these systems we consider the proton component of Neutron Star matter just below the crust when electron screening is not taken into account. For the realistic case we consider in detail the different aspects of the elementary excitations when the proton and electron components are considered within the Random-Phase Approximation generalized to the superfluid case, while the influence of the neutron component is considered only at a qualitative level. Electron screening plays a major role in modifying the proton spectrum and spectral function. At the same time the electron plasmon is strongly modified and damped by the indirect coupling with the superfluid proton component, even at moderately low values of the gap. The excitation spectrum shows the interplay of the different components and their relevance for each excitation mode. The results are relevant for neutrino physics and thermodynamical processes in neutron stars. If electron screening is neglected, the spectral properties of the proton component show some resemblance with the physical situation in high-Tc superconductors, and we briefly discuss similarities and differences in this connection. In a general prospect, the results of the study emphasize the role of Coulomb interaction in strong superconductors.
Strong-coupling approximations
International Nuclear Information System (INIS)
Abbott, R.B.
1984-03-01
Standard path-integral techniques such as instanton calculations give good answers for weak-coupling problems, but become unreliable for strong coupling. Here we consider a method of replacing the original potential by a suitably chosen harmonic oscillator potential. Physically this is motivated by the fact that potential barriers below the level of the ground-state energy of a quantum-mechanical system have little effect. Numerically, results are good, both for quantum-mechanical problems and for massive φ4 field theory in 1+1 dimensions. 9 references, 6 figures
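The spirit of the method, substituting a best-fitting harmonic oscillator for the true potential, can be illustrated on a quartic anharmonic oscillator H = p²/2 + x²/2 + λx⁴ (a standard textbook example of my choosing, not the paper's field-theoretic calculation). Using the harmonic ground state of frequency ω as a trial state gives E(ω) = ω/4 + 1/(4ω) + 3λ/(4ω²), which is minimized over ω and compared with a numerically exact ground-state energy:

```python
import numpy as np
from scipy.optimize import minimize_scalar

lam = 1.0   # quartic coupling in H = p^2/2 + x^2/2 + lam * x^4  (hbar = m = 1)

# variational energy in the ground state of a harmonic oscillator of frequency w:
# <p^2/2> = w/4, <x^2/2> = 1/(4w), <x^4> = 3/(4w^2)
E = lambda w: w / 4 + 1 / (4 * w) + 3 * lam / (4 * w**2)
opt = minimize_scalar(E, bounds=(0.1, 20.0), method='bounded')

# 'exact' ground energy from a finite-difference Hamiltonian on a grid
x = np.linspace(-5, 5, 2000)
dx = x[1] - x[0]
V = x**2 / 2 + lam * x**4
H = (np.diag(1 / dx**2 + V)                     # kinetic -(1/2) d^2/dx^2, 3-point stencil
     + np.diag(-0.5 / dx**2 * np.ones(len(x) - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(len(x) - 1), -1))
E0 = np.linalg.eigvalsh(H)[0]

print(f"harmonic substitute: omega* = {opt.x:.3f}, E = {opt.fun:.4f}")
print(f"exact ground energy: {E0:.4f}")   # the variational E is an upper bound
```

For λ = 1 the optimal ω is 2 and the substitute energy 0.8125 sits just above the exact 0.8038, showing why the substitution works well even at strong coupling.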
International Nuclear Information System (INIS)
Ebata, T.
1981-01-01
With an assumed weak multiplet structure for bosonic hadrons, which is consistent with the ΔI = 1/2 rule, it is shown that the strong interaction effective hamiltonian is compatible with the weak SU(2) x U(1) gauge transformation. In particular, the rho-meson transforms as a triplet under SU(2)_w, and this is the origin of the rho-photon analogy. It is also shown that the existence of the non-vanishing Cabibbo angle is a necessary condition for the absence of exotic hadrons. (orig.)
Quantifying the multi-scale performance of network inference algorithms.
Oates, Chris J; Amos, Richard; Spencer, Simon E F
2014-10-01
Graphical models are widely used to study complex multivariate biological systems. Network inference algorithms aim to reverse-engineer such models from noisy experimental data. It is common to assess such algorithms using techniques from classifier analysis. These metrics, based on the ability to correctly infer individual edges, possess a number of appealing features including invariance to rank-preserving transformation. However, regulation in biological systems occurs on multiple scales and existing metrics do not take into account the correctness of higher-order network structure. In this paper novel performance scores are presented that share the appealing properties of existing scores, whilst capturing the ability to uncover regulation on multiple scales. Theoretical results confirm that the performance of a network inference algorithm depends crucially on the scale at which inferences are to be made; in particular, strong local performance does not guarantee accurate reconstruction of higher-order topology. Applying these scores to a large corpus of data from the DREAM5 challenge, we undertake a data-driven assessment of estimator performance. We find that the "wisdom of crowds" network, which demonstrated superior local performance in the DREAM5 challenge, is also among the best performing methodologies for inference of regulation on multiple length scales.
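The local-versus-higher-order distinction is easy to demonstrate (a toy comparison with invented networks, not the paper's scores): an estimate can get nearly every individual edge call right yet badly distort reachability, the kind of higher-order structure the proposed scores are designed to capture.

```python
import numpy as np

n = 30
truth = np.zeros((n, n), dtype=int)
truth[np.arange(n - 1), np.arange(1, n)] = 1        # a directed chain 0 -> 1 -> ... -> 29

est = truth.copy()
est[14, 15] = 0                                      # one local mistake: a missing edge
est[2, 5] = est[20, 24] = est[7, 9] = 1              # three spurious shortcuts

def reachability(adj):
    """Transitive closure by repeated squaring of the adjacency matrix."""
    r = adj.copy()
    for _ in range(int(np.ceil(np.log2(n))) + 1):
        r = ((r + r @ r) > 0).astype(int)
    return r

edge_acc = (truth == est).mean()
reach_acc = (reachability(truth) == reachability(est)).mean()
print(f"edge-level accuracy:    {edge_acc:.3f}")     # ~0.996: nearly perfect locally
print(f"reachability agreement: {reach_acc:.3f}")    # far worse: one miss cuts the chain
```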
Dvali, Gia
2009-01-01
We show that whenever a 4-dimensional theory with N particle species emerges as a consistent low energy description of a 3-brane embedded in an asymptotically-flat (4+d)-dimensional space, the holographic scale of high-dimensional gravity sets the strong coupling scale of the 4D theory. This connection persists in the limit in which gravity can be consistently decoupled. We demonstrate this effect for orbifold planes, as well as for the solitonic branes and string theoretic D-branes. In all cases the emergence of a 4D strong coupling scale from bulk holography is a persistent phenomenon. The effect turns out to be insensitive even to such extreme deformations of the brane action that seemingly shield 4D theory from the bulk gravity effects. A well understood example of such deformation is given by large 4D Einstein term in the 3-brane action, which is known to suppress the strength of 5D gravity at short distances and change the 5D Newton's law into the four-dimensional one. Nevertheless, we observe that the ...
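For orientation, the 4D strong-coupling scale invoked here is the 'species scale' commonly stated in this literature (schematic form only; the paper's point is how brane constructions tie it to the bulk holographic scale):

```latex
\Lambda_{\text{strong}} \;\sim\; \frac{M_{\mathrm{Pl}}}{\sqrt{N}}\,.
```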
Bayesian inference with ecological applications
Link, William A
2009-01-01
This text is written to provide a mathematically sound but accessible and engaging introduction to Bayesian inference specifically for environmental scientists, ecologists and wildlife biologists. It emphasizes the power and usefulness of Bayesian methods in an ecological context. The advent of fast personal computers and easily available software has simplified the use of Bayesian and hierarchical models. One obstacle remains for ecologists and wildlife biologists, namely the near absence of Bayesian texts written specifically for them. The book includes many relevant examples, is supported by software and examples on a companion website and will become an essential grounding in this approach for students and research ecologists. Engagingly written text specifically designed to demystify a complex subject; examples drawn from ecology and wildlife research; an essential grounding for graduate and research ecologists in the increasingly prevalent Bayesian approach to inference; companion website with analyt...
Statistical inference on residual life
Jeong, Jong-Hyeon
2014-01-01
This is a monograph on the concept of residual life, which is an alternative summary measure of time-to-event data, or survival data. The mean residual life has been used for many years under the name of life expectancy, so it is a natural concept for summarizing survival or reliability data. It is also more interpretable than the popular hazard function, especially for communications between patients and physicians regarding the efficacy of a new drug in the medical field. This book reviews existing statistical methods to infer the residual life distribution. The review and comparison includes existing inference methods for mean and median, or quantile, residual life analysis through medical data examples. The concept of the residual life is also extended to competing risks analysis. The targeted audience includes biostatisticians, graduate students, and PhD (bio)statisticians. Knowledge in survival analysis at an introductory graduate level is advisable prior to reading this book.
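For reference, the summary measure at the center of the book has a standard definition: for a lifetime T with survival function S(t) = P(T > t), the mean residual life is

```latex
\mathrm{mrl}(t) \;=\; E\left[\, T - t \mid T > t \,\right]
\;=\; \frac{\int_t^{\infty} S(u)\, du}{S(t)}, \qquad \mathrm{mrl}(0) = E[T],
```

so that mrl(0) recovers the familiar life expectancy.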
Bayesian Inference on Proportional Elections
Brunello, Gabriel Hideki Vatanabe; Nakano, Eduardo Yoshio
2015-01-01
Polls for majoritarian voting systems usually show estimates of the percentage of votes for each candidate. However, proportional vote systems do not necessarily guarantee that the candidate with the highest percentage of votes will be elected. Thus, traditional methods used in majoritarian elections cannot be applied to proportional elections. In this context, the purpose of this paper was to perform a Bayesian inference on proportional elections considering the Brazilian system of seats distribution. More specifically, a methodology was developed to estimate the probability that a given party will have representation in the chamber of deputies. Inferences were made in a Bayesian scenario using the Monte Carlo simulation technique, and the developed methodology was applied to data from the Brazilian elections for Members of the Legislative Assembly and Federal Chamber of Deputies in 2010. A performance rate was also presented to evaluate the efficiency of the methodology. Calculations and simulations were carried out using the free R statistical software. PMID:25786259
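The flavor of such a computation can be sketched as follows (a simplified stand-in: a Dirichlet posterior over vote shares and D'Hondt-style divisor allocation replace the Brazilian rules, which differ in detail; the poll counts and seat total are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def dhondt(shares, seats):
    """Highest-averages (D'Hondt) seat allocation."""
    alloc = np.zeros(len(shares), dtype=int)
    for _ in range(seats):
        alloc[np.argmax(shares / (alloc + 1))] += 1
    return alloc

poll = np.array([480, 260, 150, 70, 40])       # observed poll counts per party
seats = 10
draws = rng.dirichlet(poll + 1, size=20_000)   # posterior over vote shares (flat prior)

got_seat = np.zeros(len(poll))
for share in draws:
    got_seat += dhondt(share, seats) > 0
print("P(at least one seat):", np.round(got_seat / len(draws), 3))
```

Each posterior draw is pushed through the (non-linear) seat-allocation rule, so the output is a Monte Carlo estimate of the probability of representation rather than a transformed vote percentage.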
Nonparametric Bayesian inference in biostatistics
Müller, Peter
2015-01-01
As chapters in this book demonstrate, BNP has important uses in clinical sciences and inference for issues like unknown partitions in genomics. Nonparametric Bayesian approaches (BNP) play an ever expanding role in biostatistical inference from use in proteomics to clinical trials. Many research problems involve an abundance of data and require flexible and complex probability models beyond the traditional parametric approaches. As this book's expert contributors show, BNP approaches can be the answer. Survival Analysis, in particular survival regression, has traditionally used BNP, but BNP's potential is now very broad. This applies to important tasks like arrangement of patients into clinically meaningful subpopulations and segmenting the genome into functionally distinct regions. This book is designed to both review and introduce application areas for BNP. While existing books provide theoretical foundations, this book connects theory to practice through engaging examples and research questions. Chapters c...
Statistical inference an integrated approach
Migon, Helio S; Louzada, Francisco
2014-01-01
Introduction: Information; The concept of probability; Assessing subjective probabilities; An example; Linear algebra and probability; Notation; Outline of the book. Elements of Inference: Common statistical models; Likelihood-based functions; Bayes theorem; Exchangeability; Sufficiency and exponential family; Parameter elimination. Prior Distribution: Entirely subjective specification; Specification through functional forms; Conjugacy with the exponential family; Non-informative priors; Hierarchical priors. Estimation: Introduction to decision theory; Bayesian point estimation; Classical point estimation; Empirical Bayes estimation; Comparison of estimators; Interval estimation; Estimation in the Normal model. Approximating Methods: The general problem of inference; Optimization techniques; Asymptotic theory; Other analytical approximations; Numerical integration methods; Simulation methods. Hypothesis Testing: Introduction; Classical hypothesis testing; Bayesian hypothesis testing; Hypothesis testing and confidence intervals; Asymptotic tests. Prediction...
Causal inference based on counterfactuals
Directory of Open Access Journals (Sweden)
Höfler M
2005-09-01
Full Text Available Background: The counterfactual or potential outcome model has become increasingly standard for causal inference in epidemiological and medical studies. Discussion: This paper provides an overview of the counterfactual and related approaches. A variety of conceptual as well as practical issues when estimating causal effects are reviewed. These include causal interactions, imperfect experiments, adjustment for confounding, time-varying exposures, competing risks and the probability of causation. It is argued that the counterfactual model of causal effects captures the main aspects of causality in health sciences and relates to many statistical procedures. Summary: Counterfactuals are the basis of causal inference in medicine and epidemiology. Nevertheless, the estimation of counterfactual differences poses several difficulties, primarily in observational studies. These problems, however, reflect fundamental barriers only when learning from observations, and this does not invalidate the counterfactual concept.
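A minimal simulation of the core difficulty (invented numbers; the notation follows the standard potential-outcome presentation): each unit has two potential outcomes, only one of which is observed, and a confounder makes the naive exposed-versus-unexposed contrast miss the true average causal effect until we adjust for it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
c = rng.binomial(1, 0.5, n)                 # confounder
t = rng.binomial(1, 0.2 + 0.6 * c)          # exposure depends on the confounder
y0 = 1.0 + 2.0 * c + rng.normal(size=n)     # potential outcome under no exposure
y1 = y0 + 1.0                               # true individual causal effect = 1
y = np.where(t == 1, y1, y0)                # only one potential outcome is observed

naive = y[t == 1].mean() - y[t == 0].mean()
adjusted = np.mean([                        # standardize over the confounder strata
    y[(t == 1) & (c == v)].mean() - y[(t == 0) & (c == v)].mean()
    for v in (0, 1)
])
print(f"true ATE = 1.00, naive = {naive:.2f}, adjusted = {adjusted:.2f}")
```

The naive contrast lands near 2.2 because the exposed group starts from a higher baseline; stratifying on the confounder recovers the causal effect of 1.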
Antonella Del Rosso
2016-01-01
Twenty years of designing, building and testing a number of innovative technologies, with the strong belief that the endeavour would lead to a historic breakthrough. The Bulletin publishes an abstract of the Courier's interview with Barry Barish, one of the founding fathers of LIGO. [Figure: the signals of gravitational waves detected by the twin LIGO observatories at Livingston, Louisiana, and Hanford, Washington. Image: Caltech/MIT/LIGO Lab.] On 11 February, the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo collaborations published a historic paper in which they showed a gravitational signal emitted by the merger of two black holes. These results come after 20 years of hard work by a large collaboration of scientists operating the two LIGO observatories in the US. Barry Barish, Linde Professor of Physics, Emeritus at the California Institute of Technology and former Director of the Global Design Effort for the Internat...
Racing for conditional independence inference
Czech Academy of Sciences Publication Activity Database
Bouckaert, R. R.; Studený, Milan
2005-01-01
Vol. 3571 (2005), pp. 221-232. ISSN 0302-9743. [ECSQARU 2005, 8th European Conference, Barcelona, 06.07.2005-08.07.2005] R&D Projects: GA ČR GA201/04/0393; GA MŠk 1M0572. Institutional research plan: CEZ:AV0Z10750506. Keywords: conditional independence inference; imset; racing algorithms. Subject RIV: BA - General Mathematics
Statistical inference a short course
Panik, Michael J
2012-01-01
A concise, easily accessible introduction to descriptive and inferential techniques. Statistical Inference: A Short Course offers a concise presentation of the essentials of basic statistics for readers seeking to acquire a working knowledge of statistical concepts, measures, and procedures. The author presents tests of the assumptions of randomness and normality, and provides nonparametric methods for when parametric approaches might not work. The book also explores how to determine a confidence interval for a population median, while also providing coverage of ratio estimation, randomness, and causal
On Quantum Statistical Inference, II
Barndorff-Nielsen, O. E.; Gill, R. D.; Jupp, P. E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, theoretical developments in the theory of quantum measurements have brought the basic mathematical framework for the probability calculations much closer to that of classical probability theory. The present paper reviews this field and proposes and inte...
Nonparametric predictive inference in reliability
International Nuclear Information System (INIS)
Coolen, F.P.A.; Coolen-Schrijner, P.; Yan, K.J.
2002-01-01
We introduce a recently developed statistical approach, called nonparametric predictive inference (NPI), to reliability. Bounds for the survival function for a future observation are presented. We illustrate how NPI can deal with right-censored data, and discuss aspects of competing risks. We present possible applications of NPI for Bernoulli data, and we briefly outline applications of NPI for replacement decisions. The emphasis is on introduction and illustration of NPI in reliability contexts, detailed mathematical justifications are presented elsewhere
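The simplest instance of NPI rests on Hill's assumption A(n), under which the next observation falls into each of the n+1 intervals between ordered data points with probability 1/(n+1); this yields lower and upper survival probabilities for a future observation. The sketch below is written from the general NPI literature for uncensored data with invented lifetimes; the paper's treatment of censoring and competing risks is more involved.

```python
import numpy as np

def npi_survival_bounds(data, t):
    """Lower/upper probability that the next observation exceeds t, under A(n)."""
    data = np.sort(np.asarray(data, dtype=float))
    n = len(data)
    above = np.sum(data > t)
    lower = above / (n + 1)
    upper = min((above + 1) / (n + 1), 1.0)
    return lower, upper

lifetimes = [12.0, 35.0, 48.0, 61.0, 80.0, 95.0, 130.0, 151.0]   # uncensored, invented
for t in (40, 100, 160):
    lo, hi = npi_survival_bounds(lifetimes, t)
    print(f"P(next lifetime > {t}): [{lo:.3f}, {hi:.3f}]")
```

The interval between the bounds reflects exactly the imprecision that NPI deliberately retains instead of committing to a parametric survival model.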
Computational Neuropsychology and Bayesian Inference.
Parr, Thomas; Rees, Geraint; Friston, Karl J
2018-01-01
Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine 'prior' beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology - optimal inference with suboptimal priors - and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient's behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
Continuous Integrated Invariant Inference, Phase I
National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...
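Trace-based invariant inference in the style of existing tools can be sketched in a few lines (an illustration of the general technique only; this record does not describe the project's actual method): propose candidate invariants over observed variable values and keep those that no execution falsifies.

```python
from itertools import combinations

def infer_invariants(traces):
    """traces: list of dicts mapping variable names to values at a program point.
    Returns candidate invariants (unary and binary) that hold on every trace."""
    names = sorted(traces[0])
    found = []
    for v in names:                               # unary candidates
        vals = [t[v] for t in traces]
        if all(x > 0 for x in vals):
            found.append(f"{v} > 0")
        if len(set(vals)) == 1:
            found.append(f"{v} == {vals[0]}")
    for a, b in combinations(names, 2):           # binary candidates
        if all(t[a] == t[b] for t in traces):
            found.append(f"{a} == {b}")
        elif all(t[a] <= t[b] for t in traces):
            found.append(f"{a} <= {b}")
    return found

# observations of (i, n, total) at a loop head, e.g. while summing n items
traces = [{"i": 0, "n": 5, "total": 0}, {"i": 3, "n": 5, "total": 9},
          {"i": 5, "n": 5, "total": 15}]
print(infer_invariants(traces))   # ['n > 0', 'n == 5', 'i <= n', 'i <= total']
```

'Continuous integrated' inference would rerun such checks as new executions arrive, retiring candidates the moment a trace falsifies them.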
Variational inference & deep learning : A new synthesis
Kingma, D.P.
2017-01-01
In this thesis, Variational Inference and Deep Learning: A New Synthesis, we propose novel solutions to the problems of variational (Bayesian) inference, generative modeling, representation learning, semi-supervised learning, and stochastic optimization.
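The object tying these topics together is the evidence lower bound (ELBO), stated here in its standard form for orientation:

```latex
\log p_\theta(x) \;\ge\;
\mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x, z) - \log q_\phi(z \mid x) \right]
\;=\; \log p_\theta(x) - \mathrm{KL}\!\left( q_\phi(z \mid x) \,\|\, p_\theta(z \mid x) \right).
```

Maximizing the bound jointly over model parameters θ and variational parameters φ performs generative modeling and approximate inference at once.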
Okalebo, J. A.; Wienhold, B.; Suyker, A.; Erickson, G.; Hayes, M. J.; Awada, T.
2015-12-01
The Platte River - High Plains Aquifer (PR-HPA) is one of 18 established Long Term Agroecosystem Research (LTAR) networks across the US. PR-HPA is a partnership between the Institute of Agriculture and Natural Resources at the University of Nebraska-Lincoln (UNL), the USDA-ARS Agroecosystem Management Research Unit (AMRU) in Lincoln, and the USDA-ARS Environmental Management Research Unit (EMRU) in Clay Center, NE. The PR-HPA network encompasses 27,750 ha of research sites with data going back to the early 1900s. A partial list of on-going research projects includes those encompassing long-term manuring and continuous corn (Est. 1912), dryland tillage plots (Est. 1970), soil nutrients and tillage (Est. 1983), biofuel feedstock studies (Est. 2001), and a carbon sequestration study (Est. 2000). Affiliated partners include the National Drought Mitigation Center (NDMC) that develops measures to improve preparedness and adaptation to climate variability and drought; the High Plains Regional Climate Center (HPRCC) that coordinates data acquisition from over 170 automated weather stations and an automated soil moisture network of around 50 stations across NE and beyond; the AMERIFLUX and NEBFLUX networks that coordinate the water vapor and carbon dioxide flux measurements across NE with emphasis on rainfed and irrigated crop lands; the ARS Greenhouse gas Reduction through Agricultural Carbon Enhancement network (GRACEnet) and the Resilient Economic Agricultural Practices (REAP) project; and the Center for Advanced Land Management Information Technologies (CALMIT) that assists with the use of geospatial technologies for agriculture and natural resource applications. Current emphases are on addressing present-day and emerging issues related to profitability and sustainability of agroecosystems. The poster will highlight some of the ongoing and planned efforts in research pertaining to climate variability and change, water sustainability, and ecological and agronomic challenges associated
Directory of Open Access Journals (Sweden)
Stan Daberkow
2001-01-01
Full Text Available Given the societal concern about groundwater pollution from agricultural sources, public programs have been proposed or implemented to change farmer behavior with respect to nutrient use and management. However, few of these programs designed to change farmer behavior have been evaluated due to the lack of detailed data over an appropriate time frame. The Central Platte Natural Resources District (CPNRD) in Nebraska has identified an intensively cultivated, irrigated area with average groundwater nitrate-nitrogen (N) levels about double the EPA's safe drinking water standard. The CPNRD implemented a joint education and regulatory N management program in the mid-1980s to reduce groundwater N. This analysis reports N use and management, yield, and groundwater nitrate trends in the CPNRD for nearly 3000 continuous-corn fields from 1989 to 1998, where producers faced limits on the timing of N fertilizer application but no limits on amounts. Groundwater nitrate levels showed modest improvement over the 10 years of this analysis, falling from the 1989–1993 average of 18.9 to 18.1 mg/l during 1994–1998. The availability of N in excess of crop needs was clearly documented by the CPNRD data and was related to optimistic yield goals, irrigation water use above expected levels, and lack of adherence to commercial fertilizer application guidelines. Over the 10-year period of this analysis, producers reported harvesting an annual average of 9729 kg/ha, 1569 kg/ha (14%) below the average yield goal. During 1989–1998, producers reported annually applying an average of 162.5 kg/ha of commercial N fertilizer, 15.7 kg/ha (10%) above the guideline level. Including the N contribution from irrigation water, the potential N contribution to the environment (total N available less estimated crop use) was estimated at 71.7 kg/ha. This is an estimate of the nitrates available for denitrification, volatilization, runoff, future soil N, and leaching to groundwater. On
Variations on Bayesian Prediction and Inference
2016-05-09
Background: There are a number of statistical inference problems that are not generally formulated via a full probability model... For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle
Wickens, F
Our friend and colleague John Strong was cruelly taken from us by a brain tumour on Monday 31st July, a few days before his 65th birthday. John started his career working with a group from Westfield College, under the leadership of Ted Bellamy. He obtained his PhD and spent the early part of his career on experiments at the Rutherford Appleton Laboratory (RAL), but after the early 1970s his research was focussed on experiments at CERN. Over the years he made a number of notable contributions to experiments at CERN: the Omega spectrometer adopted a system John had originally developed for experiments at RAL using vidicon cameras to record the sparks in the spark chambers; he contributed to the success of NA1 and NA7, where he became heavily involved in the electronic trigger systems; he was responsible for the second level trigger system for the ALEPH detector and spent five years leading a team that designed and built the system, which ran for twelve years with only minor interventions. Following ALEPH he tur...
Stirring Strongly Coupled Plasma
Fadafan, Kazem Bitaghsir; Rajagopal, Krishna; Wiedemann, Urs Achim
2009-01-01
We determine the energy it takes to move a test quark along a circle of radius L with angular frequency ω through the strongly coupled plasma of N=4 supersymmetric Yang-Mills (SYM) theory. We find that for most values of L and ω the energy deposited by stirring the plasma in this way is governed either by the drag force acting on a test quark moving through the plasma in a straight line with speed v=Lω or by the energy radiated by a quark in circular motion in the absence of any plasma, whichever is larger. There is a continuous crossover from the drag-dominated regime to the radiation-dominated regime. In the crossover regime we find evidence for significant destructive interference between energy loss due to drag and that due to radiation as if in vacuum. The rotating quark thus serves as a model system in which the relative strength of, and interplay between, two different mechanisms of parton energy loss is accessible via a controlled classical gravity calculation. We close by speculating on the implicati...
Strong-interaction nonuniversality
International Nuclear Information System (INIS)
Volkas, R.R.; Foot, R.; He, X.; Joshi, G.C.
1989-01-01
The universal QCD color theory is extended to an SU(3)_1 x SU(3)_2 x SU(3)_3 gauge theory, where quarks of the ith generation transform as triplets under SU(3)_i and singlets under the other two factors. The usual color group is then identified with the diagonal subgroup, which remains exact after symmetry breaking. The gauge bosons associated with the 16 broken generators then form two massive octets under ordinary color. The interactions between quarks and these heavy gluonlike particles are explicitly nonuniversal and thus an exploration of their physical implications allows us to shed light on the fundamental issue of strong-interaction universality. Nonuniversality and weak flavor mixing are shown to generate heavy-gluon-induced flavor-changing neutral currents. The phenomenology of these processes is studied, as they provide the major experimental constraint on the extended theory. Three symmetry-breaking scenarios are presented. The first has color breaking occurring at the weak scale, while the second and third divorce the two scales. The third model has the interesting feature of radiatively induced off-diagonal Kobayashi-Maskawa matrix elements
Heuristics as Bayesian inference under extreme priors.
Parpart, Paula; Jones, Matt; Love, Bradley C
2018-05-01
Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These "less-is-more" effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information. In contrast, we show at the computational level (where algorithmic restrictions are set aside) that it is never optimal to discard information. Through a formal Bayesian analysis, we prove that popular heuristics, such as tallying and take-the-best, are formally equivalent to Bayesian inference under the limit of infinitely strong priors. Varying the strength of the prior yields a continuum of Bayesian models with the heuristics at one end and ordinary regression at the other. Critically, intermediate models perform better across all our simulations, suggesting that down-weighting information with the appropriate prior is preferable to entirely ignoring it. Rather than because of their simplicity, our analyses suggest heuristics perform well because they implement strong priors that approximate the actual structure of the environment. We end by considering how new heuristics could be derived by infinitely strengthening the priors of other Bayesian models. These formal results have implications for work in psychology, machine learning and economics. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
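The prior-strength continuum described above is easy to reproduce numerically. Below is a minimal sketch (not the authors' code; the data and the equal-weights prior mean are invented for illustration) of Bayesian linear regression whose Gaussian prior is centered on the equal-weights vector, so zero prior strength yields ordinary least squares and infinite prior strength yields a tallying-style rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 5
X = rng.normal(size=(n, d))            # standardized cue values
y = X @ np.array([1.0, 0.8, 0.6, 0.4, 0.2]) + rng.normal(size=n)

m = np.ones(d)                         # prior mean: equal weights (tallying)

def posterior_mean(lam):
    """Posterior-mean weights under the prior w ~ N(m, (sigma^2 / lam) I)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * m)

for lam in [0.0, 1.0, 10.0, 1e6]:
    print(f"lam={lam:g}", np.round(posterior_mean(lam), 3))
# lam = 0 is ordinary least squares; lam -> infinity recovers the
# equal-weight (tallying) rule; intermediate lam interpolates between them.
```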
Computational Neuropsychology and Bayesian Inference
Directory of Open Access Journals (Sweden)
Thomas Parr
2018-02-01
Full Text Available Computational theories of brain function have become very influential in neuroscience. They have facilitated the growth of formal approaches to disease, particularly in psychiatric research. In this paper, we provide a narrative review of the body of computational research addressing neuropsychological syndromes, and focus on those that employ Bayesian frameworks. Bayesian approaches to understanding brain function formulate perception and action as inferential processes. These inferences combine ‘prior’ beliefs with a generative (predictive) model to explain the causes of sensations. Under this view, neuropsychological deficits can be thought of as false inferences that arise due to aberrant prior beliefs (that are poor fits to the real world). This draws upon the notion of a Bayes optimal pathology – optimal inference with suboptimal priors – and provides a means for computational phenotyping. In principle, any given neuropsychological disorder could be characterized by the set of prior beliefs that would make a patient’s behavior appear Bayes optimal. We start with an overview of some key theoretical constructs and use these to motivate a form of computational neuropsychology that relates anatomical structures in the brain to the computations they perform. Throughout, we draw upon computational accounts of neuropsychological syndromes. These are selected to emphasize the key features of a Bayesian approach, and the possible types of pathological prior that may be present. They range from visual neglect through hallucinations to autism. Through these illustrative examples, we review the use of Bayesian approaches to understand the link between biology and computation that is at the heart of neuropsychology.
Plasma pressure and anisotropy inferred from the Tsyganenko magnetic field model
Directory of Open Access Journals (Sweden)
F. Cao
Full Text Available A numerical procedure has been developed to deduce the plasma pressure and anisotropy from the Tsyganenko magnetic field model. The Tsyganenko empirical field model, which is based on vast satellite field data, provides a realistic description of the magnetic field configuration in the magnetosphere. When force balance under the static condition is assumed, the electromagnetic J×B force from the Tsyganenko field model can be used to infer the plasma pressure and anisotropy distributions consistent with the field model. It is found that the J×B force obtained from the Tsyganenko field model is not curl-free. The curl-free part of the J×B force in an empirical field model can be balanced by the gradient of the isotropic pressure, while the nonzero curl of the J×B force can only be associated with the pressure anisotropy. The plasma pressure and anisotropy in the near-Earth plasma sheet are numerically calculated to obtain a static equilibrium consistent with the Tsyganenko field model, both in the noon-midnight meridian and in the equatorial plane. The plasma pressure distribution deduced from the Tsyganenko 1989 field model is highly anisotropic and shows this feature early in the substorm growth phase. The pressure anisotropy parameter α_P, defined as α_P = 1 − P_∥/P_⊥, is typically ~0.3 at x ≈ −4.5 R_E and gradually decreases to a small negative value with increasing tailward distance. The pressure anisotropy from the Tsyganenko 1989 model accounts for 50% of the cross-tail current at maximum, and only in a highly localized region near x ≈ −10 R_E. In comparison, the plasma pressure anisotropy inferred from the Tsyganenko 1987 model is much smaller. We also find that the boundary
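For reference, the static force balance that such a procedure inverts can be written in the standard anisotropic form (a textbook reconstruction; the paper's exact notation may differ):

```latex
\nabla \cdot \mathsf{P} = \mathbf{J} \times \mathbf{B}, \qquad
\mathsf{P} = P_{\perp}\,\mathsf{I}
           + \left(P_{\parallel} - P_{\perp}\right)\hat{\mathbf{b}}\hat{\mathbf{b}}, \qquad
\alpha_P = 1 - \frac{P_{\parallel}}{P_{\perp}},
```

so the curl-free part of J×B can be balanced by an isotropic pressure gradient, while any nonzero curl must be carried by the anisotropic term.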
Bayesian inference for Hawkes processes
DEFF Research Database (Denmark)
Rasmussen, Jakob Gulddahl
2013-01-01
The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional...... intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process....
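As a concrete illustration of the first (conditional-intensity) approach, the log-likelihood that an MCMC sampler would repeatedly evaluate fits in a few lines. The sketch below assumes an exponential excitation kernel, lambda(t) = mu + sum over t_i < t of alpha * exp(-beta * (t - t_i)), which is one common choice rather than anything prescribed by the paper:

```python
import numpy as np

def hawkes_loglik(times, T, mu, alpha, beta):
    """Log-likelihood of a Hawkes process on [0, T] with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    times = np.asarray(times, dtype=float)
    loglik, A = 0.0, 0.0  # A_k = sum_{i<k} exp(-beta*(t_k - t_i)), built recursively
    for k, t in enumerate(times):
        if k > 0:
            A = np.exp(-beta * (t - times[k - 1])) * (A + 1.0)
        loglik += np.log(mu + alpha * A)
    # subtract the compensator, i.e. the integral of lambda over [0, T]
    loglik -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return loglik

print(hawkes_loglik([1.0, 1.3, 4.2, 4.25, 7.9], T=10.0, mu=0.3, alpha=0.5, beta=1.5))
```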
SICK: THE SPECTROSCOPIC INFERENCE CRANK
International Nuclear Information System (INIS)
Casey, Andrew R.
2016-01-01
There exists an inordinate amount of spectral data in both public and private astronomical archives that remain severely under-utilized. The lack of reliable open-source tools for analyzing large volumes of spectra contributes to this situation, which is poised to worsen as large surveys successively release orders of magnitude more spectra. In this article I introduce sick, the spectroscopic inference crank, a flexible and fast Bayesian tool for inferring astrophysical parameters from spectra. sick is agnostic to the wavelength coverage, resolving power, or general data format, allowing any user to easily construct a generative model for their data, regardless of its source. sick can be used to provide a nearest-neighbor estimate of model parameters, a numerically optimized point estimate, or full Markov Chain Monte Carlo sampling of the posterior probability distributions. This generality empowers any astronomer to capitalize on the plethora of published synthetic and observed spectra, and make precise inferences for a host of astrophysical (and nuisance) quantities. Model intensities can be reliably approximated from existing grids of synthetic or observed spectra using linear multi-dimensional interpolation, or a Cannon-based model. Additional phenomena that transform the data (e.g., redshift, rotational broadening, continuum, spectral resolution) are incorporated as free parameters and can be marginalized away. Outlier pixels (e.g., cosmic rays or poorly modeled regimes) can be treated with a Gaussian mixture model, and a noise model is included to account for systematically underestimated variance. Combining these phenomena into a scalar-justified, quantitative model permits precise inferences with credible uncertainties on noisy data. I describe the common model features, the implementation details, and the default behavior, which is balanced to be suitable for most astronomical applications. Using a forward model on low-resolution, high signal
Inference in hybrid Bayesian networks
DEFF Research Database (Denmark)
Lanseth, Helge; Nielsen, Thomas Dyhre; Rumí, Rafael
2009-01-01
Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties, and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.
Type Inference of Turbo Pascal
DEFF Research Database (Denmark)
Hougaard, Ole Ildsgaard; Schwartzbach, Michael I; Askari, Hosein
1995-01-01
Type inference is generally thought of as being an exclusive property of the functional programming paradigm. We argue that such a feature may be of significant benefit also for standard imperative languages. We present a working tool (available by WWW) providing these benefits for a full version of Turbo Pascal. It has the form of a preprocessor that analyzes programs in which the type annotations are only partial or even absent. The resulting program has full type annotations, will be accepted by the standard Turbo Pascal compiler, and has polymorphic use of procedures resolved by means of code...
Inferring network structure from cascades
Ghonge, Sushrut; Vural, Dervis Can
2017-07-01
Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.
DEFF Research Database (Denmark)
Lenoir, Jonathan; Graae, Bente; Aarrestad, Per
2013-01-01
Recent studies from mountainous areas of small spatial extent (<2500 km(2) ) suggest that fine-grained thermal variability over tens or hundreds of metres exceeds much of the climate warming expected for the coming decades. Such variability in temperature provides buffering to mitigate climate-ch...
Inferring Identity From Language: Linguistic Intergroup Bias Informs Social Categorization.
Porter, Shanette C; Rheinschmidt-Same, Michelle; Richeson, Jennifer A
2016-01-01
The present research examined whether a communicator's verbal, implicit message regarding a target is used as a cue for inferring that communicator's social identity. Previous research has found linguistic intergroup bias (LIB) in individuals' speech: They use abstract language to describe in-group targets' desirable behaviors and concrete language to describe their undesirable behaviors (favorable LIB), but use concrete language for out-group targets' desirable behaviors and abstract language for their undesirable behaviors (unfavorable LIB). Consequently, one can infer the type of language a communicator is likely to use to describe in-group and out-group targets. We hypothesized and found evidence for the reverse inference. Across four studies, individuals inferred a communicator's social identity on the basis of the communicator's use of an LIB. Specifically, participants more strongly believed that a communicator and target shared a social identity when the communicator used the favorable, rather than the unfavorable, LIB in describing that target. © The Author(s) 2015.
FLOODPLAIN, PLATTE COUNTY, MISSOURI USA
Federal Emergency Management Agency, Department of Homeland Security — The Floodplain Mapping/Redelineation study deliverables depict and quantify the flood risks for the study area. The primary risk classifications used are the...
HYDRAULICS, PLATTE COUNTY, MISSOURI, USA
Federal Emergency Management Agency, Department of Homeland Security — Recent developments in digital terrain and geospatial database management technology make it possible to protect this investment for existing and future projects to...
Bayesian Inference of Tumor Hypoxia
Gunawan, R.; Tenti, G.; Sivaloganathan, S.
2009-12-01
Tumor hypoxia is a state of oxygen deprivation in tumors. It has been associated with aggressive tumor phenotypes and with increased resistance to conventional cancer therapies. In this study, we report on the application of Bayesian sequential analysis in estimating the most probable value of tumor hypoxia quantification based on immunohistochemical assays of a biomarker. The `gold standard' of tumor hypoxia assessment is a direct measurement of pO2 in vivo by the Eppendorf polarographic electrode, which is an invasive technique restricted to accessible sites and living tissues. An attractive alternative is immunohistochemical staining to detect proteins expressed by cells during hypoxia. Carbonic anhydrase IX (CAIX) is an enzyme expressed on the cell membrane during hypoxia to balance the immediate extracellular microenvironment. CAIX is widely regarded as a surrogate marker of chronic hypoxia in various cancers. The study was conducted with two different experimental procedures. The first data set was a group of three patients with invasive cervical carcinomas, from which five biopsies were obtained. Each of the biopsies was fully sectioned and from each section, the proportion of CAIX-positive cells was estimated. Measurements were made by image analysis of multiple deep sections cut through these biopsies, labeled for CAIX using both immunofluorescence and immunohistochemical techniques [1]. The second data set was a group of 24 patients, also with invasive cervical carcinomas, from which two biopsies were obtained. Bayesian parameter estimation was applied to obtain a reliable inference about the proportion of CAIX-positive cells within the carcinomas, based on the available biopsies. From the first data set, two to three biopsies were found to be sufficient to infer the overall CAIX percentage in the simple form: best estimate±uncertainty. The second data-set led to a similar result in 70% of the cases. In the remaining cases Bayes' theorem warned us
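With a beta prior on the proportion of CAIX-positive cells and a binomial likelihood per biopsy, the sequential analysis described here is conjugate and each update is one line. A minimal sketch with invented counts (the clinical data are not reproduced here):

```python
import numpy as np
from scipy import stats

a, b = 1.0, 1.0                                    # flat Beta(1, 1) prior
for pos, tot in [(35, 120), (28, 95), (40, 110)]:  # hypothetical biopsies
    a, b = a + pos, b + (tot - pos)                # conjugate Beta-Binomial update
    mean = a / (a + b)
    sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    lo, hi = stats.beta.ppf([0.025, 0.975], a, b)
    print(f"best estimate {mean:.3f} +/- {sd:.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```

After two or three such updates the interval typically stabilizes, in the spirit of the paper's finding that a few biopsies suffice for an estimate of the form best estimate ± uncertainty.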
Spontaneous Trait Inferences on Social Media.
Levordashka, Ana; Utz, Sonja
2017-01-01
The present research investigates whether spontaneous trait inferences occur under conditions characteristic of social media and networking sites: nonextreme, ostensibly self-generated content, simultaneous presentation of multiple cues, and self-paced browsing. We used an established measure of trait inferences (false recognition paradigm) and a direct assessment of impressions. Without being asked to do so, participants spontaneously formed impressions of people whose status updates they saw. Our results suggest that trait inferences occurred from nonextreme self-generated content, which is commonly found in social media updates (Experiment 1) and when nine status updates from different people were presented in parallel (Experiment 2). Although inferences did occur during free browsing, the results suggest that participants did not necessarily associate the traits with the corresponding status update authors (Experiment 3). Overall, the findings suggest that spontaneous trait inferences occur on social media. We discuss implications for online communication and research on spontaneous trait inferences.
Sociolinguistic Perception as Inference Under Uncertainty.
Kleinschmidt, Dave F; Weatherholtz, Kodi; Florian Jaeger, T
2018-03-15
Social and linguistic perceptions are linked. On one hand, talker identity affects speech perception. On the other hand, speech itself provides information about a talker's identity. Here, we propose that the same probabilistic knowledge might underlie both socially conditioned linguistic inferences and linguistically conditioned social inferences. Our computational-level approach, the ideal adapter, starts from the idea that listeners use probabilistic knowledge of covariation between social, linguistic, and acoustic cues in order to infer the most likely explanation of the speech signals they hear. As a first step toward understanding social inferences in this framework, we use a simple ideal observer model to show that it would be possible to infer aspects of a talker's identity using cue distributions based on actual speech production data. This suggests the possibility of a single formal framework for social and linguistic inferences and the interactions between them. Copyright © 2018 Cognitive Science Society, Inc.
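A toy version of such an ideal observer takes a few lines: given per-group cue distributions (here hypothetical Gaussians standing in for distributions estimated from production data), Bayes' rule returns a posterior over talker identity from a single acoustic cue:

```python
from scipy import stats

# Hypothetical cue distributions (mean, sd) for two talker groups.
groups = {"group_A": (1200.0, 150.0), "group_B": (1500.0, 180.0)}
prior = {"group_A": 0.5, "group_B": 0.5}

def infer_identity(cue):
    """Posterior over talker identity given one acoustic cue value."""
    post = {g: prior[g] * stats.norm.pdf(cue, mu, sd)
            for g, (mu, sd) in groups.items()}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

print(infer_identity(1300.0))   # a cue near group_A's mean favors group_A
```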
Statistical inference for financial engineering
Taniguchi, Masanobu; Ogata, Hiroaki; Taniai, Hiroyuki
2014-01-01
This monograph provides the fundamentals of statistical inference for financial engineering and covers some selected methods suitable for analyzing financial time series data. In order to describe the actual financial data, various stochastic processes, e.g. non-Gaussian linear processes, non-linear processes, long-memory processes, locally stationary processes etc. are introduced and their optimal estimation is considered as well. This book also includes several statistical approaches, e.g., discriminant analysis, the empirical likelihood method, control variate method, quantile regression, realized volatility etc., which have been recently developed and are considered to be powerful tools for analyzing the financial data, establishing a new bridge between time series and financial engineering. This book is well suited as a professional reference book on finance, statistics and statistical financial engineering. Readers are expected to have an undergraduate-level knowledge of statistics.
Inferring echolocation in ancient bats.
Simmons, Nancy B; Seymour, Kevin L; Habersetzer, Jörg; Gunnell, Gregg F
2010-08-19
Laryngeal echolocation, used by most living bats to form images of their surroundings and to detect and capture flying prey, is considered to be a key innovation for the evolutionary success of bats, and palaeontologists have long sought osteological correlates of echolocation that can be used to infer the behaviour of fossil bats. Veselka et al. argued that the most reliable trait indicating echolocation capabilities in bats is an articulation between the stylohyal bone (part of the hyoid apparatus that supports the throat and larynx) and the tympanic bone, which forms the floor of the middle ear. They examined the oldest and most primitive known bat, Onychonycteris finneyi (early Eocene, USA), and argued that it showed evidence of this stylohyal-tympanic articulation, from which they concluded that O. finneyi may have been capable of echolocation. We disagree with their interpretation of key fossil data and instead argue that O. finneyi was probably not an echolocating bat.
Inference and uncertainty in radiology.
Sistrom, Chris
2006-05-01
This paper seeks to enhance understanding of the philosophical underpinnings of our discipline and the resulting practical implications. Radiology reports exist in order to convey new knowledge about a patient's condition based on empiric observations from anatomic or functional images of the body. The route to explanation and prediction from empiric evidence is mostly through inference based on inductive (and sometimes abductive) arguments. The conclusions of inductive arguments are, by definition, contingent and provisional. Therefore, it is necessary to deal in some way with the uncertainty of inferential conclusions (i.e. interpretations) made in radiology reports. Two paradigms for managing uncertainty in natural sciences exist in dialectic tension with each other. These are the frequentist and Bayesian theories of probability. Tension between them is mirrored during routine interactions among radiologists and clinicians. I will describe these core issues and argue that they are quite relevant to routine image interpretation and reporting.
Polynomial Regressions and Nonsense Inference
Directory of Open Access Journals (Sweden)
Daniel Ventosa-Santaulària
2013-11-01
Full Text Available Polynomial specifications are widely used, not only in applied economics, but also in epidemiology, physics, political analysis and psychology, just to mention a few examples. In many cases, the data employed to estimate such specifications are time series that may exhibit stochastic nonstationary behavior. We extend Phillips’ results (Phillips, P. Understanding spurious regressions in econometrics. J. Econom. 1986, 33, 311–340) by proving that inference drawn from polynomial specifications, under stochastic nonstationarity, is misleading unless the variables cointegrate. We use a generalized polynomial specification as a vehicle to study its asymptotic and finite-sample properties. Our results, therefore, lead to a call to be cautious whenever practitioners estimate polynomial regressions.
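The pitfall is easy to demonstrate by simulation. The sketch below (illustrative only, not taken from the paper) regresses one random walk on a polynomial of another, independent one; the conventional t-statistics are routinely "significant" even though the series are unrelated:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
x = np.cumsum(rng.normal(size=n))     # two independent random walks:
y = np.cumsum(rng.normal(size=n))     # stochastically nonstationary, unrelated

X = sm.add_constant(np.column_stack([x, x**2]))   # quadratic specification
fit = sm.OLS(y, X).fit()
print(fit.rsquared, fit.tvalues)      # spuriously large R^2 and t-statistics
```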
Type inference for correspondence types
DEFF Research Database (Denmark)
Hüttel, Hans; Gordon, Andy; Hansen, Rene Rydhof
2009-01-01
We present a correspondence type/effect system for authenticity in a π-calculus with polarized channels, dependent pair types and effect terms, and show how one may, given a process P and an a priori type environment E, generate constraints that are formulae in the Alternating Least Fixed-Point (ALFP) logic. We then show how a reasonable model of the generated constraints yields a type/effect assignment such that P becomes well-typed with respect to E if and only if this is possible. The formulae generated satisfy a finite model property; a system of constraints is satisfiable if and only if it has a finite model. As a consequence, we obtain the result that type/effect inference in our system is polynomial-time decidable.
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Chavira, Mark; Darwiche, Adnan
2004-01-01
We describe a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing the successful compilation, and efficient inference, on relational Bayesian networks whose PRIMULA-generated propositional instances have thousands of variables, and whose jointrees have clusters...
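The evaluate-and-differentiate idea can be seen on a toy network polynomial, here hand-coded for a two-node network A -> B with invented parameters (the actual system compiles far larger instances into arithmetic circuits and differentiates them in time linear in circuit size):

```python
# Network polynomial f = sum_{a,b} theta_a * theta_{b|a} * lam_a * lam_b
theta_a = {0: 0.3, 1: 0.7}
theta_b = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.4, (1, 1): 0.6}  # keys (b, a)

def f(lam_a, lam_b):
    return sum(theta_a[a] * theta_b[(b, a)] * lam_a[a] * lam_b[b]
               for a in (0, 1) for b in (0, 1))

# Clamp the evidence indicator for B = 1; f then evaluates to P(B=1).
lam_b = {0: 0.0, 1: 1.0}
p_e = f({0: 1.0, 1: 1.0}, lam_b)

# The partial derivative wrt lam_a[1] is the joint P(A=1, B=1), so posteriors
# come from one evaluation plus differentiation (finite difference here).
eps = 1e-6
d_a1 = (f({0: 1.0, 1: 1.0 + eps}, lam_b) - p_e) / eps
print(p_e, d_a1 / p_e)   # P(B=1) = 0.54, P(A=1 | B=1) ~ 0.778
```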
Bayesian Inference Methods for Sparse Channel Estimation
DEFF Research Database (Denmark)
Pedersen, Niels Lovmand
2013-01-01
of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation...... and computational complexity. We also analyze the impact of transceiver filters on the sparseness of the channel response, and propose a dictionary design that permits the deployment of sparse inference methods in conditions of low bandwidth....
Inference Attacks and Control on Database Structures
Directory of Open Access Journals (Sweden)
Muhamed Turkanovic
2015-02-01
Full Text Available Today’s databases store information with sensitivity levels that range from public to highly sensitive, hence ensuring confidentiality can be highly important, but also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy related to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since these models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.
Inferring gene regression networks with model trees
Directory of Open Access Journals (Sweden)
Aguilar-Ruiz Jesus S
2010-10-01
Full Text Available Background: Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes by building the so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful in order to determine whether two genes have a strong global similarity, but they do not detect local similarities. Results: We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph from all the relationships among output and input genes is built, taking into account whether the pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: the Saccharomyces cerevisiae and E. coli data sets. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results to those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth- and first-order correlation-based methods. Conclusions: REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear...
LAIT: a local ancestry inference toolkit.
Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei
2017-09-06
Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing software packages require specifically formatted input files and generate output files of various types, which is inconvenient in practice. We developed a tool set, Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats as well as standardize and summarize inference results for four popular local ancestry inference software: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that LAIT provides convenience to run multiple local ancestry inference software. In addition, we evaluated the performance of local ancestry software among the different supported software packages, mainly focusing on inference accuracy and computational resources used. We provide a toolkit to facilitate the use of local ancestry inference software, especially for users with limited bioinformatics background.
Forward and backward inference in spatial cognition.
Directory of Open Access Journals (Sweden)
Will D Penny
Full Text Available This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
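The forward and backward passes described above are the standard recursions for a hidden Markov model. A minimal sketch on a toy ring-shaped environment (all parameters invented): forward inference fuses a motion model (path integration) with noisy sensory input, and backward inference refines past location estimates:

```python
import numpy as np

n = 8                                   # discrete locations on a ring
T_mat = np.zeros((n, n))
for i in range(n):                      # mostly step clockwise, sometimes stay
    T_mat[i, (i + 1) % n] = 0.8
    T_mat[i, i] = 0.2
O = 0.6 * np.eye(n) + 0.4 / n           # O[s, o] = P(observation o | state s)

def forward_backward(obs):
    a = np.zeros((len(obs), n)); b = np.ones((len(obs), n))
    a[0] = O[:, obs[0]] / n             # uniform prior times first likelihood
    for t in range(1, len(obs)):        # forward: motion model + sensory input
        a[t] = O[:, obs[t]] * (T_mat.T @ a[t - 1])
        a[t] /= a[t].sum()
    for t in range(len(obs) - 2, -1, -1):   # backward: refine past estimates
        b[t] = T_mat @ (O[:, obs[t + 1]] * b[t + 1])
        b[t] /= b[t].sum()
    post = a * b
    return post / post.sum(axis=1, keepdims=True)

print(forward_backward([0, 1, 2, 3]).round(2))
```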
Fiducial inference - A Neyman-Pearson interpretation
Salome, D.; von der Linden, W.; Dose, V.; Fischer, R.; Preuss, R.
1999-01-01
Fisher's fiducial argument is a tool for deriving inferences in the form of a probability distribution on the parameter space, not based on Bayes's Theorem. Lindley established that in exceptional situations fiducial inferences coincide with posterior distributions; in the other situations fiducial
Uncertainty in prediction and in inference
Hilgevoord, J.; Uffink, J.
1991-01-01
The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in
Reinforcement learning or active inference?
Friston, Karl J; Daunizeau, Jean; Kiebel, Stefan J
2009-07-29
This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
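A stripped-down version of the perceptual half of this story: with a Gaussian prior on a hidden cause and a Gaussian likelihood for the sensory input, the free energy reduces to a sum of squared prediction errors, and perception is gradient descent on it. The sketch below is a minimal illustration under those assumptions, not the mountain-car agent of the paper:

```python
# Hidden cause v generates an observation u = g(v) + noise.
g = lambda v: v ** 2                    # toy generative mapping (assumed)
dg = lambda v: 2 * v
v_prior, s_prior, s_obs = 1.0, 1.0, 0.5
u = 2.5                                 # observed sensory input

def dF(v):
    # F(v) = (u - g(v))^2 / (2 s_obs^2) + (v - v_prior)^2 / (2 s_prior^2)
    return -(u - g(v)) * dg(v) / s_obs**2 + (v - v_prior) / s_prior**2

v = v_prior                             # start perception at the prior belief
for _ in range(200):                    # gradient descent on free energy
    v -= 0.01 * dF(v)
print(v, g(v))                          # belief shifts until g(v) predicts u
```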
Extended likelihood inference in reliability
International Nuclear Information System (INIS)
Martz, H.F. Jr.; Beckman, R.J.; Waller, R.A.
1978-10-01
Extended likelihood methods of inference are developed in which subjective information in the form of a prior distribution is combined with sampling results by means of an extended likelihood function. The extended likelihood function is standardized for use in obtaining extended likelihood intervals. Extended likelihood intervals are derived for the mean of a normal distribution with known variance, the failure-rate of an exponential distribution, and the parameter of a binomial distribution. Extended second-order likelihood methods are developed and used to solve several prediction problems associated with the exponential and binomial distributions. In particular, such quantities as the next failure-time, the number of failures in a given time period, and the time required to observe a given number of failures are predicted for the exponential model with a gamma prior distribution on the failure-rate. In addition, six types of life testing experiments are considered. For the binomial model with a beta prior distribution on the probability of nonsurvival, methods are obtained for predicting the number of nonsurvivors in a given sample size and for predicting the required sample size for observing a specified number of nonsurvivors. Examples illustrate each of the methods developed. Finally, comparisons are made with Bayesian intervals in those cases where these are known to exist
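For the exponential/gamma case described above, the conjugate forms are standard and worth recording (the report's particular standardization of the extended likelihood may differ):

```latex
\lambda \sim \mathrm{Gamma}(a,b), \qquad
t_1,\dots,t_n \mid \lambda \;\overset{\text{iid}}{\sim}\; \mathrm{Exp}(\lambda)
\;\Longrightarrow\;
\lambda \mid \mathbf{t} \sim \mathrm{Gamma}\Bigl(a+n,\; b+\sum_i t_i\Bigr),
```

and the predictive density for the next failure time integrates the rate out:

```latex
p(t_{n+1}\mid\mathbf{t})
= \int_0^\infty \lambda e^{-\lambda t_{n+1}}\, p(\lambda\mid\mathbf{t})\, d\lambda
= \frac{(a+n)\,\bigl(b+\sum_i t_i\bigr)^{a+n}}{\bigl(b+\sum_i t_i+t_{n+1}\bigr)^{a+n+1}} .
```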
Inferring evoked brain connectivity through adaptive perturbation.
Lepage, Kyle Q; Ching, ShiNung; Kramer, Mark A
2013-04-01
Inference of functional networks (representing the statistical associations between time series recorded from multiple sensors) has found important applications in neuroscience. However, networks exhibiting time-locked activity between physically independent elements can bias functional connectivity estimates employing passive measurements. Here, a perturbative and adaptive method of inferring network connectivity based on measurement and stimulation, so-called "evoked network connectivity", is introduced. This procedure, employing a recursive Bayesian update scheme, allows principled network stimulation given a current network estimate inferred from all previous stimulations and recordings. The method decouples stimulus and detector design from network inference and can be suitably applied to a wide range of clinical and basic neuroscience related problems. The proposed method demonstrates improved accuracy compared to network inference based on passive observation of node dynamics and an increased rate of convergence relative to network estimation employing a naïve stimulation strategy.
EI: A Program for Ecological Inference
Directory of Open Access Journals (Sweden)
Gary King
2004-09-01
Full Text Available The program EI provides a method of inferring individual behavior from aggregate data. It implements the statistical procedures, diagnostics, and graphics from the book A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data (King 1997). Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Ecological inferences are required in political science research when individual-level surveys are unavailable (e.g., local or comparative electoral politics), unreliable (racial politics), insufficient (political geography), or infeasible (political history). They are also required in numerous areas of major significance in public policy (e.g., for applying the Voting Rights Act) and other academic disciplines ranging from epidemiology and marketing to sociology and quantitative history.
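Before any statistical modeling, ecological inference in a 2x2 table is constrained by the deterministic (Duncan-Davis) bounds, which King's method then sharpens with a statistical model. A sketch of the bounds step alone (numbers invented):

```python
def duncan_davis_bounds(X, T):
    """Bounds on beta1 in T = X*beta1 + (1-X)*beta2, where X is the group-1
    population share, T the aggregate rate, and 0 <= beta1, beta2 <= 1."""
    lo = max(0.0, (T - (1.0 - X)) / X)
    hi = min(1.0, T / X)
    return lo, hi

# A precinct that is 80% group 1 with a 40% aggregate rate:
print(duncan_davis_bounds(0.80, 0.40))   # -> (0.25, 0.5)
```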
Quantum electrodynamics of strong fields
International Nuclear Information System (INIS)
Greiner, W.
1983-01-01
Quantum Electrodynamics of Strong Fields provides a broad survey of the theoretical and experimental work accomplished, presenting papers by a group of international researchers who have made significant contributions to this developing area. Exploring the quantum theory of strong fields, the volume focuses on the phase transition to a charged vacuum in strong electric fields. The contributors also discuss such related topics as QED at short distances, precision tests of QED, nonperturbative QCD and confinement, pion condensation, and strong gravitational fields. In addition, the volume features a historical paper on the roots of quantum field theory in the history of quantum physics by noted researcher Friedrich Hund
Bayesian inference of chemical kinetic models from proposed reactions
Galagali, Nikhil
2015-02-01
© 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure-such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
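The point-mass mixture prior at the heart of this framework is a spike-and-slab form, written here in assumed notation:

```latex
p(k_j) \;=\; (1 - w_j)\,\delta(k_j) \;+\; w_j\,\pi_j(k_j), \qquad j = 1,\dots,J,
```

where the point mass at zero switches reaction j out of the mechanism and \pi_j is a continuous prior on its rate constant; adaptive MCMC over the joint posterior then explores model structure and rate constants simultaneously.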
Inferring the gene network underlying the branching of tomato inflorescence.
Directory of Open Access Journals (Sweden)
Laura Astola
Full Text Available The architecture of tomato inflorescence strongly affects flower production and subsequent crop yield. To understand the genetic activities involved, insight into the underlying network of genes that initiate and control the sympodial growth in the tomato is essential. In this paper, we show how the structure of this network can be derived from available data of the expressions of the involved genes. Our approach starts from employing biological expert knowledge to select the most probable gene candidates behind branching behavior. To find how these genes interact, we develop a stepwise procedure for computational inference of the network structure. Our data consists of expression levels from primary shoot meristems, measured at different developmental stages on three different genotypes of tomato. With the network inferred by our algorithm, we can explain the dynamics corresponding to all three genotypes simultaneously, despite their apparent dissimilarities. We also correctly predict the chronological order of expression peaks for the main hubs in the network. Based on the inferred network, using optimal experimental design criteria, we are able to suggest an informative set of experiments for further investigation of the mechanisms underlying branching behavior.
Energy Technology Data Exchange (ETDEWEB)
Pelaez, Jose R
1998-12-14
We present a brief pedagogical introduction to the Effective Electroweak Chiral Lagrangians, which provide a model-independent description of the WW interactions in the strong regime. When complemented with some unitarization method or a dispersive approach, this formalism allows the study of the general strong scenario expected at the LHC, including resonances.
International Nuclear Information System (INIS)
DeSantis, G.N.
1995-01-01
The calculation determines the integrity of the safety latch that will hold the strong-back to the pump during lifting. The safety latch will be welded to the strong-back and will latch to a 1.5-in. dia cantilever rod welded to the pump baseplate. The static and dynamic analysis shows that the safety latch will hold the strong-back to the pump if the friction clamps fail and the pump becomes free from the strong-back. Thus, the safety latch will meet the requirements of the Lifting and Rigging Manual for under-the-hook lifting for static loading; it can withstand shock loads from the strong-back falling 0.25 inch.
Inference and the introductory statistics course
Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross
2011-10-01
This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its hypothetical probabilistic reasoning process is examined in some depth. We argue that the revolution in the teaching of inference must begin. We also discuss some perplexing issues, problematic areas and some new insights into language conundrums associated with introducing the logic of inference through randomization methods.
Compiling Relational Bayesian Networks for Exact Inference
DEFF Research Database (Denmark)
Jaeger, Manfred; Darwiche, Adnan; Chavira, Mark
2006-01-01
We describe in this paper a system for exact inference with relational Bayesian networks as defined in the publicly available PRIMULA tool. The system is based on compiling propositional instances of relational Bayesian networks into arithmetic circuits and then performing online inference by evaluating and differentiating these circuits in time linear in their size. We report on experimental results showing successful compilation and efficient inference on relational Bayesian networks, whose PRIMULA-generated propositional instances have thousands of variables, and whose jointrees have clusters...
Titanium: light, strong, and white
Woodruff, Laurel; Bedinger, George
2013-01-01
Titanium (Ti) is a strong silver-gray metal that is highly resistant to corrosion and is chemically inert. It is as strong as steel but 45 percent lighter, and it is twice as strong as aluminum but only 60 percent heavier. Titanium dioxide (TiO2) has a very high refractive index, which means that it has high light-scattering ability. As a result, TiO2 imparts whiteness, opacity, and brightness to many products. ...Because of the unique physical properties of titanium metal and the whiteness provided by TiO2, titanium is now used widely in modern industrial societies.
Experimental evidence for circular inference in schizophrenia
Jardri, Renaud; Duverne, Sandrine; Litvinova, Alexandra S.; Denève, Sophie
2017-01-01
Schizophrenia (SCZ) is a complex mental disorder that may result in some combination of hallucinations, delusions and disorganized thinking. Here SCZ patients and healthy controls (CTLs) report their level of confidence on a forced-choice task that manipulated the strength of sensory evidence and prior information. Neither group's responses can be explained by simple Bayesian inference. Rather, individual responses are best captured by a model with different degrees of circular inference. Circular inference refers to a corruption of sensory data by prior information and vice versa, leading us to `see what we expect' (through descending loops), to `expect what we see' (through ascending loops) or both. Ascending loops are stronger for SCZ than CTLs and correlate with the severity of positive symptoms. Descending loops correlate with the severity of negative symptoms. Both loops correlate with disorganized symptoms. The findings suggest that circular inference might mediate the clinical manifestations of SCZ.
Artificial Hydrocarbon Networks Fuzzy Inference System
Directory of Open Access Journals (Sweden)
Hiram Ponce
2013-01-01
Full Text Available This paper presents a novel fuzzy inference model based on artificial hydrocarbon networks, a computational algorithm for modeling problems based on chemical hydrocarbon compounds. In particular, the proposed fuzzy-molecular inference model (FIM-model) uses molecular units of information to partition the output space in the defuzzification step. Moreover, these molecules are linguistic units that can be partially understandable due to the organized structure of the topology and metadata parameters involved in artificial hydrocarbon networks. In addition, a position controller for a direct current (DC) motor was implemented using the proposed FIM-model in type-1 and type-2 fuzzy inference systems. Experimental results demonstrate that the fuzzy-molecular inference model can be applied as an alternative to type-2 Mamdani fuzzy control systems because the set of molecular units can deal with dynamic uncertainties mostly present in real-world control applications.
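For contrast with the Mamdani systems mentioned as the baseline, a standard type-1 Mamdani inference step fits in a few lines. This sketch uses a toy rule base for a position error and does not implement the FIM-model's molecular defuzzifier: triangular sets, min clipping, max aggregation, and centroid defuzzification:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def control(error):
    """One Mamdani step for a toy position loop; returns a crisp command."""
    u = np.linspace(-1.0, 1.0, 201)                  # output universe
    neg, zer, pos = (tri(error, -2, -1, 0),          # rule firing strengths
                     tri(error, -1, 0, 1),
                     tri(error, 0, 1, 2))
    agg = np.maximum.reduce([                        # clip and aggregate
        np.minimum(neg, tri(u, -1.5, -1.0, 0.0)),    # error neg -> push neg
        np.minimum(zer, tri(u, -0.5, 0.0, 0.5)),     # error zero -> hold
        np.minimum(pos, tri(u, 0.0, 1.0, 1.5)),      # error pos -> push pos
    ])
    return float((agg * u).sum() / agg.sum())        # centroid defuzzification

print(control(0.6))
```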
SEBINI: Software Environment for BIological Network Inference.
Taylor, Ronald C; Shah, Anuj; Treatman, Charles; Blevins, Meridith
2006-11-01
The Software Environment for BIological Network Inference (SEBINI) has been created to provide an interactive environment for the deployment and evaluation of algorithms used to reconstruct the structure of biological regulatory and interaction networks. SEBINI can be used to compare and train network inference methods on artificial networks and simulated gene expression perturbation data. It also allows the analysis within the same framework of experimental high-throughput expression data using the suite of (trained) inference methods; hence SEBINI should be useful to software developers wishing to evaluate, compare, refine or combine inference techniques, and to bioinformaticians analyzing experimental data. SEBINI provides a platform that aids in more accurate reconstruction of biological networks, with less effort, in less time. A demonstration website is located at https://www.emsl.pnl.gov/NIT/NIT.html. The Java source code and PostgreSQL database schema are available freely for non-commercial use.
Inferring Domain Plans in Question-Answering
National Research Council Canada - National Science Library
Pollack, Martha E
1986-01-01
The importance of plan inference in models of conversation has been widely noted in the computational-linguistics literature, and its incorporation in question-answering systems has enabled a range...
Quantum centipedes with strong global constraint
Grange, Pascal
2017-06-01
A centipede made of N quantum walkers on a one-dimensional lattice is considered. The distance between two consecutive legs is either one or two lattice spacings, and a global constraint is imposed: the maximal distance between the first and last leg is N + 1. This is the strongest global constraint compatible with walking. For an initial value of the wave function corresponding to a localized configuration at the origin, the probability law of the first leg of the centipede can be expressed in closed form in terms of Bessel functions. The dispersion relation and the group velocities are worked out exactly. Their maximal group velocity goes to zero when N goes to infinity, which is in contrast with the behaviour of group velocities of quantum centipedes without global constraint, which were recently shown by Krapivsky, Luck and Mallick to give rise to ballistic spreading of extremal wave-front at non-zero velocity in the large-N limit. The corresponding Hamiltonians are implemented numerically, based on a block structure of the space of configurations corresponding to compositions of the integer N. The growth of the maximal group velocity when the strong constraint is gradually relaxed is explored, and observed to be linear in the density of gaps allowed in the configurations. Heuristic arguments are presented to infer that the large-N limit of the globally constrained model can yield finite group velocities provided the allowed number of gaps is a finite fraction of N.
Efficient algorithms for conditional independence inference
Czech Academy of Sciences Publication Activity Database
Bouckaert, R.; Hemmecke, R.; Lindner, S.; Studený, Milan
2010-01-01
Roč. 11, č. 1 (2010), s. 3453-3479 ISSN 1532-4435 R&D Projects: GA ČR GA201/08/0539; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : conditional independence inference * linear programming approach Subject RIV: BA - General Mathematics Impact factor: 2.949, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/studeny-efficient algorithms for conditional independence inference.pdf
Probabilistic inferences related to the measurement process
International Nuclear Information System (INIS)
Rossi, G. B.
2010-01-01
In measurement, indications from a measuring system are acquired and, on the basis of them, some inference about the measurand is made. The final result may be the assignment of a probability distribution for the possible values of the measurand. We discuss the logical structure of such an inference and some of its epistemological consequences. In particular, we propose a new solution to the problem of systematic effects in measurement.
Polynomial Chaos Surrogates for Bayesian Inference
Le Maitre, Olivier
2016-01-06
Bayesian inference is a popular probabilistic method to solve inverse problems, such as the identification of a field parameter in a PDE model. The inference relies on the Bayes rule to update the prior density of the sought field from observations and derive its posterior distribution. In most cases the posterior distribution has no explicit form and has to be sampled, for instance using a Markov chain Monte Carlo method. In practice the prior field parameter is decomposed and truncated (e.g., by means of Karhunen-Loève decomposition) to recast the inference problem into the inference of a finite number of coordinates. Although proved effective in many situations, the Bayesian inference as sketched above faces several difficulties requiring improvements. First, sampling the posterior can be an extremely costly task, as it requires multiple resolutions of the PDE model for different values of the field parameter. Second, when the observations are not very informative, the inferred parameter field can depend strongly on its prior, which can be somewhat arbitrary. These issues have motivated the introduction of reduced models or surrogates for the (approximate) determination of the parametrized PDE solution and of hyperparameters in the description of the prior field. Our contribution focuses on recent developments in these two directions: the acceleration of posterior sampling by means of Polynomial Chaos expansions and the efficient treatment of parametrized covariance functions for the prior field. We also discuss the possibility of making such approaches adaptive to further improve their efficiency.
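A compact sketch of the two-step strategy (surrogate construction, then cheap posterior sampling) for a scalar parameter with a standard-normal prior; the forward model is an invented stand-in for an expensive PDE solve:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def forward_model(theta):
    """Stand-in for an expensive PDE solve (hypothetical toy model)."""
    return np.sin(theta) + 0.1 * theta**2

# --- Polynomial Chaos surrogate: regression on probabilists' Hermite basis ---
train = rng.normal(size=64)                      # samples from the prior
V = hermevander(train, 5)
coef, *_ = np.linalg.lstsq(V, forward_model(train), rcond=None)
surrogate = lambda th: hermevander(np.atleast_1d(th), 5) @ coef

# --- Metropolis sampling of the posterior, using only the cheap surrogate ---
y_obs, sigma = 0.9, 0.1
log_post = lambda th: -0.5 * th**2 - 0.5 * ((y_obs - surrogate(th)[0]) / sigma) ** 2

theta = 0.0
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + 0.5 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:     # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[1000:]), np.std(samples[1000:]))
```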
On the criticality of inferred models
International Nuclear Information System (INIS)
Mastromatteo, Iacopo; Marsili, Matteo
2011-01-01
Advanced inference techniques allow one to reconstruct a pattern of interaction from high-dimensional data sets obtained by simultaneously probing thousands of units of extended systems, such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization-invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality
Bayesian Inference on Gravitational Waves
Directory of Open Access Journals (Sweden)
Asad Ali
2015-12-01
Full Text Available The Bayesian approach is becoming increasingly popular among the astrophysics data analysis communities. However, the Pakistan statistics communities are unaware of this fertile interaction between the two disciplines. Bayesian methods have been in use to address astronomical problems since the very birth of Bayesian probability in the eighteenth century. Today the Bayesian methods for the detection and parameter estimation of gravitational waves have solid theoretical grounds, with a strong promise for realistic applications. This article aims to introduce the Pakistan statistics communities to the applications of Bayesian Monte Carlo methods in the analysis of gravitational wave data, with an overview of the Bayesian signal detection and estimation methods and a demonstration by a couple of simplified examples.
Energy Technology Data Exchange (ETDEWEB)
Marshall, P.
2005-01-03
Basic considerations of lens detection and identification indicate that wide field surveys of the types planned for weak lensing and Type Ia SNe with SNAP are close to optimal for the optical detection of strong lenses. Such a "piggy-back" survey might be expected even pessimistically to provide a catalogue of a few thousand new strong lenses, with the numbers dominated by systems of faint blue galaxies lensed by foreground ellipticals. After sketching out our strategy for detecting and measuring these galaxy lenses using the SNAP images, we discuss some of the scientific applications of such a large sample of gravitational lenses: in particular, we comment on the partition of information between lens structure, the source population properties and cosmology. Understanding this partitioning is key to assessing strong lens cosmography's value as a cosmological probe.
International Nuclear Information System (INIS)
Aoki, Ken-ichi
1988-01-01
Existence of a strong coupling phase in QED has been suggested in solutions of the Schwinger-Dyson equation and in Monte Carlo simulations of lattice QED. In this article we recapitulate the previous arguments, and formulate the problem in the modern framework of renormalization theory, Wilsonian renormalization. This scheme of renormalization gives the best understanding of the basic structure of a field theory, especially when it has a multi-phase structure. We resolve some misleading arguments in the previous literature. Then we set up a strategy to attack the strong coupling phase, if any. We describe a trial: a coupled Schwinger-Dyson equation. A possible picture of the strong coupling phase of QED is presented. (author)
Estimating uncertainty of inference for validation
Energy Technology Data Exchange (ETDEWEB)
Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM
2010-09-30
We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code are an accurate representation of experimental test data. Embedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the
Inferring Mathematical Equations Using Crowdsourcing.
Directory of Open Access Journals (Sweden)
Szymon Wasik
Full Text Available Crowdsourcing, understood as outsourcing work to a large network of people in the form of an open call, has been utilized successfully many times, including a very interesting concept involving the implementation of computer games with the objective of solving a scientific problem by employing users to play a game, in so-called crowdsourced serious games. Our main objective was to verify whether such an approach could be successfully applied to the discovery of mathematical equations that explain experimental data gathered during the observation of a given dynamic system. Moreover, we wanted to compare it with an approach based on artificial intelligence that uses symbolic regression to find such formulae automatically. To achieve this, we designed and implemented an Internet game in which players attempt to design a spaceship representing an equation that models the observed system. The game was designed while considering that it should be easy to use for people without strong mathematical backgrounds. Moreover, we tried to make use of the collective intelligence observed in crowdsourced systems by enabling many players to collaborate on a single solution. The idea was tested with several hundred players playing almost 10,000 games, and by conducting a user opinion survey. The results prove that the proposed solution has very high potential. The function generated during weeklong tests was almost as precise as the analytical solution of the model of the system and, up to a certain complexity level of the formulae, it explained the data better than the solution generated automatically by Eureqa, the leading software application for the implementation of symbolic regression. Moreover, we observed benefits of using crowdsourcing; the chain of consecutive solutions that led to the best solution was obtained by the continuous collaboration of several players.
Deep Learning for Population Genetic Inference
Sheehan, Sara; Song, Yun S.
2016-01-01
Given genomic variation data from multiple individuals, computing the likelihood of complex population genetic models is often infeasible. To circumvent this problem, we introduce a novel likelihood-free inference framework by applying deep learning, a powerful modern technique in machine learning. Deep learning makes use of multilayer neural networks to learn a feature-based function from the input (e.g., hundreds of correlated summary statistics of data) to the output (e.g., population genetic parameters of interest). We demonstrate that deep learning can be effectively employed for population genetic inference and learning informative features of data. As a concrete application, we focus on the challenging problem of jointly inferring natural selection and demography (in the form of a population size change history). Our method is able to separate the global nature of demography from the local nature of selection, without sequential steps for these two factors. Studying demography and selection jointly is motivated by Drosophila, where pervasive selection confounds demographic analysis. We apply our method to 197 African Drosophila melanogaster genomes from Zambia to infer both their overall demography, and regions of their genome under selection. We find many regions of the genome that have experienced hard sweeps, and fewer under selection on standing variation (soft sweep) or balancing selection. Interestingly, we find that soft sweeps and balancing selection occur more frequently closer to the centromere of each chromosome. In addition, our demographic inference suggests that previously estimated bottlenecks for African Drosophila melanogaster are too extreme. PMID:27018908
Strong Decomposition of Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jørgensen, Jørgen; Kagan, Abram M.; Pitt, Loren D.
2007-01-01
A random variable X is strongly decomposable if X=Y+Z where Y=Φ(X) and Z=X-Φ(X) are independent non-degenerate random variables (called the components). It is shown that at least one of the components is singular, and we derive a necessary and sufficient condition for strong decomposability...
Strong interaction at finite temperature
Indian Academy of Sciences (India)
Abstract. We review two methods discussed in the literature to determine the effective parameters of strongly interacting particles as they move through a heat bath. The first one is the general method of chiral perturbation theory, which may be readily applied to this problem. The other is the method of thermal QCD sum rules ...
Relevance of different prior knowledge sources for inferring gene interaction networks.
Olsen, Catharina; Bontempi, Gianluca; Emmert-Streib, Frank; Quackenbush, John; Haibe-Kains, Benjamin
2014-01-01
When inferring networks from high-throughput genomic data, one of the main challenges is the subsequent validation of these networks. In the best case scenario, the true network is partially known from previous research results published in structured databases or research articles. Traditionally, inferred networks are validated against these known interactions. Whenever the recovery rate is gauged to be high enough, subsequent high scoring but unknown inferred interactions are deemed good candidates for further experimental validation. Such a validation framework therefore strongly depends on the quantity and quality of published interactions and presents serious pitfalls: (1) availability of these known interactions for the studied problem might be sparse; (2) quantitatively comparing different inference algorithms is not trivial; and (3) the use of these known interactions for validation prevents their integration in the inference procedure. The latter is particularly relevant, as it has recently been shown that integration of priors during network inference significantly improves the quality of inferred networks. To overcome these problems when validating inferred networks, we recently proposed a data-driven validation framework based on single gene knock-down experiments. Using this framework, we were able to demonstrate the benefits of integrating prior knowledge and expression data. In this paper we used this framework to assess the quality of different sources of prior knowledge on their own and in combination with different genomic data sets in colorectal cancer. We observed that most prior sources lead to significant F-scores. Furthermore, their integration with genomic data leads to a significant increase in F-scores, especially for priors extracted from full text PubMed articles, known co-expression modules and genetic interactions. Lastly, we observed that the results are consistent for three different data sets: experimental knock-down data and two
Phylogeny and Divergence Times of Lemurs Inferred with Recent and Ancient Fossils in the Tree.
Herrera, James P; Dávalos, Liliana M
2016-09-01
Paleontological and neontological systematics seek to answer evolutionary questions with different data sets. Phylogenies inferred for combined extant and extinct taxa provide novel insights into the evolutionary history of life. Primates have an extensive, diverse fossil record and molecular data for living and extinct taxa are rapidly becoming available. We used two models to infer the phylogeny and divergence times for living and fossil primates, the tip-dating (TD) and fossilized birth-death process (FBD). We collected new morphological data, especially on the living and extinct endemic lemurs of Madagascar. We combined the morphological data with published DNA sequences to infer near-complete (88% of lemurs) time-calibrated phylogenies. The results suggest that primates originated around the Cretaceous-Tertiary boundary, slightly earlier than indicated by the fossil record and later than previously inferred from molecular data alone. We infer novel relationships among extinct lemurs, and strong support for relationships that were previously unresolved. Dates inferred with TD were significantly older than those inferred with FBD, most likely related to an assumption of a uniform branching process in the TD compared with a birth-death process assumed in the FBD. This is the first study to combine morphological and DNA sequence data from extinct and extant primates to infer evolutionary relationships and divergence times, and our results shed new light on the tempo of lemur evolution and the efficacy of combined phylogenetic analyses.
Strong-strong beam-beam simulation on parallel computer
Energy Technology Data Exchange (ETDEWEB)
Qiang, Ji
2004-08-02
The beam-beam interaction puts a strong limit on the luminosity of the high energy storage ring colliders. At the interaction points, the electromagnetic fields generated by one beam focus or defocus the opposite beam. This can cause beam blowup and a reduction of luminosity. An accurate simulation of the beam-beam interaction is needed to help optimize the luminosity in high energy colliders.
Using Alien Coins to Test Whether Simple Inference Is Bayesian
Cassey, Peter; Hawkins, Guy E.; Donkin, Chris; Brown, Scott D.
2016-01-01
Reasoning and inference are well-studied aspects of basic cognition that have been explained as statistically optimal Bayesian inference. Using a simplified experimental design, we conducted quantitative comparisons between Bayesian inference and human inference at the level of individuals. In 3 experiments, with more than 13,000 participants, we…
PREFACE: Strongly correlated electron systems
Saxena, Siddharth S.; Littlewood, P. B.
2012-07-01
This special section is dedicated to the Strongly Correlated Electron Systems Conference (SCES) 2011, which was held from 29 August-3 September 2011, in Cambridge, UK. SCES'2011 is dedicated to 100 years of superconductivity and covers a range of topics in the area of strongly correlated systems. The correlated electronic and magnetic materials featured include f-electron based heavy fermion intermetallics and d-electron based transition metal compounds. The selected papers derived from invited presentations seek to deepen our understanding of the rich physical phenomena that arise from correlation effects. The focus is on quantum phase transitions, non-Fermi liquid phenomena, quantum magnetism, unconventional superconductivity and metal-insulator transitions. Both experimental and theoretical work is presented. Based on fundamental advances in the understanding of electronic materials, much of 20th century materials physics was driven by miniaturisation and integration in the electronics industry to the current generation of nanometre scale devices. The achievements of this industry have brought unprecedented advances to society and well-being, and no doubt there is much further to go—note that this progress is founded on investments and studies in the fundamentals of condensed matter physics from more than 50 years ago. Nevertheless, the defining challenges for the 21st century will lie in the discovery in science, and deployment through engineering, of technologies that can deliver the scale needed to have an impact on the sustainability agenda. Thus the big developments in nanotechnology may lie not in the pursuit of yet smaller transistors, but in the design of new structures that can revolutionise the performance of solar cells, batteries, fuel cells, light-weight structural materials, refrigeration, water purification, etc. The science presented in the papers of this special section also highlights the underlying interest in energy-dense materials, which
Single board system for fuzzy inference
Symon, James R.; Watanabe, Hiroyuki
1991-01-01
The very large scale integration (VLSI) implementation of a fuzzy logic inference mechanism allows the use of rule-based control and decision making in demanding real-time applications. Researchers designed a full custom VLSI inference engine. The chip was fabricated using CMOS technology. The chip consists of 688,000 transistors of which 476,000 are used for RAM memory. The fuzzy logic inference engine board system incorporates the custom designed integrated circuit into a standard VMEbus environment. The Fuzzy Logic system uses Transistor-Transistor Logic (TTL) parts to provide the interface between the Fuzzy chip and a standard, double height VMEbus backplane, allowing the chip to perform application process control through the VMEbus host. High level C language functions hide details of the hardware system interface from the applications level programmer. The first version of the board was installed on a robot at Oak Ridge National Laboratory in January of 1990.
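For readers unfamiliar with the inference such a chip implements, here is a minimal software sketch of min-max (Mamdani-style) fuzzy inference in Python; the rules, membership functions and input value are invented for illustration and are unrelated to the board's actual rule base.

```python
import numpy as np

# Triangular membership function on points x with feet a, c and peak b.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

temp_err = 2.4                              # crisp input: temperature error
u = np.linspace(0.0, 10.0, 101)             # output universe: fan speed

# Two rules, min for implication and max for aggregation (Mamdani style):
#   IF error is SMALL THEN fan is SLOW
#   IF error is LARGE THEN fan is FAST
fire_small = tri(temp_err, -1.0, 0.0, 3.0)  # firing degree of rule 1
fire_large = tri(temp_err, 1.0, 5.0, 9.0)   # firing degree of rule 2
slow = np.minimum(fire_small, tri(u, 0.0, 2.0, 5.0))
fast = np.minimum(fire_large, tri(u, 4.0, 8.0, 10.0))
aggregated = np.maximum(slow, fast)

# Defuzzify by centroid to obtain a crisp output.
fan = np.sum(u * aggregated) / np.sum(aggregated)
print(f"fan speed = {fan:.2f}")
```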
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars for integrating and understanding input in multimodal interfaces have led to the investigation of novel algorithmic solutions for automating grammar generation and updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.
Statistical inference based on divergence measures
Pardo, Leandro
2005-01-01
The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
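As a small illustration of the phi-divergence test statistics the book covers, the following Python sketch applies SciPy's power-divergence family to a hypothetical contingency table; λ = 1 recovers the Pearson statistic and λ → 0 the likelihood-ratio statistic, with the Cressie-Read choice λ = 2/3 in between.

```python
import numpy as np
from scipy.stats import power_divergence

# Hypothetical 2x3 contingency table of observed counts.
obs = np.array([[22, 31, 17],
                [14, 25, 41]], dtype=float)

# Expected counts under row/column independence.
exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / obs.sum()

# Members of the power-divergence (phi-divergence) family; ddof=3 adjusts the
# default df (6 cells - 1 = 5) down to the (2-1)*(3-1) = 2 df of the test.
for lam in ["pearson", "log-likelihood", "cressie-read"]:
    stat, p = power_divergence(obs.ravel(), exp.ravel(), ddof=3, lambda_=lam)
    print(f"{lam:>15}: statistic = {stat:6.3f}, p = {p:.4f}")
```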
Grammatical inference algorithms, routines and applications
Wieczorek, Wojciech
2017-01-01
This book focuses on grammatical inference, presenting classic and modern methods of grammatical inference from the perspective of practitioners. To do so, it employs the Python programming language to present all of the methods discussed. Grammatical inference is a field that lies at the intersection of multiple disciplines, with contributions from computational linguistics, pattern recognition, machine learning, computational biology, formal learning theory and many others. Though the book is largely practical, it also includes elements of learning theory, combinatorics on words, the theory of automata and formal languages, plus references to real-world problems. The listings presented here can be directly copied and pasted into other programs, thus making the book a valuable source of ready recipes for students, academic researchers, and programmers alike, as well as an inspiration for their further development.
Automatic transformations in the inference process
Energy Technology Data Exchange (ETDEWEB)
Veroff, R. L.
1980-07-01
A technique for incorporating automatic transformations into processes such as the application of inference rules, subsumption, and demodulation provides a mechanism for improving search strategies for theorem proving problems arising from the field of program verification. The incorporation of automatic transformations into the inference process can alter the search space for a given problem, and is particularly useful for problems having broad rather than deep proofs. The technique can also be used to permit the generation of inferences that might otherwise be blocked and to build some commutativity or associativity into the unification process. Appropriate choice of transformations, and new literal clashing and unification algorithms for applying them, showed significant improvement on several real problems according to several distinct criteria. 22 references, 1 figure.
Uncertainty in prediction and in inference
International Nuclear Information System (INIS)
Hilgevoord, J.; Uffink, J.
1991-01-01
The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in inference can be obtained by means of the so-called statistical distance between probability distributions. When applied to quantum mechanics, this distance leads to a measure of the distinguishability of quantum states, which essentially is the absolute value of the matrix element between the states. The importance of this result to the quantum mechanical uncertainty principle is noted. The second part of the paper provides a derivation of the statistical distance on the basis of the so-called method of support
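One standard way to make the notion precise, consistent with this abstract, is Wootters' statistical distance; the formulas below are a textbook statement in my notation, not quoted from the paper:

$$
d(p,q)=\arccos\Big(\sum_{i}\sqrt{p_{i}\,q_{i}}\Big),\qquad
d(\psi,\phi)=\arccos\big|\langle\psi\mid\phi\rangle\big|,
$$

so for pure quantum states the distinguishability is governed by the absolute value of the overlap (matrix element) between the states.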
Examples in parametric inference with R
Dixit, Ulhas Jayram
2016-01-01
This book discusses examples in parametric inference with R. Combining basic theory with modern approaches, it presents the latest developments and trends in statistical inference for students who do not have an advanced mathematical and statistical background. The topics discussed in the book are fundamental and common to many fields of statistical inference and thus serve as a point of departure for in-depth study. The book is divided into eight chapters: Chapter 1 provides an overview of topics on sufficiency and completeness, while Chapter 2 briefly discusses unbiased estimation. Chapter 3 focuses on the study of moments and maximum likelihood estimators, and Chapter 4 presents bounds for the variance. In Chapter 5, topics on consistent estimators are discussed. Chapter 6 discusses Bayes estimation, while Chapter 7 studies some more powerful tests. Lastly, Chapter 8 examines unbiased and other tests. Senior undergraduate and graduate students in statistics and mathematics, and those who have taken an introductory cou...
Inferences from counterfactual threats and promises.
Egan, Suzanne M; Byrne, Ruth M J
2012-01-01
We examine how people understand and reason from counterfactual threats, for example, "if you had hit your sister, I would have grounded you" and counterfactual promises, for example, "if you had tidied your room, I would have given you ice-cream." The first experiment shows that people consider counterfactual threats, but not counterfactual promises, to have the illocutionary force of an inducement. They also make the immediate inference that the action mentioned in the "if" part of the counterfactual threat and promise did not occur. The second experiment shows that people make more negative inferences (modus tollens and denial of the antecedent) than affirmative inferences (modus ponens and affirmation of the consequent) from counterfactual threats and promises, unlike indicative threats and promises. We discuss the implications of the results for theories of the mental representations and cognitive processes that underlie conditional inducements.
Mathematical inference and control of molecular networks from perturbation experiments
Mohammed-Rasheed, Mohammed
in order to affect the time evolution of molecular activity in a desirable manner. In this proposal, we address both the inference and control problems of GRNs. In the first part of the thesis, we consider the control problem. We assume that we are given a general topology network structure, whose dynamics follow a discrete-time Markov chain model. We subsequently develop a comprehensive framework for optimal perturbation control of the network. The aim of the perturbation is to drive the network away from undesirable steady-states and to force it to converge to a unique desirable steady-state. The proposed framework does not make any assumptions about the topology of the initial network (e.g., ergodicity, weak and strong connectivity), and is thus applicable to general topology networks. We define the optimal perturbation as the minimum-energy perturbation measured in terms of the Frobenius norm between the initial and perturbed networks. We subsequently demonstrate that there exists at most one optimal perturbation that forces the network into the desirable steady-state. In the event where the optimal perturbation does not exist, we construct a family of sub-optimal perturbations that approximate the optimal solution arbitrarily closely. In the second part of the thesis, we address the inference problem of GRNs from time series data. We model the dynamics of the molecules using a system of ordinary differential equations corrupted by additive white noise. For large-scale networks, we formulate the inference problem as a constrained maximum likelihood estimation problem. We derive the molecular interactions that maximize the likelihood function while constraining the network to be sparse. We further propose a procedure to recover weak interactions based on the Bayesian information criterion. For small-size networks, we investigated the inference of a globally stable 7-gene melanoma genetic regulatory network from genetic perturbation experiments. We considered five
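The perturbation-control idea lends itself to a compact linear-algebra sketch. The Python code below is a simplification under stated assumptions, not the thesis's framework: it finds the least-Frobenius-norm additive perturbation Δ that keeps P + Δ row-stochastic and makes a desired distribution stationary. Entry-wise nonnegativity is only checked afterwards, echoing the point that an admissible optimal perturbation need not exist.

```python
import numpy as np

def min_frobenius_perturbation(P, pi_star):
    """Least-norm Delta with (P + Delta) row-stochastic and pi_star stationary.

    The linear constraints on vec(Delta) are stacked and solved with the
    pseudoinverse, whose solution has minimal Frobenius norm. Entry-wise
    bounds (P + Delta >= 0) are NOT enforced; check them afterwards.
    """
    n = P.shape[0]
    A = np.zeros((2 * n, n * n))
    b = np.zeros(2 * n)
    for i in range(n):                      # row sums of Delta must be zero
        A[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):                      # stationarity of pi_star
        for i in range(n):
            A[n + j, i * n + j] = pi_star[i]
    b[n:] = pi_star - P.T @ pi_star
    delta = np.linalg.pinv(A) @ b
    return delta.reshape(n, n)

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])
pi_star = np.array([0.2, 0.3, 0.5])         # desired steady state
D = min_frobenius_perturbation(P, pi_star)
Pp = P + D
print("rows sum to 1:", np.allclose(Pp.sum(axis=1), 1))
print("pi* stationary:", np.allclose(Pp.T @ pi_star, pi_star))
print("all entries valid:", np.all(Pp >= 0))
```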
IMAGINE: Interstellar MAGnetic field INference Engine
Steininger, Theo
2018-03-01
IMAGINE (Interstellar MAGnetic field INference Engine) performs inference on generic parametric models of the Galaxy. The modular open source framework uses highly optimized tools and technology such as the MultiNest sampler (ascl:1109.006) and the information field theory framework NIFTy (ascl:1302.013) to create an instance of the Milky Way based on a set of parameters for physical observables, using Bayesian statistics to judge the mismatch between measured data and model prediction. The flexibility of the IMAGINE framework allows for simple refitting for newly available data sets and makes state-of-the-art Bayesian methods easily accessible particularly for random components of the Galactic magnetic field.
Improved Inference of Heteroscedastic Fixed Effects Models
Directory of Open Access Journals (Sweden)
Afshan Saeed
2016-12-01
Full Text Available Heteroscedasticity is a severe problem that distorts estimation and testing in panel data models (PDM). Arellano (1987) proposed the White (1980) estimator for PDM with heteroscedastic errors, but it provides erroneous inference for data sets including high leverage points. In this paper, our attempt is to improve the heteroscedasticity-consistent covariance matrix estimator (HCCME) for panel data sets with high leverage points. To draw robust inference for the PDM, our focus is to improve the kernel bootstrap estimators proposed by Racine and MacKinnon (2007). A Monte Carlo scheme is used for assessment of the results.
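For intuition, here is a cross-sectional Python sketch (not the authors' panel-data estimator) contrasting classical OLS standard errors with heteroscedasticity-consistent ones in statsmodels, in a simulated sample containing a high-leverage point; HC0 is the White (1980) estimator and HC3 is a leverage-adjusted variant that is typically more reliable under high leverage.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated cross-section with heteroscedastic errors and one high-leverage point.
n = 100
x = rng.normal(size=n)
x[0] = 8.0                                              # high leverage observation
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + np.abs(x))   # variance grows with |x|

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()                    # classical (non-robust) covariance
hc0 = sm.OLS(y, X).fit(cov_type="HC0")      # White (1980) estimator
hc3 = sm.OLS(y, X).fit(cov_type="HC3")      # leverage-adjusted variant

for name, res in [("OLS", ols), ("HC0", hc0), ("HC3", hc3)]:
    print(name, "slope std. error:", res.bse[1].round(3))
```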
Inferring causality from noisy time series data
DEFF Research Database (Denmark)
Mønster, Dan; Fusaroli, Riccardo; Tylén, Kristian
2016-01-01
Convergent Cross-Mapping (CCM) has shown high potential to perform causal inference in the absence of models. We assess the strengths and weaknesses of the method by varying coupling strength and noise levels in coupled logistic maps. We find that CCM fails to infer accurate coupling strength...... and even causality direction in synchronized time-series and in the presence of intermediate coupling. We find that the presence of noise deterministically reduces the level of cross-mapping fidelity, while the convergence rate exhibits higher levels of robustness. Finally, we propose that controlled noise...
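A minimal Python sketch of the setup, assuming unidirectionally coupled logistic maps, a fixed embedding dimension, and one library size (the paper varies coupling strength, noise, and library length systematically):

```python
import numpy as np

rng = np.random.default_rng(2)

def coupled_logistic(n, beta_xy, beta_yx, noise=0.0):
    # beta_xy: strength with which y forces x; beta_yx: strength with which x forces y.
    x, y = np.empty(n), np.empty(n)
    x[0], y[0] = 0.4, 0.2
    for t in range(n - 1):
        x[t + 1] = x[t] * (3.8 - 3.8 * x[t] - beta_xy * y[t]) + noise * rng.normal()
        y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - beta_yx * x[t]) + noise * rng.normal()
        x[t + 1], y[t + 1] = np.clip(x[t + 1], 0, 1), np.clip(y[t + 1], 0, 1)
    return x, y

def ccm_skill(source, target, E=2, lib=400):
    """Cross-map `source` from the delay embedding of `target` (tests source -> target)."""
    emb = np.column_stack([target[i:lib + i] for i in range(E)])   # shadow manifold
    truth, pred = source[E - 1:lib + E - 1], []
    for k in range(lib):
        d = np.linalg.norm(emb - emb[k], axis=1)
        d[k] = np.inf                                              # exclude self
        nn = np.argsort(d)[:E + 1]                                 # E+1 neighbours
        w = np.exp(-d[nn] / max(d[nn].min(), 1e-12))               # simplex weights
        pred.append(np.sum(w * truth[nn]) / w.sum())
    return np.corrcoef(truth, pred)[0, 1]

x, y = coupled_logistic(1000, beta_xy=0.0, beta_yx=0.32, noise=0.001)  # x drives y
print("x -> y (map x from M_y):", ccm_skill(x, y))   # expect higher skill
print("y -> x (map y from M_x):", ccm_skill(y, x))   # expect lower skill
```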
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisova, K.
2010-01-01
This is probably the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur and other complications appear. We consider the case where the grains form a disc process modelled by a marked point...... with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analysing Peter Diggle's heather data set, where we discuss the results of simulation......-based maximum likelihood inference and the effect of specifying different reference Poisson models....
The aggregate site frequency spectrum for comparative population genomic inference.
Xue, Alexander T; Hickerson, Michael J
2015-12-01
Understanding how assemblages of species responded to past climate change is a central goal of comparative phylogeography and comparative population genomics, an endeavour that has increasing potential to integrate with community ecology. New sequencing technology now provides the potential to perform complex demographic inference at unprecedented resolution across assemblages of nonmodel species. To this end, we introduce the aggregate site frequency spectrum (aSFS), an expansion of the site frequency spectrum to use single nucleotide polymorphism (SNP) data sets collected from multiple, co-distributed species for assemblage-level demographic inference. We describe how the aSFS is constructed over an arbitrary number of independent population samples and then demonstrate how the aSFS can differentiate various multispecies demographic histories under a wide range of sampling configurations while allowing effective population sizes and expansion magnitudes to vary independently. We subsequently couple the aSFS with a hierarchical approximate Bayesian computation (hABC) framework to estimate degree of temporal synchronicity in expansion times across taxa, including an empirical demonstration with a data set consisting of five populations of the threespine stickleback (Gasterosteus aculeatus). Corroborating what is generally understood about the recent postglacial origins of these populations, the joint aSFS/hABC analysis strongly suggests that the stickleback data are most consistent with synchronous expansion after the Last Glacial Maximum (posterior probability = 0.99). The aSFS will have general application for multilevel statistical frameworks to test models involving assemblages and/or communities, and as large-scale SNP data from nonmodel species become routine, the aSFS expands the potential for powerful next-generation comparative population genomic inference. © 2015 The Authors. Molecular Ecology Published by John Wiley & Sons Ltd.
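A schematic reading of the construction in Python (my simplification; consult the paper for the exact binning and ordering conventions): each taxon's SFS is normalized, and within each frequency class the values are sorted across taxa, so the aggregate vector does not depend on taxon labels.

```python
import numpy as np

def aggregate_sfs(sfs_list):
    """Build an aggregate SFS from per-taxon folded SFS vectors (one per species).

    Each SFS is normalized to proportions; then, bin by bin, the values are
    sorted across taxa so the aSFS is invariant to taxon ordering.
    """
    S = np.array([s / s.sum() for s in sfs_list])   # taxa x bins, normalized
    return np.sort(S, axis=0)[::-1].ravel()         # sort within each bin, flatten

# Hypothetical folded SFS counts for three co-distributed species (4 bins each).
sfs_a = np.array([120, 40, 22, 9])
sfs_b = np.array([200, 35, 15, 5])
sfs_c = np.array([90, 50, 30, 20])
print(aggregate_sfs([sfs_a, sfs_b, sfs_c]))
```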
Strongly correlated systems experimental techniques
Mancini, Ferdinando
2015-01-01
The continuous evolution and development of experimental techniques is at the basis of any fundamental achievement in modern physics. Strongly correlated systems (SCS), more than any other, need to be investigated through the greatest variety of experimental techniques in order to unveil and crosscheck the numerous and puzzling anomalous behaviors characterizing them. The study of SCS fostered the improvement of many old experimental techniques, but also the advent of many new ones just invented in order to analyze the complex behaviors of these systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. The volume presents a representative collection of the modern experimental techniques specifically tailored for the analysis of strongly correlated systems. Any technique is presented in great detail by its own inventor or by one of the world-wide recognize...
Strongly Correlated Systems Theoretical Methods
Avella, Adolfo
2012-01-01
The volume presents, for the very first time, an exhaustive collection of those modern theoretical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Any technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique showed to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as textbook and/or main reference, but also for other researchers in the field who appreciates consulting a single, but comprehensive, source or wishes to get acquainted, in a as painless as po...
Strongly correlated systems numerical methods
Mancini, Ferdinando
2013-01-01
This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and material science, belong to this class of systems. Any technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case study where the specific technique showed to be very successful in describing and enlightening the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, but comprehensive, source or wishes to get acquainted, in a as painless as possi...
Strongly nonlinear oscillators analytical solutions
Cveticanin, Livija
2014-01-01
This book provides the presentation of the motion of pure nonlinear oscillatory systems and various solution procedures which give the approximate solutions of the strong nonlinear oscillator equations. The book presents the original author’s method for the analytical solution procedure of the pure nonlinear oscillator system. After an introduction, the physical explanation of the pure nonlinearity and of the pure nonlinear oscillator is given. The analytical solution for free and forced vibrations of the one-degree-of-freedom strong nonlinear system with constant and time variable parameter is considered. Special attention is given to the one and two mass oscillatory systems with two-degrees-of-freedom. The criteria for the deterministic chaos in ideal and non-ideal pure nonlinear oscillators are derived analytically. The method for suppressing chaos is developed. Important problems are discussed in didactic exercises. The book is self-consistent and suitable as a textbook for students and also for profess...
Flavour Democracy in Strong Unification
Abel, S A; Abel, Steven; King, Steven
1998-01-01
We show that the fermion mass spectrum may naturally be understood in terms of flavour democratic fixed points in supersymmetric theories which have a large domain of attraction in the presence of "strong unification". Our approach provides an alternative to the approximate Yukawa texture zeroes of the Froggatt-Nielsen mechanism. We discuss a particular model based on a broken gauged $SU(3)_L\\times SU(3)_R$ family symmetry which illustrates our approach.
Strong Gravitational Lensing in a Brane-World Black Hole
Li, GuoPing; Cao, Biao; Feng, Zhongwen; Zu, Xiaotao
2015-09-01
Adopting the strong field limit approach, we investigate strong gravitational lensing by a Brane-World black hole, obtaining the strong field limit coefficients and the deflection angle in this gravitational field. With this result, it can be said with certainty that strong gravitational lensing is closely related to the metric of the gravitational field; the cosmology parameter α and the dark matter parameter β of the Brane-World black hole exert a great influence on it. Compared with the Schwarzschild-AdS spacetime and the Schwarzschild-XCMD spacetime, the parameters α, β of black holes have similar effects on the gravitational lensing. In some way, we infer that the real gravitational fields in our universe can be described by this metric, so the results of the strong gravitational lensing in this spacetime will be more reasonable for us to observe. Finally, the influence that the parameters α, β exert on the main observable quantities of this gravitational field is discussed.
Strong Purifying Selection at Synonymous Sites in D. melanogaster
Lawrie, David S.; Messer, Philipp W.; Hershberg, Ruth; Petrov, Dmitri A.
2013-01-01
Synonymous sites are generally assumed to be subject to weak selective constraint. For this reason, they are often neglected as a possible source of important functional variation. We use site frequency spectra from deep population sequencing data to show that, contrary to this expectation, 22% of four-fold synonymous (4D) sites in Drosophila melanogaster evolve under very strong selective constraint while few, if any, appear to be under weak constraint. Linking polymorphism with divergence data, we further find that the fraction of synonymous sites exposed to strong purifying selection is higher for those positions that show slower evolution on the Drosophila phylogeny. The function underlying the inferred strong constraint appears to be separate from splicing enhancers, nucleosome positioning, and the translational optimization generating canonical codon bias. The fraction of synonymous sites under strong constraint within a gene correlates well with gene expression, particularly in the mid-late embryo, pupae, and adult developmental stages. Genes enriched in strongly constrained synonymous sites tend to be particularly functionally important and are often involved in key developmental pathways. Given that the observed widespread constraint acting on synonymous sites is likely not limited to Drosophila, the role of synonymous sites in genetic disease and adaptation should be reevaluated. PMID:23737754
Double jeopardy in inferring cognitive processes.
Fific, Mario
2014-01-01
Inferences we make about underlying cognitive processes can be jeopardized in two ways due to problematic forms of aggregation. First, averaging across individuals is typically considered a very useful tool for removing random variability. The threat is that averaging across subjects leads to averaging across different cognitive strategies, thus harming our inferences. The second threat comes from the construction of inadequate research designs possessing a low diagnostic accuracy of cognitive processes. For that reason we introduced the systems factorial technology (SFT), which has primarily been designed to make inferences about underlying processing order (serial, parallel, coactive), stopping rule (terminating, exhaustive), and process dependency. SFT proposes that the minimal research design complexity to learn about n cognitive processes should be equal to 2^n. In addition, SFT proposes that (a) each cognitive process should be controlled by a separate experimental factor, and (b) the saliency levels of all factors should be combined in a full factorial design. In the current study, the author cross-combined the levels of the jeopardies in a 2 × 2 analysis, leading to four different analysis conditions. The results indicate a decline in the diagnostic accuracy of inferences made about cognitive processes due to the presence of each jeopardy in isolation and when combined. The results warrant the development of more individual-subject analyses and the utilization of full-factorial (SFT) experimental designs.
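The 2^n design requirement is easy to make concrete; here is a minimal Python sketch with three hypothetical processes, each controlled by its own two-level saliency factor, yielding the full factorial design SFT prescribes.

```python
from itertools import product

# Full factorial design: each of n cognitive processes gets its own two-level
# (low/high saliency) factor, giving 2**n experimental conditions.
factors = {"proc_A": ("low", "high"),
           "proc_B": ("low", "high"),
           "proc_C": ("low", "high")}

design = list(product(*factors.values()))
print(len(design), "conditions for", len(factors), "processes")  # 8 = 2**3
for cond in design:
    print(dict(zip(factors, cond)))
```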
Colligation, Or the Logical Inference of Interconnection
DEFF Research Database (Denmark)
Falster, Peter
1998-01-01
laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation...
Colligation or, The Logical Inference of Interconnection
DEFF Research Database (Denmark)
Franksen, Ole Immanuel; Falster, Peter
2000-01-01
laws or assumptions. Yet interconnection as an abstract concept seems to be without scientific underpinning in pure logic. Adopting a historical viewpoint, our aim is to show that the reasoning of interconnection may be identified with a neglected kind of logical inference, called "colligation
Investigating Mathematics Teachers' Thoughts of Statistical Inference
Yang, Kai-Lin
2012-01-01
Research on statistical cognition and application suggests that statistical inference concepts are commonly misunderstood by students and even misinterpreted by researchers. Although some research has been done on students' misunderstanding or misconceptions of confidence intervals (CIs), few studies explore either students' or mathematics…
Theory change and Bayesian statistical inference
Romeijn, Jan-Willem
2005-01-01
This paper addresses the problem that Bayesian statistical inference cannot accommodate theory change, and proposes a framework for dealing with such changes. It first presents a scheme for generating predictions from observations by means of hypotheses. An example shows how the hypotheses represent
lems that arise in statistical inference. Ther
African Journals Online (AJOL)
Administrateur
ample, may require the integration of the nuisance parameters. Also, several optimization problems in statistical inference use numerical integration steps, such as EM algorithm in [6] and [2]. Here we are concerned with simulation based integration methods that uses the generation of random variables. Such methods are.
Eleusis: complexity and interaction in inductive inference
Kurzen, L.; Arrazola, X.; Ponte, M.
2010-01-01
This paper analyzes the computational complexity of the inductive inference game Eleusis. Eleusis is a card game in which one player constructs a secret rule which has to be discovered by the other players. We determine the complexity of various decision problems that arise in Eleusis. We show that
Quasi-Experimental Designs for Causal Inference
Kim, Yongnam; Steiner, Peter
2016-01-01
When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…
Conditional Inference and Advanced Mathematical Study
Inglis, Matthew; Simpson, Adrian
2008-01-01
Many mathematicians and curriculum bodies have argued in favour of the theory of formal discipline: that studying advanced mathematics develops one's ability to reason logically. In this paper we explore this view by directly comparing the inferences drawn from abstract conditional statements by advanced mathematics students and well-educated arts…
Culture and Pragmatic Inference in Interpersonal Communication
African Journals Online (AJOL)
cognitive process, and that the human capacity for inference is crucially important in interpersonal communication in these contexts. Generally, communication involves 'the transmission of messages between individuals acting consciously and intentionally for that end' (Harder, 2009, p. 62). It is an integral part of our ...
Understanding COBOL systems using inferred types
A. van Deursen (Arie); L.M.F. Moonen (Leon)
1999-01-01
In a typical COBOL program, the data division consists of 50% of the lines of code. Automatic type inference can help to understand the large collections of variable declarations contained therein, showing how variables are related based on their actual usage. The most problematic aspect
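The flavor of usage-based type inference can be sketched with a toy union-find over variables, where statements that relate two variables (e.g. a MOVE or a comparison) merge their inferred types. The variable names and relations below are invented, and the actual analysis in the paper is considerably richer.

```python
class TypeInference:
    """Toy union-find over variables: usage relations merge inferred types."""
    def __init__(self):
        self.parent = {}

    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]   # path halving
            v = self.parent[v]
        return v

    def relate(self, a, b):        # e.g. from "MOVE A TO B" or "IF A = B"
        self.parent[self.find(a)] = self.find(b)

ti = TypeInference()
ti.relate("CUST-NO", "ORDER-CUST")     # MOVE CUST-NO TO ORDER-CUST
ti.relate("ORDER-CUST", "INV-CUST")    # IF ORDER-CUST = INV-CUST ...
ti.relate("AMOUNT", "TOTAL")           # ADD AMOUNT TO TOTAL

groups = {}
for v in list(ti.parent):
    groups.setdefault(ti.find(v), []).append(v)
print(list(groups.values()))
# [['CUST-NO', 'ORDER-CUST', 'INV-CUST'], ['AMOUNT', 'TOTAL']]
```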
Efficient Bayesian inference for ARFIMA processes
Graves, T.; Gramacy, R. B.; Franzke, C. L. E.; Watkins, N. W.
2015-03-01
Many geophysical quantities, like atmospheric temperature, water levels in rivers, and wind speeds, have shown evidence of long-range dependence (LRD). LRD means that these quantities experience non-trivial temporal memory, which potentially enhances their predictability, but also hampers the detection of externally forced trends. Thus, it is important to reliably identify whether or not a system exhibits LRD. In this paper we present a modern and systematic approach to the inference of LRD. Rather than Mandelbrot's fractional Gaussian noise, we use the more flexible Autoregressive Fractional Integrated Moving Average (ARFIMA) model which is widely used in time series analysis, and of increasing interest in climate science. Unlike most previous work on the inference of LRD, which is frequentist in nature, we provide a systematic treatment of Bayesian inference. In particular, we provide a new approximate likelihood for efficient parameter inference, and show how nuisance parameters (e.g. short memory effects) can be integrated over in order to focus on long memory parameters, and hypothesis testing more directly. We illustrate our new methodology on the Nile water level data, with favorable comparison to the standard estimators.
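For concreteness, here is a short Python sketch (not the authors' Bayesian machinery) that simulates an ARFIMA(0, d, 0) process through its truncated MA(∞) representation; the slowly decaying autocorrelation is the signature of LRD discussed above.

```python
import numpy as np

def arfima_0d0(n, d, burn=1000, rng=None):
    """Simulate ARFIMA(0, d, 0) via a truncated MA(inf) representation.

    psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k are the fractional
    integration weights; 0 < d < 0.5 gives stationary long-range dependence.
    """
    rng = rng or np.random.default_rng(0)
    m = n + burn
    psi = np.empty(m)
    psi[0] = 1.0
    for k in range(1, m):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.normal(size=m)
    x = np.convolve(eps, psi)[:m]      # x_t = sum_k psi_k * eps_{t-k}
    return x[burn:]

x = arfima_0d0(5000, d=0.3)
# Long-range dependence shows up as a slowly decaying autocorrelation.
acf = [np.corrcoef(x[:-k], x[k:])[0, 1] for k in (1, 10, 100)]
print("ACF at lags 1, 10, 100:", np.round(acf, 3))
```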
Diet elucidation: Supplementary inferences from mysid feeding ...
African Journals Online (AJOL)
Comparison of the structure of the feeding appendages of these two marine mysid species allows dietary inferences supplementing data derived from gut content analyses. Both species occur in abundance in the surf zone off sandy beaches where they contribute significantly to energy transfer through the food web.
Ignorability in Statistical and Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed by ma...
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
Abduction and Inference to the Best Explanation
Directory of Open Access Journals (Sweden)
Valeriano Iranzo
2009-12-01
Full Text Available The paper deals with the relation between abduction and inference to the best explanation (IBE. A heuristic and a normative interpretation of IBE are distinguished. Besides, two different normative interpretations —those vindicated by I. Niiniluoto and S. Psillos— are discussed. I conclude that, in principle, Aliseda’s theory of abduction fits better with a heuristic account of IBE
Supplementary inferences from mysid feeding appendage morphology
African Journals Online (AJOL)
Diet elucidation: Supplementary inferences from mysid feeding appendage morphology. P. Webb* and T.H. Wooldridge. Institute of Coastal Research and Department of Zoology, University of Port Elizabeth, P.O. Box 1600, Port Elizabeth,. 6000 Republic of South Africa. Received 9 August 1988; accepted 24 October 1988.
Inference and the Introductory Statistics Course
Pfannkuch, Maxine; Regan, Matt; Wild, Chris; Budgett, Stephanie; Forbes, Sharleen; Harraway, John; Parsonage, Ross
2011-01-01
This article sets out some of the rationale and arguments for making major changes to the teaching and learning of statistical inference in introductory courses at our universities by changing from a norm-based, mathematical approach to more conceptually accessible computer-based approaches. The core problem of the inferential argument with its…
Protein Inference Using Peptide Quantification Patterns
Lukasse, P.N.J.; America, A.H.P.
2014-01-01
Determining the list of proteins present in a sample, based on the list of identified peptides, is a crucial step in the untargeted proteomics LC-MS/MS data-processing pipeline. This step, commonly referred to as protein inference, turns out to be a very challenging problem because many peptide
Inferring motion and location using WLAN RSSI
Kavitha Muthukrishnan, K.; van der Zwaag, B.J.; Havinga, Paul J.M.; Fuller, R.; Koutsoukos, X.
2009-01-01
We present novel algorithms to infer movement by making use of inherent fluctuations in the received signal strengths from existing WLAN infrastructure. We evaluate the performance of the presented algorithms using annotated traces, based on classification metrics such as recall and precision.
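The underlying idea can be sketched very simply: movement tends to inflate short-term RSSI fluctuations, so even a sliding-window variance threshold separates moving from stationary traces. The Python code below is a hypothetical illustration (window size and threshold invented), not the algorithms evaluated in the paper.

```python
import numpy as np

def infer_motion(rssi, window=10, threshold=4.0):
    """Classify each window of RSSI samples (dBm) as 'moving' or 'still'.

    Movement of a device typically inflates the short-term variance of the
    received signal strength; a sliding-window variance threshold exploits
    this. `threshold` (dBm^2) is a hypothetical, site-tuned value.
    """
    labels = []
    for i in range(0, len(rssi) - window + 1, window):
        var = np.var(rssi[i:i + window])
        labels.append("moving" if var > threshold else "still")
    return labels

rng = np.random.default_rng(3)
still = -60 + rng.normal(0, 1.0, 50)        # stable signal while stationary
moving = -60 + rng.normal(0, 4.0, 50)       # larger fluctuations while walking
print(infer_motion(np.concatenate([still, moving])))
```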
International Nuclear Information System (INIS)
L'Huillier, A.
2002-01-01
When a high-power laser is focused into a gas of atoms, the electromagnetic field becomes of the same magnitude as the Coulomb field which binds a 1s electron in a hydrogen atom. Three highly non-linear phenomena can then happen: 1) ATI (above threshold ionization): electrons initially in the ground state absorb a large number of photons, many more than the minimum number required for ionization; 2) multiple ionization: many electrons can be emitted one at a time, in a sequential process, or simultaneously in a mechanism called direct or non-sequential; and 3) high order harmonic generation (HHG): efficient photon emission in the extreme ultraviolet range, in the form of high-order harmonics of the fundamental laser field, can occur. The theoretical problem consists in solving the time dependent Schroedinger equation (TDSE) that describes the interaction of a many-electron atom with a laser field. A number of methods have been proposed to solve this problem in the case of a hydrogen atom or a single-active-electron atom in a strong laser field. A large effort is presently being devoted to going beyond the single-active-electron approximation. The understanding of the physics of the interaction between atoms and strong laser fields has been provided by a very simple model called "simple man's theory". A unified view of HHG, ATI, and non-sequential ionization, originating from the simple man's model and the strong field approximation, expressed in terms of electron trajectories or quantum paths, is slowly emerging. (A.C.)
Strongly Interacting Light Dark Matter
Directory of Open Access Journals (Sweden)
Sebastian Bruggisser, Francesco Riva, Alfredo Urbano
2017-09-01
Full Text Available In the presence of approximate global symmetries that forbid relevant interactions, strongly coupled light Dark Matter (DM) can appear weakly coupled at small energy and generate a sizable relic abundance. Fundamental principles like unitarity restrict these symmetries to a small class, where the leading interactions are captured by effective operators up to dimension-8. Chiral symmetry, spontaneously broken global symmetries and non-linearly realized supersymmetry are examples of this. Their DM candidates (composite fermions, pseudo Nambu-Goldstone Bosons and Goldstini) are interesting targets for LHC missing-energy searches.
Strongly interacting light dark matter
International Nuclear Information System (INIS)
Bruggisser, Sebastian; Riva, Francesco; Urbano, Alfredo
2016-07-01
In the presence of approximate global symmetries that forbid relevant interactions, strongly coupled light Dark Matter (DM) can appear weakly coupled at small energy and generate a sizable relic abundance. Fundamental principles like unitarity restrict these symmetries to a small class, where the leading interactions are captured by effective operators up to dimension-8. Chiral symmetry, spontaneously broken global symmetries and non-linearly realized supersymmetry are examples of this. Their DM candidates (composite fermions, pseudo-Nambu-Goldstone Bosons and Goldstini) are interesting targets for LHC missing-energy searches.
Rydberg atoms in strong fields
International Nuclear Information System (INIS)
Kleppner, D.; Tsimmerman, M.
1985-01-01
Experimental and theoretical achievements in studying Rydberg atoms in external fields are considered. Only static (or quasistatic) fields and 'one-electron' atoms, i.e. atoms that are well described by one-electron states, are discussed. Mainly the behaviour of alkali-metal atoms in an electric field is considered. The state of theoretical investigations of the hydrogen atom in a magnetic field is described, with experimental data for alkali-metal atoms presented as an illustration. Results of the latest experimental and theoretical investigations into the structure of Rydberg atoms in strong fields are presented.
Scalar strong interaction hadron theory
Hoh, Fang Chao
2015-01-01
The scalar strong interaction hadron theory, SSI, is a first-principles, nonlocal theory at the quantum-mechanical level that provides an alternative to low-energy QCD and the Higgs-related part of the standard model. The quark-quark interaction is scalar rather than color-vectorial. A set of equations of motion for mesons and another set for baryons have been constructed. This book provides an account of the present state of a theory supposedly still at its early stage of development. This work will facilitate researchers interested in entering this field and serve as a basis for possible future development of this theory.
Strong Plate, Weak Slab Dichotomy
Petersen, R. I.; Stegman, D. R.; Tackley, P.
2015-12-01
Models of mantle convection on Earth produce styles of convection that are not observed on Earth. Moreover, non-Earth-like modes, such as two-sided downwellings, are the de facto mode of convection in such models. To recreate Earth-style subduction, i.e. one-sided asymmetric recycling of the lithosphere, proper treatment of the plates and the plate interface is required. Previous work has identified several model features that promote subduction. A free surface or pseudo-free surface and a layer of relatively low-strength material (weak crust) allow downgoing plates to bend and slide past overriding plates without creating undue stress at the plate interface (Crameri et al. 2012, GRL). A low-viscosity mantle wedge, possibly a result of slab dehydration, decouples the plates in the system (Gerya et al. 2007, Geo). Plates must be composed of material which, in the case of the overriding plate, is strong enough to resist bending stresses imposed by the subducting plate and yet, in the case of the subducting plate, weak enough to bend and subduct when pulled by the already subducted slab (Petersen et al. 2015, PEPI). Though strong surface plates are required for subduction, such plates may present a problem when they encounter the lower mantle. As the subducting slab approaches the higher-viscosity lower mantle, stresses are imposed on its tip. Strong slabs transmit this stress to the surface, where the stress field at the plate interface is modified and may in turn modify the style of convection. In addition to modifying the stress at the plate interface, the strength of the slab affects the morphology of the slab at the base of the upper mantle (Stegman et al. 2010, Tectonophysics). Slabs that maintain a sufficient portion of their strength after being bent require high stresses to unbend or otherwise change their shape. On the other hand, slabs that are weakened through the bending process are more amenable to changes in morphology. We present the results of
Active inference, sensory attenuation and illusions.
Brown, Harriet; Adams, Rick A; Parees, Isabel; Edwards, Mark; Friston, Karl
2013-11-01
Active inference provides a simple and neurobiologically plausible account of how action and perception are coupled in producing (Bayes) optimal behaviour. This can be seen most easily as minimising prediction error: we can either change our predictions to explain sensory input through perception, or we can actively change sensory input to fulfil our predictions. In active inference, this action is mediated by classical reflex arcs that minimise proprioceptive prediction error created by descending proprioceptive predictions. However, this creates a conflict between action and perception, in that self-generated movements require predictions to override the sensory evidence that one is not actually moving. However, ignoring sensory evidence means that externally generated sensations will not be perceived. Conversely, attending to (proprioceptive and somatosensory) sensations enables the detection of externally generated events but precludes generation of actions. This conflict can be resolved by attenuating the precision of sensory evidence during movement or, equivalently, attending away from the consequences of self-made acts. We propose that this Bayes-optimal withdrawal of precise sensory evidence during movement is the cause of psychophysical sensory attenuation. Furthermore, it explains the force-matching illusion and reproduces empirical results almost exactly. Finally, if attenuation is removed, the force-matching illusion disappears and false (delusional) inferences about agency emerge. This is important, given the negative correlation between sensory attenuation and delusional beliefs in normal subjects, and the reduction in the magnitude of the illusion in schizophrenia. Active inference therefore links the neuromodulatory optimisation of precision to sensory attenuation and illusory phenomena during the attribution of agency in normal subjects. It also provides a functional account of deficits in syndromes characterised by false inference
Okalebo, J. A.; Das Choudhury, S.; Awada, T.; Suyker, A.; LeBauer, D.; Newcomb, M.; Ward, R.
2017-12-01
The Long-term Agroecosystem Research (LTAR) network is a USDA-ARS effort that focuses on conducting research that addresses current and emerging issues in agriculture related to sustainability and profitability of agroecosystems in the face of climate change and population growth. There are 18 sites across the USA covering key agricultural production regions. In Nebraska, a partnership between the University of Nebraska - Lincoln and ARD/USDA resulted in the establishment of the Platte River - High Plains Aquifer LTAR site in 2014. The site conducts research to sustain multiple ecosystem services, focusing specifically on Nebraska's main agronomic production agroecosystems, which comprise abundant corn, soybean, managed grassland and beef production. As part of the national LTAR network, PR-HPA participates and contributes near-surface remotely sensed imagery of corn, soybean and grassland canopy phenology to the PhenoCam Network through high-resolution digital cameras. This poster highlights the application, advantages and usefulness of near-surface remotely sensed imagery in agroecosystem studies and management. It demonstrates how both Infrared and Red-Green-Blue imagery may be applied to monitor phenological events as well as crop abiotic stresses. Computer-based algorithms and analytic techniques proved very instrumental in revealing crop phenological changes such as green-up and tasseling in corn. This poster also reports the suitability and applicability of corn-derived computer-based algorithms for evaluating the phenological development of sorghum, since both crops have similarities in their phenology, with sorghum panicles being similar to corn tassels. This latter assessment was carried out using a sorghum dataset obtained from the Transportation Energy Resources from Renewable Agriculture Phenotyping Reference Platform project, Maricopa Agricultural Center, Arizona.
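A small sketch of the standard near-surface imagery computation behind such green-up detection: the green chromatic coordinate, GCC = G/(R+G+B), tracked over a series of camera images. The RGB values and the detection rule below are illustrative assumptions, not the PR-HPA pipeline.

```python
import numpy as np

# One mean RGB triple per daily image (a real pipeline would average a
# region of interest in each PhenoCam photo). Values are illustrative.
daily_rgb = np.array([
    [90, 80, 60], [88, 82, 61], [85, 90, 60],    # early season
    [80, 105, 58], [76, 118, 55], [70, 130, 52]  # canopy greening up
], dtype=float)

gcc = daily_rgb[:, 1] / daily_rgb.sum(axis=1)    # G / (R + G + B)

# Flag green-up as the first day GCC exceeds its early-season mean + 10%.
base = gcc[:2].mean()
greenup_day = int(np.argmax(gcc > 1.10 * base))
print(np.round(gcc, 3), "green-up at day index", greenup_day)
```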
EDITORIAL: Strongly correlated electron systems
Ronning, Filip; Batista, Cristian
2011-03-01
Strongly correlated electrons is an exciting and diverse field in condensed matter physics. This special issue aims to capture some of that excitement and recent developments in the field. Given that this issue was inspired by the 2010 International Conference on Strongly Correlated Electron Systems (SCES 2010), we briefly give some history in order to place this issue in context. The 2010 International Conference on Strongly Correlated Electron Systems was held in Santa Fe, New Mexico, a reunion of sorts from the 1989 International Conference on the Physics of Highly Correlated Electron Systems that also convened in Santa Fe. SCES 2010—co-chaired by John Sarrao and Joe Thompson—followed the tradition of earlier conferences, in this century, hosted by Buzios (2008), Houston (2007), Vienna (2005), Karlsruhe (2004), Krakow (2002) and Ann Arbor (2001). Every three years since 1997, SCES has joined the International Conference on Magnetism (ICM), held in Recife (2000), Rome (2003), Kyoto (2006) and Karlsruhe (2009). Like its predecessors, SCES 2010 topics included strongly correlated f- and d-electron systems, heavy-fermion behaviors, quantum-phase transitions, non-Fermi liquid phenomena, unconventional superconductivity, and emergent states that arise from electronic correlations. Recent developments from studies of quantum magnetism and cold atoms complemented the traditional subjects and were included in SCES 2010. 2010 celebrated the 400th anniversary of Santa Fe as well as the birth of astronomy. So what's the connection to SCES? The Dutch invention of the first practical telescope and its use by Galileo in 1610 and subsequent years overturned dogma that the sun revolved about the earth. This revolutionary, and at the time heretical, conclusion required innovative combinations of new instrumentation, observation and mathematics. These same combinations are just as important 400 years later and are the foundation of scientific discoveries that were discussed
Physics of Strongly Coupled Plasma
Energy Technology Data Exchange (ETDEWEB)
Kraeft, Wolf-Dietrich [Universitat Rostock (Germany)]
2007-07-15
Strongly coupled plasmas (or non-ideal plasmas) are multi-component charged many-particle systems in which the mean value of the potential energy of the system is of the same order as, or even higher than, the mean value of the kinetic energy. The constituents are electrons, ions, atoms and molecules. Dusty (or complex) plasmas additionally contain mesoscopic (multiply charged) particles. In such systems, the effects of strong coupling (non-ideality) lead to considerable deviations of physical properties from the corresponding properties of ideal plasmas, i.e., of plasmas in which the mean kinetic energy is essentially larger than the mean potential energy. For instance, bound-state energies become density dependent and vanish at higher densities (Mott effect) due to the interaction of the pair with the surrounding particles. Non-ideal plasmas are of interest both for general scientific reasons (including, for example, astrophysical questions) and for technical applications such as inertially confined fusion. In spite of great efforts both experimentally and theoretically, satisfactory information on the physical properties of strongly coupled plasmas is not at hand for arbitrary temperature and density. For example, the theoretical description of non-ideal plasmas is possible only at low densities/high temperatures and at extremely high densities (high degeneracy). For intermediate degeneracy, however, numerical experiments have to fill the gap. Experiments are difficult in the region of 'warm dense matter'. The monograph tries to present the state of the art concerning both theoretical and experimental attempts. It mainly includes results of the work performed in famous Russian laboratories in recent decades. After outlining basic concepts (chapter 1), the generation of plasmas is considered (chapter 2, chapter 3). Questions of partial (chapter 4) and full ionization (chapter 5) are discussed, including the Mott transition and Wigner crystallization. Electrical and
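The defining criterion of non-ideality can be made concrete through the coupling parameter, the ratio of mean potential to mean kinetic energy; a plasma is strongly coupled when it exceeds unity. A short sketch under assumed density and temperature values (not taken from the monograph):

```python
import numpy as np

# Coupling parameter Gamma = (Z e)^2 / (4 pi eps0 a kB T),
# with a the Wigner-Seitz radius a = (3 / (4 pi n))^(1/3).
e = 1.602176634e-19      # C
eps0 = 8.8541878128e-12  # F/m
kB = 1.380649e-23        # J/K

def coupling_parameter(n_m3: float, T_K: float, Z: int = 1) -> float:
    """Gamma > 1 signals a strongly coupled (non-ideal) plasma."""
    a = (3.0 / (4.0 * np.pi * n_m3)) ** (1.0 / 3.0)  # Wigner-Seitz radius
    return (Z * e) ** 2 / (4.0 * np.pi * eps0 * a * kB * T_K)

# Assumed example values: a dense, relatively cold plasma.
print(coupling_parameter(n_m3=1e28, T_K=1e4))  # ~6: strongly coupled
```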
Strongly coupled dust coulomb clusters
International Nuclear Information System (INIS)
Juan Wentau; Lai Yingju; Chen Mingheng; I Lin
1999-01-01
The structures and motions of quasi-two-dimensional strongly coupled dust Coulomb clusters, with particle number N from a few to hundreds, in a cylindrical rf plasma trap are studied and compared with results from molecular dynamics simulations using more idealized models. Shell structures with periodic packing in different shells, and excitations dominated by intershell rotational motion, are observed at small N. As N increases, the boundary has less effect and the system recovers the triangular lattice, with isotropic vortex-type cooperative excitations similar to an infinite-N system except in the outer shell region. The above generic behaviors are mainly determined by the system symmetry and agree with the simulation results. The detailed form of the interaction causes only minor effects, such as the fine structure of the packing.
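For intuition, a toy molecular-dynamics sketch of such a cluster, in the spirit of (but much simpler than) the simulations mentioned: grains repelling through a screened Yukawa potential inside a 2D harmonic trap, relaxed with damped velocity-Verlet integration so that shell structure emerges. All quantities are dimensionless assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 30, 0.01, 20000
kappa, trap_k, gamma = 1.0, 1.0, 0.1   # screening, trap stiffness, damping

pos = rng.normal(scale=2.0, size=(N, 2))
vel = np.zeros((N, 2))

def forces(pos):
    # Harmonic confinement plus pairwise Yukawa repulsion exp(-kappa r)/r.
    f = -trap_k * pos
    d = pos[:, None, :] - pos[None, :, :]
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)
    mag = np.exp(-kappa * r) * (1.0 + kappa * r) / r**3   # -dV/dr, divided by r
    f += (mag[:, :, None] * d).sum(axis=1)
    return f

f = forces(pos)
for _ in range(steps):
    vel += 0.5 * dt * f
    pos += dt * vel
    f = forces(pos)
    vel = (vel + 0.5 * dt * f) * (1.0 - gamma * dt)   # damped velocity Verlet

radii = np.sort(np.linalg.norm(pos, axis=1))
print(np.round(radii, 2))   # gaps in the sorted radii reveal shell structure
```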
Probability densities in strong turbulence
Yakhot, Victor
2006-03-01
In this work, using the Mellin transform combined with a Gaussian large-scale boundary condition, we calculate the probability densities (PDFs) of velocity increments P(δu,r), of velocity derivatives, and of the fluctuating dissipation scales Q(η,Re), where Re is the large-scale Reynolds number. The resulting expressions strongly deviate from the log-normal PDF often quoted in the literature. It is shown that the probability density of the small-scale velocity fluctuations includes information about the large (integral) scale dynamics, which is responsible for the deviation of P(δu,r) from log-normality. An expression for the function D(h) of the multifractal theory, free from the spurious logarithms recently discussed in [U. Frisch, M. Martins Afonso, A. Mazzino, V. Yakhot, J. Fluid Mech. 542 (2005) 97], is also obtained.
Inferences on Children’s Reading Groups
Directory of Open Access Journals (Sweden)
Javier González García
2009-05-01
Full Text Available This article focuses on the non-literal information of a text, which can be inferred from key elements or clues offered by the text itself. This kind of content is called implicit text, or inference, because of the thinking process it stimulates. The explicit resources that lead to information retrieval are related to others carrying implicit information, which have increased in relevance. In this study, conducted over two courses, we analyzed how two teachers interpret three stories and how they establish a debate by dividing the class into three student groups. The sample was formed by two classes from two urban public schools in Burgos (Spain) and two from public schools in Tampico (Mexico). This allowed us to observe an increasing percentage of the group focused on text comprehension, and a smaller percentage of the group perceiving comprehension as a secondary objective.
Working with sample data exploration and inference
Chaffe-Stengel, Priscilla
2014-01-01
Managers and analysts routinely collect and examine key performance measures to better understand their operations and make good decisions. Being able to render the complexity of operations data into a coherent account of significant events requires an understanding of how to work well with raw data and to make appropriate inferences. Although some statistical techniques for analyzing data and making inferences are sophisticated and require specialized expertise, there are methods that are understandable and applicable by anyone with basic algebra skills and the support of a spreadsheet package. By applying these fundamental methods themselves rather than turning over both the data and the responsibility for analysis and interpretation to an expert, managers will develop a richer understanding and potentially gain better control over their environment. This text is intended to describe these fundamental statistical techniques to managers, data analysts, and students. Statistical analysis of sample data is enh...
Likelihood inference for unions of interacting discs
DEFF Research Database (Denmark)
Møller, Jesper; Helisová, Katarina
To the best of our knowledge, this is the first paper which discusses likelihood inference for a random set using a germ-grain model, where the individual grains are unobservable, edge effects occur, and other complications appear. We consider the case where the grains form a disc process modelled...... is specified with respect to a given marked Poisson model (i.e. a Boolean model). We show how edge effects and other complications can be handled by considering a certain conditional likelihood. Our methodology is illustrated by analyzing Peter Diggle's heather dataset, where we discuss the results...... of simulation-based maximum likelihood inference and the effect of specifying different reference Poisson models....
Bayesianism and inference to the best explanation
Directory of Open Access Journals (Sweden)
Valeriano IRANZO
2008-01-01
Full Text Available Bayesianism and inference to the best explanation (IBE) are two different models of inference. Recently there has been some debate about the possibility of “bayesianizing” IBE. Firstly I explore several alternatives for including explanatory considerations in Bayes’s Theorem. Then I distinguish two different interpretations of prior probabilities: “IBE-Bayesianism” (IBE-Bay) and “frequentist-Bayesianism” (Freq-Bay). After detailing the content of the latter, I propose a rule for assessing the priors. I also argue that Freq-Bay: (i) endorses a role for explanatory value in the assessment of scientific hypotheses; (ii) avoids a purely subjectivist reading of prior probabilities; and (iii) fits better than IBE-Bayesianism with two basic facts about science, i.e., the prominent role played by empirical testing and the existence of many scientific theories in the past that failed to fulfil their promises and were subsequently abandoned.
An emergent approach to analogical inference
Thibodeau, Paul H.; Flusberg, Stephen J.; Glick, Jeremy J.; Sternberg, Daniel A.
2013-03-01
In recent years, a growing number of researchers have proposed that analogy is a core component of human cognition. According to the dominant theoretical viewpoint, analogical reasoning requires a specific suite of cognitive machinery, including explicitly coded symbolic representations and a mapping or binding mechanism that operates over these representations. Here we offer an alternative approach: we find that analogical inference can emerge naturally and spontaneously from a relatively simple, error-driven learning mechanism without the need to posit any additional analogy-specific machinery. The results also parallel findings from the developmental literature on analogy, demonstrating a shift from an initial reliance on surface feature similarity to the use of relational similarity later in training. Variants of the model allow us to consider and rule out alternative accounts of its performance. We conclude by discussing how these findings can potentially refine our understanding of the processes that are required to perform analogical inference.
Inferring genetic interactions from comparative fitness data.
Crona, Kristina; Gavryushkin, Alex; Greene, Devin; Beerenwinkel, Niko
2017-12-20
Darwinian fitness is a central concept in evolutionary biology. In practice, however, it is hardly possible to measure fitness for all genotypes in a natural population. Here, we present quantitative tools to make inferences about epistatic gene interactions when the fitness landscape is only incompletely determined due to imprecise measurements or missing observations. We demonstrate that genetic interactions can often be inferred from fitness rank orders, where all genotypes are ordered according to fitness, and even from partial fitness orders. We provide a complete characterization of rank orders that imply higher-order epistasis. Our theory applies to all common types of gene interactions and facilitates comprehensive investigations of diverse genetic interactions. We analyzed various genetic systems comprising HIV-1, the malaria-causing parasite Plasmodium vivax, the fungus Aspergillus niger, and the TEM family of β-lactamases associated with antibiotic resistance. For all systems, our approach revealed higher-order interactions among mutations.
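To make the quantities concrete, a hedged two-locus sketch (not the authors' full rank-order characterization): compute standard pairwise epistasis from a fitness landscape, and check by Monte Carlo whether a given fitness rank order forces the sign of the interaction.

```python
import random

# Two-locus fitness landscape: genotype -> fitness (assumed toy values).
w = {"00": 1.00, "10": 1.02, "01": 1.03, "11": 1.01}

# Standard pairwise epistasis: e = w11 + w00 - w10 - w01.
e = w["11"] + w["00"] - w["10"] - w["01"]
print(f"epistasis e = {e:+.3f}")          # here negative (antagonistic)

def order_implies_sign(order, trials=2000, seed=1):
    """Monte Carlo check (heuristic, not a proof): does every fitness
    assignment consistent with the rank order yield the same sign of e?"""
    rng = random.Random(seed)
    signs = set()
    for _ in range(trials):
        vals = sorted((rng.random() for _ in range(4)), reverse=True)
        f = dict(zip(order, vals))        # fittest genotype listed first
        signs.add(f["11"] + f["00"] - f["10"] - f["01"] > 0)
    return len(signs) == 1

print(order_implies_sign(("11", "00", "01", "10")))  # True: order forces e > 0
print(order_implies_sign(("11", "01", "10", "00")))  # False: sign not implied
```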
Pointwise probability reinforcements for robust statistical inference.
Frénay, Benoît; Verleysen, Michel
2014-02-01
Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR, and a regularisation allows controlling the amount of reinforcement, which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.
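A hedged sketch of the idea in a deliberately simplified form (robust estimation of a Gaussian mean with known unit variance; the paper's formulation is more general): each observation receives a non-negative reinforcement of its probability, an L1 penalty keeps reinforcements sparse, and the reinforcement magnitudes serve as abnormality degrees.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.normal(12.0, 0.5, 5)])  # 5 outliers

lam = 5.0   # regularisation: the price of reinforcing a point's probability

def objective(params):
    mu, eps = params[0], params[1:]
    # Reinforced negative log-likelihood plus L1 penalty on reinforcements
    # (unit variance is assumed known, for simplicity).
    return -np.sum(np.log(norm.pdf(x, mu, 1.0) + eps)) + lam * eps.sum()

x0 = np.concatenate([[np.median(x)], np.zeros(len(x))])
bounds = [(None, None)] + [(0.0, None)] * len(x)    # reinforcements are >= 0
res = minimize(objective, x0, method="L-BFGS-B", bounds=bounds)
mu_hat, eps_hat = res.x[0], res.x[1:]

print(f"plain mean = {x.mean():.2f}, PPR-style estimate = {mu_hat:.2f}")
print(f"strongly reinforced (abnormal) points: {np.sum(eps_hat > 0.15)}")
```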
Statistical inference from imperfect photon detection
International Nuclear Information System (INIS)
Audenaert, Koenraad M R; Scheel, Stefan
2009-01-01
We consider the statistical properties of photon detection with imperfect detectors that exhibit dark counts and less than unit efficiency, in the context of tomographic reconstruction. In this context, the detectors are used to implement certain positive operator-valued measures (POVMs) that would allow us to reconstruct the quantum state or quantum process under consideration. Here we look at the intermediate step of inferring outcome probabilities from measured outcome frequencies, and show how this inference can be performed in a statistically sound way in the presence of detector imperfections. Merging outcome probabilities for different sets of POVMs into a consistent quantum state picture has been treated elsewhere (Audenaert and Scheel 2009 New J. Phys. 11 023028). Single-photon pulsed measurements as well as continuous wave measurements are covered.
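A hedged sketch of that intermediate inference step under simple assumptions (a binary click/no-click detector with known efficiency and dark-count probability, and plain nonnegative least squares rather than the paper's statistically grounded treatment): build the detector response matrix and invert measured click frequencies into outcome probabilities.

```python
import numpy as np
from scipy.optimize import nnls

eta, p_dark = 0.6, 0.05   # assumed detector efficiency and dark-count probability

# Columns: photon absent / photon present. Rows: no click / click.
# P(click | photon)    = 1 - (1 - eta) * (1 - p_dark)
# P(click | no photon) = p_dark
R = np.array([
    [1 - p_dark, (1 - eta) * (1 - p_dark)],
    [p_dark,     1 - (1 - eta) * (1 - p_dark)],
])

# Measured click frequencies over many pulses (assumed numbers).
f = np.array([0.62, 0.38])   # [no click, click]

# Infer outcome probabilities with a nonnegativity constraint, then renormalise.
p, _ = nnls(R, f)
p /= p.sum()
print(f"inferred P(photon present) = {p[1]:.3f}")
```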
An Intuitive Dashboard for Bayesian Network Inference
International Nuclear Information System (INIS)
Reddy, Vikas; Farr, Anna Charisse; Wu, Paul; Mengersen, Kerrie; Yarlagadda, Prasad K D V
2014-01-01
Current Bayesian network software packages provide good graphical interfaces for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard, which provides an additional layer of abstraction, enabling the end-users to easily perform inferences over the Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on the cause-and-effect relationship, making the user interaction more intuitive and friendly. In addition to performing various types of inferences, the users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using QT and SMILE libraries in C++
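The operation such a dashboard exposes, posterior inference over a network organised by cause and effect, can be illustrated with a minimal hand-rolled example: inference by enumeration on a toy two-node network. The actual tool builds on the SMILE library, whose API is not reproduced here; the probabilities are invented.

```python
from itertools import product

# Toy network: Disease -> Symptom, organised by cause and effect.
p_disease = {True: 0.01, False: 0.99}
p_symptom_given = {True: {True: 0.9, False: 0.1},    # P(symptom | disease)
                   False: {True: 0.05, False: 0.95}}  # P(symptom | no disease)

def posterior_disease(symptom_observed: bool) -> float:
    """P(disease | symptom) by enumerating the joint distribution."""
    joint = {}
    for d, s in product([True, False], repeat=2):
        joint[(d, s)] = p_disease[d] * p_symptom_given[d][s]
    evidence = sum(p for (d, s), p in joint.items() if s == symptom_observed)
    return joint[(True, symptom_observed)] / evidence

print(f"P(disease | symptom) = {posterior_disease(True):.3f}")   # ~0.154
```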
Inferring Genetic Ancestry: Opportunities, Challenges, and Implications
Royal, Charmaine D.; Novembre, John; Fullerton, Stephanie M.; Goldstein, David B.; Long, Jeffrey C.; Bamshad, Michael J.; Clark, Andrew G.
2010-01-01
Increasing public interest in direct-to-consumer (DTC) genetic ancestry testing has been accompanied by growing concern about issues ranging from the personal and societal implications of the testing to the scientific validity of ancestry inference. The very concept of “ancestry” is subject to misunderstanding in both the general and scientific communities. What do we mean by ancestry? How exactly is ancestry measured? How far back can such ancestry be defined and by which genetic tools? How ...
Inferring ontology graph structures using OWL reasoning.
Rodríguez-García, Miguel Ángel; Hoehndorf, Robert
2018-01-05
Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
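A toy illustration of the graph-generation idea, not the Onto2Graph implementation: start from asserted axioms, compute a deductive closure by treating the relations as transitive, and emit the implied edges alongside the asserted ones. Real OWL reasoning handles far richer axioms than this sketch assumes.

```python
# Toy axioms: (subject, relation, object); both relations assumed transitive.
axioms = [
    ("mitochondrion", "part_of", "cytoplasm"),
    ("cytoplasm", "part_of", "cell"),
    ("cell", "is_a", "anatomical_entity"),
    ("mitochondrion", "is_a", "organelle"),
]

def deductive_closure(triples):
    """Naive fixed point: add (a, r, c) whenever (a, r, b) and (b, r, c) hold."""
    closed = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r, b) in list(closed):
            for (b2, r2, c) in list(closed):
                if b == b2 and r == r2 and (a, r, c) not in closed:
                    closed.add((a, r, c))
                    changed = True
    return closed

graph = deductive_closure(axioms)
for edge in sorted(graph - set(axioms)):
    print("implied:", edge)   # e.g. ('mitochondrion', 'part_of', 'cell')
```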
Context recognition for a hyperintensional inference machine
Duží, Marie; Fait, Michal; Menšík, Marek
2017-07-01
The goal of this paper is to introduce the algorithm of context recognition in the functional programming language TIL-Script, which is a necessary condition for the implementation of the TIL-Script inference machine. The TIL-Script language is an operationally isomorphic syntactic variant of Tichý's Transparent Intensional Logic (TIL). From the formal point of view, TIL is a hyperintensional, partial, typed λ-calculus with procedural semantics. Hyperintensional, because TIL λ-terms denote procedures (defined as TIL constructions) producing set-theoretic functions rather than the functions themselves; partial, because TIL is a logic of partial functions; and typed, because all the entities of TIL ontology, including constructions, receive a type within a ramified hierarchy of types. These features make it possible to distinguish three levels of abstraction at which TIL constructions operate. At the highest hyperintensional level the object to operate on is a construction (though a higher-order construction is needed to present this lower-order construction as an object of predication). At the middle intensional level the object to operate on is the function presented, or constructed, by a construction, while at the lowest extensional level the object to operate on is the value (if any) of the presented function. Thus a necessary condition for the development of an inference machine for the TIL-Script language is recognizing a context in which a construction occurs, namely extensional, intensional and hyperintensional context, in order to determine the type of an argument at which a given inference rule can be properly applied. As a result, our logic does not flout logical rules of extensional logic, which makes it possible to develop a hyperintensional inference machine for the TIL-Script language.
Thermodynamics of statistical inference by cells.
Lang, Alex H; Fisher, Charles K; Mora, Thierry; Mehta, Pankaj
2014-10-03
The deep connection between thermodynamics, computation, and information is now well established both theoretically and experimentally. Here, we extend these ideas to show that thermodynamics also places fundamental constraints on statistical estimation and learning. To do so, we investigate the constraints placed by (nonequilibrium) thermodynamics on the ability of biochemical signaling networks to estimate the concentration of an external signal. We show that accuracy is limited by energy consumption, suggesting that there are fundamental thermodynamic constraints on statistical inference.
Constrained bayesian inference of project performance models
Sunmola, Funlade
2013-01-01
Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance rely on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...
Racing algorithms for conditional independence inference
Czech Academy of Sciences Publication Activity Database
Bouckaert, R. R.; Studený, Milan
2007-01-01
Vol. 45, No. 2 (2007), pp. 386-401 ISSN 0888-613X R&D Projects: GA ČR GA201/04/0393 Institutional research plan: CEZ:AV0Z10750506 Keywords: conditional independence * inference * imset * algorithm Subject RIV: BA - General Mathematics Impact factor: 1.220, year: 2007 http://library.utia.cas.cz/separaty/2007/mtr/studeny-0083472.pdf
Controlling Selection Bias in Causal Inference
2012-02-01
... Endometrial Cancer (Y) was overestimated in the data studied. One of the symptoms of the use of Oestrogen is vaginal bleeding (W) (Fig. 1(c)), and the ... whether similar bounds can be derived in the presence of selection bias. We will show that selection bias can be removed entirely through the use of
Strong Ideal Convergence in Probabilistic Metric Spaces
Indian Academy of Sciences (India)
In the present paper we introduce the concepts of strongly ideal convergent sequence and strong ideal Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and establish some basic facts. Next, we define the strong ideal limit points and the strong ideal cluster points of a sequence in this ...
Bootstrap inference when using multiple imputation.
Schomaker, Michael; Heumann, Christian
2018-04-16
Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains however unclear how to obtain valid bootstrap inference when dealing with multiple imputation to address missing data. We present 4 methods that are intuitively appealing, easy to implement, and combine bootstrap estimation with multiple imputation. We show that 3 of the 4 approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the 4 methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no standard errors are available. Copyright © 2018 John Wiley & Sons, Ltd.
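A hedged sketch of one of the intuitively appealing combinations (bootstrap first, then impute within each resample and pool by percentiles; crude stochastic mean imputation stands in for a real imputation model, and all data are simulated assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(5.0, 2.0, n)
x[rng.random(n) < 0.3] = np.nan          # 30% missing completely at random

def impute_once(v, rng):
    """Crude stochastic mean imputation (a real analysis would use a model)."""
    out = v.copy()
    miss = np.isnan(out)
    obs = out[~miss]
    out[miss] = rng.normal(obs.mean(), obs.std(), miss.sum())
    return out

B, M = 500, 5                             # bootstrap replicates, imputations each
boot_estimates = []
for _ in range(B):
    sample = x[rng.integers(0, n, n)]     # resample rows with replacement
    # Average the statistic over M imputed versions of this bootstrap sample.
    est = np.mean([impute_once(sample, rng).mean() for _ in range(M)])
    boot_estimates.append(est)

lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
print(f"mean estimate = {np.mean(boot_estimates):.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```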
Inferring gene ontologies from pairwise similarity data
Kramer, Michael; Dutkowski, Janusz; Yu, Michael; Bafna, Vineet; Ideker, Trey
2014-01-01
Motivation: While the manually curated Gene Ontology (GO) is widely used, inferring a GO directly from -omics data is a compelling new problem. Recognizing that ontologies are a directed acyclic graph (DAG) of terms and hierarchical relations, algorithms are needed that: analyze a full matrix of gene-gene pairwise similarities from -omics data; infer true hierarchical structure in these data rather than enforcing hierarchy as a computational artifact; and respect biological pleiotropy, by which a term in the hierarchy can relate to multiple higher-level terms. Methods addressing these requirements are just beginning to emerge; none has been evaluated for GO inference. Methods: We consider two algorithms [Clique Extracted Ontology (CliXO), LocalFitness] that uniquely satisfy these requirements, compared with methods including standard clustering. CliXO is a new approach that finds maximal cliques in a network induced by progressive thresholding of a similarity matrix. We evaluate each method's ability to reconstruct the GO biological process ontology from a similarity matrix based on (a) semantic similarities for GO itself or (b) three -omics datasets for yeast. Results: For task (a) using semantic similarity, CliXO accurately reconstructs GO (>99% precision, recall) and outperforms other approaches; for task (b) it performs better than LocalFitness or standard clustering (20-25% precision, recall). Conclusion: This study provides an algorithmic foundation for building gene ontologies by capturing hierarchical and pleiotropic structure embedded in biomolecular data. Contact: tideker@ucsd.edu PMID:24932003
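A toy sketch of the CliXO core loop described above: progressive thresholding of a similarity matrix, with maximal cliques extracted at each level using networkx. The full algorithm's bookkeeping that merges cliques into ontology terms is omitted, and the similarity values are assumptions.

```python
import numpy as np
import networkx as nx

genes = ["g1", "g2", "g3", "g4", "g5"]
# Assumed symmetric gene-gene similarity matrix.
S = np.array([
    [1.0, 0.9, 0.8, 0.2, 0.1],
    [0.9, 1.0, 0.7, 0.3, 0.2],
    [0.8, 0.7, 1.0, 0.2, 0.1],
    [0.2, 0.3, 0.2, 1.0, 0.6],
    [0.1, 0.2, 0.1, 0.6, 1.0],
])

# Sweep thresholds from high to low; each maximal clique becomes a candidate
# term, with lower thresholds yielding broader (higher-level) terms.
for t in sorted({round(v, 2) for v in S[np.triu_indices(5, k=1)]}, reverse=True):
    G = nx.Graph()
    G.add_nodes_from(genes)
    for i in range(5):
        for j in range(i + 1, 5):
            if S[i, j] >= t:
                G.add_edge(genes[i], genes[j])
    cliques = [c for c in nx.find_cliques(G) if len(c) > 1]
    print(f"threshold {t:.2f}: terms = {sorted(map(sorted, cliques))}")
```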
Inferring epidemic network topology from surveillance data.
Directory of Open Access Journals (Sweden)
Xiang Wan
Full Text Available The transmission of infectious diseases can be affected by many or even hidden factors, making it difficult to accurately predict when and where outbreaks may emerge. One approach at the moment is to develop and deploy surveillance systems in an effort to detect outbreaks as early as possible. This enables policy makers to modify and implement strategies for the control of the transmission. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information, can provide valuable information with which to infer the underlying epidemic networks. Such networks can be quite informative and insightful as they characterize how infectious diseases transmit from one location to another. The aim of this work is to develop a computational model that allows inferences to be made regarding epidemic network topology in heterogeneous populations. We apply our model on the surveillance data from the 2009 H1N1 pandemic in Hong Kong. The inferred epidemic network displays a significant effect on the propagation of infectious diseases.
Effects of low doses: Proof and inferences
International Nuclear Information System (INIS)
Hubert, Ph.
2010-01-01
It is essential to discuss the plausibility of 'low-dose' effects from environmental exposures. The question, nonetheless, is wrongly labelled, for it is not the magnitude of the dose that matters, but rather the effect. The question thus concerns 'doses with low effects'. More precisely, because the low effects on large populations are not that small, even when epidemiological tools fail to detect them, it would be more accurate to talk about 'doses with undetectable or barely detectable effects'. Hereafter, we describe this 'low-effect dose' concept from the viewpoint of toxicology and epidemiology and discuss the fragile boundary line for these low-effect doses. Next, we review the different types of inference from observed situations (i.e., with high effects) to situations relevant to public health, to characterize the level of confidence to be accorded them. The first type is extrapolation - from higher to lower doses or from higher to lower dose rates. The second type is transposition - from humans to other humans or from animals to humans. The third type can be called 'analogy' as in 'read across' approaches, where QSAR (Quantitative Structure Activity Relationship) methodology can be used. These three types of inferences can be based on an estimate of the 'distance' between observed and predicted areas, but can also rely on knowledge and theories of the relevant mechanisms. The new tools of predictive toxicology are helpful both in deriving quantitative estimates and grounding inferences on sound bases. (author)
Intuitive Mechanics: Inferences of Vertical Projectile Motion
Directory of Open Access Journals (Sweden)
Milana Damjenić
2016-07-01
Full Text Available Our intuitive knowledge of mechanics, i.e. knowledge defined through personal experience about velocity, acceleration, motion causes, etc., is often wrong. This research examined whether similar misconceptions occur systematically in the case of vertical projectiles launched upwards. The first experiment examined inferences about the velocity and acceleration of a ball moving vertically upwards, while the second experiment examined whether the mass of the thrown ball and the force of the throw have an impact on the inference. The results showed that more than three quarters of the participants wrongly assumed that maximum velocity and peak acceleration did not occur at the initial launch of the projectile. There was no effect of object mass or of the force of the throw on inferences relating to the velocity and acceleration of the ball. The results exceed the explanatory reach of the impetus theory, most commonly used to explain the naive understanding of the mechanics of object motion. This research supports the view that the actions-on-objects approach and the property transmission heuristic may more aptly explain the discrepancy between perceived and actual implications in projectile motion.
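The physics at issue is one-line kinematics: for a ball thrown straight up, ignoring air resistance, speed is maximal at the moment of release and the acceleration is a constant g downward throughout the flight, independent of the ball's mass and of the throwing force once the ball leaves the hand. A quick numerical check with an assumed launch speed:

```python
import numpy as np

g = 9.81          # m/s^2, downward
v0 = 15.0         # assumed launch speed, m/s

t = np.linspace(0.0, 2 * v0 / g, 7)     # from launch to landing
v = v0 - g * t                          # velocity (positive = upward)
a = np.full_like(t, -g)                 # acceleration is constant during flight

for ti, vi, ai in zip(t, v, a):
    print(f"t={ti:4.2f}s  v={vi:+6.2f} m/s  a={ai:+5.2f} m/s^2")
# Speed |v| is largest at t=0 (launch) and at landing; a never changes.
```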
Markov Logic Based Inference Engine for CDSS
International Nuclear Information System (INIS)
Bajwa, I.S.; Ramzan, B.; Ramzan, S.
2017-01-01
A CDSS (Clinical Decision Support System) is typically a diagnostic application and a modern technology that can be employed to provide standardized and quality medical facilities to patients, especially when expert doctors are not available at the medical centres. These days the use of CDSSs is quite common in medical practice in remote areas. A CDSS can be very helpful not only in preventive health care but also in computerized diagnosis. However, a typical problem of CDSS-based diagnosis is uncertainty. An ambiguity can occur, for example, when a patient is not able to clearly explain the symptoms of his disease. The forward-chaining mechanisms typically used in rule-based decision support systems struggle to reason with such uncertain data. ML (Markov Logic) is a new technique that has the ability to deal with uncertainty of data by integrating FOL (First-Order Logic) with probabilistic graphical models. In this paper, we have proposed the architecture of an ML-based inference engine for a rule-based CDSS, and we have also presented an algorithm to use an ML-based forward-chaining mechanism in the proposed inference engine. The results of the experiments show that the proposed inference engine would be intelligent enough to diagnose a patient's disease even from uncertain or incomplete/partial information. (author)
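A heavily simplified sketch of the flavour of Markov-Logic-style diagnosis (weighted rules combined in a log-linear score, not the full MLN machinery or the authors' engine; rules and weights are invented for illustration): the probability of a diagnosis grows with the total weight of satisfied rules, and missing symptoms merely leave rules unfired, which is how partial information still yields an answer.

```python
import math

# Weighted diagnostic rules: (set of required symptoms, diagnosis, weight).
# Weights are illustrative; in a real MLN they would be learned from data.
rules = [
    ({"fever", "cough"}, "flu", 1.5),
    ({"fever", "rash"}, "measles", 2.0),
    ({"cough"}, "flu", 0.5),
]

def diagnose(observed, diagnosis, prior_weight=-1.0):
    """Log-linear score: sigmoid of the summed weights of fired rules.
    Missing symptoms leave rules unfired, so partial information still
    yields a (less confident) probability."""
    score = prior_weight
    for symptoms, dx, w in rules:
        if dx == diagnosis and symptoms <= observed:
            score += w
    return 1.0 / (1.0 + math.exp(-score))

print(f"P(flu | fever, cough) ~ {diagnose({'fever', 'cough'}, 'flu'):.2f}")
print(f"P(flu | cough only)   ~ {diagnose({'cough'}, 'flu'):.2f}")
```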
Role of Speaker Cues in Attention Inference
Directory of Open Access Journals (Sweden)
Jin Joo Lee
2017-10-01
Full Text Available Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners' social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old) storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.
Efficient Bayesian inference under the structured coalescent.
Vaughan, Timothy G; Kühnert, Denise; Popinga, Alex; Welch, David; Drummond, Alexei J
2014-08-15
Population structure significantly affects evolutionary dynamics. Such structure may be due to spatial segregation, but may also reflect any other gene-flow-limiting aspect of a model. In combination with the structured coalescent, this fact can be used to inform phylogenetic tree reconstruction, as well as to infer parameters such as migration rates and subpopulation sizes from annotated sequence data. However, conducting Bayesian inference under the structured coalescent is impeded by the difficulty of constructing Markov Chain Monte Carlo (MCMC) sampling algorithms (samplers) capable of efficiently exploring the state space. In this article, we present a new MCMC sampler capable of sampling from posterior distributions over structured trees: timed phylogenetic trees in which lineages are associated with the distinct subpopulation in which they lie. The sampler includes a set of MCMC proposal functions that offer significant mixing improvements over a previously published method. Furthermore, its implementation as a BEAST 2 package ensures maximum flexibility with respect to model and prior specification. We demonstrate the usefulness of this new sampler by using it to infer migration rates and effective population sizes of H3N2 influenza between New Zealand, New York and Hong Kong from publicly available hemagglutinin (HA) gene sequences under the structured coalescent. The sampler has been implemented as a publicly available BEAST 2 package that is distributed under version 3 of the GNU General Public License at http://compevol.github.io/MultiTypeTree. © The Author 2014. Published by Oxford University Press.
Elements of Causal Inference: Foundations and Learning Algorithms
DEFF Research Database (Denmark)
Peters, Jonas Martin; Janzing, Dominik; Schölkopf, Bernhard
A concise and self-contained introduction to causal inference, increasingly important in data science and machine learning.
Remnants of strong tidal interactions
International Nuclear Information System (INIS)
Mcglynn, T.A.
1990-01-01
This paper examines the properties of stellar systems that have recently undergone a strong tidal shock, i.e., a shock which removes a significant fraction of the particles in the system, and where the shocked system has a much smaller mass than the producer of the tidal field. N-body calculations of King models shocked in a variety of ways are performed, and the consequences of the shocks are investigated. The results confirm the prediction of Jaffe for shocked systems. Several models are also run where the tidal forces on the system are constant, simulating a circular orbit around a primary, and the development of tidal radii under these static conditions appears to be a mild process which does not dramatically affect material that is not stripped. The tidal radii are about twice as large as classical formulas would predict. Remnant density profiles are compared with a sample of elliptical galaxies, and the implications of the results for the development of stellar populations and galaxies are considered. 38 refs
2006-01-01
Our friend and colleague John Strong was cruelly taken from us by a brain tumour on 31 July, a few days before his 65th birthday. John started his career and obtained his PhD in a group from Westfield College, initially working on experiments at Rutherford Appleton Laboratory (RAL). From the early 1970s onwards, however, his research was focused on experiments in CERN, with several particularly notable contributions. The Omega spectrometer adopted a system John had originally developed for experiments at RAL using vidicon cameras (a type of television camera) to record the sparks in the spark chambers. This highly automated system allowed Omega to be used in a similar way to bubble chambers. He contributed to the success of NA1 and NA7, where he became heavily involved in the electronic trigger systems. In these experiments the Westfield group joined forces with Italian colleagues to measure the form factors of the pion and the kaon, and the lifetime of some of the newly discovered charm particles. Such h...
Strong seismic ground motion propagation
International Nuclear Information System (INIS)
Seale, S.; Archuleta, R.; Pecker, A.; Bouchon, M.; Mohammadioun, G.; Murphy, A.; Mohammadioun, B.
1988-10-01
At the McGee Creek, California, site, 3-component strong-motion accelerometers are located at depths of 166 m, 35 m and 0 m. The surface material is glacial moraine, to a depth of 30.5 m, overlying hornfels. Accelerations were recorded from two California earthquakes: Round Valley, ML 5.8, November 23, 1984, 18:08 UTC, and Chalfant Valley, ML 6.4, July 21, 1986, 14:42 UTC. By separating out the SH components of acceleration, we were able to determine the orientations of the downhole instruments. By separating out the SV component of acceleration, we were able to determine the approximate angle of incidence of the signal at 166 m. A constant-phase-velocity Haskell-Thomson model was applied to generate synthetic SH seismograms at the surface using the accelerations recorded at 166 m. In the frequency band 0.0-10.0 Hz, we compared the filtered synthetic records to the filtered surface data. The onset of the SH pulse is clearly seen, as are the reflections from the interface at 30.5 m. The synthetic record closely matches the data in amplitude and phase. The fit between the synthetic accelerogram and the data shows that the seismic amplification at the surface is a result of the contrast of the impedances (shear stiffnesses) of the near-surface materials
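The single-layer case of such a propagation model is compact enough to sketch: for vertically incident SH waves, the surface amplification of a soft layer over bedrock is controlled by the layer travel time and the impedance contrast. The material values below are assumptions for illustration, not the measured McGee Creek properties.

```python
import numpy as np

# One soft layer (thickness h, shear velocity vs1, density rho1) over a
# half-space (vs2, rho2). Haskell-Thomson SH transfer function at vertical
# incidence, no damping: |H(f)| = 1 / sqrt(cos^2(kh) + zeta^2 sin^2(kh)),
# where kh = 2 pi f h / vs1 and zeta = (rho1 vs1) / (rho2 vs2).
h, vs1, rho1 = 30.5, 400.0, 1800.0   # assumed moraine layer (m, m/s, kg/m^3)
vs2, rho2 = 2500.0, 2600.0           # assumed hornfels half-space

zeta = (rho1 * vs1) / (rho2 * vs2)   # impedance ratio (soil/rock)
f = np.linspace(0.1, 10.0, 500)
kh = 2 * np.pi * f * h / vs1
H = 1.0 / np.sqrt(np.cos(kh) ** 2 + zeta**2 * np.sin(kh) ** 2)

f0 = vs1 / (4 * h)                   # fundamental resonance ~ vs1 / 4h
print(f"fundamental frequency ~ {f0:.2f} Hz, peak amplification ~ {H.max():.1f}")
```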
Bootstrapping phylogenies inferred from rearrangement data
Directory of Open Access Journals (Sweden)
Lin Yu
2012-08-01
Full Text Available Abstract. Background: Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. Results: We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Conclusions: Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its
Bootstrapping phylogenies inferred from rearrangement data.
Lin, Yu; Rajan, Vaibhav; Moret, Bernard M.E.
2012-08-29
Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale and its receiver
Statistical Inference at Work: Statistical Process Control as an Example
Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia
2008-01-01
To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
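To ground the comparison, a minimal sketch of the SPC side: an individuals control chart flags observations outside three-sigma limits computed from the process's stable history, which is the workplace inference that the process has drifted. The data and the choice of limits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# A stable process (assumed mean 10, sd 0.5) with a late drift upward.
x = np.concatenate([rng.normal(10.0, 0.5, 40), rng.normal(11.5, 0.5, 10)])

# Control limits come from the process's stable history, not from a
# hypothesis test: centre line +/- 3 sigma (estimated on the first 40 points).
centre, sigma = x[:40].mean(), x[:40].std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

out = np.where((x > ucl) | (x < lcl))[0]
print(f"limits = ({lcl:.2f}, {ucl:.2f}); out-of-control samples: {out}")
```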
Type Inference for Session Types in the Pi-Calculus
DEFF Research Database (Denmark)
Graversen, Eva Fajstrup; Harbo, Jacob Buchreitz; Huttel, Hans
2014-01-01
In this paper we present a direct algorithm for session type inference for the π-calculus. Type inference for session types has previously been achieved by either imposing limitations and restrictions on the π-calculus, or by reducing the type inference problem to that for linear types. Our approach...
Reasoning about Informal Statistical Inference: One Statistician's View
Rossman, Allan J.
2008-01-01
This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…
Strongly interacting photons and atoms
International Nuclear Information System (INIS)
Alge, W.
1999-05-01
This thesis contains the main results of the research topics I have pursued during my PhD studies at the University of Innsbruck, partly in collaboration with the Institut d'Optique in Orsay, France. It is divided into three parts. The first and largest part discusses the possibility of using strong standing waves as a tool to cool and trap neutral atoms in optical cavities. This is very important in the field of nonlinear optics, where several successful experiments with cold atoms in cavities have been performed recently. A discussion of the optical parametric oscillator, in a regime where the nonlinearity dominates the evolution, is the topic of the second part. We investigated mainly the statistical properties of the cavity output of the three interacting cavity modes. Very recently a system has been proposed which promises fantastic properties: it should exhibit a giant Kerr nonlinearity with negligible absorption, thus leading to a photonic turnstile device based on cold atoms in a cavity. We have shown that this model suffers from overly simplistic assumptions and have developed several more comprehensive approaches to study the behavior of this system. Apart from the division into three parts of different content, the thesis is divided into publications, supplements and invisible stuff. The intention of the supplements is to reach researchers who work in related areas and provide them with more detailed information about the concepts and the numerical tools we used. They are written especially for diploma and PhD students, to give them a chance to use the third part of our work, which is actually the largest one. It consists of a large number of computer programs we wrote to investigate the behavior of the systems in parameter regions where no hope exists of solving the equations analytically. (author)
Topics in strong Langmuir turbulence
International Nuclear Information System (INIS)
Skoric, M.M.
1981-01-01
This thesis discusses certain aspects of the turbulence of a fully ionised non-isothermal plasma dominated by the Langmuir mode. Some of the basic properties of strongly turbulent plasmas are reviewed. In particular, interest is focused on the state of Langmuir turbulence, that is, the turbulence of a simple externally unmagnetized plasma. The problem of the existence and dynamics of Langmuir collapse is discussed, often encountered as a non-linear stage of the modulational instability in the framework of the Zakharov equations (i.e. simple time-averaged dynamical equations). Possible macroscopic consequences of such dynamical turbulent models are investigated. In order to study highly non-linear collapse dynamics in its advanced stage, a set of generalized Zakharov equations is derived. Going beyond the original approximation, the author includes the effects of higher electron non-linearities and a breakdown of slow-timescale quasi-neutrality, and investigates how these corrections may influence the collapse stabilisation. Recently, it has been realised that the modulational instability in a Langmuir plasma will be accompanied by the collisionless generation of a slow-timescale magnetic field. Accordingly, a novel physical situation has emerged, which is investigated in detail. The stability of monochromatic Langmuir waves in a self-magnetized Langmuir plasma is discussed, and the existence of a novel magneto-modulational instability is shown. The wave collapse dynamics is investigated and a physical interpretation of the basic results is given. Finally, the transient analysis of the interaction of time-dependent electromagnetic pulses with linear cold-plasma media is investigated. (Auth.)
Promoting Strong Written Communication Skills
Narayanan, M.
2015-12-01
Improvement in the quality of technical writing is still needed in the classroom because universities face challenging problems not only on the technological front but also on the socio-economic front. The universities are actively responding to the changes taking place in the global consumer marketplace. Obviously, there are numerous benefits of promoting strong written communication skills. They can be summarized in the following six categories. First, and perhaps most important, the university achieves learner satisfaction: the learner has documented verbally that the necessary knowledge has been successfully acquired. This results in learner loyalty that in turn will attract more qualified learners. Second, quality communication lowers the cost per pupil, consequently resulting in increased productivity backed by a stronger economic structure and forecast. Third, quality communications help to improve the cash flow and cash reserves of the university. Fourth, high-quality communication enables the university to justify the need for the high costs of tuition and fees. Fifth, better quality in written communication skills results in attracting top-quality learners. This will lead to happier, more satisfied learners, not to mention greater prosperity for the university as a whole. Sixth, quality written communication skills result in reduced complaints, meaning fewer hours spent answering them or correcting problems. The university faculty and staff are thus able to devote more time to scholarly activities, meaningful research and productive community service. References: Boyer, Ernest L. (1990). Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching. Hawkins, P., & Winter, J. (1997). Mastering change: Learning the lessons of the enterprise. London: Department for Education and Employment. Buzzel, Robert D., and Bradley T. Gale. (1987
Malle, Bertram F; Holbrook, Jess
2012-04-01
People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences one at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether four major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences - from intentionality and desire to belief to personality - that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.
Improved functional overview of protein complexes using inferred epistatic relationships
LENUS (Irish Health Repository)
Ryan, Colm
2011-05-23
Background: Epistatic Miniarray Profiling (E-MAP) quantifies the net effect on growth rate of disrupting pairs of genes, often producing phenotypes that may be more (negative epistasis) or less (positive epistasis) severe than the phenotype predicted based on single gene disruptions. Epistatic interactions are important for understanding cell biology because they define relationships between individual genes, and between sets of genes involved in biochemical pathways and protein complexes. Each E-MAP screen quantifies the interactions between a logically selected subset of genes (e.g. genes whose products share a common function). Interactions that occur between genes involved in different cellular processes are not as frequently measured, yet these interactions are important for providing an overview of cellular organization. Results: We introduce a method for combining overlapping E-MAP screens and inferring new interactions between them. We use this method to infer with high confidence 2,240 new strongly epistatic interactions and 34,469 weakly epistatic or neutral interactions. We show that accuracy of the predicted interactions approaches that of replicate experiments and that, like measured interactions, they are enriched for features such as shared biochemical pathways and knockout phenotypes. We constructed an expanded epistasis map for yeast cell protein complexes and show that our new interactions increase the evidence for previously proposed inter-complex connections, and predict many new links. We validated a number of these in the laboratory, including new interactions linking the SWR-C chromatin modifying complex and the nuclear transport apparatus. Conclusion: Overall, our data support a modular model of yeast cell protein network organization and show how prediction methods can considerably extend the information that can be extracted from overlapping E-MAP screens.
Nonparametric inference of network structure and dynamics
Peixoto, Tiago P.
The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high-dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion, that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among
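As a concrete entry point, methods of this kind are implemented in the graph-tool library (my choice of tool; the abstract names no software). A minimal sketch of fitting a hierarchical stochastic block model:

```python
# Nonparametric Bayesian inference of network structure with graph-tool:
# fit a hierarchical stochastic block model by minimizing description length.
import graph_tool.all as gt

g = gt.collection.data["football"]           # example network shipped with graph-tool
state = gt.minimize_nested_blockmodel_dl(g)  # MDL-based fit of a nested SBM
state.print_summary()                        # inferred hierarchy of group structure

# The description length (negative log joint probability) supports model
# comparison between competing structural hypotheses:
print(state.entropy())
```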
Gravity as a Strong Prior: Implications for Perception and Action
Directory of Open Access Journals (Sweden)
Joan López-Moliner
2017-04-01
In the future, humans are likely to be exposed to environments with altered gravity conditions, be it only visually (Virtual and Augmented Reality) or visually and bodily (space travel). As visually and bodily perceived gravity, as well as an interiorized representation of earth gravity, are involved in a series of tasks, such as catching, grasping, body orientation estimation and spatial inferences, humans will need to adapt to these new gravity conditions. Performance under earth-discrepant gravity conditions has been shown to be relatively poor, and the few studies conducted on gravity adaptation are rather discouraging. Especially in VR on earth, conflicts between bodily and visual gravity cues seem to make a full adaptation to visually perceived earth-discrepant gravities nearly impossible, and even in space, when visual and bodily cues are congruent, adaptation is extremely slow. We invoke a Bayesian framework for gravity-related perceptual processes, in which earth gravity holds the status of a so-called "strong prior". Like other strong priors, the gravity prior has developed through years and years of experience in an earth-gravity environment. For this reason, the reliability of this representation is extremely high and it overrules any sensory information to the contrary. While other factors, such as the multisensory nature of gravity perception, also need to be taken into account, we present the strong-prior account as a unifying explanation for empirical results in gravity perception and adaptation to earth-discrepant gravities.
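The "strong prior" idea has a compact quantitative reading in standard Gaussian cue combination; this is a schematic of my own, not a formula from the paper, with the usual notation for prior mean/variance and a sensory cue:

```latex
% Precision-weighted cue combination: the percept \hat{g} weights the prior
% (\mu_p, \sigma_p^2) and a sensory estimate (x_s, \sigma_s^2) by their
% reliabilities. A "strong prior" is the limit \sigma_p^2 \to 0, in which
% the prior effectively overrules discrepant sensory evidence.
\hat{g} = \frac{\sigma_p^{-2}\,\mu_p + \sigma_s^{-2}\,x_s}{\sigma_p^{-2} + \sigma_s^{-2}},
\qquad
\sigma_{\hat{g}}^2 = \left(\sigma_p^{-2} + \sigma_s^{-2}\right)^{-1}
```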
Lower complexity bounds for lifted inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2015-01-01
One of the big challenges in the development of probabilistic relational (or probabilistic logical) modeling and learning frameworks is the design of inference techniques that operate on the level of the abstract model representation language, rather than on the level of ground, propositional...... probabilistic relational models. Artificial Intelligence 117, 297–308). However, it is not immediate that these results also apply to the type of modeling languages that currently receive the most attention, i.e., weighted, quantifier-free formulas. In this paper we extend these earlier results, and show...
Robust Inference with Multi-way Clustering
A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller
2009-01-01
In this paper we propose a variance estimator for the OLS estimator as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our...
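In the two-way case, the estimator the abstract describes combines one-way sandwich estimators by inclusion-exclusion; schematically (notation mine):

```latex
% Two-way cluster-robust variance: cluster on dimension G, on dimension H,
% and subtract the estimator clustered on the intersection G \cap H so that
% observations sharing both cluster dimensions are not double-counted.
\widehat{V} \;=\; \widehat{V}_{G} \;+\; \widehat{V}_{H} \;-\; \widehat{V}_{G \cap H}
```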
Bayesian inference and the parametric bootstrap
Efron, Bradley
2013-01-01
The parametric bootstrap can be used for the efficient computation of Bayes posterior distributions. Importance sampling formulas take on an easy form relating to the deviance in exponential families, and are particularly simple starting from Jeffreys invariant prior. Because of the i.i.d. nature of bootstrap sampling, familiar formulas describe the computational accuracy of the Bayes estimates. Besides computational methods, the theory provides a connection between Bayesian and frequentist analysis. Efficient algorithms for the frequentist accuracy of Bayesian inferences are developed and demonstrated in a model selection example. PMID:23843930
Ignorability in Statistical and Probabilistic Inference
DEFF Research Database (Denmark)
Jaeger, Manfred
2005-01-01
When dealing with incomplete data in statistical learning, or incomplete observations in probabilistic inference, one needs to distinguish the fact that a certain event is observed from the fact that the observed event has happened. Since the modeling and computational complexities entailed...... the question of when the MAR/CAR assumption is warranted. First we provide a "static" analysis that characterizes the admissibility of the CAR assumption in terms of the support structure of the joint probability distribution of complete data and incomplete observations. Here we obtain an equivalence
Approximate Inference and Deep Generative Models
CERN. Geneva
2018-01-01
Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.
Abductive Inference using Array-Based Logic
DEFF Research Database (Denmark)
Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert L.
The notion of abduction has found its usage within a wide variety of AI fields. Computing abductive solutions has, however, proven to be highly intractable in logic programming. To avoid this intractability we present a new approach to logic-based abduction: through the geometrical view of data employed in array-based logic we embrace abduction in a simple structural operation. We argue that a theory of abduction of this form allows for an implementation which, at runtime, can perform abductive inference quite efficiently on arbitrary rules of logic representing knowledge of finite domains.
Facility Activity Inference Using Radiation Networks
Energy Technology Data Exchange (ETDEWEB)
Rao, Nageswara S. [ORNL; Ramirez Aviles, Camila A. [ORNL
2017-11-01
We consider the problem of inferring the operational status of a reactor facility using measurements from a radiation sensor network deployed around the facility's ventilation off-gas stack. The intensity of stack emissions decays with distance, and the sensor counts or measurements are inherently random with parameters determined by the intensity at the sensor's location. We utilize the measurements to estimate the intensity at the stack, and use it in a one-sided Sequential Probability Ratio Test (SPRT) to infer the on/off status of the reactor. We demonstrate the superior performance of this method over conventional majority fusers and individual sensors using (i) test measurements from a network of 21 NaI detectors, and (ii) effluence measurements collected at the stack of a reactor facility. We also analytically establish the superior detection performance of the network over individual sensors with fixed and adaptive thresholds by utilizing the Poisson distribution of the counts. We quantify the performance improvements of the network detection over individual sensors using the packing number of the intensity space.
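A minimal sketch of an SPRT on Poisson counts of the kind the abstract describes; the rates, error targets, and function names are illustrative assumptions, not values from the paper:

```python
# Wald's sequential probability ratio test for Poisson-distributed counts:
# H0 (reactor off, background rate lam0) vs H1 (reactor on, rate lam1).
import math

def sprt_poisson(counts, lam0=1.0, lam1=3.0, alpha=0.01, beta=0.01):
    """Return 'on', 'off', or 'continue' after processing the count sequence."""
    upper = math.log((1 - beta) / alpha)      # crossing it accepts H1 (on)
    lower = math.log(beta / (1 - alpha))      # crossing it accepts H0 (off)
    llr = 0.0
    for k in counts:
        # log-likelihood ratio increment for one Poisson observation k
        llr += k * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "on"
        if llr <= lower:
            return "off"
    return "continue"

print(sprt_poisson([2, 4, 3, 5, 4]))
```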
Inferring network topology from complex dynamics
International Nuclear Information System (INIS)
Shandilya, Srinivas Gorur; Timme, Marc
2011-01-01
Inferring the network topology from dynamical observations is a fundamental problem pervading research on complex systems. Here, we present a simple, direct method for inferring the structural connection topology of a network, given an observation of one collective dynamical trajectory. The general theoretical framework is applicable to arbitrary network dynamical systems described by ordinary differential equations. No interference (external driving) is required and the type of dynamics is hardly restricted in any way. In particular, the observed dynamics may be arbitrarily complex; stationary, invariant or transient; synchronous or asynchronous and chaotic or periodic. Presupposing knowledge of the functional form of the dynamical units and of the coupling functions between them, we present an analytical solution to the inverse problem of finding the network topology from observing a time series of state variables only. Robust reconstruction is achieved in any sufficiently long generic observation of the system. We extend our method to simultaneously reconstructing both the entire network topology and all parameters appearing linearly in the system's equations of motion. Reconstruction of network topology and system parameters is viable even in the presence of external noise that distorts the original dynamics substantially. The method provides a conceptually new step towards reconstructing a variety of real-world networks, including gene and protein interaction networks and neuronal circuits.
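As a toy illustration of this inverse problem (my construction, not the paper's algorithm): if the functional forms are known and the coupling enters linearly, the coupling matrix follows from ordinary least squares on observed states and derivatives:

```python
# Recover the coupling matrix J of dx_i/dt = -x_i + sum_j J_ij tanh(x_j)
# (an assumed, known functional form) from trajectory data by least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 10
J_true = rng.normal(0, 0.3, (n, n))
np.fill_diagonal(J_true, 0)

def f(x):                                     # known dynamics, unknown J in practice
    return -x + np.tanh(x) @ J_true.T

# Collect states and derivatives along short transients from random starts
# (in practice, derivatives would be estimated by finite differences).
X, dX = [], []
for _ in range(50):
    x = rng.normal(size=n)
    for _ in range(40):
        X.append(x.copy()); dX.append(f(x))
        x = x + 0.01 * f(x)                   # Euler step
X, dX = np.array(X), np.array(dX)

# dx/dt + x = tanh(x) @ J.T  =>  solve for J by linear least squares
B, *_ = np.linalg.lstsq(np.tanh(X), dX + X, rcond=None)
print(np.abs(B.T - J_true).max())             # small reconstruction error
```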
PREMER: a Tool to Infer Biological Networks.
Villaverde, Alejandro F; Becker, Kolja; Banga, Julio R
2017-10-04
Inferring the structure of unknown cellular networks is a main challenge in computational biology. Data-driven approaches based on information theory can determine the existence of interactions among network nodes automatically. However, the elucidation of certain features - such as distinguishing between direct and indirect interactions or determining the direction of a causal link - requires estimating information-theoretic quantities in a multidimensional space. This can be a computationally demanding task, which acts as a bottleneck for the application of elaborate algorithms to large-scale network inference problems. The computational cost of such calculations can be alleviated by the use of compiled programs and parallelization. To this end we have developed PREMER (Parallel Reverse Engineering with Mutual information & Entropy Reduction), a software toolbox that can run in parallel and sequential environments. It uses information theoretic criteria to recover network topology and determine the strength and causality of interactions, and allows incorporating prior knowledge, imputing missing data, and correcting outliers. PREMER is a free, open source software tool that does not require any commercial software. Its core algorithms are programmed in FORTRAN 90 and implement OpenMP directives. It has user interfaces in Python and MATLAB/Octave, and runs on Windows, Linux and OSX (https://sites.google.com/site/premertoolbox/).
Automated adaptive inference of phenomenological dynamical models
Daniels, Bryan
Understanding the dynamics of biochemical systems can seem impossibly complicated at the microscopic level: detailed properties of every molecular species, including those that have not yet been discovered, could be important for producing macroscopic behavior. The profusion of data in this area has raised the hope that microscopic dynamics might be recovered in an automated search over possible models, yet the combinatorial growth of this space has limited these techniques to systems that contain only a few interacting species. We take a different approach inspired by coarse-grained, phenomenological models in physics. Akin to a Taylor series producing Hooke's Law, forgoing microscopic accuracy allows us to constrain the search over dynamical models to a single dimension. This makes it feasible to infer dynamics with very limited data, including cases in which important dynamical variables are unobserved. We name our method Sir Isaac after its ability to infer the dynamical structure of the law of gravitation given simulated planetary motion data. Applying the method to output from a microscopically complicated but macroscopically simple biological signaling model, it is able to adapt the level of detail to the amount of available data. Finally, using nematode behavioral time series data, the method discovers an effective switch between behavioral attractors after the application of a painful stimulus.
Inference with the Median of a Prior
Directory of Open Access Journals (Sweden)
Ali Mohammad-Djafari
2006-06-01
We consider the problem of inference on one of the two parameters of a probability distribution when we have some prior information on a nuisance parameter. When a prior probability distribution on this nuisance parameter is given, the marginal distribution is the classical tool to account for it. If the prior distribution is not given, but we have partial knowledge such as a fixed number of moments, we can use the maximum entropy principle to assign a prior law and thus go back to the previous case. In this work, we consider the case where we only know the median of the prior and propose a new tool for this case. This new inference tool looks like a marginal distribution. It is obtained by first remarking that the marginal distribution can be considered as the mean value of the original distribution with respect to the prior probability law of the nuisance parameter, and then, by using the median in place of the mean.
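Schematically, following the abstract's own description (notation mine, not the paper's):

```latex
% The classical marginal averages the likelihood over the nuisance parameter
% \theta_2 under its prior \pi; the proposed tool replaces that mean with a
% median over the same prior.
m(x \mid \theta_1)
  = \mathbb{E}_{\theta_2 \sim \pi}\!\left[ p(x \mid \theta_1, \theta_2) \right]
\quad\longrightarrow\quad
\tilde{m}(x \mid \theta_1)
  = \operatorname{med}_{\theta_2 \sim \pi}\!\left[ p(x \mid \theta_1, \theta_2) \right]
```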
Graphical models for inferring single molecule dynamics
Directory of Open Access Journals (Sweden)
Gonzalez Ruben L
2010-10-01
Background: The recent explosion of experimental techniques in single molecule biophysics has generated a variety of novel time series data requiring equally novel computational tools for analysis and inference. This article describes in general terms how graphical modeling may be used to learn from biophysical time series data using the variational Bayesian expectation maximization algorithm (VBEM). The discussion is illustrated by the example of single-molecule fluorescence resonance energy transfer (smFRET) versus time data, where the smFRET time series is modeled as a hidden Markov model (HMM) with Gaussian observables. A detailed description of smFRET is provided as well. Results: The VBEM algorithm returns the model's evidence and an approximating posterior parameter distribution given the data. The former provides a metric for model selection via maximum evidence (ME), and the latter a description of the model's parameters learned from the data. ME/VBEM provide several advantages over the more commonly used approach of maximum likelihood (ML) optimized by the expectation maximization (EM) algorithm, the most important being a natural form of model selection and a well-posed (non-divergent) optimization problem. Conclusions: The results demonstrate the utility of graphical modeling for inference of dynamic processes in single molecule biophysics.
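A minimal sketch of the HMM-with-Gaussian-observables setup the abstract describes. The hmmlearn package is an assumed choice (not named in the paper) and fits by maximum-likelihood EM rather than VBEM, so this illustrates only the model structure, not the variational treatment:

```python
# Two-state Gaussian HMM fit to a fake smFRET-like trace.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# Synthetic trace: low- and high-efficiency levels plus measurement noise
states = (rng.random(500) < 0.5).astype(int)
trace = np.where(states, 0.8, 0.2) + rng.normal(0, 0.05, 500)

model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
model.fit(trace.reshape(-1, 1))               # learn means, variances, transitions
hidden = model.predict(trace.reshape(-1, 1))  # Viterbi-decoded state sequence
print(model.means_.ravel())                   # should recover roughly 0.2 and 0.8
```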
Accelerating Bayesian inference for evolutionary biology models.
Meyer, Xavier; Chopard, Bastien; Salamin, Nicolas
2017-03-01
Bayesian inference is widely used nowadays and relies largely on Markov chain Monte Carlo (MCMC) methods. Evolutionary biology has greatly benefited from the developments of MCMC methods, but the design of more complex and realistic models and the ever-growing availability of novel data is pushing the limits of the current use of these methods. We present a parallel Metropolis-Hastings (M-H) framework built with a novel combination of enhancements aimed towards parameter-rich and complex models. We show, on a parameter-rich macroevolutionary model, sampling-speed increases of up to 35 times with 32 processors when compared to a sequential M-H process. More importantly, our framework achieves up to a twentyfold faster convergence to estimate the posterior probability of phylogenetic trees using 32 processors when compared to the well-known software MrBayes for Bayesian inference of phylogenetic trees. https://bitbucket.org/XavMeyer/hogan. nicolas.salamin@unil.ch. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
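For reference, here is a minimal sequential Metropolis-Hastings kernel, the baseline process the paper parallelizes; the target density and proposal scale are illustrative choices, not taken from the paper:

```python
# Random-walk Metropolis-Hastings: propose a local move, accept with
# probability min(1, posterior ratio), otherwise stay put.
import math, random

def metropolis_hastings(log_post, x0, n_iter=10000, step=0.5, seed=0):
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0, step)               # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0)  # N(0,1) target
print(sum(draws) / len(draws))                               # near 0
```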
Information Theory, Inference and Learning Algorithms
Mackay, David J. C.
2003-10-01
Information theory and inference, often taught separately, are here united in one entertaining textbook. These topics lie at the heart of many exciting areas of contemporary science and engineering - communication, signal processing, data mining, machine learning, pattern recognition, computational neuroscience, bioinformatics, and cryptography. This textbook introduces theory in tandem with applications. Information theory is taught alongside practical communication systems, such as arithmetic coding for data compression and sparse-graph codes for error-correction. A toolbox of inference techniques, including message-passing algorithms, Monte Carlo methods, and variational approximations, is developed alongside applications of these tools to clustering, convolutional codes, independent component analysis, and neural networks. The final part of the book describes the state of the art in error-correcting codes, including low-density parity-check codes, turbo codes, and digital fountain codes - the twenty-first century standards for satellite communications, disk drives, and data broadcast. Richly illustrated, filled with worked examples and over 400 exercises, some with detailed solutions, David MacKay's groundbreaking book is ideal for self-learning and for undergraduate or graduate courses. Interludes on crosswords, evolution, and sex provide entertainment along the way. In sum, this is a textbook on information, communication, and coding for a new generation of students, and an unparalleled entry point into these subjects for professionals in areas as diverse as computational biology, financial engineering, and machine learning.
Causal inference, probability theory, and graphical insights.
Baker, Stuart G
2013-11-10
Causal inference from observational studies is a fundamental topic in biostatistics. The causal graph literature typically views probability theory as insufficient to express causal concepts in observational studies. In contrast, the view here is that probability theory is a desirable and sufficient basis for many topics in causal inference for the following two reasons. First, probability theory is generally more flexible than causal graphs: besides explaining such causal graph topics as M-bias (adjusting for a collider) and bias amplification and attenuation (when adjusting for an instrumental variable), probability theory is also the foundation of the paired availability design for historical controls, which does not fit into a causal graph framework. Second, probability theory is the basis for insightful graphical displays including the BK-Plot for understanding Simpson's paradox with a binary confounder, the BK2-Plot for understanding bias amplification and attenuation in the presence of an unobserved binary confounder, and the PAD-Plot for understanding the principal stratification component of the paired availability design. Published 2013. This article is a US Government work and is in the public domain in the USA.
Causal Inference in the Perception of Verticality.
de Winkel, Ksander N; Katliar, Mikhail; Diers, Daniel; Bülthoff, Heinrich H
2018-04-03
The perceptual upright is thought to be constructed by the central nervous system (CNS) as a vector sum, combining estimates of the upright provided by the visual system and the body's inertial sensors with prior knowledge that upright is usually above the head. Recent findings furthermore show that the weighting of the respective sensory signals is proportional to their reliability, consistent with a Bayesian interpretation of a vector sum (Forced Fusion, FF). However, violations of FF have also been reported, suggesting that the CNS may rely on a single sensory system (Cue Capture, CC), or choose to process sensory signals based on inferred signal causality (Causal Inference, CI). We developed a novel alternative-reality system to manipulate visual and physical tilt independently. We tasked participants (n = 36) to indicate the perceived upright for various (in-)congruent combinations of visual-inertial stimuli, and compared models based on their agreement with the data. The results favor the CI model over FF, although this effect became unambiguous only for large discrepancies (±60°). We conclude that the notion of a vector sum does not provide a comprehensive explanation of the perception of the upright, and that CI offers a better alternative.
Directory of Open Access Journals (Sweden)
Yinyin Yuan
Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power, since the number of variables far exceeds the number of samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
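The partial-correlation backbone of such a method can be sketched in a few lines; this is my illustration of the standard construction, and DPC's directed, interruption-based Granger step is not reproduced here:

```python
# Partial correlations between all gene pairs, conditioning on the rest:
# obtained from the inverse covariance (precision) matrix.
import numpy as np

def partial_correlations(data):
    """data: array of shape (samples, genes)."""
    precision = np.linalg.pinv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)   # rho_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(pcor, 1.0)
    return pcor

expr = np.random.default_rng(2).normal(size=(200, 5))  # toy expression matrix
print(partial_correlations(expr).round(2))
```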
Saros, Jasmine E.; Stone, Jeffery R.; Pederson, Gregory T.; Slemmons, Krista; Spanbauer, Trisha; Schliep, Anna; Cahl, Douglas; Williamson, Craig E.; Engstrom, Daniel R.
2015-01-01
Over the 20th century, surface water temperatures have increased in many lake ecosystems around the world, but long-term trends in the vertical thermal structure of lakes remain unclear, despite the strong control that thermal stratification exerts on the biological response of lakes to climate change. Here we used both neo- and paleoecological approaches to develop a fossil-based inference model for lake mixing depths and thereby refine understanding of lake thermal structure change. We focused on three common planktonic diatom taxa, the distributions of which previous research suggests might be affected by mixing depth. Comparative lake surveys and growth rate experiments revealed that these species respond to lake thermal structure when nitrogen is sufficient, with species optima ranging from shallower to deeper mixing depths. The diatom-based mixing depth model was applied to sedimentary diatom profiles extending back to 1750 AD in two lakes with moderate nitrate concentrations but differing climate settings. Thermal reconstructions were consistent with expected changes, with shallower mixing depths inferred for an alpine lake where treeline has advanced, and deeper mixing depths inferred for a boreal lake where wind strength has increased. The inference model developed here provides a new tool to expand and refine understanding of climate-induced changes in lake ecosystems.
Inference generation and story comprehension among children with ADHD.
Van Neste, Jessica; Hayden, Angela; Lorch, Elizabeth P; Milich, Richard
2015-02-01
Academic difficulties are well-documented among children with ADHD. Exploring these difficulties through story comprehension research has revealed deficits among children with ADHD in making causal connections between events and in using causal structure and thematic importance to guide recall of stories. Important to theories of story comprehension and implied in these deficits is the ability to make inferences. Often, characters' goals are implicit and explanations of events must be inferred. The purpose of the present study was to compare the inferences generated during story comprehension by 23 7- to 11-year-old children with ADHD (16 males) and 35 comparison peers (19 males). Children watched two televised stories, each paused at five points. In the experimental condition, at each pause children told what they were thinking about the story, whereas in the control condition no responses were made during pauses. After viewing, children recalled the story. Several types of inferences and inference plausibility were coded. Children with ADHD generated fewer of the most essential inferences, plausible explanatory inferences, than did comparison children, both during story processing and during story recall. The groups did not differ on production of other types of inferences. Group differences in generating inferences during the think-aloud task significantly mediated group differences in patterns of recall. Both groups recalled more of the most important story information after completing the think-aloud task. Generating fewer explanatory inferences has important implications for story comprehension deficits in children with ADHD.
Children's inference generation: The role of vocabulary and working memory.
Currie, Nicola Kate; Cain, Kate
2015-09-01
Inferences are crucial to successful discourse comprehension. We assessed the contributions of vocabulary and working memory to inference making in children aged 5 and 6 years (n=44), 7 and 8 years (n=43), and 9 and 10 years (n=43). Children listened to short narratives and answered questions to assess local and global coherence inferences after each one. Analysis of variance (ANOVA) confirmed developmental improvements on both types of inference. Although standardized measures of both vocabulary and working memory were correlated with inference making, multiple regression analyses determined that vocabulary was the key predictor. For local coherence inferences, only vocabulary predicted unique variance for the 6- and 8-year-olds; in contrast, none of the variables predicted performance for the 10-year-olds. For global coherence inferences, vocabulary was the only unique predictor for each age group. Mediation analysis confirmed that although working memory was associated with the ability to generate local and global coherence inferences in 6- to 10-year-olds, the effect was mediated by vocabulary. We conclude that vocabulary knowledge supports inference making in two ways: through knowledge of word meanings required to generate inferences and through its contribution to memory processes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Palamara, Gian Marco; Childs, Dylan Z; Clements, Christopher F; Petchey, Owen L; Plebani, Marco; Smith, Matthew J
2014-12-01
Understanding and quantifying the temperature dependence of population parameters, such as intrinsic growth rate and carrying capacity, is critical for predicting the ecological responses to environmental change. Many studies provide empirical estimates of such temperature dependencies, but a thorough investigation of the methods used to infer them has not been performed yet. We created artificial population time series using a stochastic logistic model parameterized with the Arrhenius equation, so that activation energy drives the temperature dependence of population parameters. We simulated different experimental designs and used different inference methods, varying the likelihood functions and other aspects of the parameter estimation methods. Finally, we applied the best performing inference methods to real data for the species Paramecium caudatum. The relative error of the estimates of activation energy varied between 5% and 30%. The fraction of habitat sampled played the most important role in determining the relative error; sampling at least 1% of the habitat kept it below 50%. We found that methods that simultaneously use all time series data (direct methods) and methods that estimate population parameters separately for each temperature (indirect methods) are complementary. Indirect methods provide a clearer insight into the shape of the functional form describing the temperature dependence of population parameters; direct methods enable a more accurate estimation of the parameters of such functional forms. Using both methods, we found that growth rate and carrying capacity of Paramecium caudatum scale with temperature according to different activation energies. Our study shows how careful choice of experimental design and inference methods can increase the accuracy of the inferred relationships between temperature and population parameters. The comparison of estimation methods provided here can increase the accuracy of model predictions, with important
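The Arrhenius scaling that drives the temperature dependence in the simulated model can be written as follows (a standard form; the symbols are the conventional ones, not taken from the paper):

```latex
% A population parameter such as the intrinsic growth rate r (or carrying
% capacity K) scales with absolute temperature T through an activation
% energy E, where k is the Boltzmann constant and r_0 a normalization.
r(T) = r_0 \, e^{-E / (k T)}
```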
Human brain lesion-deficit inference remapped.
Mah, Yee-Haur; Husain, Masud; Rees, Geraint; Nachev, Parashkev
2014-09-01
Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury-the commonest aetiology in lesion-deficit studies-where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant
Species tree inference by minimizing deep coalescences.
Than, Cuong; Nakhleh, Luay
2009-09-01
In a 1997 seminal paper, W. Maddison proposed minimizing deep coalescences, or MDC, as an optimization criterion for inferring the species tree from a set of incongruent gene trees, assuming the incongruence is exclusively due to lineage sorting. In a subsequent paper, Maddison and Knowles provided and implemented a search heuristic for optimizing the MDC criterion, given a set of gene trees. However, the heuristic is not guaranteed to compute optimal solutions, and its hill-climbing search makes it slow in practice. In this paper, we provide two exact solutions to the problem of inferring the species tree from a set of gene trees under the MDC criterion. In other words, our solutions are guaranteed to find the tree that minimizes the total number of deep coalescences from a set of gene trees. One solution is based on a novel integer linear programming (ILP) formulation, and another is based on a simple dynamic programming (DP) approach. Powerful ILP solvers, such as CPLEX, make the first solution appealing, particularly for very large-scale instances of the problem, whereas the DP-based solution eliminates dependence on proprietary tools, and its simplicity makes it easy to integrate with other genomic events that may cause gene tree incongruence. Using the exact solutions, we analyze a data set of 106 loci from eight yeast species, a data set of 268 loci from eight Apicomplexan species, and several simulated data sets. We show that the MDC criterion provides very accurate estimates of the species tree topologies, and that our solutions are very fast, thus allowing for the accurate analysis of genome-scale data sets. Further, the efficiency of the solutions allow for quick exploration of sub-optimal solutions, which is important for a parsimony-based criterion such as MDC, as we show. We show that searching for the species tree in the compatibility graph of the clusters induced by the gene trees may be sufficient in practice, a finding that helps ameliorate the
Bayesian inference data evaluation and decisions
Harney, Hanns Ludwig
2016-01-01
This new edition offers a comprehensive introduction to the analysis of data using Bayes rule. It generalizes Gaussian error intervals to situations in which the data follow distributions other than Gaussian. This is particularly useful when the observed parameter is barely above the background or the histogram of multiparametric data contains many empty bins, so that the determination of the validity of a theory cannot be based on the chi-squared criterion. In addition to the solutions of practical problems, this approach provides an epistemic insight: the logic of quantum mechanics is obtained as the logic of unbiased inference from counting data. New sections feature factorizing parameters, commuting parameters, observables in quantum mechanics, the art of fitting with coherent and with incoherent alternatives and fitting with multinomial distribution. Additional problems and examples help deepen the knowledge. Requiring no knowledge of quantum mechanics, the book is written at an introductory level, with man...
Automatic inference of indexing rules for MEDLINE
Directory of Open Access Journals (Sweden)
Shooshan Sonya E
2008-11-01
Background: Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. Methods: In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Results: Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. Conclusion: We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
Progression inference for somatic mutations in cancer
Directory of Open Access Journals (Sweden)
Leif E. Peterson
2017-04-01
Computational methods were employed to determine progression inference of genomic alterations in commonly occurring cancers. Using cross-sectional TCGA data, we computed evolutionary trajectories involving selectivity relationships among pairs of gene-specific genomic alterations such as somatic mutations, deletions, amplifications, downregulation, and upregulation among the top 20 driver genes associated with each cancer. Results indicate that the majority of hierarchies involved TP53, PIK3CA, ERBB2, APC, KRAS, EGFR, IDH1, VHL, etc. Research into the order and accumulation of genomic alterations among cancer driver genes will only increase as the costs of next-generation sequencing subside and personalized/precision medicine incorporates whole-genome scans into the diagnosis and treatment of cancer. Keywords: Oncology, Cancer research, Genetics, Computational biology
Inferring human intentions from the brain data
DEFF Research Database (Denmark)
Stanek, Konrad
The human brain is a massively complex organ composed of approximately a hundred billion densely interconnected, interacting neural cells. The neurons are not wired randomly - instead, they are organized in local functional assemblies. It is believed that the complex patterns of dynamic electric discharges across the neural tissue are responsible for the emergence of high cognitive function, conscious perception and voluntary action. The brain's capacity to exercise free will, or internally generated free choice, has long been investigated by philosophers, psychologists and neuroscientists. Rather than assuming a causal power of conscious will, the neuroscience of volition is based on the premise that "mental states rest on brain processes", and hence by measuring spatial and temporal correlates of volition in carefully controlled experiments we can infer about their underlying mind processes, including...
Cancer Evolution: Mathematical Models and Computational Inference
Beerenwinkel, Niko; Schwarz, Roland F.; Gerstung, Moritz; Markowetz, Florian
2015-01-01
Cancer is a somatic evolutionary process characterized by the accumulation of mutations, which contribute to tumor growth, clinical progression, immune escape, and drug resistance development. Evolutionary theory can be used to analyze the dynamics of tumor cell populations and to make inference about the evolutionary history of a tumor from molecular data. We review recent approaches to modeling the evolution of cancer, including population dynamics models of tumor initiation and progression, phylogenetic methods to model the evolutionary relationship between tumor subclones, and probabilistic graphical models to describe dependencies among mutations. Evolutionary modeling helps to understand how tumors arise and will also play an increasingly important prognostic role in predicting disease progression and the outcome of medical interventions, such as targeted therapy. PMID:25293804
Supplier Selection Using Fuzzy Inference System
Directory of Open Access Journals (Sweden)
hamidreza kadhodazadeh
2014-01-01
Suppliers are one of the most vital parts of the supply chain, and their operation has a significant indirect effect on customer satisfaction. Since customers' expectations of an organization differ, organizations must weigh different criteria accordingly. Much research in this field in recent years has used different criteria and methods. The purpose of this study is to propose an approach for choosing a supplier for a food manufacturing company, considering cost, quality, service, type of relationship, and organizational-structure criteria. To evaluate suppliers against these criteria, a fuzzy inference system has been used. The input data of this system comprise each supplier's score on each criterion, obtained by the AHP approach, and the output is the final score of each supplier. Finally, a supplier has been selected that, although not the best in price and quality, achieved a good score on all of the criteria.
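A minimal sketch of a Mamdani-style fuzzy inference system for supplier scoring, using the scikit-fuzzy package (an assumed tool choice; the paper does not name its software). The universes, membership functions, and rules are illustrative; in the study's setup the inputs would be AHP-derived criterion scores:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Fuzzy input variables (AHP scores per criterion) and the output score
quality = ctrl.Antecedent(np.arange(0, 11, 1), "quality")
cost = ctrl.Antecedent(np.arange(0, 11, 1), "cost")
score = ctrl.Consequent(np.arange(0, 101, 1), "score")

quality.automf(3)                      # generates 'poor', 'average', 'good'
cost.automf(3)
score["low"] = fuzz.trimf(score.universe, [0, 0, 50])
score["high"] = fuzz.trimf(score.universe, [50, 100, 100])

rules = [
    ctrl.Rule(quality["good"] & cost["good"], score["high"]),
    ctrl.Rule(quality["poor"] | cost["poor"], score["low"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["quality"], sim.input["cost"] = 8.5, 7.0   # hypothetical AHP scores
sim.compute()
print(sim.output["score"])             # defuzzified supplier score
```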
Field dynamics inference via spectral density estimation
Frank, Philipp; Steininger, Theo; Enßlin, Torsten A.
2017-11-01
Stochastic differential equations are of utmost importance in various scientific and industrial areas. They are the natural description of dynamical processes whose precise equations of motion are either not known or too expensive to solve, e.g., when modeling Brownian motion. In some cases, the equations governing the dynamics of a physical system on macroscopic scales happen to be unknown, since they typically cannot be deduced from general principles. In this work, we describe how the underlying laws of a stochastic process can be approximated by the spectral density of the corresponding process. Furthermore, we show how the density can be inferred from possibly very noisy and incomplete measurements of the dynamical field. Generally, inverse problems like these can be tackled with the help of Information Field Theory. For now, we restrict ourselves to linear and autonomous processes. To demonstrate its applicability, we employ our reconstruction algorithm on a time series and on spatiotemporal processes.
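As a toy illustration of the core idea only (my construction; the paper's Information Field Theory machinery is not reproduced), the spectral density of a damped, noise-driven process can be estimated from a single realization with a standard averaged periodogram:

```python
# Estimate the spectral density of a discretized Ornstein-Uhlenbeck-like
# AR(1) process; its Lorentzian spectrum encodes the damping rate gamma.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)
n, gamma, dt = 20000, 0.1, 1.0
x = np.zeros(n)
for t in range(n - 1):                       # Euler-Maruyama update
    x[t + 1] = (1 - gamma * dt) * x[t] + rng.normal(0, np.sqrt(dt))

freqs, psd = welch(x, fs=1.0 / dt, nperseg=1024)  # Welch periodogram average
print(freqs[np.argmax(psd)])                 # power concentrates at low frequency
```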
BOOTSTRAP-BASED INFERENCE FOR GROUPED DATA
Directory of Open Access Journals (Sweden)
Jorge Iván Vélez
2015-07-01
Grouped data refers to continuous variables that are partitioned into intervals, not necessarily of the same length, to facilitate their interpretation. Unlike with ungrouped data, estimating simple summary statistics such as the mean and mode, or more complex ones such as a percentile or the coefficient of variation, is a difficult endeavour with grouped data. When the probability distribution generating the data is unknown, inference on ungrouped data is carried out using parametric or nonparametric resampling methods. However, there are no equivalent methods for the case of grouped data. Here, a bootstrap-based procedure to estimate the parameters of an unknown distribution based on grouped data is proposed, described and illustrated.
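One plausible implementation of such a procedure (my sketch; the paper's exact resampling scheme may differ): resample an interval with probability proportional to its frequency, draw a value uniformly within it, and collect the bootstrap distribution of any statistic:

```python
# Bootstrap for grouped data: interval frequencies drive the resampling,
# and a uniform draw within the chosen interval "ungroups" each observation.
import random

def grouped_bootstrap(edges, counts, stat, n_boot=2000, seed=0):
    """edges: interval boundaries (len = len(counts)+1); counts: frequencies."""
    rng = random.Random(seed)
    n = sum(counts)
    stats = []
    for _ in range(n_boot):
        sample = []
        for _ in range(n):
            i = rng.choices(range(len(counts)), weights=counts)[0]
            sample.append(rng.uniform(edges[i], edges[i + 1]))
        stats.append(stat(sample))
    return sorted(stats)

boots = grouped_bootstrap([0, 10, 20, 40], [5, 12, 3],
                          stat=lambda s: sum(s) / len(s))
print(boots[50], boots[1949])   # approximate 95% percentile interval for the mean
```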
Inferring Past Environments from Ancient Epigenomes.
Gokhman, David; Malul, Anat; Carmel, Liran
2017-10-01
Analyzing the conditions in which past individuals lived is key to understanding the environments and cultural transitions to which humans had to adapt. Here, we suggest a methodology to probe into past environments, using reconstructed premortem DNA methylation maps of ancient individuals. We review a large body of research showing that differential DNA methylation is associated with changes in various external and internal factors, and propose that loci whose DNA methylation level is environmentally responsive could serve as markers for inferring ancient daily life, diseases, nutrition, exposure to toxins, and more. We demonstrate this approach by showing that hunger-related DNA methylation changes are found in ancient hunter-gatherers. The strategy we present here opens a window to reconstruct previously inaccessible aspects of the lives of past individuals. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Automatic inference of indexing rules for MEDLINE.
Névéol, Aurélie; Shooshan, Sonya E; Claveau, Vincent
2008-11-19
Indexing is a crucial step in any information retrieval system. In MEDLINE, a widely used database of the biomedical literature, the indexing process involves the selection of Medical Subject Headings in order to describe the subject matter of articles. The need for automatic tools to assist MEDLINE indexers in this task is growing with the increasing number of publications being added to MEDLINE. In this paper, we describe the use and the customization of Inductive Logic Programming (ILP) to infer indexing rules that may be used to produce automatic indexing recommendations for MEDLINE indexers. Our results show that this original ILP-based approach outperforms manual rules when they exist. In addition, the use of ILP rules also improves the overall performance of the Medical Text Indexer (MTI), a system producing automatic indexing recommendations for MEDLINE. We expect the sets of ILP rules obtained in this experiment to be integrated into MTI.
MISTIC: mutual information server to infer coevolution
DEFF Research Database (Denmark)
Simonetti, Franco L.; Teppa, Elin; Chernomoretz, Ariel
2013-01-01
MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within an MSA (multiple sequence alignment) and a complete analysis tool for mutual information networks in protein families. The server outputs a graphical visualization of several information-related quantities using a circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface...... containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner unique to any other publicly available web server. In particular, the use......
Active inference and the anatomy of oculomotion.
Parr, Thomas; Friston, Karl J
2018-03-01
Given that eye movement control can be framed as an inferential process, how are the requisite forces generated to produce anticipated or desired fixation? Starting from a generative model based on simple Newtonian equations of motion, we derive a variational solution to this problem and illustrate the plausibility of its implementation in the oculomotor brainstem. We show, through simulation, that the Bayesian filtering equations that implement 'planning as inference' can generate both saccadic and smooth pursuit eye movements. Crucially, the associated message passing maps well onto the known connectivity and neuroanatomy of the brainstem - and the changes in these messages over time are strikingly similar to single unit recordings of neurons in the corresponding nuclei. Furthermore, we show that simulated lesions to axonal pathways reproduce eye movement patterns of neurological patients with damage to these tracts. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.
Inferring Phylogenetic Networks from Gene Order Data
Directory of Open Access Journals (Sweden)
Alexey Anatolievich Morozov
2013-01-01
Full Text Available Existing algorithms allow us to infer phylogenetic networks from sequences (DNA, protein or binary), sets of trees, and distance matrices, but there are no methods to build them using the gene order data as an input. Here we describe several methods to build split networks from the gene order data, perform simulation studies, and use our methods for analyzing and interpreting different real gene order datasets. All proposed methods are based on intermediate data, which can be generated from genome structures under study and used as an input for network construction algorithms. Three intermediates are used: set of jackknife trees, distance matrix, and binary encoding. According to simulations and case studies, the best intermediates are jackknife trees and distance matrix (when used with the Neighbor-Net algorithm). Binary encoding can also be useful, but only when the methods mentioned above cannot be used.
Population inference from contemporary American craniometrics.
Algee-Hewitt, Bridget F B
2016-08-01
This analysis delivers a composite picture of population structure, admixture, ancestry variation, and personal identity in the United States, as observed through the lens of forensic anthropological casework and modern skeletal collections. It tests the applicability of the probabilistic clustering methods commonly used in human population genetics for the analysis of continuous, cranial measurement data, to improve population inference for admixed individuals without prior knowledge of sample origins. The unsupervised model-based clustering methods of finite mixture analysis are used here to reveal latent population structure and generate admixture proportions for craniofacial measurements from the Forensic Anthropology Data Bank (FDB). Craniometric estimates of ancestry are also generated under a three-contributor model, sourcing parental reference populations from the Howells Craniometric Dataset. Tests of association are made among the coefficients of cluster memberships and the demographic information documented for each individual in the FDB. Clustering results are contextualized within the framework of conventional approaches to population structure analysis and individual ancestry estimation to discuss method compatibility. The findings reported here for contemporary American craniometrics are in agreement with the expected patterns of intergroup relationships, geographic origins and results from published genetic analyses. Population inference methods that allow for the model-bound estimation of admixture and ancestry proportions from craniometric data not only enable parallel skeletal and genetic analyses but they are also shown to be more informative than those methods that perform hard classifications using externally-imposed categories or seek to explain gross variation by low-dimensional projections. Am J Phys Anthropol 160:604-624, 2016. © 2016 Wiley Periodicals, Inc.
Network geometry inference using common neighbors
Papadopoulos, Fragkiskos; Aldecoa, Rodrigo; Krioukov, Dmitri
2015-08-01
We introduce and explore a method for inferring hidden geometric coordinates of nodes in complex networks based on the number of common neighbors between the nodes. We compare this approach to the HyperMap method, which is based only on the connections (and disconnections) between the nodes, i.e., on the links that the nodes have (or do not have). We find that for high-degree nodes, the common-neighbors approach yields a more accurate inference than the link-based method, unless heuristic periodic adjustments (or "correction steps") are used in the latter. The common-neighbors approach is computationally intensive, requiring O(t⁴) running time to map a network of t nodes, versus O(t³) in the link-based method. But we also develop a hybrid method with O(t³) running time, which combines the common-neighbors and link-based approaches, and we explore a heuristic that reduces its running time further to O(t²), without significant reduction in the mapping accuracy. We apply this method to the autonomous systems (ASs) Internet, and we reveal how soft communities of ASs evolve over time in the similarity space. We further demonstrate the method's predictive power by forecasting future links between ASs. Taken altogether, our results advance our understanding of how to efficiently and accurately map real networks to their latent geometric spaces, which is an important necessary step toward understanding the laws that govern the dynamics of nodes in these spaces, and the fine-grained dynamics of network connections.
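The counting step at the heart of the common-neighbors approach is simple to reproduce. Below is a minimal Python sketch (ours, not the authors' implementation), assuming an unweighted, undirected graph stored as a dense 0/1 adjacency matrix; the matrix product A @ A gives the number of common neighbors for every node pair, the statistic from which the latent coordinates are then inferred:

    import numpy as np

    def common_neighbor_counts(A):
        # For a symmetric 0/1 adjacency matrix A, (A @ A)[i, j] is the
        # number of neighbors that nodes i and j share.
        A = np.asarray(A)
        counts = A @ A
        np.fill_diagonal(counts, 0)  # diagonal holds degrees, not needed here
        return counts

    # Toy graph: path 0-1-2-3-4 plus the chord 0-2.
    A = np.zeros((5, 5), dtype=int)
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2)]:
        A[i, j] = A[j, i] = 1

    print(common_neighbor_counts(A))  # e.g., nodes 0 and 2 share neighbor 1

The counting itself costs O(t³) with dense matrices; the extra factor in the method's quoted O(t⁴) arises in the inference stage built on top of these counts.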
Atoms and clusters in strong laser fields
Marchenko, T.
2008-01-01
This thesis describes experimental and theoretical studies on the interaction of strong infrared laser fields with atoms and atomic clusters. Part I provides an overview of the main strong-field phenomena in atoms, molecules and clusters and describes the state-of-the-art in strong-field science.
Strong Bisimilarity of Simple Process Algebras
DEFF Research Database (Denmark)
Srba, Jirí
2003-01-01
We study bisimilarity and regularity problems of simple process algebras. In particular, we show PSPACE-hardness of the following problems: (i) strong bisimilarity of Basic Parallel Processes (BPP), (ii) strong bisimilarity of Basic Process Algebra (BPA), (iii) strong regularity of BPP, and (iv) ...
78 FR 15710 - Strong Sensitizer Guidance
2013-03-12
... definition of ``strong sensitizer'' found at 16 CFR 1500.3(c)(5). The Commission is proposing to revise the supplemental definition of ``strong sensitizer'' due to advancements in the science of sensitization that have... document is intended to clarify the ``strong sensitizer'' definition, assist manufacturers in understanding...
[Inferences and verbal comprehension in children with developmental language disorders].
Monfort, Isabelle; Monfort, Marc
2013-02-22
We review the concept of inference in language comprehension, both oral and written, recalling the different proposals of classification. We analyze the type of difficulties that children might encounter in applying inferences, depending on the type of language or developmental pathology. Finally, we describe the proposals for intervention that have been made to enhance the ability to apply inferences in language comprehension.
VINE: A Variational Inference -Based Bayesian Neural Network Engine
2018-01-01
A Variational Inference (VI)-based engine that performs inference and learning, both statically and on-the-fly, under uncertain or incomplete input and output features. The engine can not only perform inference but can also be retrained on-the-fly from incoming data. The accompanying Python implementation is a parameterized implementation of the EASI algorithm. Subject terms: machine learning.
Probabilistic logic networks a comprehensive framework for uncertain inference
Goertzel, Ben; Goertzel, Izabela Freire; Heljakka, Ari
2008-01-01
This comprehensive book describes Probabilistic Logic Networks (PLN), a novel conceptual, mathematical and computational approach to uncertain inference. A broad range of reasoning types is considered.
Parametric statistical inference basic theory and modern approaches
Zacks, Shelemyahu; Tsokos, C P
1981-01-01
Parametric Statistical Inference: Basic Theory and Modern Approaches presents the developments and modern trends in statistical inference to students who do not have advanced mathematical and statistical preparation. The topics discussed in the book are basic and common to many fields of statistical inference and thus serve as a jumping board for in-depth study. The book is organized into eight chapters. Chapter 1 provides an overview of how the theory of statistical inference is presented in subsequent chapters. Chapter 2 briefly discusses statistical distributions and their properties.
Multi-Modal Inference in Animacy Perception for Artificial Object
Directory of Open Access Journals (Sweden)
Kohske Takahashi
2011-10-01
Full Text Available Sometimes we feel animacy for artificial objects and their motion. Animals usually interact with environments through multiple sensory modalities. Here we investigated how the sensory responsiveness of artificial objects to the environment would contribute to animacy judgment for them. In a 90-s trial, observers freely viewed four objects moving in a virtual 3D space. The objects, whose position and motion were determined following Perlin-noise series, kept drifting independently in the space. Visual flashes, auditory bursts, or synchronous flashes and bursts appeared with 1–2 s intervals. The first object abruptly accelerated its motion just after visual flashes, giving an impression of responding to the flash. The second object responded to bursts. The third object responded to synchronous flashes and bursts. The fourth object accelerated at a random timing independent of flashes and bursts. The observers rated how strongly they felt animacy for each object. The results showed that the object responding to the auditory bursts was rated as having weaker animacy compared to the other objects. This implies that the sensory modality through which an object interacts with the environment may be a factor for animacy perception in the object and may serve as the basis of multi-modal and cross-modal inference of animacy.
The origins of probabilistic inference in human infants.
Denison, Stephanie; Xu, Fei
2014-03-01
Reasoning under uncertainty is the bread and butter of everyday life. Many areas of psychology, from cognitive, developmental, social, to clinical, are interested in how individuals make inferences and decisions with incomplete information. The ability to reason under uncertainty necessarily involves probability computations, be they exact calculations or estimations. What are the developmental origins of probabilistic reasoning? Recent work has begun to examine whether infants and toddlers can compute probabilities; however, previous experiments have confounded quantity and probability: in most cases young human learners could have relied on simple comparisons of absolute quantities, as opposed to proportions, to succeed in these tasks. We present four experiments providing evidence that infants younger than 12 months show sensitivity to probabilities based on proportions. Furthermore, infants use this sensitivity to make predictions and fulfill their own desires, providing the first demonstration that even preverbal learners use probabilistic information to navigate the world. These results provide strong evidence for a rich quantitative and statistical reasoning system in infants. Copyright © 2013 Elsevier B.V. All rights reserved.
Strong interaction effects in high-Z K⁻ atoms
Energy Technology Data Exchange (ETDEWEB)
Batty, C.J.; Eckhause, M.; Gall, K.P.; Guss, P.P.; Hertzog, D.W.; Kane, J.R.; Kunselman, A.R.; Miller, J.P.; O'Brien, F.; Phillips, W.C.; Powers, R.J.; Roberts, B.L.; Sutton, R.B.; Vulcan, W.F.; Welsh, R.E.; Whyley, R.J.; Winter, R.G. (Rutherford-Appleton Laboratory, Chilton, Didcot OX11 0QX, United Kingdom; College of William and Mary, Williamsburg, Virginia 23185; Boston University, Boston, Massachusetts 02215; University of Wyoming, Laramie, Wyoming 82071; California Institute of Technology, Pasadena, California 91125; Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213)
1989-11-01
A systematic experimental study of strong interaction shifts, widths, and yields from high-Z kaonic atoms is reported. Strong interaction effects for the K⁻(8→7) transition were measured in U, Pb, and W, and the K⁻(7→6) transition in W was also observed. This is the first observation of two measurably broadened and shifted kaonic transitions in a single target and thus permitted the width of the upper state to be determined directly, rather than being inferred from yield data. The results are compared with optical-model calculations.
Inferring learning rules from distribution of firing rates in cortical neurons
Lim, Sukbin; McKee, Jillian L.; Woloszyn, Luke; Amit, Yali; Freedman, David J.; Sheinberg, David L.; Brunel, Nicolas
2015-01-01
Information about external stimuli is thought to be stored in cortical circuits through experience-dependent modifications of synaptic connectivity. These modifications of network connectivity should lead to changes in neuronal activity, as a particular stimulus is repeatedly encountered. Here, we ask what plasticity rules are consistent with the differences in the statistics of the visual response to novel and familiar stimuli in inferior temporal cortex, an area underlying visual object recognition. We introduce a method that allows inferring the dependence of the ‘learning rule’ on post-synaptic firing rate, and show that the inferred learning rule exhibits depression for low post-synaptic rates and potentiation for high rates. The threshold separating depression from potentiation is strongly correlated with both mean and standard deviation of the firing rate distribution. Finally, we show that network models implementing a rule extracted from data show stable learning dynamics, and lead to sparser representations of stimuli. PMID:26523643
DEFF Research Database (Denmark)
Bataillon, Thomas; Duan, Jinjie; Hvilsom, Christina
2015-01-01
of recent gene flow from Western into Eastern chimpanzees. The striking contrast in X-linked vs. autosomal polymorphism and divergence previously reported in Central chimpanzees is also found in Eastern and Western chimpanzees. We show that the direction of selection (DoS) statistic exhibits a strong non......-monotonic relationship with the strength of purifying selection S, making it inappropriate for estimating S. We instead use counts in synonymous vs. non-synonymous frequency classes to infer the distribution of S coefficients acting on non-synonymous mutations in each subspecies. The strength of purifying selection we...... infer is congruent with the differences in effective sizes of each subspecies: Central chimpanzees are undergoing the strongest purifying selection followed by Eastern and Western chimpanzees. Coding indels show stronger selection against indels changing the reading frame than observed in human...
Application of strong phosphoric acid to radiochemistry
International Nuclear Information System (INIS)
Terada, Kikuo
1977-01-01
Not only inorganic and organic compounds but also natural substances, such as accumulations in soil, are completely decomposed and distilled by heating with strong phosphoric acid for 30 to 50 minutes. As applications of strong phosphoric acid to radiochemistry, determination of uranium and boron by use of the solubilization effect of this substance, titration of uranyl ion by use of the iron(II) sulfate contained in this substance, application to tracer experiments, and determination of radioactive ruthenium in environmental samples are reviewed. Strong phosphoric acid is also applied to activation analysis, for example, determination of N in pyrographite with the potassium iodate-strong phosphoric acid method, separation of Os and Ru with the cerium(IV) sulfate-strong phosphoric acid method or the potassium dichromate-strong phosphoric acid method, and analysis of Se, As and Sb in rocks and accumulations with the ammonium bromide, sodium chloride and sodium bromide-strong phosphoric acid methods. (Kanao, N.)
A bias-corrected covariance estimate for improved inference with quadratic inference functions.
Westgate, Philip M
2012-12-20
The method of quadratic inference functions (QIF) is an increasingly popular method for the analysis of correlated data because of its multiple advantages over generalized estimating equations (GEE). One advantage is that it is more efficient for parameter estimation when the working covariance structure for the data is misspecified. In the QIF literature, the asymptotic covariance formula is used to obtain standard errors. We show that in small to moderately sized samples, these standard error estimates can be severely biased downward, therefore inflating test size and decreasing coverage probability. We propose adjustments to the asymptotic covariance formula that eliminate finite-sample biases and, as shown via simulation, lead to substantial improvements in standard error estimates, inference, and coverage. The proposed method is illustrated in application to a cluster randomized trial and a longitudinal study. Furthermore, QIF and GEE are contrasted via simulation and these applications. Copyright © 2012 John Wiley & Sons, Ltd.
DCS Survey Submission for Platte County, MO
Federal Emergency Management Agency, Department of Homeland Security — Survey data includes spatial datasets and data tables necessary to digitally represent data collected in the survey phase of the study. (Source: FEMA Guidelines and...
Serang, Oliver
2014-01-01
Exact Bayesian inference can sometimes be performed efficiently for special cases where a function has commutative and associative symmetry of its inputs (called "causal independence"). For this reason, it is desirable to exploit such symmetry on big data sets. Here we present a method to exploit a general form of this symmetry on probabilistic adder nodes by transforming those probabilistic adder nodes into a probabilistic convolution tree with which dynamic programming computes exact probabilities. A substantial speedup is demonstrated using an illustrative example that can arise when identifying splice forms with bottom-up mass spectrometry-based proteomics. On this example, even state-of-the-art exact inference algorithms require a runtime more than exponential in the number of splice forms considered. By using the probabilistic convolution tree, we sharply reduce both the runtime and the space required, as functions of the number of variables joined by an additive or cardinal operator. This approach, which can also be used with junction tree inference, is applicable to graphs with arbitrary dependency on counting variables or cardinalities and can be used on diverse problems and fields like forward error correcting codes, elemental decomposition, and spectral demixing. The approach also trivially generalizes to multiple dimensions. PMID:24626234
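The core trick is easy to state in code. The sketch below (our minimal illustration, not the paper's implementation) computes the exact PMF of a sum of independent discrete variables by convolving PMFs pairwise up a balanced tree, which is the structure that lets dynamic programming handle probabilistic adder nodes:

    import numpy as np

    def convolution_tree(pmfs):
        # Exact PMF of a sum of independent discrete variables: convolve
        # PMFs in pairs, halving the number of nodes at each tree level.
        layer = [np.asarray(p, dtype=float) for p in pmfs]
        while len(layer) > 1:
            nxt = [np.convolve(layer[k], layer[k + 1])
                   for k in range(0, len(layer) - 1, 2)]
            if len(layer) % 2:          # odd node is promoted unchanged
                nxt.append(layer[-1])
            layer = nxt
        return layer[0]

    # Sum of three biased coins: distribution over {0, 1, 2, 3}.
    print(convolution_tree([[0.7, 0.3], [0.5, 0.5], [0.9, 0.1]]))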
Making inference from wildlife collision data: inferring predator absence from prey strikes.
Caley, Peter; Hosack, Geoffrey R; Barry, Simon C
2017-01-01
Wildlife collision data are ubiquitous, though challenging for making ecological inference due to typically irreducible uncertainty relating to the sampling process. We illustrate a new approach that is useful for generating inference from predator data arising from wildlife collisions. By simply conditioning on a second prey species sampled via the same collision process, and by using a biologically realistic numerical response function, we can produce a coherent numerical response relationship between predator and prey. This relationship can then be used to make inference on the population size of the predator species, including the probability of extinction. The statistical conditioning enables us to account for unmeasured variation in factors influencing the runway strike incidence for individual airports and to enable valid comparisons. A practical application of the approach for testing hypotheses about the distribution and abundance of a predator species is illustrated using the hypothesized red fox incursion into Tasmania, Australia. We estimate that conditional on the numerical response between fox and lagomorph runway strikes on mainland Australia, the predictive probability of observing no runway strikes of foxes in Tasmania after observing 15 lagomorph strikes is 0.001. We conclude there is enough evidence to safely reject the null hypothesis that there is a widespread red fox population in Tasmania at a population density consistent with prey availability. The method is novel and has potential wider application.
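A deliberately simplified sketch of the predictive calculation is given below. Every number and distributional form in it is a hypothetical stand-in (a Gamma prior on the fox-to-lagomorph strike ratio feeding a Poisson count), not the authors' fitted numerical response model; it only illustrates how conditioning on prey strikes yields a predictive probability of observing zero predator strikes:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical uncertainty in the fox strikes expected per lagomorph
    # strike, as if calibrated on mainland airport data.
    ratio = rng.gamma(shape=2.0, scale=0.25, size=100_000)

    lagomorph_strikes = 15
    # Fox strikes ~ Poisson(ratio * 15); averaging P(0 strikes) over the
    # sampled ratios gives the posterior predictive probability of seeing
    # no fox strikes despite 15 lagomorph strikes.
    p_zero = np.exp(-ratio * lagomorph_strikes).mean()
    print(f"P(no fox strikes | 15 lagomorph strikes) ~ {p_zero:.4f}")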
DEFF Research Database (Denmark)
Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.
2015-01-01
through a study of the foot-and-mouth (FMD) disease virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA because the inferred rates in such data sets reflect a rate closer......With the emergence of analytical software for the inference of viral evolution, a number of studies have focused on estimating important parameters such as the substitution rate and the time to the most recent common ancestor (tMRCA) for rapidly evolving viruses. Coupled with an increasing...... to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully...
Nuclear Forensic Inferences Using Iterative Multidimensional Statistics
Energy Technology Data Exchange (ETDEWEB)
Robel, M; Kristo, M J; Heller, M A
2009-06-09
Nuclear forensics involves the analysis of interdicted nuclear material for specific material characteristics (referred to as 'signatures') that imply specific geographical locations, production processes, culprit intentions, etc. Predictive signatures rely on expert knowledge of physics, chemistry, and engineering to develop inferences from these material characteristics. Comparative signatures, on the other hand, rely on comparison of the material characteristics of the interdicted sample (the 'questioned sample' in FBI parlance) with those of a set of known samples. In the ideal case, the set of known samples would be a comprehensive nuclear forensics database, a database which does not currently exist. In fact, our ability to analyze interdicted samples and produce an extensive list of precise material characteristics far exceeds our ability to interpret the results. Therefore, as we seek to develop the extensive databases necessary for nuclear forensics, we must also develop the methods necessary to draw inferences from comparison of our analytical results with these large, multidimensional sets of data. In the work reported here, we used a large, multidimensional dataset of results from quality control analyses of uranium ore concentrate (UOC, sometimes called 'yellowcake'). We have found that traditional multidimensional techniques, such as principal components analysis (PCA), are especially useful for understanding such datasets and drawing relevant conclusions. In particular, we have developed an iterative partial least squares-discriminant analysis (PLS-DA) procedure that has proven especially adept at identifying the production location of unknown UOC samples. By removing classes which fell far outside the initial decision boundary, and then rebuilding the PLS-DA model, we have consistently produced better and more definitive attributions than with a single-pass classification approach.
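A stripped-down sketch of such an iterative PLS-DA loop is shown below, using scikit-learn's PLSRegression on one-hot class indicators. The pruning rule here, keeping the top-scoring half of the candidate classes each round, is a stand-in for the paper's criterion of removing classes far outside the decision boundary:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.preprocessing import label_binarize

    def iterative_plsda(X, y, questioned, n_components=5):
        # Attribute one questioned sample by repeatedly fitting PLS-DA
        # and rebuilding the model on the surviving candidate classes.
        X, y = np.asarray(X, float), np.asarray(y)
        classes = list(np.unique(y))
        while len(classes) > 1:
            mask = np.isin(y, classes)
            Y = label_binarize(y[mask], classes=classes)
            if Y.shape[1] == 1:              # binary case: expand to 2 columns
                Y = np.hstack([1 - Y, Y])
            pls = PLSRegression(n_components=min(n_components, X.shape[1]))
            pls.fit(X[mask], Y)
            scores = pls.predict(np.atleast_2d(questioned)).ravel()
            ranked = [c for _, c in sorted(zip(scores, classes),
                                           key=lambda t: t[0], reverse=True)]
            classes = ranked[:max(1, len(classes) // 2)]  # drop weak classes, refit
        return classes[0]

With class labels such as production facilities and feature columns such as trace-element concentrations, the returned label is the attribution that survives every pruning round.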
International Nuclear Information System (INIS)
Gralla, Megan B.; Gladders, Michael D.; Marrone, Daniel P.; Bayliss, Matthew; Carlstrom, John E.; Greer, Christopher; Hennessy, Ryan; Koester, Benjamin; Leitch, Erik; Sharon, Keren; Barrientos, L. Felipe; Bonamente, Massimiliano; Bulbul, Esra; Hasler, Nicole; Culverhouse, Thomas; Hawkins, David; Lamb, James; Gilbank, David G.; Joy, Marshall; Miller, Amber
2011-01-01
We have measured the Sunyaev-Zel'dovich (SZ) effect for a sample of 10 strong lensing selected galaxy clusters using the Sunyaev-Zel'dovich Array (SZA). The SZA is sensitive to structures on spatial scales of a few arcminutes, while the strong lensing mass modeling constrains the mass at small scales (typically <30''). Combining the two provides information about the projected concentrations of the strong lensing clusters. The Einstein radii we measure are twice as large as expected given the masses inferred from SZ scaling relations. A Monte Carlo simulation indicates that a sample randomly drawn from the expected distribution would have a larger median Einstein radius than the observed clusters about 3% of the time. The implied overconcentration has been noted in previous studies and persists for this sample, even when we take into account that we are selecting large Einstein radius systems, suggesting that the theoretical models still do not fully describe the observed properties of strong lensing clusters.
Stochastic Collapsed Variational Bayesian Inference for Latent Dirichlet Allocation
Foulds, J.; Boyles, L.; DuBois, C.; Smyth, P.; Welling, M.; Dhillon, I.S.; Koren, Y.; Ghani, R.; Senator, T.E.; Bradley, P.; Parekh, R.; He, J.; Grossman, R.L.; Uthurusamy, R.
2013-01-01
There has been an explosion in the amount of digital text information available in recent years, leading to challenges of scale for traditional inference algorithms for topic models. Recent advances in stochastic variational inference algorithms for latent Dirichlet allocation (LDA) have made it ...
A Comparative Analysis of Fuzzy Inference Engines in Context of ...
African Journals Online (AJOL)
The fuzzy inference engine has found successful applications in a wide variety of fields, such as automatic control, data classification, decision analysis, expert systems, time series prediction, robotics, pattern recognition, etc. This paper presents a comparative analysis of three fuzzy inference engines, max-product, max-min ...
A Comparative Analysis of Fuzzy Inference Engines in Context of ...
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
The horizontal coordinate of the "fuzzy centroid" of the area under that function is taken as the output. This method does not combine the effects of all applicable rules but does produce a continuous output function and is easy to implement. The product inference engine and the minimum inference engine are the most ...
Application of adaptive neuro-fuzzy inference system technique in ...
African Journals Online (AJOL)
In this paper, an adaptive neuro-fuzzy inference system (ANFIS) technique is used in the design of a microstrip patch antenna (MPA). This artificial intelligence (AI) technique is used in determining the parameters used in the design of a rectangular microstrip patch antenna. The ANFIS has the advantages of expert knowledge of fuzzy inference system ...
Developing Measures and Predictors of Observation and Inference Abilities
1976-05-01
Key words: observation, multiple measurement, inference. Reported item scores range from ... (Discussion SI group, observation) to twenty-three out of thirty-nine (Table V-3, Film inference - Bob). The data indicate modest improvement in item characteristics.
Strongly correlating liquids and their isomorphs
Pedersen, Ulf R.; Gnan, Nicoletta; Bailey, Nicholas P.; Schröder, Thomas B.; Dyre, Jeppe C.
2010-01-01
This paper summarizes the properties of strongly correlating liquids, i.e., liquids with strong correlations between virial and potential energy equilibrium fluctuations at constant volume. We proceed to focus on the experimental predictions for strongly correlating glass-forming liquids. These predictions include i) density scaling, ii) isochronal superposition, iii) that there is a single function from which all frequency-dependent viscoelastic response functions may be calculated, iv) that...
Atom collisions in a strong electromagnetic field
International Nuclear Information System (INIS)
Smirnov, V.S.; Chaplik, A.V.
1976-01-01
It is shown that the long-range part of interatomic interaction is considerably altered in a strong electromagnetic field. Instead of the van der Waals law the potential asymptote can best be described by a dipole-dipole R⁻³ law. Impact broadening and the line shift in a strong nonresonant field are calculated. The possibility of bound states of two atoms being formed in a strong light field is discussed.
Primate diversification inferred from phylogenies and fossils.
Herrera, James P
2017-12-01
Biodiversity arises from the balance between speciation and extinction. Fossils record the origins and disappearance of organisms, and the branching patterns of molecular phylogenies allow estimation of speciation and extinction rates, but the patterns of diversification are frequently incongruent between these two data sources. I tested two hypotheses about the diversification of primates based on ∼600 fossil species and 90% complete phylogenies of living species: (1) diversification rates increased through time; (2) a significant extinction event occurred in the Oligocene. Consistent with the first hypothesis, analyses of phylogenies supported increasing speciation rates and negligible extinction rates. In contrast, fossils showed that while speciation rates increased, speciation and extinction rates tended to be nearly equal, resulting in zero net diversification. Partially supporting the second hypothesis, the fossil data recorded a clear pattern of diversity decline in the Oligocene, although diversification rates were near zero. The phylogeny supported increased extinction ∼34 Ma, but also elevated extinction ∼10 Ma, coinciding with diversity declines in some fossil clades. The results demonstrated that estimates of speciation and extinction ignoring fossils are insufficient to infer diversification and information on extinct lineages should be incorporated into phylogenetic analyses. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Functional network inference of the suprachiasmatic nucleus
Energy Technology Data Exchange (ETDEWEB)
Abel, John H.; Meeker, Kirsten; Granados-Fuentes, Daniel; St. John, Peter C.; Wang, Thomas J.; Bales, Benjamin B.; Doyle, Francis J.; Herzog, Erik D.; Petzold, Linda R.
2016-04-04
In the mammalian suprachiasmatic nucleus (SCN), noisy cellular oscillators communicate within a neuronal network to generate precise system-wide circadian rhythms. Although the intracellular genetic oscillator and intercellular biochemical coupling mechanisms have been examined previously, the network topology driving synchronization of the SCN has not been elucidated. This network has been particularly challenging to probe, due to its oscillatory components and slow coupling timescale. In this work, we investigated the SCN network at a single-cell resolution through a chemically induced desynchronization. We then inferred functional connections in the SCN by applying the maximal information coefficient statistic to bioluminescence reporter data from individual neurons while they resynchronized their circadian cycling. Our results demonstrate that the functional network of circadian cells associated with resynchronization has small-world characteristics, with a node degree distribution that is exponential. We show that hubs of this small-world network are preferentially located in the central SCN, with sparsely connected shells surrounding these cores. Finally, we used two computational models of circadian neurons to validate our predictions of network structure.
Statistical Inference Based on L-Moments
Directory of Open Access Journals (Sweden)
Tereza Šimková
2017-03-01
Full Text Available To overcome drawbacks of central moments and comoment matrices usually used to characterize univariate and multivariate distributions, respectively, their generalization, termed L-moments, has been proposed. L-moments of all orders are defined for any random variable or vector with finite mean. L-moments have been widely employed in the past 20 years in statistical inference. The aim of the paper is to present a review of the theory of L-moments and to illustrate their application in parameter estimation and hypothesis testing. The problem of estimating the parameters of the three-parameter generalized Pareto distribution (GPD), which is generally used in modelling extreme events, is considered. A small simulation study is performed to show the superiority of the L-moment method in some cases. Because nowadays L-moments are often employed in estimating extreme events by regional approaches, the focus is on the key assumption of index-flood based regional frequency analysis (RFA), that is, homogeneity testing. The benefits of the nonparametric L-moment homogeneity test are demonstrated on extreme meteorological events observed in the Czech Republic.
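For reference, sample L-moments can be computed directly from probability-weighted moments of the ordered sample. The sketch below uses Hosking's unbiased estimators (the Gumbel-distributed test data are arbitrary):

    import numpy as np
    from math import comb

    def sample_l_moments(x):
        # First four L-moments from unbiased probability-weighted moments
        # b_r of the ordered sample (Hosking's estimators).
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        b = [x.mean()] + [
            sum(comb(j, r) * x[j] for j in range(r, n)) / (n * comb(n - 1, r))
            for r in (1, 2, 3)
        ]
        l1 = b[0]
        l2 = 2 * b[1] - b[0]
        l3 = 6 * b[2] - 6 * b[1] + b[0]
        l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
        return l1, l2, l3, l4

    x = np.random.default_rng(1).gumbel(size=500)
    l1, l2, l3, l4 = sample_l_moments(x)
    print(f"L-skewness {l3 / l2:.3f}, L-kurtosis {l4 / l2:.3f}")
    # For a Gumbel distribution the population L-skewness is about 0.17.

The ratios τ3 = λ3/λ2 and τ4 = λ4/λ2 (L-skewness and L-kurtosis) are the site statistics compared in index-flood homogeneity testing.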
Bayesian Inference of a Multivariate Regression Model
Directory of Open Access Journals (Sweden)
Marick S. Sinay
2014-01-01
Full Text Available We explore Bayesian inference of a multivariate linear regression model using a flexible prior for the covariance structure. The commonly adopted Bayesian setup involves the conjugate prior, multivariate normal distribution for the regression coefficients and inverse Wishart specification for the covariance matrix. Here we depart from this approach and propose a novel Bayesian estimator for the covariance. A multivariate normal prior for the unique elements of the matrix logarithm of the covariance matrix is considered. Such structure allows for a richer class of prior distributions for the covariance, with respect to strength of beliefs in prior location hyperparameters, as well as the added ability to model potential correlation among the elements of the covariance structure. The posterior moments of all relevant parameters of interest are calculated based upon numerical results via a Markov chain Monte Carlo procedure. The Metropolis-Hastings-within-Gibbs algorithm is invoked to account for the construction of a proposal density that closely matches the shape of the target posterior distribution. As an application of the proposed technique, we investigate a multiple regression based upon the 1980 High School and Beyond Survey.
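The appeal of the matrix-logarithm prior is that it removes the positive-definiteness constraint from the sampler: any symmetric matrix A maps to a valid covariance Σ = exp(A). A minimal sketch of this parameterization (the prior scale below is an arbitrary choice of ours, not the paper's):

    import numpy as np
    from scipy.linalg import expm, logm

    def unique_to_symmetric(theta, d):
        # Fill a symmetric d x d matrix from its d*(d+1)/2 unique elements.
        A = np.zeros((d, d))
        iu = np.triu_indices(d)
        A[iu] = theta
        A.T[iu] = theta
        return A

    d = 3
    rng = np.random.default_rng(42)
    theta = rng.normal(scale=0.5, size=d * (d + 1) // 2)  # normal prior draw
    Sigma = expm(unique_to_symmetric(theta, d))           # always a valid covariance
    print(np.linalg.eigvalsh(Sigma))                      # strictly positive
    print(np.allclose(logm(Sigma).real, unique_to_symmetric(theta, d)))

A Metropolis-Hastings-within-Gibbs sampler can then propose moves directly on theta, with every proposal automatically yielding a positive-definite covariance.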
Aesthetic quality inference for online fashion shopping
Chen, Ming; Allebach, Jan
2014-03-01
On-line fashion communities in which participants post photos of personal fashion items for viewing and possible purchase by others are becoming increasingly popular. Generally, these photos are taken by individuals who have no training in photography with low-cost mobile phone cameras. It is desired that photos of the products have high aesthetic quality to improve the users' online shopping experience. In this work, we design features for aesthetic quality inference in the context of online fashion shopping. Psychophysical experiments are conducted to construct a database of the photos' aesthetic evaluation, specifically for photos from an online fashion shopping website. We then extract both generic low-level features and high-level image attributes to represent the aesthetic quality. Using a support vector machine framework, we train a predictor of the aesthetic quality rating based on the feature vector. Experimental results validate the efficacy of our approach. Metadata such as the product type are also used to further improve the result.
Inferring gene networks from discrete expression data
Zhang, L.
2013-07-18
The modeling of gene networks from transcriptional expression data is an important tool in biomedical research to reveal signaling pathways and to identify treatment targets. Current gene network modeling is primarily based on the use of Gaussian graphical models applied to continuous data, which give a closed-form marginal likelihood. In this paper, we extend network modeling to discrete data, specifically data from serial analysis of gene expression, and RNA-sequencing experiments, both of which generate counts of mRNA transcripts in cell samples. We propose a generalized linear model to fit the discrete gene expression data and assume that the log ratios of the mean expression levels follow a Gaussian distribution. We restrict the gene network structures to decomposable graphs and derive the graphs by selecting the covariance matrix of the Gaussian distribution with the hyper-inverse Wishart priors. Furthermore, we incorporate prior network models based on gene ontology information, which avails existing biological information on the genes of interest. We conduct simulation studies to examine the performance of our discrete graphical model and apply the method to two real datasets for gene network inference. © The Author 2013. Published by Oxford University Press. All rights reserved.
Probabilistic phylogenetic inference with insertions and deletions.
Directory of Open Access Journals (Sweden)
Elena Rivas
2008-09-01
Full Text Available A fundamental task in sequence analysis is to calculate the probability of a multiple alignment given a phylogenetic tree relating the sequences and an evolutionary model describing how sequences change over time. However, the most widely used phylogenetic models only account for residue substitution events. We describe a probabilistic model of a multiple sequence alignment that accounts for insertion and deletion events in addition to substitutions, given a phylogenetic tree, using a rate matrix augmented by the gap character. Starting from a continuous Markov process, we construct a non-reversible generative (birth-death) evolutionary model for insertions and deletions. The model assumes that insertion and deletion events occur one residue at a time. We apply this model to phylogenetic tree inference by extending the program dnaml in PHYLIP. Using standard benchmarking methods on simulated data and a new "concordance test" benchmark on real ribosomal RNA alignments, we show that the extended program dnamlε improves accuracy relative to the usual approach of ignoring gaps, while retaining the computational efficiency of the Felsenstein peeling algorithm.
Information-Theoretic Inference of Common Ancestors
Directory of Open Access Journals (Sweden)
Bastian Steudel
2015-04-01
Full Text Available A directed acyclic graph (DAG partially represents the conditional independence structure among observations of a system if the local Markov condition holds, that is if every variable is independent of its non-descendants given its parents. In general, there is a whole class of DAGs that represents a given set of conditional independence relations. We are interested in properties of this class that can be derived from observations of a subsystem only. To this end, we prove an information-theoretic inequality that allows for the inference of common ancestors of observed parts in any DAG representing some unknown larger system. More explicitly, we show that a large amount of dependence in terms of mutual information among the observations implies the existence of a common ancestor that distributes this information. Within the causal interpretation of DAGs, our result can be seen as a quantitative extension of Reichenbach’s principle of common cause to more than two variables. Our conclusions are valid also for non-probabilistic observations, such as binary strings, since we state the proof for an axiomatized notion of “mutual information” that includes the stochastic as well as the algorithmic version.
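For two observed variables the result specializes to a quantitative Reichenbach principle: if X and Y are independent given a common ancestor Z, then I(X;Y) ≤ H(Z). The simulation below (our illustration; noise levels and sample size are arbitrary) checks the bound with a plug-in mutual information estimate:

    import numpy as np

    rng = np.random.default_rng(7)

    def entropy(p):
        p = p[p > 0]
        return -(p * np.log2(p)).sum()

    def mutual_info(x, y):
        # Plug-in estimate of I(X;Y) in bits from discrete samples.
        joint = np.zeros((x.max() + 1, y.max() + 1))
        np.add.at(joint, (x, y), 1)
        joint /= joint.sum()
        return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

    n = 100_000
    z = rng.integers(0, 2, n)              # hidden common ancestor, H(Z) = 1 bit
    x = z ^ (rng.random(n) < 0.1)          # noisy copy of z
    y = z ^ (rng.random(n) < 0.1)          # independent noisy copy of z
    print(f"I(X;Y) = {mutual_info(x, y):.3f} bits <= H(Z) = 1 bit")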
Inference-based procedural modeling of solids
Biggers, Keith
2011-11-01
As virtual environments become larger and more complex, there is an increasing need for more automated construction algorithms to support the development process. We present an approach for modeling solids by combining prior examples with a simple sketch. Our algorithm uses an inference-based approach to incrementally fit patches together in a consistent fashion to define the boundary of an object. This algorithm samples and extracts surface patches from input models, and develops a Petri net structure that describes the relationship between patches along an imposed parameterization. Then, given a new parameterized line or curve, we use the Petri net to logically fit patches together in a manner consistent with the input model. This allows us to easily construct objects of varying sizes and configurations using arbitrary articulation, repetition, and interchanging of parts. The result of our process is a solid model representation of the constructed object that can be integrated into a simulation-based environment. © 2011 Elsevier Ltd. All rights reserved.
Logical inference techniques for loop parallelization
Oancea, Cosmin E.
2012-01-01
This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = Ø, where S is a set expression representing array indexes. Using a language instead of an array-abstraction representation for S results in a smaller number of conservative approximations but exhibits a potentially high runtime cost. To alleviate this cost we introduce a language translation F from the USR set-expression language to an equally rich language of predicates (F(S) ⇒ S = Ø). Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates (F(S)) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order of their estimated complexities. We evaluate our automated solution on 26 benchmarks from the PERFECT Club and SPEC suites and show that our approach is effective in parallelizing large, complex loops and obtains much better full program speedups than the Intel and IBM Fortran compilers. Copyright © 2012 ACM.
Virtual reality and consciousness inference in dreaming.
Hobson, J Allan; Hong, Charles C-H; Friston, Karl J
2014-01-01
This article explores the notion that the brain is genetically endowed with an innate virtual reality generator that - through experience-dependent plasticity - becomes a generative or predictive model of the world. This model, which is most clearly revealed in rapid eye movement (REM) sleep dreaming, may provide the theater for conscious experience. Functional neuroimaging evidence for brain activations that are time-locked to rapid eye movements (REMs) endorses the view that waking consciousness emerges from REM sleep - and dreaming lays the foundations for waking perception. In this view, the brain is equipped with a virtual model of the world that generates predictions of its sensations. This model is continually updated and entrained by sensory prediction errors in wakefulness to ensure veridical perception, but not in dreaming. In contrast, dreaming plays an essential role in maintaining and enhancing the capacity to model the world by minimizing model complexity and thereby maximizing both statistical and thermodynamic efficiency. This perspective suggests that consciousness corresponds to the embodied process of inference, realized through the generation of virtual realities (in both sleep and wakefulness). In short, our premise or hypothesis is that the waking brain engages with the world to predict the causes of sensations, while in sleep the brain's generative model is actively refined so that it generates more efficient predictions during waking. We review the evidence in support of this hypothesis - evidence that grounds consciousness in biophysical computations whose neuronal and neurochemical infrastructure has been disclosed by sleep research.
Multiple sequence alignment accuracy and phylogenetic inference.
Ogden, T Heath; Rosenberg, Michael S
2006-04-01
Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.
Phylogenetic inference with weighted codon evolutionary distances.
Criscuolo, Alexis; Michel, Christian J
2009-04-01
We develop a new approach to estimate a matrix of pairwise evolutionary distances from a codon-based alignment based on a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. Then these three distance matrices are weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a unique distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices that have a significantly better treelikeness compared to those obtained by standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to induce a fast evolutionary rate. Simulation results show that fast distance-based tree reconstruction algorithms on distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not better, than those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed and shows that our codon evolutionary distances allow building a phylogenetic tree which is similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and also significantly improved compared to standard nucleotide evolutionary distance estimates.
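The combination step itself is a weighted average of the three per-position distance matrices. A short sketch with synthetic matrices (the weights are placeholders; the paper derives them from estimated global evolutionary rates of the codon positions, and its alternative weighting instead suppresses the noisy third position, e.g. by setting its weight to zero):

    import numpy as np

    def weighted_codon_distance(D1, D2, D3, weights):
        # Average the per-codon-position distance matrices with
        # normalized position weights.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return w[0] * np.asarray(D1) + w[1] * np.asarray(D2) + w[2] * np.asarray(D3)

    # Toy 3-taxon matrices; the third position is saturated and noisy.
    D1 = np.array([[0.0, 0.10, 0.20], [0.10, 0.0, 0.15], [0.20, 0.15, 0.0]])
    D2 = 0.8 * D1
    D3 = 3.0 * D1
    print(weighted_codon_distance(D1, D2, D3, weights=[1.0, 0.8, 0.2]))

The combined matrix can then feed any fast distance-based reconstruction method such as neighbor joining.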
Towards Inferring Protein Interactions: Challenges and Solutions
Directory of Open Access Journals (Sweden)
Ji Xiang
2006-01-01
Full Text Available Discovering interacting proteins has been an essential part of functional genomics. However, existing experimental techniques only uncover a small portion of any interactome. Furthermore, these data often have a very high false-positive rate. By conceptualizing the interactions at the domain level, we provide a more abstract representation of the interactome, which also facilitates the discovery of unobserved protein-protein interactions. Although several domain-based approaches have been proposed to predict protein-protein interactions, they usually assume that domain interactions are independent of each other for the convenience of computational modeling. A new framework to predict protein interactions is proposed in this paper, where no assumption is made about domain interactions. Protein interactions may be the result of multiple domain interactions which are dependent on each other. A conjunctive normal form representation is used to capture the relationships between protein interactions and domain interactions. The problem of interaction inference is then modeled as a constraint satisfiability problem and solved via linear programming. Experimental results on a combined yeast data set have demonstrated the robustness and the accuracy of the proposed algorithm. Moreover, we also map some predicted interacting domains to three-dimensional structures of protein complexes to show the validity of our predictions.
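The inference step can be pictured as a toy linear program: every observed protein interaction contributes one clause of the conjunctive normal form, an OR over the candidate domain pairs it contains, and the LP relaxation seeks the sparsest set of domain interactions satisfying all clauses. The sketch below uses scipy.optimize.linprog with invented incidence data and simplifies away the noise handling of the full framework:

    import numpy as np
    from scipy.optimize import linprog

    # cover[i, j] = 1 if candidate domain pair j occurs in interacting
    # protein pair i; clause i requires at least one of its domain pairs
    # to be active.
    cover = np.array([
        [1, 1, 0, 0],   # protein pair 1 contains domain pairs 1 and 2
        [0, 1, 1, 0],   # protein pair 2 contains domain pairs 2 and 3
        [0, 0, 1, 1],   # protein pair 3 contains domain pairs 3 and 4
    ])
    n = cover.shape[1]
    res = linprog(c=np.ones(n),                            # favor few domain interactions
                  A_ub=-cover, b_ub=-np.ones(len(cover)),  # cover @ x >= 1 per clause
                  bounds=[(0, 1)] * n)
    print(res.x)   # one optimum activates domain pairs 2 and 3 only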
Bayesian inference of radiation belt loss timescales.
Camporeale, E.; Chandorkar, M.
2017-12-01
Electron fluxes in the Earth's radiation belts are routinely studied using the classical quasi-linear radial diffusion model. Although this simplified linear equation has proven to be an indispensable tool in understanding the dynamics of the radiation belt, it requires specification of quantities such as the diffusion coefficient and electron loss timescales that are never directly measured. Researchers have so far assumed a priori parameterisations for radiation belt quantities and derived the best fit using satellite data. The state of the art in this domain lacks a coherent formulation of this problem in a probabilistic framework. We present some recent progress that we have made in performing Bayesian inference of radial diffusion parameters. We achieve this by making extensive use of the theory connecting Gaussian processes and linear partial differential equations, and performing Markov chain Monte Carlo sampling of radial diffusion parameters. These results are important for understanding the role and the propagation of uncertainties in radiation belt simulations and, eventually, for providing a probabilistic forecast of energetic electron fluxes in a space weather context.
Inferring Molecular Processes Heterogeneity from Transcriptional Data.
Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna
2017-01-01
RNA microarrays and RNA-seq are nowadays standard technologies to study the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells, averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviour in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability in specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate the most important networks from transcriptomic data and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs.
Scalable inference for stochastic block models
Peng, Chengbin
2017-12-08
Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference algorithms for such a model are increasingly limited due to their high time complexity and poor scalability. In this paper, we propose a multi-stage maximum likelihood approach to recover the latent parameters of the stochastic block model, in time linear with respect to the number of edges. We also propose a parallel algorithm based on message passing. Our algorithm can overlap communication and computation, providing speedup without compromising accuracy as the number of processors grows. For example, to process a real-world graph with about 1.3 million nodes and 10 million edges, our algorithm requires about 6 seconds on 64 cores of a contemporary commodity Linux cluster. Experiments demonstrate that the algorithm can produce high quality results on both benchmark and real-world graphs. An example of finding more meaningful communities is illustrated consequently in comparison with a popular modularity maximization algorithm.
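The likelihood at the heart of such methods is the standard Bernoulli stochastic block model likelihood. The minimal sketch below (dense adjacency matrix, community assignment given; it makes no attempt at the paper's scalable multi-stage estimation or message passing) shows the block-rate MLE and the log-likelihood it maximizes:

    import numpy as np

    def sbm_mle(A, z, K):
        # MLE of the K x K block connection probabilities given an
        # adjacency matrix A and a community assignment z.
        sizes = np.array([(z == k).sum() for k in range(K)])
        P = np.zeros((K, K))
        for k in range(K):
            for l in range(K):
                block = A[np.ix_(z == k, z == l)]
                pairs = sizes[k] * sizes[l] - (sizes[k] if k == l else 0)
                P[k, l] = block.sum() / max(pairs, 1)
        return P

    def sbm_loglik(A, z, P):
        # Bernoulli log-likelihood of the graph (no self-loops).
        Pz = P[np.ix_(z, z)]
        eps = 1e-12
        ll = A * np.log(Pz + eps) + (1 - A) * np.log(1 - Pz + eps)
        np.fill_diagonal(ll, 0.0)
        return ll.sum()

    # Toy graph: two dense communities joined by a single edge.
    A = np.zeros((6, 6), dtype=int)
    for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1
    z = np.array([0, 0, 0, 1, 1, 1])
    P = sbm_mle(A, z, 2)
    print(P, sbm_loglik(A, z, P))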
MISTIC: Mutual information server to infer coevolution.
Simonetti, Franco L; Teppa, Elin; Chernomoretz, Ariel; Nielsen, Morten; Marino Buslje, Cristina
2013-07-01
MISTIC (mutual information server to infer coevolution) is a web server for graphical representation of the information contained within a MSA (multiple sequence alignment) and a complete analysis tool for Mutual Information networks in protein families. The server outputs a graphical visualization of several information-related quantities using a circos representation. This provides an integrated view of the MSA in terms of (i) the mutual information (MI) between residue pairs, (ii) sequence conservation and (iii) the residue cumulative and proximity MI scores. Further, an interactive interface to explore and characterize the MI network is provided. Several tools are offered for selecting subsets of nodes from the network for visualization. Node coloring can be set to match different attributes, such as conservation, cumulative MI, proximity MI and secondary structure. Finally, a zip file containing all results can be downloaded. The server is available at http://mistic.leloir.org.ar. In summary, MISTIC allows for a comprehensive, compact, visually rich view of the information contained within an MSA in a manner unique among publicly available web servers. In particular, the use of circos representation of MI networks and the visualization of the cumulative MI and proximity MI concepts is novel.
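The elementary quantity behind such networks, the mutual information between two alignment columns, can be estimated directly from empirical frequencies. A toy sketch (no sequence weighting or background corrections, refinements that tools of this kind typically apply on top of raw MI):

    import math
    from collections import Counter

    def column_mi(col_i, col_j):
        # Mutual information (bits) between two alignment columns,
        # estimated from empirical residue frequencies.
        n = len(col_i)
        pi, pj = Counter(col_i), Counter(col_j)
        pij = Counter(zip(col_i, col_j))
        return sum((nab / n) * math.log2(nab * n / (pi[a] * pj[b]))
                   for (a, b), nab in pij.items())

    # Five-sequence toy alignment: columns 1 and 2 covary perfectly.
    msa = ["AKLF", "AKIF", "GRLW", "GRIW", "AKLF"]
    col1 = [s[0] for s in msa]
    col2 = [s[1] for s in msa]
    print(column_mi(col1, col2))   # high MI: the columns coevolve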
Attention as a Bayesian inference process
Chikkerur, Sharat; Serre, Thomas; Tan, Cheston; Poggio, Tomaso
2011-03-01
David Marr famously defined vision as "knowing what is where by seeing". In the framework described here, attention is the inference process that solves the visual recognition problem of what is where. The theory proposes a computational role for attention and leads to a model that performs well in recognition tasks and that predicts some of the main properties of attention at the level of psychophysics and physiology. We propose an algorithmic implementation, a Bayesian network, that can be mapped into the basic functional anatomy of attention involving the ventral stream and the dorsal stream. This description integrates bottom-up, feature-based as well as spatial (context-based) attentional mechanisms. We show that the Bayesian model predicts human eye fixations (considered as a proxy for shifts of attention) in natural scenes well, and can improve accuracy in object recognition tasks involving cluttered real world images. In both cases, we found that the proposed model can predict human performance better than existing bottom-up and top-down computational models.
Logical inference techniques for loop parallelization
DEFF Research Database (Denmark)
Oancea, Cosmin Eugen; Rauchwerger, Lawrence
2012-01-01
This paper presents a fully automatic approach to loop parallelization that integrates the use of static and run-time analysis and thus overcomes many known difficulties such as nonlinear and indirect array indexing and complex control flow. Our hybrid analysis framework validates the parallelization transformation by verifying the independence of the loop's memory references. To this end it represents array references using the USR (uniform set representation) language and expresses the independence condition as an equation, S = {}, where S is a set expression representing array indexes. Sufficient conditions are then extracted as predicates F(S) such that F(S) => S = {}. Loop parallelization is then validated using a novel logic inference algorithm that factorizes the obtained complex predicates F(S) into a sequence of sufficient-independence conditions that are evaluated first statically and, when needed, dynamically, in increasing order …
Inference by replication in densely connected systems.
Neirotti, Juan P; Saad, David
2007-10-01
An efficient Bayesian inference method for problems that can be mapped onto dense graphs is presented. The approach is based on message passing where messages are averaged over a large number of replicated variable systems exposed to the same evidential nodes. An assumption about the symmetry of the solutions is required for carrying out the averages; here we extend the previous derivation based on a replica-symmetric (RS)-like structure to include a more complex one-step replica-symmetry-breaking-like (1RSB-like) ansatz. To demonstrate the potential of the approach it is employed for studying critical properties of the Ising linear perceptron and for multiuser detection in code division multiple access (CDMA) under different noise models. Results obtained under the RS assumption in the noncritical regime give rise to a highly efficient signal detection algorithm in the context of CDMA, while in the critical regime one observes a first-order transition line that ends in a continuous phase transition point. Finite size effects are also observed. While the 1RSB ansatz is not required for the original problems, it was applied to the CDMA signal detection problem with a more complex noise model that exhibits RSB behavior, resulting in an improvement in performance.
Statistical causal inferences and their applications in public health research
Wu, Pan; Chen, Ding-Geng
2016-01-01
This book compiles and presents new developments in statistical causal inference. The accompanying data and computer programs are publicly available so readers may replicate the model development and data analysis presented in each chapter. In this way, methodology is taught so that readers may implement it directly. The book brings together experts engaged in causal inference research to present and discuss recent issues in causal inference methodological development. This is also a timely look at causal inference applied to scenarios that range from clinical trials to mediation and public health research more broadly. In an academic setting, this book will serve as a reference and guide to a course in causal inference at the graduate level (Master's or Doctorate). It is particularly relevant for students pursuing degrees in Statistics, Biostatistics and Computational Biology. Researchers and data analysts in public health and biomedical research will also find this book to be an important reference.
An Integrated Procedure for Bayesian Reliability Inference Using MCMC
Directory of Open Access Journals (Sweden)
Jing Lin
2014-01-01
The recent proliferation of Markov chain Monte Carlo (MCMC) approaches has led to the use of Bayesian inference in a wide variety of fields. To facilitate MCMC applications, this paper proposes an integrated procedure for Bayesian inference using MCMC methods, from a reliability perspective. The goal is to build a framework for related academic research and engineering applications to implement modern computational-based Bayesian approaches, especially for reliability inferences. The procedure developed here is a continuous improvement process with four stages (Plan, Do, Study, and Action) and 11 steps, including: (1) data preparation; (2) prior inspection and integration; (3) prior selection; (4) model selection; (5) posterior sampling; (6) MCMC convergence diagnostic; (7) Monte Carlo error diagnostic; (8) model improvement; (9) model comparison; (10) inference making; (11) data updating and inference improvement. The paper illustrates the proposed procedure using a case study.
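As a hedged illustration of step (5), posterior sampling, a bare-bones Metropolis sampler for a single failure-rate parameter; the exponential-lifetime likelihood and Gamma prior are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(lam, times, a=1.0, b=1.0):
    """Log-posterior of an exponential failure rate under a Gamma(a, b) prior."""
    if lam <= 0:
        return -np.inf
    return (len(times) + a - 1) * np.log(lam) - lam * (sum(times) + b)

def metropolis(times, n_iter=10_000, step=0.1):
    lam, samples = 1.0, []
    for _ in range(n_iter):
        prop = lam + step * rng.normal()  # symmetric random-walk proposal
        if np.log(rng.uniform()) < log_post(prop, times) - log_post(lam, times):
            lam = prop  # accept
        samples.append(lam)
    return np.array(samples)  # feed into steps (6)-(7): convergence and MC error diagnostics
```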
Human Inferences about Sequences: A Minimal Transition Probability Model.
Directory of Open Access Journals (Sweden)
Florent Meyniel
2016-12-01
The brain constantly infers the causes of the inputs it receives and uses these inferences to generate statistical expectations about future observations. Experimental evidence for these expectations and their violations includes explicit reports, sequential effects on reaction times, and mismatch or surprise signals recorded in electrophysiology and functional MRI. Here, we explore the hypothesis that the brain acts as a near-optimal inference device that constantly attempts to infer the time-varying matrix of transition probabilities between the stimuli it receives, even when those stimuli are in fact fully unpredictable. This parsimonious Bayesian model, with a single free parameter, accounts for a broad range of findings on surprise signals, sequential effects and the perception of randomness. Notably, it explains the pervasive asymmetry between repetitions and alternations encountered in those studies. Our analysis suggests that a neural machinery for inferring transition probabilities lies at the core of human sequence knowledge.
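A caricature of the idea in code, under our own assumptions: track transition counts that decay with a single forgetting parameter, standing in for the paper's fully Bayesian time-varying estimate:

```python
import numpy as np

def leaky_transition_estimates(seq, omega=0.05):
    """Running 2x2 transition-probability estimates with exponential forgetting.

    seq: binary stimulus sequence; omega: forgetting rate (here even the flat
    prior's pseudo-counts decay, a simplification).
    """
    counts = np.ones((2, 2))  # flat prior as pseudo-counts
    estimates = []
    for prev, cur in zip(seq, seq[1:]):
        counts *= 1.0 - omega  # older evidence fades
        counts[prev, cur] += 1.0
        estimates.append(counts / counts.sum(axis=1, keepdims=True))
    return estimates
```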
Audain, Enrique; Uszkoreit, Julian; Sachsenberg, Timo; Pfeuffer, Julianus; Liang, Xiao; Hermjakob, Henning; Sanchez, Aniel; Eisenacher, Martin; Reinert, Knut; Tabb, David L; Kohlbacher, Oliver; Perez-Riverol, Yasset
2017-01-06
In mass spectrometry-based shotgun proteomics, protein identifications are usually the desired result. However, most of the analytical methods are based on the identification of reliable peptides and not the direct identification of intact proteins. Thus, assembling peptides identified from tandem mass spectra into a list of proteins, referred to as protein inference, is a critical step in proteomics research. Currently, different protein inference algorithms and tools are available for the proteomics community. Here, we evaluated five software tools for protein inference (PIA, ProteinProphet, Fido, ProteinLP, MSBayesPro) using three popular database search engines: Mascot, X!Tandem, and MS-GF+. All the algorithms were evaluated using a highly customizable KNIME workflow using four different public datasets with varying complexities (different sample preparation, species and analytical instruments). We defined a set of quality control metrics to evaluate the performance of each combination of search engines, protein inference algorithm, and parameters on each dataset. We show that the results for complex samples vary not only regarding the actual numbers of reported protein groups but also concerning the actual composition of groups. Furthermore, the robustness of reported proteins when using databases of differing complexities is strongly dependent on the applied inference algorithm. Finally, merging the identifications of multiple search engines does not necessarily increase the number of reported proteins, but does increase the number of peptides per protein and thus can generally be recommended. Protein inference is one of the major challenges in MS-based proteomics nowadays. Currently, there are a vast number of protein inference algorithms and implementations available for the proteomics community. Protein assembly impacts the final results of the research, the quantitation values and the final claims in the research manuscript. Even though protein …
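For flavor, one classical ingredient of protein inference is parsimony: explain all identified peptides with as few proteins as possible. A hypothetical greedy sketch (none of the five evaluated tools is claimed to work exactly this way):

```python
def parsimonious_proteins(peptide_to_proteins):
    """Greedy minimum-protein cover of the identified peptides.

    peptide_to_proteins: dict mapping each confidently identified peptide
    to the (non-empty) set of candidate proteins containing it.
    """
    proteins = {p for cands in peptide_to_proteins.values() for p in cands}
    uncovered, chosen = set(peptide_to_proteins), []
    while uncovered:
        # protein explaining the most still-unexplained peptides
        best = max(proteins, key=lambda p: sum(p in peptide_to_proteins[q]
                                               for q in uncovered))
        chosen.append(best)
        uncovered = {q for q in uncovered if best not in peptide_to_proteins[q]}
    return chosen
```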
Strong ideal convergence in probabilistic metric spaces
Indian Academy of Sciences (India)
We introduce the concepts of strong ideal convergent sequence and strong ideal Cauchy sequence in a probabilistic metric (PM) space endowed with the strong topology, and study their basic properties; these notions also have important applications in nonlinear analysis [2].
Large N baryons, strong coupling theory, quarks
International Nuclear Information System (INIS)
Sakita, B.
1984-01-01
It is shown that in QCD the large N limit is the same as the static strong coupling limit. By using the static strong coupling techniques some of the results of large N baryons are derived. The results are consistent with the large N SU(6) static quark model. (author)
Optimization of strong and weak coordinates
Swart, M.; Bickelhaupt, F.M.
2006-01-01
We present a new scheme for the geometry optimization of equilibrium and transition state structures that can be used for both strong and weak coordinates. We use a screening function that depends on atom-pair distances to differentiate strong coordinates from weak coordinates. This differentiation
Strong decays of nucleon and delta resonances
International Nuclear Information System (INIS)
Bijker, R.; Leviatan, A.
1996-01-01
We study the strong couplings of the nucleon and delta resonances in a collective model. In the ensuing algebraic treatment we derive closed expressions for decay widths which are used to analyze the experimental data for strong decays into the pion and eta channels. (Author)
Theoretical studies of strongly correlated fermions
Energy Technology Data Exchange (ETDEWEB)
Logan, D. [Institut Max von Laue - Paul Langevin (ILL), 38 - Grenoble (France)
1997-04-01
Strongly correlated fermions are investigated. An understanding of strongly correlated fermions underpins a diverse range of phenomena such as metal-insulator transitions, high-temperature superconductivity, magnetic impurity problems and the properties of heavy-fermion systems, in all of which local moments play an important role. (author).
Seismic switch for strong motion measurement
Harben, P.E.; Rodgers, P.W.; Ewert, D.W.
1995-05-30
A seismic switching device is described that has an input signal from an existing microseismic station seismometer and a signal from a strong motion measuring instrument. The seismic switch monitors the signal level of the strong motion instrument and passes the seismometer signal to the station data telemetry and recording systems. When the strong motion instrument signal level exceeds a user set threshold level, the seismometer signal is switched out and the strong motion signal is passed to the telemetry system. The amount of time the strong motion signal is passed before switching back to the seismometer signal is user controlled between 1 and 15 seconds. If the threshold level is exceeded during a switch time period, the length of time is extended from that instant by one user set time period. 11 figs.
Inferring tie strength from online directed behavior.
Directory of Open Access Journals (Sweden)
Jason J Jones
Some social connections are stronger than others. People have not only friends, but also best friends. Social scientists have long recognized this characteristic of social connections and researchers frequently use the term tie strength to refer to this concept. We used online interaction data (specifically, Facebook interactions) to successfully identify real-world strong ties. Ground truth was established by asking users themselves to name their closest friends in real life. We found the frequency of online interaction was diagnostic of strong ties, and interaction frequency was much more useful diagnostically than were attributes of the user or the user's friends. More private communications (messages) were not necessarily more informative than public communications (comments, wall posts, and other interactions).
Inferring mental states from neuroimaging data: From reverse inference to large-scale decoding
Poldrack, Russell A.
2011-01-01
A common goal of neuroimaging research is to use imaging data to identify the mental processes that are engaged when a subject performs a mental task. The use of reasoning from activation to mental functions, known as “reverse inference”, has been previously criticized on the basis that it does not take into account how selectively the area is activated by the mental process in question. In this Perspective, I outline the critique of informal reverse inference, and describe a number of new de...
On the Hardness of Topology Inference
Acharya, H. B.; Gouda, M. G.
Many systems require information about the topology of networks on the Internet, for purposes like management, efficiency, testing of new protocols and so on. However, ISPs usually do not share the actual topology maps with outsiders; thus, in order to obtain the topology of a network on the Internet, a system must reconstruct it from publicly observable data. The standard method employs traceroute to obtain paths between nodes; next, a topology is generated such that the observed paths occur in the graph. However, traceroute has the problem that some routers refuse to reveal their addresses, and appear as anonymous nodes in traces. Previous research on the problem of topology inference with anonymous nodes has demonstrated that it is at best NP-complete. In this paper, we improve upon this result. In our previous research, we showed that in the special case where nodes may be anonymous in some traces but not in all traces (so all node identifiers are known), there exist trace sets that are generable from multiple topologies. This paper extends our theory of network tracing to the general case (with strictly anonymous nodes), and shows that the problem of computing the network that generated a trace set, given the trace set, has no general solution. The weak version of the problem, which allows an algorithm to output a "small" set of networks- any one of which is the correct one- is also not solvable. Any algorithm guaranteed to output the correct topology outputs at least an exponential number of networks. Our results are surprisingly robust: they hold even when the network is known to have exactly two anonymous nodes, and every node as well as every edge in the network is guaranteed to occur in some trace. On the basis of this result, we suggest that exact reconstruction of network topology requires more powerful tools than traceroute.
Inferring pathway activity toward precise disease classification.
Directory of Open Access Journals (Sweden)
Eunjung Lee
2008-11-01
The advent of microarray technology has made it possible to classify disease states based on gene expression profiles of patients. Typically, marker genes are selected by measuring the power of their expression profiles to discriminate among patients of different disease states. However, expression-based classification can be challenging in complex diseases due to factors such as cellular heterogeneity within a tissue sample and genetic heterogeneity across patients. A promising technique for coping with these challenges is to incorporate pathway information into the disease classification procedure in order to classify disease based on the activity of entire signaling pathways or protein complexes rather than on the expression levels of individual genes or proteins. We propose a new classification method based on pathway activities inferred for each patient. For each pathway, an activity level is summarized from the gene expression levels of its condition-responsive genes (CORGs), defined as the subset of genes in the pathway whose combined expression delivers optimal discriminative power for the disease phenotype. We show that classifiers using pathway activity achieve better performance than classifiers based on individual gene expression, for both simple and complex case-control studies including differentiation of perturbed from non-perturbed cells and subtyping of several different kinds of cancer. Moreover, the new method outperforms several previous approaches that use a static (i.e., non-conditional) definition of pathways. Within a pathway, the identified CORGs may facilitate the development of better diagnostic markers and the discovery of core alterations in human disease.
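A sketch following the paper's description of pathway activity: combine the z-scores of a greedily grown CORG subset and keep growing while discriminative power improves (names and scoring details are our assumptions):

```python
import numpy as np

def corg_activity(expr, pathway_genes, labels):
    """Greedy CORG-style activity inference.

    expr: dict gene -> 1D z-scored expression array across patients;
    labels: numpy 0/1 array of disease states.
    """
    def activity(genes):
        return np.sum([expr[g] for g in genes], axis=0) / np.sqrt(len(genes))

    def tscore(genes):  # discriminative power of the combined activity
        act = activity(genes)
        a, b = act[labels == 0], act[labels == 1]
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return abs(a.mean() - b.mean()) / se

    chosen, remaining, best = [], set(pathway_genes), 0.0
    while remaining:
        g = max(remaining, key=lambda g: tscore(chosen + [g]))
        if tscore(chosen + [g]) <= best:
            break  # adding any further gene no longer helps
        chosen.append(g)
        remaining.remove(g)
        best = tscore(chosen)
    return chosen, activity(chosen)
```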
Statistical Inference for Data Adaptive Target Parameters.
Hubbard, Alan E; Kherad-Pajouh, Sara; van der Laan, Mark J
2016-05-01
Consider one observes n i.i.d. copies of a random variable with a probability distribution that is known to be an element of a particular statistical model. In order to define our statistical target we partition the sample in V equal size sub-samples, and use this partitioning to define V splits in an estimation sample (one of the V subsamples) and corresponding complementary parameter-generating sample. For each of the V parameter-generating samples, we apply an algorithm that maps the sample to a statistical target parameter. We define our sample-split data adaptive statistical target parameter as the average of these V-sample specific target parameters. We present an estimator (and corresponding central limit theorem) of this type of data adaptive target parameter. This general methodology for generating data adaptive target parameters is demonstrated with a number of practical examples that highlight new opportunities for statistical learning from data. This new framework provides a rigorous statistical methodology for both exploratory and confirmatory analysis within the same data. Given that more research is becoming "data-driven", the theory developed within this paper provides a new impetus for a greater involvement of statistical inference into problems that are being increasingly addressed by clever, yet ad hoc pattern finding methods. To suggest such potential, and to verify the predictions of the theory, extensive simulation studies, along with a data analysis based on adaptively determined intervention rules are shown and give insight into how to structure such an approach. The results show that the data adaptive target parameter approach provides a general framework and resulting methodology for data-driven science.
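The sample-splitting scheme itself is compact; a schematic with hypothetical callables define_parameter (maps a parameter-generating sample to a target mapping) and estimate (evaluates it on the held-out estimation sample):

```python
import numpy as np

def data_adaptive_estimate(data, define_parameter, estimate, V=10, seed=0):
    """Average of the V sample-split estimates described above (a sketch)."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(data)), V)
    values = []
    for v in range(V):
        gen = np.concatenate([folds[u] for u in range(V) if u != v])
        target = define_parameter(data[gen])             # parameter-generating sample
        values.append(estimate(target, data[folds[v]]))  # estimation sample
    return float(np.mean(values))
```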
Inferring structural connectivity using Ising couplings in models of neuronal networks.
Kadirvelu, Balasundaram; Hayashi, Yoshikatsu; Nasuto, Slawomir J
2017-08-15
Functional connectivity metrics have been widely used to infer the underlying structural connectivity in neuronal networks. Maximum entropy based Ising models have been suggested to discount the effect of indirect interactions and give good results in inferring the true anatomical connections. However, no benchmarking is currently available to assess the performance of Ising couplings against other functional connectivity metrics in the microscopic scale of neuronal networks through a wide set of network conditions and network structures. In this paper, we study the performance of the Ising model couplings to infer the synaptic connectivity in in silico networks of neurons and compare its performance against partial and cross-correlations for different correlation levels, firing rates, network sizes, network densities, and topologies. Our results show that the relative performance amongst the three functional connectivity metrics depends primarily on the network correlation levels. Ising couplings detected the most structural links at very weak network correlation levels, and partial correlations outperformed Ising couplings and cross-correlations at strong correlation levels. The result was consistent across varying firing rates, network sizes, and topologies. The findings of this paper serve as a guide in choosing the right functional connectivity tool to reconstruct the structural connectivity.
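For orientation, the simplest route from binary activity to Ising couplings is the naive mean-field inversion; the estimator used in the paper may differ, so treat this as a generic sketch:

```python
import numpy as np

def mean_field_ising_couplings(spikes):
    """Naive mean-field inverse Ising estimate: J = -(C^-1), off-diagonal.

    spikes: (time_bins, neurons) binary array; larger |J[i, j]| marks a
    candidate structural link between neurons i and j.
    """
    s = 2.0 * spikes - 1.0        # {0,1} activity -> {-1,+1} spins
    C = np.cov(s, rowvar=False)   # pairwise covariance across neurons
    J = -np.linalg.inv(C)         # mean-field inversion
    np.fill_diagonal(J, 0.0)      # self-couplings are not meaningful
    return J
```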
Indexing the Environmental Quality Performance Based on A Fuzzy Inference Approach
Iswari, Lizda
2018-03-01
Environmental performance bears strongly on the quality of human life. In Indonesia, this performance is quantified through the Environmental Quality Index (EQI), which consists of three indicators: a river quality index, an air quality index, and coverage of land cover. Currently, the processing of this instrument's data is done by averaging and weighting each index to represent the EQI at the provincial level. However, we found that EQI interpretations may contain uncertainties and cover a range of circumstances that are less appropriate to process under a common statistical approach. In this research, we aim to manage the indicators of the EQI with a more intuitive computation technique and to make some inferences related to the environmental performance of the 33 provinces of Indonesia. Research was conducted in the three stages of a Mamdani Fuzzy Inference System (MAFIS): fuzzification, data inference, and defuzzification. Data input consists of 10 environmental parameters, and the output is an index of Environmental Quality Performance (EQP). The method was applied to the environmental condition data set for 2015, quantifying the results on a scale of 0 to 100: 10 provinces at good performance with an EQP above 80, dominated by provinces in the eastern part of Indonesia; 22 provinces with an EQP between 50 and 80; and one province, in Java Island, with an EQP below 20. This research shows that environmental quality performance can be quantified without eliminating the nature of the data set and is simultaneously able to show environmental behavior along with its spatial pattern distribution.
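A toy two-input Mamdani pipeline showing the three stages (fuzzification, inference, defuzzification); the membership functions and rule base are invented for illustration and do not reproduce the paper's ten-parameter system:

```python
import numpy as np

def falling(x, a, b):  # membership 1 below a, declining to 0 at b
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def rising(x, a, b):   # membership 0 below a, rising to 1 at b
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def tri(x, a, b, c):   # triangular membership peaking at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def eqp_mamdani(river_idx, air_idx):
    """Toy EQP-like score on [0, 100] from two quality indices on [0, 100]."""
    out = np.linspace(0.0, 100.0, 1001)
    low_out, high_out = tri(out, 0, 25, 50), tri(out, 50, 75, 100)
    # rule strengths via min-AND of the fuzzified inputs
    both_poor = min(falling(river_idx, 0, 50), falling(air_idx, 0, 50))
    both_good = min(rising(river_idx, 50, 100), rising(air_idx, 50, 100))
    # max-aggregate the clipped output sets, then take the centroid
    agg = np.maximum(np.minimum(both_poor, low_out), np.minimum(both_good, high_out))
    return float((agg * out).sum() / agg.sum()) if agg.sum() else 50.0
```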
Directory of Open Access Journals (Sweden)
Saito Shigeru
2007-01-01
Hepatocellular carcinoma (HCC) in a liver with advanced-stage chronic hepatitis C (CHC) is induced by hepatitis C virus, which chronically infects about 170 million people worldwide. To elucidate the associations between gene groups in hepatocellular carcinogenesis, we analyzed the profiles of the genes characteristically expressed in the CHC and HCC cell stages by a statistical method for inferring the network between gene systems based on the graphical Gaussian model. A systematic evaluation of the inferred network in terms of the biological knowledge revealed that the inferred network was strongly involved in the known gene-gene interactions with high significance, and that the clusters characterized by different cancer-related responses were associated with those of the gene groups related to metabolic pathways and morphological events. Although some relationships in the network remain to be interpreted, the analyses revealed a snapshot of the orchestrated expression of cancer-related groups and some pathways related to metabolism and morphological events in hepatocellular carcinogenesis, and thus provide possible clues on the disease mechanism and insights that address the gap between molecular and clinical assessments.
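The graphical Gaussian model draws edges from partial correlations, which can be read off the inverse covariance (precision) matrix. A minimal sketch (pinv stands in for the shrinkage and significance testing a real small-sample analysis needs):

```python
import numpy as np

def partial_correlations(X):
    """Partial-correlation matrix from an (samples, genes) expression matrix."""
    precision = np.linalg.pinv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcor = -precision / np.outer(d, d)  # standard precision-to-partial-correlation map
    np.fill_diagonal(pcor, 1.0)
    return pcor
```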
Kiani, Narsis A; Kaderali, Lars
2014-07-22
Network inference deals with the reconstruction of molecular networks from experimental data. Given N molecular species, the challenge is to find the underlying network. Due to data limitations, this typically is an ill-posed problem, and requires the integration of prior biological knowledge or strong regularization. We here focus on the situation when time-resolved measurements of a system's response after systematic perturbations are available. We present a novel method to infer signaling networks from time-course perturbation data. We utilize dynamic Bayesian networks with probabilistic Boolean threshold functions to describe protein activation. The model posterior distribution is analyzed using evolutionary MCMC sampling and subsequent clustering, resulting in probability distributions over alternative networks. We evaluate our method on simulated data, and study its performance with respect to data set size and levels of noise. We then use our method to study EGF-mediated signaling in the ERBB pathway. Dynamic Probabilistic Threshold Networks is a new method to infer signaling networks from time-series perturbation data. It exploits the dynamic response of a system after external perturbation for network reconstruction. On simulated data, we show that the approach outperforms current state of the art methods. On the ERBB data, our approach recovers a significant fraction of the known interactions, and predicts novel mechanisms in the ERBB pathway.
Active inference and robot control: a case study.
Pio-Lopez, Léo; Nizard, Ange; Friston, Karl; Pezzulo, Giovanni
2016-09-01
Active inference is a general framework for perception and action that is gaining prominence in computational and systems neuroscience but is less known outside these fields. Here, we discuss a proof-of-principle implementation of the active inference scheme for the control of the 7-DoF arm of a (simulated) PR2 robot. By manipulating visual and proprioceptive noise levels, we show under which conditions robot control under the active inference scheme is accurate. Besides accurate control, our analysis of the internal system dynamics (e.g. the dynamics of the hidden states that are inferred during the inference) sheds light on key aspects of the framework, such as the quintessentially multimodal nature of control and the differential roles of proprioception and vision. In the discussion, we consider the potential importance of being able to implement active inference in robots. In particular, we briefly review the opportunities for modelling psychophysiological phenomena such as sensory attenuation and related failures of gain control, of the sort seen in Parkinson's disease. We also consider the fundamental difference between active inference and optimal control formulations, showing that in the former the heavy lifting shifts from solving a dynamical inverse problem to creating deep forward or generative models with dynamics, whose attracting sets prescribe desired behaviours. © 2016 The Authors.
Accounting for the Effect of Earth's Rotation in Magnetotelluric Inference
Riegert, D. L.; Thomson, D. J.
2017-12-01
The study of geomagnetism has been documented as far back as 1722, when the watchmaker G. Graham constructed a more sensitive compass and showed that the variations in geomagnetic direction varied with an irregular daily pattern. Interest in geomagnetism increased at the end of the 19th century (Lamb, Schuster, Chapman, and Price). The magnetotelluric method was first introduced in the 1950's (Cagniard and Tikhonov) and, at its core, is simply a regression problem. The result of this method is a transfer function estimate which describes the earth's response to magnetic field variations. This estimate can then be used to infer the earth's subsurface structure, useful for applications such as natural resource exploration. The statistical problem of estimating a transfer function between geomagnetic and induced current measurements has evolved since the 1950's due to a variety of problems: non-stationarity, outliers, and violation of Gaussian assumptions. To address some of these issues, robust regression methods (Chave and Thomson, 2004) and the remote reference method (Gamble, 1979) have been proposed and used. The current method seems to provide reasonable estimates, but still requires a large amount of data. Using the multitaper method of spectral analysis (Thomson, 1982), taking long (greater than 4 months) blocks of geomagnetic data, and concentrating on frequencies below 1000 microhertz to avoid ultraviolet effects, one finds that: (1) the cross-spectra are dominated by many offset frequencies, including plus and minus 1 and 2 cycles per day; (2) the coherence at these offset frequencies is often stronger than at zero offset; (3) there are strong couplings from the "quasi two-day" cycle; (4) the frequencies are usually not symmetric; (5) the spectra are dominated by the normal modes of the Sun. This talk will discuss the method of incorporating these observations into the transfer function estimation model, some of the difficulties that arose, their …
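At its core the magnetotelluric problem is a frequency-domain regression; a minimal single-input sketch using averaged spectra is below. Real processing is multivariate, robust, and (as in the talk) multitaper-based, none of which is reproduced here:

```python
from scipy.signal import csd, welch

def transfer_function(b_field, e_field, fs, nperseg=4096):
    """Single-input estimate Z(f) = S_be(f) / S_bb(f) from Welch spectra."""
    f, s_bb = welch(b_field, fs=fs, nperseg=nperseg)         # auto-spectrum of B
    _, s_be = csd(b_field, e_field, fs=fs, nperseg=nperseg)  # cross-spectrum B -> E
    return f, s_be / s_bb
```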
Some problems in inference from time series of geophysical processes
Koutsoyiannis, Demetris
2010-05-01
Due to the complexity of geophysical processes, their modelling and the conducting of typical tasks, such as estimation, prediction and hypothesis testing, heavily rely on available data series and their statistical processing. The classical statistical approaches, which are often used in geophysical modelling, are based upon several simplifying assumptions, which are invalidated in natural processes. Central among these is the (usually tacit) assumption of time independence, which is regarded as simplifying modelling and statistical testing at no substantial cost for the validity of results. Moreover, the perception of the general behaviour of natural processes and the implied uncertainty is heavily affected by the classical statistical paradigm that is in common use. However, the study of natural behaviours reveals the dominance of change at a multitude of time scales, which in statistical terms translates into strong time dependence, decaying very slowly with lag time. In its simplest form, this dependence, and equivalently the multi-scale change, can be described by a Hurst-Kolmogorov process using a single parameter in addition to those of the marginal distribution. Remarkably, the Hurst-Kolmogorov stochastic dynamics results in much higher uncertainty in comparison to either nonstationary descriptions, or to typical stationary descriptions with independent random processes and common Markov-type processes. In addition, as far as typical statistical estimation is concerned, the Hurst-Kolmogorov dynamics implies dramatically wider intervals in the estimation of location statistical parameters (e.g., mean) and highly negative bias in the estimation of dispersion parameters (e.g., standard deviation), not to mention the bias and uncertainty in higher order moments. Surprisingly, all these differences are commonly unaccounted for in most studies of geophysical processes, which may result in inappropriate modelling, wrong inferences and false claims about the …
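The estimation penalty can be stated in one line: under Hurst-Kolmogorov dependence the standard error of the sample mean decays as n^(H-1) rather than the classical n^(-1/2). A small numeric illustration:

```python
def stderr_of_mean(n, sigma, hurst=0.5):
    """Standard error of the sample mean of n observations; H = 0.5 recovers
    the classical sigma / sqrt(n), H > 0.5 encodes Hurst-Kolmogorov dependence."""
    return sigma * n ** (hurst - 1)

print(stderr_of_mean(100, 1.0))       # 0.100, classical i.i.d. case
print(stderr_of_mean(100, 1.0, 0.8))  # ~0.398, as if only ~6 values were independent
```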
Strong and superstrong pulsed magnetic fields generation
Shneerson, German A; Krivosheev, Sergey I
2014-01-01
Strong pulsed magnetic fields are important for several fields in physics and engineering, such as power generation and accelerator facilities. Basic aspects of the technique of generating strong and superstrong pulsed magnetic fields are given, including the physics and hydrodynamics of the conductors interacting with the field, as well as an account of the significant progress in the generation of strong magnetic fields using the magnetic accumulation technique. Results of computer simulations as well as a survey of available field technology complete the volume.
Impurity screening in strongly coupled plasma systems
Kyrkos, S
2003-01-01
We present an overview of the problem of screening of an impurity in a strongly coupled one-component plasma within the framework of the linear response (LR) theory. We consider 3D, 2D and quasi-2D layered systems. For a strongly coupled plasma the LR can be determined by way of the known S(k) structure functions. In general, an oscillating screening potential with local overscreening and antiscreening regions emerges. In the case of the bilayer, this phenomenon becomes global, as overscreening develops in the layer of the impurity and antiscreening in the adjacent layer. We comment on the limitations of the LR theory in the strong coupling situation.
The lambda sigma calculus and strong normalization
DEFF Research Database (Denmark)
Schack-Nielsen, Anders; Schürmann, Carsten
Explicit substitution calculi can be classified into several distinct categories depending on whether they are confluent, meta-confluent, strong normalization preserving, strongly normalizing, simulating, fully compositional, and/or local. In this paper we present a variant of the λσ-calculus which satisfies all seven conditions. In particular, we show how to circumvent Melliès' counter-example to strong normalization by a slight restriction of the congruence rules. The calculus is implemented as the core data structure of the Celf logical framework. All meta-theoretic aspects of this work …
Hernández, María Álvarez; Andrés, Antonio Martín; Tejedor, Inmaculada Herranz
2018-04-02
Two-tailed asymptotic inferences for the difference d = p2 - p1 between independent proportions have been widely studied in the literature. Nevertheless, the case of one tail has received less attention, despite its great practical importance (superiority studies and noninferiority studies). This paper assesses 97 methods to make these inferences (tests and confidence intervals [CIs]), although it also alludes to many others. The conclusions obtained are: (1) the optimal method in general (and particularly for errors α = 1% and 5%) is based on the arcsine transformation, with the maximum likelihood estimator restricted to the null hypothesis and the successes and failures increased by 3/8; (2) the optimal method for α = 10% is a modification of the classic model of Peskun; (3) a simpler and acceptable option for large sample sizes and values of d not near ±1 is the classic method of Peskun; and (4) in the particular case of the superiority and inferiority tests, the optimal method is the classic Wald method (with continuity correction) when the successes and failures are increased by one. We additionally select the optimal methods to make the conclusions of the homogeneity test and the CI for d compatible, both for one tail and for two (methods which are related to the arcsine transformation and the Wald method).
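To make conclusion (4) concrete, a sketch of the Wald interval for d = p2 - p1 with continuity correction after adding one success and one failure to each sample; this is our reading of the recipe, not the authors' code:

```python
from math import sqrt
from scipy.stats import norm

def wald_cc_plus1(x1, n1, x2, n2, alpha=0.05):
    """Two-sided (1 - alpha) Wald CI for d = p2 - p1, augmented data plus CC."""
    x1, n1 = x1 + 1, n1 + 2  # one extra success and one extra failure, group 1
    x2, n2 = x2 + 1, n2 + 2  # likewise for group 2
    p1, p2 = x1 / n1, x2 / n2
    d = p2 - p1
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    cc = 0.5 * (1 / n1 + 1 / n2)  # continuity correction
    z = norm.ppf(1 - alpha / 2)
    return d - z * se - cc, d + z * se + cc
```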
International Conference on Trends and Perspectives in Linear Statistical Inference
Rosen, Dietrich
2018-01-01
This volume features selected contributions on a variety of topics related to linear statistical inference. The peer-reviewed papers from the International Conference on Trends and Perspectives in Linear Statistical Inference (LinStat 2016), held in Istanbul, Turkey, 22-25 August 2016, cover topics in both theoretical and applied statistics, such as linear models, high-dimensional statistics, computational statistics, the design of experiments, and multivariate analysis. The book is intended for statisticians, Ph.D. students, and professionals who are interested in statistical inference.
Working memory supports inference learning just like classification learning.
Craig, Stewart; Lewandowsky, Stephan
2013-08-01
Recent research has found a positive relationship between people's working memory capacity (WMC) and their speed of category learning. To date, only classification-learning tasks have been considered, in which people learn to assign category labels to objects. It is unknown whether learning to make inferences about category features might also be related to WMC. We report data from a study in which 119 participants undertook classification learning and inference learning, and completed a series of WMC tasks. Working memory capacity was positively related to people's classification and inference learning performance.
Surrogate based approaches to parameter inference in ocean models
Knio, Omar
2016-01-06
This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.
Fast and scalable inference of multi-sample cancer lineages.
Popic, Victoria
2015-05-06
Somatic variants can be used as lineage markers for the phylogenetic reconstruction of cancer evolution. Since somatic phylogenetics is complicated by sample heterogeneity, novel specialized tree-building methods are required for cancer phylogeny reconstruction. We present LICHeE (Lineage Inference for Cancer Heterogeneity and Evolution), a novel method that automates the phylogenetic inference of cancer progression from multiple somatic samples. LICHeE uses variant allele frequencies of somatic single nucleotide variants obtained by deep sequencing to reconstruct multi-sample cell lineage trees and infer the subclonal composition of the samples. LICHeE is open source and available at http://viq854.github.io/lichee .
Correlated Fluctuations in Strongly Coupled Binary Networks Beyond Equilibrium
Directory of Open Access Journals (Sweden)
David Dahmen
2016-08-01
Randomly coupled Ising spins constitute the classical model of collective phenomena in disordered systems, with applications covering glassy magnetism and frustration, combinatorial optimization, protein folding, stock market dynamics, and social dynamics. The phase diagram of these systems is obtained in the thermodynamic limit by averaging over the quenched randomness of the couplings. However, many applications require the statistics of activity for a single realization of the possibly asymmetric couplings in finite-sized networks. Examples include reconstruction of couplings from the observed dynamics, representation of probability distributions for sampling-based inference, and learning in the central nervous system based on the dynamic and correlation-dependent modification of synaptic connections. The systematic cumulant expansion for kinetic binary (Ising) threshold units with strong, random, and asymmetric couplings presented here goes beyond mean-field theory and is applicable outside thermodynamic equilibrium; a system of approximate nonlinear equations predicts average activities and pairwise covariances in quantitative agreement with full simulations down to hundreds of units. The linearized theory yields an expansion of the correlation and response functions in collective eigenmodes, leads to an efficient algorithm solving the inverse problem, and shows that correlations are invariant under scaling of the interaction strengths.
Strong and strategic conformity understanding by 3- and 5-year-old children.
Cordonier, Laurent; Nettles, Theresa; Rochat, Philippe
2017-12-18
'Strong conformity' corresponds to the public endorsement of majority opinions that are in blatant contradiction to one's own correct perceptual judgements of the situation. We tested strong conformity inference by 3- and 5-year-old children using a third-person perspective paradigm. Results show that at neither age do children spontaneously expect that an ostracized third-party individual who wants to affiliate with the majority group will show strong conformity. However, when questioned as to what the ostracized individual should do to befriend others, from 5 years of age children explicitly demonstrate that they construe strong conformity as a strategic means of social affiliation. Additional data suggest that strong and strategic conformity understanding from an observer's third-person perspective is linked to the passing of the language-mediated false belief theory of mind task, an index of children's emerging 'meta' ability to construe the mental state of others. Statement of contribution: What is already known on this subject? 'Strong conformity' corresponds to the public endorsement of majority opinions that are in blatant contradiction to one's own correct perceptual judgements of the situation. Asch's (1956, Psychological Monographs: General and Applied, 70, 1) classic demonstration of strong conformity with adults has been replicated with preschool children: 3- to 4-year-olds manifest signs of strong conformity by reversing about thirty to forty per cent of the time their correct perceptual judgements to fit with contradictory statements held unanimously by other individuals (Corriveau & Harris, 2010, Developmental Psychology, 46, 437; Corriveau et al., 2013, Journal of Cognition and Culture, 13, 367; Haun & Tomasello, 2011, Child Development, 82, 1759). As for adults, strong conformity does not obliterate children's own private, accurate knowledge of the situation. It is in essence a public expression to fit the group and alleviate social dissonance.
Color inference in visual communication: the meaning of colors in recycling.
Schloss, Karen B; Lessard, Laurent; Walmsley, Charlotte S; Foley, Kathleen
2018-01-01
People interpret abstract meanings from colors, which makes color a useful perceptual feature for visual communication. This process is complicated, however, because there is seldom a one-to-one correspondence between colors and meanings. One color can be associated with many different concepts (one-to-many mapping) and many colors can be associated with the same concept (many-to-one mapping). We propose that to interpret color-coding systems, people perform assignment inference to determine how colors map onto concepts. We studied assignment inference in the domain of recycling. Participants saw images of colored but unlabeled bins and were asked to indicate which bins they would use to discard different kinds of recyclables and trash. In Experiment 1, we tested two hypotheses for how people perform assignment inference. The local assignment hypothesis predicts that people simply match objects with their most strongly associated color. The global assignment hypothesis predicts that people also account for the association strengths between all other objects and colors within the scope of the color-coding system. Participants discarded objects in bins that optimized the color-object associations of the entire set, which is consistent with the global assignment hypothesis. This sometimes resulted in discarding objects in bins whose colors were weakly associated with the object, even when there was a stronger associated option available. In Experiment 2, we tested different methods for encoding color-coding systems and found that people were better at assignment inference when color sets simultaneously maximized the association strength between assigned color-object pairings while minimizing associations between unassigned pairings. Our study provides an approach for designing intuitive color-coding systems that facilitate communication through visual media such as graphs, maps, signs, and artifacts.
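The contrast between the two hypotheses maps onto a classic assignment problem. A small sketch with hypothetical association strengths, in which the globally optimal assignment hands one object a color that is not its strongest associate:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# hypothetical object-color association strengths (rows: objects, cols: colors)
assoc = np.array([
    [0.9, 0.4, 0.1],   # paper
    [0.8, 0.7, 0.2],   # glass
    [0.3, 0.2, 0.6],   # trash
])

local = assoc.argmax(axis=1)  # local hypothesis: paper and glass both claim color 0

# global hypothesis: maximize total association across the whole coding system
rows, cols = linear_sum_assignment(assoc, maximize=True)
# -> paper: color 0, glass: color 1, trash: color 2; glass settles for its
#    second-strongest color so that the set as a whole is optimal.
```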
Domazet-Lošo, Tomislav; Carvunis, Anne-Ruxandra; Albà, M Mar; Šestak, Martin Sebastijan; Bakaric, Robert; Neme, Rafik; Tautz, Diethard
2017-04-01
Phylostratigraphy is a computational framework for dating the emergence of DNA and protein sequences in a phylogeny. It has been extensively applied to make inferences on patterns of genome evolution, including patterns of disease gene evolution, ontogeny and de novo gene origination. Phylostratigraphy typically relies on BLAST searches along a species tree, but new simulation studies have raised concerns about the ability of BLAST to detect remote homologues and its impact on phylostratigraphic inferences. Here, we re-assessed these simulations. We found that, even with a possible overall BLAST false-negative rate between 11% and 15%, the large majority of sequences assigned to a recent evolutionary origin by phylostratigraphy is unaffected by technical concerns about BLAST. Where the results of the simulations did cast doubt on previously reported findings, we repeated the original analyses but now excluded all questionable sequences. The originally described patterns remained essentially unchanged. These new analyses strongly support phylostratigraphic inferences, including: genes that emerged after the origin of eukaryotes are more likely to be expressed in the ectoderm than in the endoderm or mesoderm in Drosophila, and the de novo emergence of protein-coding genes from non-genic sequences occurs through proto-gene intermediates in yeast. We conclude that BLAST is an appropriate and sufficiently sensitive tool in phylostratigraphic analysis that does not appear to introduce significant biases into evolutionary pattern inferences. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Inferring species interactions through joint mark–recapture analysis
Yackulic, Charles B.; Korman, Josh; Yard, Michael D.; Dzul, Maria C.
2018-01-01
Introduced species are frequently implicated in declines of native species. In many cases, however, evidence linking introduced species to native declines is weak. Failure to make strong inferences regarding the role of introduced species can hamper attempts to predict population viability and delay effective management responses. For many species, mark–recapture analysis is the more rigorous form of demographic analysis. However, to our knowledge, there are no mark–recapture models that allow for joint modeling of interacting species. Here, we introduce a two‐species mark–recapture population model in which the vital rates (and capture probabilities) of one species are allowed to vary in response to the abundance of the other species. We use a simulation study to explore bias and choose an approach to model selection. We then use the model to investigate species interactions between endangered humpback chub (Gila cypha) and introduced rainbow trout (Oncorhynchus mykiss) in the Colorado River between 2009 and 2016. In particular, we test hypotheses about how two environmental factors (turbidity and temperature), intraspecific density dependence, and rainbow trout abundance are related to survival, growth, and capture of juvenile humpback chub. We also project the long‐term effects of different rainbow trout abundances on adult humpback chub abundances. Our simulation study suggests this approach has minimal bias under potentially challenging circumstances (i.e., low capture probabilities) that characterized our application and that model selection using indicator variables could reliably identify the true generating model even when process error was high. When the model was applied to rainbow trout and humpback chub, we identified negative relationships between rainbow trout abundance and the survival, growth, and capture probability of juvenile humpback chub. Effects of interspecific interactions on survival and capture probability were strongly …
Models and Inference for Multivariate Spatial Extremes
Vettori, Sabrina
2017-12-07
The development of flexible and interpretable statistical methods is necessary in order to provide appropriate risk assessment measures for extreme events and natural disasters. In this thesis, we address this challenge by contributing to the developing research field of Extreme-Value Theory. We initially study the performance of existing parametric and non-parametric estimators of extremal dependence for multivariate maxima. As the dimensionality increases, non-parametric estimators are more flexible than parametric methods but present some loss in efficiency that we quantify under various scenarios. We introduce a statistical tool which imposes the required shape constraints on non-parametric estimators in high dimensions, significantly improving their performance. Furthermore, by embedding the tree-based max-stable nested logistic distribution in the Bayesian framework, we develop a statistical algorithm that identifies the most likely tree structures representing the data's extremal dependence using the reversible jump Monte Carlo Markov Chain method. A mixture of these trees is then used for uncertainty assessment in prediction through Bayesian model averaging. The computational complexity of full likelihood inference is significantly decreased by deriving a recursive formula for the nested logistic model likelihood. The algorithm performance is verified through simulation experiments which also compare different likelihood procedures. Finally, we extend the nested logistic representation to the spatial framework in order to jointly model multivariate variables collected across a spatial region. This situation emerges often in environmental applications but is not often considered in the current literature. Simulation experiments show that the new class of multivariate max-stable processes is able to detect both the cross and inner spatial dependence of a number of extreme variables at a relatively low computational cost, thanks to its Bayesian hierarchical …
Strong Coupling Corrections in Quantum Thermodynamics
Perarnau-Llobet, M.; Wilming, H.; Riera, A.; Gallego, R.; Eisert, J.
2018-03-01
Quantum systems strongly coupled to many-body systems equilibrate to the reduced state of a global thermal state, deviating from the local thermal state of the system as it occurs in the weak-coupling limit. Taking this insight as a starting point, we study the thermodynamics of systems strongly coupled to thermal baths. First, we provide strong-coupling corrections to the second law applicable to general systems in three of its different readings: as a statement of maximal extractable work, as a statement on heat dissipation, and as a bound on the Carnot efficiency. These corrections become relevant for small quantum systems and vanish in first order in the interaction strength. We then move to the question of power of heat engines, obtaining a bound on the power enhancement due to strong coupling. Our results are exemplified on the paradigmatic non-Markovian quantum Brownian motion.
Finding quantum effects in strong classical potentials
Hegelich, B. Manuel; Labun, Lance; Labun, Ou Z.
2017-06-01
The long-standing challenge to describing charged particle dynamics in strong classical electromagnetic fields is how to incorporate classical radiation, classical radiation reaction and quantized photon emission into a consistent unified framework. The current, semiclassical methods to describe the dynamics of quantum particles in strong classical fields also provide the theoretical framework for fundamental questions in gravity and hadron-hadron collisions, including Hawking radiation, cosmological particle production and thermalization of particles created in heavy-ion collisions. However, as we show, these methods break down for highly relativistic particles propagating in strong fields. They must therefore be improved and adapted for the description of laser-plasma experiments that typically involve the acceleration of electrons. Theory developed from quantum electrodynamics, together with dedicated experimental efforts, offer the best controllable context to establish a robust, experimentally validated foundation for the fundamental theory of quantum effects in strong classical potentials.
The Charm and Beauty of Strong Interactions
El-Bennich, Bruno
2018-01-01
We briefly review common features and overlapping issues in hadron and flavor physics focussing on continuum QCD approaches to heavy bound states, their mass spectrum and weak decay constants in different strong interaction models.
Atomic ionization by strong coherent radiation
International Nuclear Information System (INIS)
Brandi, H.S.; Davidovich, L.
1979-07-01
The relation among the three most frequently used non-perturbative methods proposed to study the ionization of atoms by strong electromagnetic fields is established. Their range of validity is also determined. (Author) [pt
Perturbation of an exact strong gravity solution
International Nuclear Information System (INIS)
Baran, S.A.
1982-10-01
Perturbations of an exact strong gravity solution are investigated. It is shown, by using the new multipole expansions previously presented, that this exact and static spherically symmetric solution is stable under odd parity perturbations. (author)
Strong-force theorists scoop Nobel Prize
Durrani, Matin
2004-01-01
Three US theorists have shared the 2004 Nobel Prize in Physics "for the discovery of asymptotic freedom in the theory of the strong interaction". Their theoretical work explains why quarks behave almost as free particles at high energies (½ page)
Calculating hadronic properties in strong QCD
International Nuclear Information System (INIS)
Pennington, M.R.
1996-01-01
This talk gives a brief review of the progress that has been made in calculating the properties of hadrons in strong QCD. In keeping with this meeting I will concentrate on those properties that can be studied with electromagnetic probes. Though perturbative QCD is highly successful, it only applies in a limited kinematic regime, where hard scatterings occur and the quarks move in the interaction region as if they are free, pointlike objects. However, the bulk of strong interactions are governed by the long distance regime, where the strong interaction is strong. It is this regime, of length scales of the order of a fermi, that determines the spectrum of light hadrons and their properties. The calculation of these properties requires an understanding of non-perturbative QCD, of confinement and chiral symmetry breaking. (author)
Building strong brands – does it matter?
Aure, Kristin Gaaseide; Nervik, Kristine Dybvik
2014-01-01
Brand equity has proven, through several decades of research, to be a primary source of competitive advantage and future earnings (Yoo & Donthu, 2001). Building strong brands has therefore become a priority for many organizations, with the presumption that building strong brands yields these advantages (Yasin et al., 2007). A quantitative survey was conducted at Sunnmøre in Norway in order to answer the two developed research questions. - Does the brand equity dimensions; brand...
Algebra of strong and electroweak interactions
International Nuclear Information System (INIS)
Bolokhov, S.V.; Vladimirov, Yu.S.
2004-01-01
The algebraic approach to describing the electroweak and strong interactions is considered within the framework of binary geometrophysics, based on the principles of the Fokker-Feynman direct interparticle interaction theories, the Kaluza-Klein multidimensional geometrical models, and the physical structures theory. It is shown that in this approach the electroweak and strong interactions of elementary particles, mediated by the intermediate vector bosons, are characterized by the subtypes of the algebraic classification of the complex 3 x 3 matrices [ru]
Manipulating light with strongly modulated photonic crystals
International Nuclear Information System (INIS)
Notomi, Masaya
2010-01-01
Recently, strongly modulated photonic crystals, fabricated by the state-of-the-art semiconductor nanofabrication process, have realized various novel optical properties. This paper describes the way in which they differ from other optical media, and clarifies what they can do. In particular, three important issues are considered: light confinement, frequency dispersion and spatial dispersion. First, I describe the latest status and impact of ultra-strong light confinement in a wavelength-cubic volume achieved in photonic crystals. Second, the extreme reduction in the speed of light is reported, which was achieved as a result of frequency dispersion management. Third, strange negative refraction in photonic crystals is introduced, which results from their unique spatial dispersion, and it is clarified how this leads to perfect imaging. The last two sections are devoted to applications of these novel properties. First, I report the fact that strong light confinement and huge light-matter interaction enhancement make strongly modulated photonic crystals promising for on-chip all-optical processing, and present several examples including all-optical switches/memories and optical logics. As a second application, it is shown that the strong light confinement and slow light in strongly modulated photonic crystals enable the adiabatic tuning of light, which leads to various novel ways of controlling light, such as adiabatic frequency conversion, efficient optomechanics systems, photon memories and photons pinning.
Efficient design and inference in distributed Bayesian networks: an overview
de Oude, P.; Groen, F.C.A.; Pavlin, G.; Bezhanishvili, N.; Löbner, S.; Schwabe, K.; Spada, L.
2011-01-01
This paper discusses an approach to distributed Bayesian modeling and inference, which is relevant for an important class of contemporary real world situation assessment applications. By explicitly considering the locality of causal relations, the presented approach (i) supports coherent distributed
Safety Analysis versus Type Inference with Partial Types
DEFF Research Database (Denmark)
Schwartzbach, Michael Ignatieff; Palsberg, Jens
1992-01-01
Safety analysis is an algorithm for determining if a term in an untyped lambda calculus with constants is safe, i.e., if it does not cause an error during evaluation. This ambition is also shared by algorithms for type inference. Safety analysis and type inference are based on rather different perspectives, however. Safety analysis is global in that it can only analyze a complete program. In contrast, type inference is local in that it can analyze pieces of a program in isolation. In this paper we prove that safety analysis is sound, relative to both a strict and a lazy operational semantics. We also prove that safety analysis accepts strictly more safe lambda terms than does type inference for simple types. The latter result demonstrates that global program analysis can be more precise than local ones.
Statistical Inference for a Class of Multivariate Negative Binomial Distributions
DEFF Research Database (Denmark)
Rubak, Ege H.; Møller, Jesper; McCullagh, Peter
This paper considers statistical inference procedures for a class of models for positively correlated count variables called α-permanental random fields, which can be viewed as a family of multivariate negative binomial distributions. Their appealing probabilistic properties have earlier been...
ESPRIT: Exercise Sensing and Pose Recovery Inference Tool, Phase I
National Aeronautics and Space Administration — We propose to develop ESPRIT: an Exercise Sensing and Pose Recovery Inference Tool, in support of NASA's effort in developing crew exercise technologies for...
The Human Cochlear Mechanical Nonlinearity Inferred via Psychometric Functions
Directory of Open Access Journals (Sweden)
Nizami Lance
2013-12-01
Extension of the model of Schairer and colleagues results in credible cochlear nonlinearities in man, suggesting that forward-masking provides a non-invasive way to infer the human mechanical cochlear nonlinearity.
Toward Security Verification against Inference Attacks on Data Trees
Directory of Open Access Journals (Sweden)
Ryo Iwase
2013-11-01
Full Text Available This paper describes our ongoing work on security verification against inference attacks on data trees. We focus on infinite secrecy against inference attacks, meaning that attackers cannot narrow down the candidates for the value of the sensitive information to a finite set using the information available to them. Our purpose is to propose a model under which infinite secrecy is decidable. Specifically, we first propose tree transducers which are expressive enough to represent practical queries. Then, in order to represent attackers' knowledge, we propose data tree types such that type inference and inverse type inference on those tree transducers are possible with respect to data tree types, and the infiniteness of data tree types is decidable.
Inferred hybridisation, sympatry and movements of Chorister Robin ...
African Journals Online (AJOL)
Chat C. natalensis have previously only been recorded from the Eastern Cape province, South Africa. We extend the occurrence of inferred hybrids with ringed and photographed examples from KwaZulu-Natal and Limpopo provinces.
Automated Flight Safety Inference Engine (AFSIE) System, Phase I
National Aeronautics and Space Administration — We propose to develop an innovative Autonomous Flight Safety Inference Engine (AFSIE) system to autonomously and reliably terminate the flight of an errant launch...
Automated Flight Safety Inference Engine (AFSIE) System Project
National Aeronautics and Space Administration — We propose to develop an innovative Autonomous Flight Safety Inference Engine (AFSIE) system to autonomously and reliably terminate the flight of an errant launch...
A general Bayes Weibull inference model for accelerated life testing
International Nuclear Information System (INIS)
Dorp, J. Rene van; Mazzuchi, Thomas A.
2005-01-01
This article presents the development of a general Bayes inference model for accelerated life testing. The failure times at a constant stress level are assumed to belong to a Weibull distribution, but strict adherence to a parametric time-transformation function is not required. Rather, prior information is used to indirectly define a multivariate prior distribution for the scale parameters at the various stress levels and the common shape parameter. Using the approach, Bayes point estimates as well as probability statements for use-stress (and accelerated) life parameters may be inferred from a host of testing scenarios. The inference procedure accommodates both the interval data sampling strategy and the type I censored sampling strategy for the collection of ALT test data. The inference procedure uses the well-known MCMC (Markov chain Monte Carlo) methods to derive posterior approximations. The approach is illustrated with an example.
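The posterior approximation step can be illustrated with a minimal random-walk Metropolis sampler for Weibull failure times under type I censoring. This is a sketch under simplified assumptions (a single stress level, invented data, and illustrative priors), not the paper's full multi-stress formulation:

```python
# Minimal sketch: Bayesian inference for Weibull failure times with
# type I censoring via random-walk Metropolis (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(0)

t_cens = 100.0                                    # test truncation time
times = np.array([23., 41., 56., 62., 70., 88., 95.])  # observed failures
n_censored = 3                                    # units still running

def log_post(log_shape, log_scale):
    k, lam = np.exp(log_shape), np.exp(log_scale)
    ll = np.sum(np.log(k) - k * np.log(lam)
                + (k - 1) * np.log(times) - (times / lam) ** k)
    ll += -n_censored * (t_cens / lam) ** k       # censored contribution
    # Weakly informative normal priors on log-parameters (assumption).
    return ll - 0.5 * (log_shape ** 2 / 4.0 + (log_scale - 4.0) ** 2 / 4.0)

theta = np.array([0.0, 4.0])                      # initial (log k, log lam)
lp, samples = log_post(*theta), []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis accept step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.exp(np.array(samples[5000:]))           # discard burn-in
print("posterior mean shape, scale:", post.mean(axis=0))
```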
Inference method using bayesian network for diagnosis of pulmonary nodules
International Nuclear Information System (INIS)
Kawagishi, Masami; Iizuka, Yoshio; Yamamoto, Hiroyuki; Yakami, Masahiro; Kubo, Takeshi; Fujimoto, Koji; Togashi, Kaori
2010-01-01
This report describes improvements to a naive Bayes model that infers the diagnosis of pulmonary nodules in chest CT images from the findings obtained when a radiologist interprets the CT images. We have previously introduced an inference model using a naive Bayes classifier and have reported its clinical value based on an evaluation using clinical data. In the present report, we introduce the following improvements to the original inference model: the selection of findings based on correlations and the generation of a model using only these findings, and the introduction of classifiers that integrate several simple classifiers, each of which is specialized for a specific diagnosis. These improvements were found to increase the inference accuracy by 10.4% (p<.01) as compared to the original model in 100 cases (222 nodules) based on leave-one-out evaluation. (author)
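The underlying classifier is easy to sketch. Below is a minimal naive Bayes model over binary radiologic findings with Laplace smoothing; the findings, labels, and data are hypothetical placeholders, and the paper's correlation-based finding selection and classifier integration are not reproduced:

```python
import numpy as np

# Toy finding matrix: rows = nodules, columns = binary findings
# (e.g. spiculation, calcification, cavitation) -- hypothetical data.
X = np.array([[1,0,0],[1,1,0],[0,0,1],[0,1,1],[1,0,1],[0,0,0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = malignant, 0 = benign (toy labels)

def fit(X, y):
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    # P(finding present | class), with Laplace smoothing.
    cond = np.array([(X[y == c].sum(0) + 1) / ((y == c).sum() + 2)
                     for c in classes])
    return classes, priors, cond

def predict_proba(x, classes, priors, cond):
    # Naive Bayes: multiply per-finding likelihoods, then normalize.
    like = np.prod(np.where(x == 1, cond, 1 - cond), axis=1)
    post = priors * like
    return post / post.sum()

model = fit(X, y)
print(predict_proba(np.array([1, 0, 0]), *model))
```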
Z Number Based Fuzzy Inference System for Dynamic Plant Control
Directory of Open Access Journals (Sweden)
Rahib H. Abiyev
2016-01-01
Full Text Available The reliabilities of the linguistic values of the variables in the rule base frequently become important in the modeling of fuzzy systems. When the reliability degrees of the fuzzy values of the rule variables are taken into consideration, the design of the inference mechanism acquires importance. For this purpose, Z number based fuzzy rules that include constraint and reliability degrees of information are constructed. Fuzzy rule interpolation is presented for designing an inference engine of the fuzzy rule-based system. The mathematical background of the fuzzy inference system based on the interpolative mechanism is developed. Based on the interpolative inference process, a Z number based fuzzy controller for the control of a dynamic plant has been designed. The transient response characteristic of the designed controller is compared with that of a conventional fuzzy controller. The comparative results demonstrate the suitability of the designed system for the control of dynamic plants.
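A minimal sketch of the core idea follows, reducing each Z-number (A, B) to a fuzzy constraint A weighted by a scalar reliability degree B, with Sugeno-style weighted-average defuzzification. The rule base and membership functions are illustrative assumptions, and the paper's interpolation mechanism is not reproduced:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

# Rules: (error membership params, consequent value, reliability degree B).
rules = [
    ((-2.0, -1.0, 0.0), -0.8, 0.9),   # "error negative -> output negative"
    ((-1.0,  0.0, 1.0),  0.0, 1.0),   # "error zero     -> output zero"
    (( 0.0,  1.0, 2.0),  0.8, 0.7),   # "error positive -> output positive"
]

def infer(error):
    # Firing strength of each rule, scaled by its reliability degree.
    w = np.array([b * tri(error, *mf) for mf, _, b in rules])
    z = np.array([c for _, c, _ in rules])
    return float(np.dot(w, z) / w.sum()) if w.sum() > 0 else 0.0

for e in (-1.5, -0.3, 0.6):
    print(e, "->", infer(e))
```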
Strong Motion Earthquake Data Values of Digitized Strong-Motion Accelerograms, 1933-1994
National Oceanic and Atmospheric Administration, Department of Commerce — The Strong Motion Earthquake Data Values of Digitized Strong-Motion Accelerograms is a database of over 15,000 digitized and processed accelerograph records from...
Statistical inferences for bearings life using sudden death test
Directory of Open Access Journals (Sweden)
Morariu Cristin-Olimpiu
2017-01-01
Full Text Available In this paper we propose a calculation method for estimating reliability indicators and complete statistical inference for the three-parameter Weibull distribution of bearing life. Using experimental values for the durability of bearings tested on stands by sudden death tests involves a series of particularities in maximum likelihood estimation and in carrying out the statistical inference. The paper details these features and also provides an example calculation.
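For the complete-sample case, the three-parameter Weibull fit can be sketched with scipy. Note that scipy's plain fit does not model the sudden-death censoring treated in the paper, so this only illustrates the estimation step on simulated lives:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated bearing lives (hours) from a three-parameter Weibull:
# shape c, location (guaranteed minimum life) loc, and scale.
data = stats.weibull_min.rvs(c=2.2, loc=15.0, scale=80.0,
                             size=60, random_state=rng)

c, loc, scale = stats.weibull_min.fit(data)    # maximum likelihood fit
print(f"shape={c:.2f}, location={loc:.1f}, scale={scale:.1f}")

# L10 life: the time by which 10% of bearings are expected to fail.
print("L10 =", stats.weibull_min.ppf(0.10, c, loc, scale))
```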
Completion is an Instance of Abstract Canonical System Inference
Burel , Guillaume; Kirchner , Claude
2006-01-01
http://www.springerlink.com/content/u222753gl333221p/
Abstract canonical systems and inference (ACSI) were introduced to formalize the intuitive notions of good proof and good inference appearing typically in first-order logic or in Knuth-Bendix-like completion procedures. Since this abstract framework is intended to be generic, it is of fundamental interest to show its adequacy to represent the main systems of interest. This has been done for ground completion (where all equational axioms a...
Inference in "poor" languages
Energy Technology Data Exchange (ETDEWEB)
Petrov, S. [Oak Ridge National Lab., TN (United States)
1996-12-31
Languages with a solvable implication problem but without complete and consistent systems of inference rules ("poor" languages) are considered. The problem of the existence of a finite, complete, and consistent inference rule system for a "poor" language is stated independently of the language or the rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.
Inferring Team Strengths Using a Discrete Markov Random Field
Zech, John; Wood, Frank
2013-01-01
We propose an original model for inferring team strengths using a Markov Random Field, which can be used to generate historical estimates of the offensive and defensive strengths of a team over time. This model was designed to be applied to sports such as soccer or hockey, in which contest outcomes take value in a limited discrete space. We perform inference using a combination of Expectation Maximization and Loopy Belief Propagation. The challenges of working with a non-convex optimization p...
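A deliberately simplified stand-in conveys the idea of offensive and defensive strengths: a Poisson scoring model fit by gradient ascent instead of the paper's Markov Random Field with EM and loopy belief propagation. The toy scores below are invented:

```python
import numpy as np

# Toy score data: (team i, team j, goals by i, goals by j) -- invented.
games = [(0, 1, 3, 1), (1, 2, 0, 2), (2, 0, 1, 1), (0, 2, 2, 0), (1, 0, 1, 2)]
n = 3

# Simplified model: expected goals lambda = exp(attack_i - defense_j),
# fit by gradient ascent on the Poisson log-likelihood.
att, dfn, lr = np.zeros(n), np.zeros(n), 0.05
for _ in range(2000):
    g_att, g_dfn = np.zeros(n), np.zeros(n)
    for i, j, gi, gj in games:
        lam_i = np.exp(att[i] - dfn[j])     # expected goals for team i
        lam_j = np.exp(att[j] - dfn[i])
        g_att[i] += gi - lam_i; g_dfn[j] -= gi - lam_i
        g_att[j] += gj - lam_j; g_dfn[i] -= gj - lam_j
    att += lr * g_att
    dfn += lr * g_dfn
    att -= att.mean()                       # pin the overall scale

print("attack: ", att.round(2))
print("defense:", dfn.round(2))
```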
Inferring Past Effective Population Size from Distributions of Coalescent Times
Gattepaille, Lucie; Günther, Torsten; Jakobsson, Mattias
2016-01-01
Inferring and understanding changes in effective population size over time is a major challenge for population genetics. Here we investigate some theoretical properties of random-mating populations with varying size over time. In particular, we present an exact solution to compute the population size as a function of time, N_e(t), based on distributions of coalescent times of samples of any size. This result reduces the problem of population size inference to a problem of estimating coale...
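The basic relation the method builds on is that, with k lineages in a population of constant size N, the coalescent waiting time T_k is exponential with mean 2N/(k(k-1)) generations. A toy moment estimator (constant N rather than the paper's exact time-varying solution; all values invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N_true, reps = 10, 1000.0, 5000

# Simulate coalescent waiting times T_k (k lineages -> k-1) for a
# constant-size population; the rate is k(k-1)/(2N) per generation.
T = np.empty((reps, n - 1))
for r in range(reps):
    for idx, k in enumerate(range(n, 1, -1)):
        T[r, idx] = rng.exponential(2 * N_true / (k * (k - 1)))

# Moment estimator: E[T_k] = 2N / (k(k-1)), so N ~ mean(T_k) * k(k-1)/2.
for idx, k in enumerate(range(n, 1, -1)):
    print(f"k={k:2d}  N_hat = {T[:, idx].mean() * k * (k - 1) / 2:7.1f}")
```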
On Principles of Software Engineering -- Role of the Inductive Inference
Directory of Open Access Journals (Sweden)
Ladislav Samuelis
2012-01-01
Full Text Available This paper highlights the role of the inductive inference principle in software engineering. It takes up the challenge of settling differences and confronting the ideas behind common software engineering concepts. We focus on the role of the inductive inference mechanism behind automatic program construction activities and software evolution. We believe that revisiting rather old ideas in the new context of software engineering could enhance our endeavour, and that is why it deserves more attention.
The 'Puzzles' methodology: en route to Indirect Inference?
Le, Vo Phuong Mai; Minford, Patrick; Wickens, Michael
2009-01-01
We review the methods used in many papers to evaluate DSGE models by comparing their simulated moments with data moments. We compare these with the method of Indirect Inference to which they are closely related. We illustrate the comparison with contrasting assessments of a two-country model in two recent papers. We conclude that Indirect Inference is the proper end point of the puzzles methodology.
Towards Bayesian Inference of the Fast-Ion Distribution Function
DEFF Research Database (Denmark)
Stagner, L.; Heidbrink, W.W.; Salewski, Mirko
2012-01-01
... However, when theory and experiment disagree (for one or more diagnostics), it is unclear how to proceed. Bayesian statistics provides a framework to infer the DF, quantify errors, and reconcile discrepant diagnostic measurements. Diagnostic errors and "weight functions" that describe the phase-space sensitivity of the measurements are incorporated into Bayesian likelihood probabilities, while prior probabilities enforce physical constraints. As an initial step, this poster uses Bayesian statistics to infer the DIII-D electron density profile from multiple diagnostic measurements. Likelihood functions...
The extended reciprocity: Strong belief outperforms persistence.
Kurokawa, Shun
2017-05-21
The existence of cooperation is a mysterious phenomenon and demands explanation, and direct reciprocity is one key potential explanation for the evolution of cooperation. Direct reciprocity allows cooperation to evolve for cooperators who switch their behavior on the basis of information about the opponent's behavior. Relevant to direct reciprocity is the problem of information deficiency: when the opponent's last move is unknown, how should players behave? One possibility is to choose cooperation with some default probability without using any further information. In fact, our previous paper (Kurokawa, 2016a) examined this strategy. However, there might be beneficial information other than the opponent's last move. A subsequent study of ours (Kurokawa, 2017) examined the strategy which uses the player's own last move when the opponent's last move is unknown, and revealed that referring to one's own move and trying to imitate it when information is absent is beneficial. Is there any other beneficial information? What about strong belief (i.e., having infinite memory and believing that the opponent's behavior is unchanged)? Here, we examine the evolution of strategies with strong belief. Analyzing the repeated prisoner's dilemma game and using evolutionarily stable strategy (ESS) analysis against an invasion by unconditional defectors, we find that the strategy with strong belief is more likely to evolve than the strategy which does not use information other than the opponent's last move, and more likely to evolve than the strategy which uses not only the opponent's last move but also the player's own last move. Strong belief produces the extended reciprocity and facilitates the evolution of cooperation. Additionally, we consider the two-strategies game between strategies with strong belief and any strategy, and we consider the four-strategies game in which unconditional cooperators, unconditional defectors, pessimistic reciprocators with strong belief, and optimistic reciprocators with
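The flavor of the comparison can be conveyed by a small Monte Carlo sketch (not the paper's analytical ESS treatment): a reciprocator that keeps believing its last observation of the opponent when the current move is unobserved, played against itself and against an unconditional defector. Payoffs, observation probability, and strategies here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
R, S, T, P = 3.0, 0.0, 5.0, 1.0      # standard prisoner's dilemma payoffs
p_obs, rounds = 0.6, 200             # opponent's move observed 60% of the time

def play(strat_a, strat_b):
    """Average per-round payoff to player A over one repeated game."""
    belief_a = belief_b = True        # start out believing in cooperation
    total = 0.0
    for _ in range(rounds):
        move_a, move_b = strat_a(belief_a), strat_b(belief_b)
        total += (R if move_a and move_b else
                  S if move_a else T if move_b else P)
        # Strong belief: update the stored belief only when the move is
        # observed; otherwise keep believing the opponent is unchanged.
        if rng.random() < p_obs: belief_a = move_b
        if rng.random() < p_obs: belief_b = move_a
    return total / rounds

tft_strong = lambda belief: belief    # reciprocate the believed move
alld = lambda belief: False           # unconditional defector

print("strong-belief reciprocator vs itself:", play(tft_strong, tft_strong))
print("strong-belief reciprocator vs ALLD:  ", play(tft_strong, alld))
```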
Fused Regression for Multi-source Gene Regulatory Network Inference.
Directory of Open Access Journals (Sweden)
Kari Y Lam
2016-12-01
Full Text Available Understanding gene regulatory networks is critical to understanding cellular differentiation and response to external stimuli. Methods for global network inference have been developed and applied to a variety of species. Most approaches consider the problem of network inference independently in each species, despite evidence that gene regulation can be conserved even in distantly related species. Further, network inference is often confined to single data-types (single platforms and single cell types). We introduce a method for multi-source network inference that allows simultaneous estimation of gene regulatory networks in multiple species or biological processes through the introduction of priors based on known gene relationships such as orthology, incorporated using fused regression. This approach improves network inference performance even when orthology mapping and conservation are incomplete. We refine this method by presenting an algorithm that extracts the true conserved subnetwork from a larger set of potentially conserved interactions and demonstrate the utility of our method in cross-species network inference. Last, we demonstrate our method's utility in learning from data collected on different experimental platforms.
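A minimal sketch of the fusion idea: two regression problems (say, the same target gene in two species) tied together by a quadratic penalty on the difference between orthologous coefficients, solved by gradient descent. The data are simulated, the penalty form is an assumption, and the paper's conserved-subnetwork extraction is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy data: expression of 5 regulators (X) and one target (y) per species.
X1, X2 = rng.standard_normal((40, 5)), rng.standard_normal((30, 5))
w_true = np.array([1.0, -0.5, 0.0, 0.0, 0.8])
y1 = X1 @ w_true + 0.1 * rng.standard_normal(40)
y2 = X2 @ (w_true + 0.1) + 0.1 * rng.standard_normal(30)  # slight divergence

lam_ridge, lam_fuse, lr = 0.1, 1.0, 0.01
w1, w2 = np.zeros(5), np.zeros(5)
for _ in range(3000):
    # Gradients of least squares + ridge + fusion penalty ||w1 - w2||^2.
    g1 = X1.T @ (X1 @ w1 - y1) / len(y1) + lam_ridge * w1 + lam_fuse * (w1 - w2)
    g2 = X2.T @ (X2 @ w2 - y2) / len(y2) + lam_ridge * w2 + lam_fuse * (w2 - w1)
    w1, w2 = w1 - lr * g1, w2 - lr * g2

print("species 1 weights:", w1.round(2))
print("species 2 weights:", w2.round(2))
```

The fusion term pulls the two coefficient vectors toward each other, so evidence in one species regularizes the network learned in the other, which is the mechanism the abstract describes for incomplete orthology.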
Inferring parental genomic ancestries using pooled semi-Markov processes.
Zou, James Y; Halperin, Eran; Burchard, Esteban; Sankararaman, Sriram
2015-06-15
A basic problem of broad public and scientific interest is to use the DNA of an individual to infer the genomic ancestries of the parents. In particular, we are often interested in the fraction of each parent's genome that comes from specific ancestries (e.g. European, African, Native American, etc). This has many applications ranging from understanding the inheritance of ancestry-related risks and traits to quantifying human assortative mating patterns. We model the problem of parental genomic ancestry inference as a pooled semi-Markov process. We develop a general mathematical framework for pooled semi-Markov processes and construct efficient inference algorithms for these models. Applying our inference algorithm to genotype data from 231 Mexican trios and 258 Puerto Rican trios where we have the true genomic ancestry of each parent, we demonstrate that our method accurately infers parameters of the semi-Markov processes and parents' genomic ancestries. We additionally validated the method on simulations. Our model of pooled semi-Markov process and inference algorithms may be of independent interest in other settings in genomics and machine learning. © The Author 2015. Published by Oxford University Press.
Automatic physical inference with information maximizing neural networks
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
Compressing large data sets to a manageable number of summaries that are informative about the underlying parameters vastly simplifies both frequentist and Bayesian inference. When only simulations are available, these summaries are typically chosen heuristically, so they may inadvertently miss important information. We introduce a simulation-based machine learning technique that trains artificial neural networks to find nonlinear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). In test cases where the posterior can be derived exactly, likelihood-free inference based on automatically derived IMNN summaries produces nearly exact posteriors, showing that these summaries are good approximations to sufficient statistics. In a series of numerical examples of increasing complexity and astrophysical relevance we show that IMNNs are robustly capable of automatically finding optimal, nonlinear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima. We anticipate that the automatic physical inference method described in this paper will be essential to obtain both accurate and precise cosmological parameter estimates from complex and large astronomical data sets, including those from LSST and Euclid.
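The Fisher-information criterion that IMNNs maximize can be demonstrated on the paper's first test case, inferring the variance of a Gaussian signal, without training a network: compare the information carried by a linear summary against a quadratic one. All parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
theta, dtheta, n_sims, n_pix = 1.0, 0.05, 20000, 10   # fiducial variance

def fisher_of_summary(summary):
    """Estimate the Fisher information of a scalar summary by simulation,
    F = (d mean(s)/d theta)^2 / var(s) -- the criterion IMNNs maximize."""
    d0 = rng.standard_normal((n_sims, n_pix)) * np.sqrt(theta)
    dp = rng.standard_normal((n_sims, n_pix)) * np.sqrt(theta + dtheta)
    dm = rng.standard_normal((n_sims, n_pix)) * np.sqrt(theta - dtheta)
    s0, sp, sm = summary(d0), summary(dp), summary(dm)
    dmu = (sp.mean() - sm.mean()) / (2 * dtheta)
    return dmu ** 2 / s0.var()

print("linear    mean(d)  : F =", fisher_of_summary(lambda d: d.mean(1)))
print("quadratic mean(d^2): F =", fisher_of_summary(lambda d: (d**2).mean(1)))
# The exact Fisher information for the variance of n_pix Gaussian pixels
# is n_pix / (2 theta^2) = 5 here: the quadratic summary approaches it,
# while the linear summary carries essentially no information.
```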
Directory of Open Access Journals (Sweden)
Matsen Frederick A
2012-05-01
Full Text Available Background: Although taxonomy is often used informally to evaluate the results of phylogenetic inference and to root phylogenetic trees, algorithmic methods to do so are lacking. Results: In this paper we formalize these procedures and develop algorithms to solve the relevant problems. In particular, we introduce a new algorithm that solves a "subcoloring" problem to express the difference between a taxonomy and a phylogeny at a given rank. This algorithm improves upon the current best algorithm in terms of asymptotic complexity for the parameter regime of interest; we also describe a branch-and-bound algorithm that saves orders of magnitude in computation on real data sets. We also develop a formalism and an algorithm for rooting phylogenetic trees according to a taxonomy. Conclusions: The algorithms in this paper, and the associated freely-available software, will help biologists better use and understand taxonomically labeled phylogenetic trees.
Inferences of the deep solar meridional flow
Böning, Vincent G. A.
2017-10-01
Understanding the solar meridional flow is important for uncovering the origin of the solar activity cycle. Yet, recent helioseismic estimates of this flow have come to conflicting conclusions in deeper layers of the solar interior, i.e., at depths below about 0.9 solar radii. The aim of this thesis is to contribute to a better understanding of the deep solar meridional flow. Time-distance helioseismology is the major method for investigating this flow. In this method, travel times of waves propagating between pairs of locations on the solar surface are measured. Until now, the travel-time measurements have been modeled using the ray approximation, which assumes that waves travel along infinitely thin ray paths between these locations. In contrast, the scattering of the full wave field in the solar interior due to the flow is modeled to first order by the Born approximation. It is in general a more accurate model of the physics in the solar interior. In a first step, an existing model for calculating the sensitivity of travel-time measurements to solar interior flows using the Born approximation is extended from Cartesian to spherical geometry. The results are successfully compared to the Cartesian ones and are tested for self-consistency. In a second step, the newly developed model is validated using an existing numerical simulation of linear wave propagation in the Sun. An inversion of artificial travel times for meridional flow shows excellent agreement for noiseless data and reproduces many features in the input flow profile in the case of noisy data. Finally, the new method is used to infer the deep meridional flow. I used Global Oscillation Network Group (GONG) data that were earlier analyzed using the ray approximation and I employed the same Subtractive Optimized Local Averaging (SOLA) inversion technique as in the earlier study. Using an existing formula for the covariance of travel-time measurements, it is shown that the assumption of uncorrelated errors
A strongly coupled quark-gluon plasma
Energy Technology Data Exchange (ETDEWEB)
Shuryak, Edward [Department of Physics and Astronomy, University at Stony Brook, NY 11794 (United States)
2004-08-01
Successful description of robust collective flow phenomena at RHIC by ideal hydrodynamics, recent observations of bound c̄c and q̄q states on the lattice, and other theoretical developments indicate that the QGP produced at RHIC, and probably in a wider temperature region T_c < T < 4T_c, is not a weakly coupled quasiparticle gas as believed previously. We discuss how strong the interaction is and why it seems to generate hundreds of binary channels with bound states surviving well inside the QGP phase. In particular, we discuss their effect on pressure and viscosity. We conclude by reviewing similar phenomena in other 'strongly coupled systems', such as (i) strongly coupled supersymmetric theories studied via Maldacena duality, and (ii) trapped ultra-cold atoms with very large scattering length, tuned to Feshbach resonances.
Strong Coupling between Plasmons and Organic Semiconductors
Directory of Open Access Journals (Sweden)
Joel Bellessa
2014-05-01
Full Text Available In this paper we describe the properties of organic materials in strong coupling with plasmons, mainly based on our work in this field of research. Strong coupling modifies the optical transitions of the structure, and occurs when the interaction between molecules and plasmon prevails over the damping of the system. We describe the dispersion relations of different plasmonic systems, delocalized and localized plasmons, coupled to aggregated dyes, and the typical properties of these systems in strong coupling. The modification of the dye emission is also studied. In the second part, the effect of the microscopic structure of the organics, which can be seen as a disordered film, is described. As the different molecules couple to the same plasmon mode, an extended coherent state over several microns is observed.
A theory of the strong interactions
International Nuclear Information System (INIS)
Gross, D.J.
1979-01-01
The most promising candidate for a fundamental microscopic theory of the strong interactions is a gauge theory of colored quarks-Quantum Chromodynamics (QCD). There are many excellent reasons for believing in this theory. It embodies the broken symmetries, SU(3) and chiral SU(3)xSU(3), of the strong interactions and reflects the success of (albeit crude) quark models in explaining the spectrum of the observed hadrons. The hidden quantum number of color, necessary to account for the quantum numbers of the low lying hadrons, plays a fundamental role in this theory as the SU(3) color gauge vector 'gluons' are the mediators of the strong interactions. The absence of physical quark states can be 'explained' by the hypothesis of color confinement i.e. that hadrons are permanently bound in color singlet bound states. Finally this theory is unique in being asymptotically free, thus accounting for the almost free field theory behavior of quarks observed at short distances. (Auth.)
Electromagnetic processes in strong crystalline fields
2007-01-01
We propose a number of new investigations on aspects of radiation from high energy electron and positron beams (10-300 GeV) in single crystals and amorphous targets. The common heading is radiation emission by electrons and positrons in strong electromagnetic fields, but as the setup is quite versatile, other related phenomena in radiation emission can be studied as well. The intent is to clarify the role of a number of important aspects of radiation in strong fields as e.g. observed in crystals. We propose to measure trident 'Klein-like' production in strong crystalline fields, 'crystalline undulator' radiation, 'sandwich' target phenomena, LPM suppression of pair production as well as axial and planar effects in contributions of spin to the radiation.
Patterns of Strong Coupling for LHC Searches
Liu, Da; Rattazzi, Riccardo; Riva, Francesco
2016-11-23
Even though the Standard Model (SM) is weakly coupled at the Fermi scale, a new strong dynamics involving its degrees of freedom may conceivably lurk at slightly higher energies, in the multi TeV range. Approximate symmetries provide a structurally robust context where, within the low energy description, the dimensionless SM couplings are weak, while the new strong dynamics manifests itself exclusively through higher-derivative interactions. We present an exhaustive classification of such scenarios in the form of effective field theories, paying special attention to new classes of models where the strong dynamics involves, along with the Higgs boson, the SM gauge bosons and/or the fermions. The IR softness of the new dynamics suppresses its effects at LEP energies, but deviations are in principle detectable at the LHC, even at energies below the threshold for production of new states. Our construction provides the so far unique structurally robust context where to motivate several searches in Higgs physics, d...
Electronic Structure of Strongly Correlated Materials
Anisimov, Vladimir
2010-01-01
Electronic structure and physical properties of strongly correlated materials containing elements with partially filled 3d, 4d, 4f and 5f electronic shells are analyzed by Dynamical Mean-Field Theory (DMFT). DMFT is the most universal and effective tool used for the theoretical investigation of electronic states with strong correlation effects. In the present book the basics of the method are given and its application to various material classes is shown. The book is aimed at a broad readership: theoretical physicists and experimentalists studying strongly correlated systems. It also serves as a handbook for students and all those who want to become acquainted with the fast-developing field of condensed matter physics.
Aperture averaging in strong oceanic turbulence
Gökçe, Muhsin Caner; Baykal, Yahya
2018-04-01
The receiver aperture averaging technique is employed in underwater wireless optical communication (UWOC) systems to mitigate the effects of oceanic turbulence and thus improve system performance. The irradiance flux variance is a measure of the intensity fluctuations over a lens of the receiver aperture. Using the modified Rytov theory, which uses small-scale and large-scale spatial filters, and our previously presented expression for the atmospheric structure constant in terms of oceanic turbulence parameters, we evaluate the irradiance flux variance and the aperture averaging factor of a spherical wave in strong oceanic turbulence. Variations of the irradiance flux variance are examined versus the oceanic turbulence parameters and the receiver aperture diameter. The effect of the receiver aperture diameter on the aperture averaging factor is also presented for strong oceanic turbulence.
Eccentric binaries of compact objects in strong-field gravity
International Nuclear Information System (INIS)
Gold, Roman
2011-01-01
In this thesis we study the dynamics as well as the resulting gravitational radiation from eccentric binaries of compact objects in the non-linear regime of General Relativity. For this purpose we solve Einstein's field equation numerically in a 3+1 decomposition using the moving-puncture technique. We focus our study on very particular orbits, arising as a purely relativistic phenomenon of the two-body problem in General Relativity, which are associated with unstable circular orbits. They are governed by a fast, nearly circular revolution at a short distance followed by a slow, radial motion on a nearly elliptic trajectory. Due to the unique features of their orbital trajectories they are called zoom-whirl orbits. We analyze how the peculiar dynamics manifests itself in the emitted gravitational radiation and to which extent one can infer the orbital properties from observations of the gravitational waves. In the first part, we consider black hole binaries. We perform a comprehensive parameter study by varying the initial eccentricity, computing and characterizing the resulting gravitational waveforms. We address aspects, which can only be obtained from non-perturbative methods, and which are crucial to the astrophysical relevance of these orbits. In particular, our results imply a fairly low amount of fine-tuning necessary to spot zoom-whirl effects. We find whirl orbits for values of the eccentricities, which fall in disjunct intervals extending to rather low values. Furthermore, we show that whirl effects just before merger cause a signal with significant amplitude. In the second part, we investigate neutron star binaries on eccentric orbits in full General Relativity, which has not been studied so far. We explore their phenomenology and study the consequences for the matter after the neutron stars have merged. In these evolutions the merged neutron stars sooner or later collapse to form a black hole. During the collapse most of the matter is accreted on to the
An algebra-based method for inferring gene regulatory networks.
Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard
2014-03-26
The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process, prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the
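The flavor of the search can be sketched by scoring candidate Boolean update rules against a toy time series. This drastically shrinks the BPDS search space (one possibly-negated regulator per gene) and replaces the evolutionary algorithm with exhaustive search; the data are invented:

```python
# Toy time series for three genes (tuples = consecutive time points).
series = [(1, 0, 1), (0, 1, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
n = 3

# Candidate rules: gene(t+1) = regulator(t) or its negation -- a drastic
# simplification of the Boolean polynomial dynamical systems search space.
candidates = [(reg, neg) for reg in range(n) for neg in (False, True)]

def score(gene, reg, neg):
    """Count transitions explained by the rule gene(t+1) = [not] reg(t)."""
    hits = 0
    for prev, nxt in zip(series, series[1:]):
        pred = (not prev[reg]) if neg else bool(prev[reg])
        hits += pred == bool(nxt[gene])
    return hits

for gene in range(n):
    reg, neg = max(candidates, key=lambda rn: score(gene, *rn))
    print(f"gene {gene} <- {'not ' if neg else ''}gene {reg} "
          f"({score(gene, reg, neg)}/{len(series) - 1} transitions explained)")
```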
Experimental investigation of strong field trident production
Esberg, J; Knudsen, H; Thomsen, H D; Uggerhøj, E; Uggerhøj, U I; Sona, P; Mangiarotti, A; Ketel, T J; Dizdar, A; Dalton, M M; Ballestrero, S; Connell, S H
2010-01-01
We show by experiment that an electron impinging on an electric field that is of critical magnitude in its rest frame may produce an electron-positron pair. Our measurements address higher-order QED, using the strong electric fields obtainable along particular crystallographic directions in single crystals. For the amorphous material our data are in good agreement with theory, whereas a discrepancy with theory in the magnitude of the trident enhancement is found in the precisely aligned case, where the strong electric field acts.
Gluon scattering amplitudes at strong coupling
Energy Technology Data Exchange (ETDEWEB)
Alday, Luis F. [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, 3508 TD Utrecht (Netherlands); Maldacena, Juan [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States)
2007-06-15
We describe how to compute planar gluon scattering amplitudes at strong coupling in N = 4 super Yang-Mills by using the gauge/string duality. The computation boils down to finding a certain classical string configuration whose boundary conditions are determined by the gluon momenta. The results are infrared divergent. We introduce the gravity version of dimensional regularization to define finite quantities. The leading and subleading IR divergences are characterized by two functions of the coupling that we compute at strong coupling. We also compute the full finite form of the four-point amplitude and find agreement with a recent ansatz by Bern, Dixon and Smirnov.
Strong boundedness of analytic functions in tubes
Directory of Open Access Journals (Sweden)
Richard D. Carmichael
1979-01-01
Full Text Available Certain classes of analytic functions in tube domains T^C = ℝ^n + iC in n-dimensional complex space, where C is an open connected cone in ℝ^n, are studied. We show that the functions have a boundedness property in the strong topology of the space of tempered distributions S′. We further give a direct proof that each analytic function attains the Fourier transform of its spectral function as distributional boundary value in the strong (and weak) topology of S′.
Including virtual photons in strong interactions
International Nuclear Information System (INIS)
Rusetsky, A.
2003-01-01
In perturbative field-theoretical models we investigate the inclusion of electromagnetic interactions into the purely strong theory that describes hadronic processes. In particular, we study the convention for splitting electromagnetic and strong interactions and the ambiguity of such a splitting. The issue of the interpretation of the parameters of the low-energy effective field theory in the presence of electromagnetic interactions is addressed, as well as the scale and gauge dependence of the effective theory couplings. We hope that the results of these studies are relevant for the electromagnetic sector of ChPT. (orig.)
Thermodynamical instabilities under strong magnetic fields
Chen, Y. J.
2017-03-01
The thermodynamical instabilities at low densities in np matter and npe matter are studied within several relativistic nuclear models for various magnetic field strengths. The results are compared with each other, and the effects of the symmetry energy slope at saturation density on the instability are investigated. The instability regions can exhibit bands due to the presence of Landau levels for very strong magnetic fields of the order of 10^17 G, while for weaker magnetic fields the bands are replaced by many diffuse or scattered pieces. It is also shown that the proton fraction in the inner crust of neutron stars may be complex under strong magnetic fields.
Universal behavior of strongly correlated Fermi systems
Energy Technology Data Exchange (ETDEWEB)
Shaginyan, Vasilii R [B.P. Konstantinov St. Petersburg Institute of Nuclear Physics, Russian Academy of Sciences, Gatchina, Leningrad Region (Russian Federation)]; Amusia, M Ya [A.F. Ioffe Physico-Technical Institute, Russian Academy of Sciences, St. Petersburg (Russian Federation)]; Popov, Konstantin G [Komi Scientific Center, Ural Branch of the Russian Academy of Sciences, Syktyvkar (Russian Federation)]
2007-06-30
This review discusses the construction of a theory and the analysis of phenomena occurring in strongly correlated Fermi systems such as high-T_c superconductors, heavy-fermion metals, and quasi-two-dimensional Fermi systems. It is shown that the basic properties and the universal behavior of strongly correlated Fermi systems can be described in the framework of the Fermi-condensate quantum phase transition and the well-known Landau paradigm of quasiparticles and the order parameter. The concept of fermion condensation may be fruitful in studying neutron stars, finite Fermi systems, ultra-cold gases in traps, and quark plasma. (reviews of topical problems)
Analytical solution of strongly nonlinear Duffing oscillators
El-Naggar, A.M.; Ismail, G.M.
2016-01-01
In this paper, a new perturbation technique is employed to solve strongly nonlinear Duffing oscillators, in which a new parameter α = α(ε) is defined such that the value of α is always small regardless of the magnitude of the original parameter ε. Therefore, the strongly nonlinear Duffing oscillators with large parameter ε are transformed into a small-parameter system with respect to α. The approximate solution obtained by the present method is compared with the solution of the energy balance m...
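The paper's α-transformation is not reproduced here, but a worked check illustrates the regime it targets: for large ε the classical first-order frequency estimate ω ≈ sqrt(1 + 3εA²/4) for x'' + x + εx³ = 0 with amplitude A already deviates by a few percent from direct numerical integration, which is the gap strongly-nonlinear techniques aim to close:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps, A = 10.0, 1.0      # strongly nonlinear regime: large eps

def rhs(t, y):          # Duffing oscillator x'' + x + eps*x**3 = 0
    return [y[1], -y[0] - eps * y[0] ** 3]

sol = solve_ivp(rhs, (0, 20), [A, 0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0, 20, 200000)
x = sol.sol(t)[0]

# One downward zero crossing per period; average the spacings.
crossings = t[1:][(x[:-1] > 0) & (x[1:] <= 0)]
T_num = np.mean(np.diff(crossings))

print("numerical frequency :", 2 * np.pi / T_num)
print("first-order estimate:", np.sqrt(1 + 0.75 * eps * A ** 2))
```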
De Sitter vacua of strongly interacting QFT
Energy Technology Data Exchange (ETDEWEB)
Buchel, Alex [Department of Applied Mathematics, University of Western Ontario,London, Ontario N6A 5B7 (Canada); Department of Physics and Astronomy, University of Western Ontario,London, Ontario N6A 5B7 (Canada); Perimeter Institute for Theoretical Physics,Waterloo, Ontario N2J 2W9 (Canada); Karapetyan, Aleksandr [Department of Applied Mathematics, University of Western Ontario,London, Ontario N6A 5B7 (Canada)
2017-03-22
We use the holographic correspondence to argue that the Euclidean (Bunch-Davies) vacuum is a late-time attractor of the dynamical evolution of quantum gauge theories at strong coupling. The Bunch-Davies vacuum is not an adiabatic state if the gauge theory is non-conformal: the comoving entropy production rate is nonzero. Using the N = 2* gauge theory holography, we explore prospects of explaining the current accelerated expansion of the Universe as due to the vacuum energy of a strongly coupled QFT.
A Robust Mass Estimator for Dark Matter Subhalo Perturbations in Strong Gravitational Lenses
Energy Technology Data Exchange (ETDEWEB)
Minor, Quinn E. [Department of Science, Borough of Manhattan Community College, City University of New York, New York, NY 10007 (United States); Kaplinghat, Manoj [Department of Physics and Astronomy, University of California, Irvine CA 92697 (United States); Li, Nan [Department of Astronomy and Astrophysics, The University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)
2017-08-20
A few dark matter substructures have recently been detected in strong gravitational lenses through their perturbations of highly magnified images. We derive a characteristic scale for lensing perturbations and show that it is significantly larger than the perturber's Einstein radius. We show that the perturber's projected mass enclosed within this radius, scaled by the log-slope of the host galaxy's density profile, can be robustly inferred even if the inferred density profile and tidal radius of the perturber are biased. We demonstrate the validity of our analytic derivation using several gravitational lens simulations in which the tidal radii and the inner log-slopes of the density profile of the perturbing subhalo are allowed to vary. By modeling these simulated data, we find that our mass estimator, which we call the effective subhalo lensing mass, is accurate to within about 10% or better in each case, whereas the inferred total subhalo mass can be biased by nearly an order of magnitude. We therefore recommend that the effective subhalo lensing mass be reported in future lensing reconstructions, as this will allow a more accurate comparison with the results of dark matter simulations.
Spatiotemporal Bayesian inference dipole analysis for MEG neuroimaging data.
Jun, Sung C; George, John S; Paré-Blagoev, Juliana; Plis, Sergey M; Ranken, Doug M; Schmidt, David M; Wood, C C
2005-10-15
Recently, we described a Bayesian inference approach to the MEG/EEG inverse problem that used numerical techniques to estimate the full posterior probability distributions of likely solutions upon which all inferences were based [Schmidt, D.M., George, J.S., Wood, C.C., 1999. Bayesian inference applied to the electromagnetic inverse problem. Human Brain Mapping 7, 195; Schmidt, D.M., George, J.S., Ranken, D.M., Wood, C.C., 2001. Spatial-temporal bayesian inference for MEG/EEG. In: Nenonen, J., Ilmoniemi, R. J., Katila, T. (Eds.), Biomag 2000: 12th International Conference on Biomagnetism. Espoo, Finland, p. 671]. Schmidt et al. (1999) focused on the analysis of data at a single point in time employing an extended region source model. They subsequently extended their work to a spatiotemporal Bayesian inference analysis of the full spatiotemporal MEG/EEG data set. Here, we formulate spatiotemporal Bayesian inference analysis using a multi-dipole model of neural activity. This approach is faster than the extended region model, does not require use of the subject's anatomical information, does not require prior determination of the number of dipoles, and yields quantitative probabilistic inferences. In addition, we have incorporated the ability to handle much more complex and realistic estimates of the background noise, which may be represented as a sum of Kronecker products of temporal and spatial noise covariance components. This reduces the effects of undermodeling noise. In order to reduce the rigidity of the multi-dipole formulation, which commonly causes problems due to multiple local minima, we treat the given covariance of the background as uncertain and marginalize over it in the analysis. Markov Chain Monte Carlo (MCMC) was used to sample the many possible likely solutions. The spatiotemporal Bayesian dipole analysis is demonstrated using simulated and empirical whole-head MEG data.
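The dipole-model MCMC step can be sketched in one dimension with a toy forward model. The lead field, priors, and noise model below are illustrative assumptions, and the Kronecker-structured noise covariance and its marginalization are omitted:

```python
import numpy as np

rng = np.random.default_rng(6)
sensors = np.linspace(-1, 1, 16)             # toy 1-D sensor array

def forward(pos, amp):
    """Hypothetical lead field: dipolar falloff with distance
    (a toy stand-in for a real MEG forward model)."""
    return amp / (1.0 + 25.0 * (sensors - pos) ** 2)

true_pos, true_amp, sigma = 0.3, 2.0, 0.05
data = forward(true_pos, true_amp) + sigma * rng.standard_normal(16)

def log_post(pos, amp):
    if not (-1 < pos < 1) or amp <= 0:
        return -np.inf                        # flat priors on a bounded box
    r = data - forward(pos, amp)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

theta = np.array([0.0, 1.0])                  # initial (position, amplitude)
lp, chain = log_post(*theta), []
for _ in range(30000):
    prop = theta + rng.standard_normal(2) * [0.02, 0.05]
    lp_p = log_post(*prop)
    if np.log(rng.random()) < lp_p - lp:      # Metropolis accept step
        theta, lp = prop, lp_p
    chain.append(theta.copy())

chain = np.array(chain[10000:])               # discard burn-in
print("posterior mean (pos, amp):", chain.mean(0).round(3))
print("posterior std  (pos, amp):", chain.std(0).round(3))
```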
Earthquake source model using strong motion displacement
Indian Academy of Sciences (India)
The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...