WorldWideScience

Sample records for scale structure inference

  1. Unifying Inference of Meso-Scale Structures in Networks.

    Science.gov (United States)

    Tunç, Birkan; Verma, Ragini

    2015-01-01

    Networks are among the most prevalent formal representations in scientific studies, employed to depict interactions between objects such as molecules, neuronal clusters, or social groups. Studies performed at meso-scale that involve grouping of objects based on their distinctive interaction patterns form one of the main lines of investigation in network science. In a social network, for instance, meso-scale structures can correspond to isolated social groupings or groups of individuals that serve as a communication core. Currently, the research on different meso-scale structures such as community and core-periphery structures has been conducted via independent approaches, which precludes the possibility of an algorithmic design that can handle multiple meso-scale structures and decide which structure explains the observed data better. In this study, we propose a unified formulation for the algorithmic detection and analysis of different meso-scale structures. This facilitates the investigation of hybrid structures that capture the interplay between multiple meso-scale structures and statistical comparison of competing structures, all of which have been hitherto unavailable. We demonstrate the applicability of the methodology in analyzing the human brain network, by determining the dominant organizational structure (communities) of the brain, as well as its auxiliary characteristics (core-periphery).
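
    A minimal illustration (not the authors' formulation) of what statistically comparing competing meso-scale hypotheses can look like: given an adjacency matrix and a two-group node partition, score the same data under an idealized community block-density template and an idealized core-periphery template and compare Bernoulli log-likelihoods. The densities, names, and toy generator below are assumptions for illustration only.

```python
import numpy as np

def block_log_likelihood(A, labels, block_density):
    """Bernoulli log-likelihood of adjacency A under a 2x2 template of
    expected edge densities between the two node groups."""
    n = len(labels)
    ll = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            p = float(np.clip(block_density[labels[i], labels[j]], 1e-9, 1 - 1e-9))
            ll += np.log(p) if A[i, j] else np.log(1 - p)
    return ll

# Toy network: nodes 0-9 are a dense "core", nodes 10-19 a sparse "periphery".
rng = np.random.default_rng(0)
labels = np.array([0] * 10 + [1] * 10)
gen = np.array([[0.8, 0.4],    # core-core, core-periphery
                [0.4, 0.05]])  # periphery-core, periphery-periphery
n = len(labels)
A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        A[i, j] = A[j, i] = rng.random() < gen[labels[i], labels[j]]

# Competing meso-scale hypotheses expressed as block-density templates.
community_template = np.array([[0.8, 0.05], [0.05, 0.8]])
core_periphery_template = np.array([[0.8, 0.4], [0.4, 0.05]])

print("community log-likelihood      :", block_log_likelihood(A, labels, community_template))
print("core-periphery log-likelihood :", block_log_likelihood(A, labels, core_periphery_template))
# The core-periphery template should fit this toy network better.
```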

  2. Unifying Inference of Meso-Scale Structures in Networks.

    Directory of Open Access Journals (Sweden)

    Birkan Tunç

    Full Text Available Networks are among the most prevalent formal representations in scientific studies, employed to depict interactions between objects such as molecules, neuronal clusters, or social groups. Studies performed at meso-scale that involve grouping of objects based on their distinctive interaction patterns form one of the main lines of investigation in network science. In a social network, for instance, meso-scale structures can correspond to isolated social groupings or groups of individuals that serve as a communication core. Currently, the research on different meso-scale structures such as community and core-periphery structures has been conducted via independent approaches, which precludes the possibility of an algorithmic design that can handle multiple meso-scale structures and decide which structure explains the observed data better. In this study, we propose a unified formulation for the algorithmic detection and analysis of different meso-scale structures. This facilitates the investigation of hybrid structures that capture the interplay between multiple meso-scale structures and statistical comparison of competing structures, all of which have been hitherto unavailable. We demonstrate the applicability of the methodology in analyzing the human brain network, by determining the dominant organizational structure (communities) of the brain, as well as its auxiliary characteristics (core-periphery).

  3. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  4. Efficient Exact Inference With Loss Augmented Objective in Structured Learning.

    Science.gov (United States)

    Bauer, Alexander; Nakajima, Shinichi; Muller, Klaus-Robert

    2016-08-19

    Structural support vector machine (SVM) is an elegant approach for building complex and accurate models with structured outputs. However, its applicability relies on the availability of efficient inference algorithms--the state-of-the-art training algorithms repeatedly perform inference to compute a subgradient or to find the most violating configuration. In this paper, we propose an exact inference algorithm for maximizing nondecomposable objectives due to a special type of high-order potential having a decomposable internal structure. As an important application, our method covers the loss augmented inference, which enables the slack and margin scaling formulations of structural SVM with a variety of dissimilarity measures, e.g., Hamming loss, precision and recall, Fβ-loss, intersection over union, and many other functions that can be efficiently computed from the contingency table. We demonstrate the advantages of our approach in natural language parsing and sequence segmentation applications.

  5. Functional inference of complex anatomical tendinous networks at a macroscopic scale via sparse experimentation.

    Science.gov (United States)

    Saxena, Anupam; Lipson, Hod; Valero-Cuevas, Francisco J

    2012-01-01

    In systems and computational biology, much effort is devoted to functional identification of systems and networks at the molecular or cellular scale. However, similarly important networks exist at anatomical scales such as the tendon network of human fingers: the complex array of collagen fibers that transmits and distributes muscle forces to finger joints. This network is critical to the versatility of the human hand, and its function has been debated since at least the 16th century. Here, we experimentally infer the structure (both topology and parameter values) of this network through sparse interrogation with force inputs. A population of models representing this structure co-evolves in simulation with a population of informative future force inputs via the predator-prey estimation-exploration algorithm. Model fitness depends on their ability to explain experimental data, while the fitness of future force inputs depends on causing maximal functional discrepancy among current models. We validate our approach by inferring two known synthetic Latex networks, and one anatomical tendon network harvested from a cadaver's middle finger. We find that functionally similar but structurally diverse models can exist within a narrow range of the training set and cross-validation errors. For the Latex networks, models with low training set error […] functional structure of complex anatomical networks. This work expands current bioinformatics inference approaches by demonstrating that sparse, yet informative interrogation of biological specimens holds significant computational advantages in accurate and efficient inference over random testing, or assuming model topology and only inferring parameter values. These findings also hold clues to both our evolutionary history and the development of versatile machines.

  6. Bayesian structural inference for hidden processes

    Science.gov (United States)

    Strelioff, Christopher C.; Crutchfield, James P.

    2014-04-01

    We introduce a Bayesian approach to discovering patterns in structurally complex processes. The proposed method of Bayesian structural inference (BSI) relies on a set of candidate unifilar hidden Markov model (uHMM) topologies for inference of process structure from a data series. We employ a recently developed exact enumeration of topological ɛ-machines. (A sequel then removes the topological restriction.) This subset of the uHMM topologies has the added benefit that inferred models are guaranteed to be ɛ-machines, irrespective of estimated transition probabilities. Properties of ɛ-machines and uHMMs allow for the derivation of analytic expressions for estimating transition probabilities, inferring start states, and comparing the posterior probability of candidate model topologies, despite process internal structure being only indirectly present in data. We demonstrate BSI's effectiveness in estimating a process's randomness, as reflected by the Shannon entropy rate, and its structure, as quantified by the statistical complexity. We also compare using the posterior distribution over candidate models and the single, maximum a posteriori model for point estimation and show that the former more accurately reflects uncertainty in estimated values. We apply BSI to in-class examples of finite- and infinite-order Markov processes, as well as to an out-of-class, infinite-state hidden process.

  7. Structural influence of gene networks on their inference: analysis of C3NET

    Directory of Open Access Journals (Sweden)

    Emmert-Streib Frank

    2011-06-01

    Full Text Available Abstract Background The availability of large-scale high-throughput data poses considerable challenges for their functional analysis. For this reason gene network inference methods gained considerable interest. However, our current knowledge, especially about the influence of the structure of a gene network on its inference, is limited. Results In this paper we present a comprehensive investigation of the structural influence of gene networks on the inferential characteristics of C3NET - a recently introduced gene network inference algorithm. We employ local as well as global performance metrics in combination with an ensemble approach. The results from our numerical study for various biological and synthetic network structures and simulation conditions, also comparing C3NET with other inference algorithms, lead to a multitude of theoretical and practical insights into the working behavior of C3NET. In addition, in order to facilitate the practical usage of C3NET we provide a user-friendly R package, called c3net, and describe its functionality. It is available from https://r-forge.r-project.org/projects/c3net and from the CRAN package repository. Conclusions The availability of gene network inference algorithms with known inferential properties opens a new era of large-scale screening experiments that could be equally beneficial for basic biological and biomedical research with auspicious prospects. The availability of our easy to use software package c3net may contribute to the popularization of such methods. Reviewers This article was reviewed by Lev Klebanov, Joel Bader and Yuriy Gusev.
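
    The c3net R package referenced above is the authors' implementation; the following Python sketch only mimics the commonly described core of C3NET — each gene is linked to at most its single highest-mutual-information partner, and only if that score clears a significance threshold. The histogram MI estimator, the fixed threshold, and all names here are illustrative assumptions, not the package's API.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Crude histogram-based mutual information estimate between two expression profiles."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def c3net_like(expr, threshold):
    """expr: genes x samples. Each gene keeps only the edge to its maximal-MI
    partner, and only if that MI exceeds `threshold` (in practice the threshold
    would come from a permutation-based null, not a constant)."""
    g = expr.shape[0]
    mi = np.zeros((g, g))
    for i in range(g):
        for j in range(i + 1, g):
            mi[i, j] = mi[j, i] = mutual_information(expr[i], expr[j])
    adj = np.zeros((g, g), dtype=int)
    for i in range(g):
        j = int(np.argmax(mi[i]))
        if mi[i, j] > threshold:
            adj[i, j] = adj[j, i] = 1
    return adj

expr = np.random.default_rng(1).normal(size=(20, 50))   # 20 genes, 50 samples
print(c3net_like(expr, threshold=0.3).sum() // 2, "edges kept")
```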

  8. The confounding effect of population structure on bayesian skyline plot inferences of demographic history

    DEFF Research Database (Denmark)

    Heller, Rasmus; Chikhi, Lounes; Siegismund, Hans

    2013-01-01

    Many coalescent-based methods aiming to infer the demographic history of populations assume a single, isolated and panmictic population (i.e. a Wright-Fisher model). While this assumption may be reasonable under many conditions, several recent studies have shown that the results can be misleading ... when it is violated. Among the most widely applied demographic inference methods are Bayesian skyline plots (BSPs), which are used across a range of biological fields. Violations of the panmixia assumption are to be expected in many biological systems, but the consequences for skyline plot inferences ... the best scheme for inferring demographic change over a typical time scale. Analyses of data from a structured African buffalo population demonstrate how BSP results can be strengthened by simulations. We recommend that sample selection should be carefully considered in relation to population structure...

  9. Nonparametric inference of network structure and dynamics

    Science.gov (United States)

    Peixoto, Tiago P.

    The network structure of complex systems determines their function and serves as evidence for the evolutionary mechanisms that lie behind them. Despite considerable effort in recent years, it remains an open challenge to formulate general descriptions of the large-scale structure of network systems, and how to reliably extract such information from data. Although many approaches have been proposed, few methods attempt to gauge the statistical significance of the uncovered structures, and hence the majority cannot reliably separate actual structure from stochastic fluctuations. Due to the sheer size and high-dimensionality of many networks, this represents a major limitation that prevents meaningful interpretations of the results obtained with such nonstatistical methods. In this talk, I will show how these issues can be tackled in a principled and efficient fashion by formulating appropriate generative models of network structure that can have their parameters inferred from data. By employing a Bayesian description of such models, the inference can be performed in a nonparametric fashion, that does not require any a priori knowledge or ad hoc assumptions about the data. I will show how this approach can be used to perform model comparison, and how hierarchical models yield the most appropriate trade-off between model complexity and quality of fit based on the statistical evidence present in the data. I will also show how this general approach can be elegantly extended to networks with edge attributes, that are embedded in latent spaces, and that change in time. The latter is obtained via a fully dynamic generative network model, based on arbitrary-order Markov chains, that can also be inferred in a nonparametric fashion. Throughout the talk I will illustrate the application of the methods with many empirical networks such as the internet at the autonomous systems level, the global airport network, the network of actors and films, social networks, citations among ...

  10. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy

    2012-03-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth. © 2011 The Korean Statistical Society.
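
    A one-dimensional caricature of the SiZer idea, under an i.i.d.-noise assumption (the paper's spatial SiZer instead builds the inference around a spatial covariance structure and works on images): smooth the data with Gaussian-kernel local-linear fits over a range of bandwidths and flag grid locations where the estimated derivative is significantly positive or negative. The function names and toy signal are assumptions for illustration.

```python
import numpy as np

def local_linear_slope(x, y, g, h):
    """Gaussian-kernel weighted local-linear fit around grid point g with bandwidth h;
    returns the slope estimate and a rough standard error assuming i.i.d. noise."""
    w = np.exp(-0.5 * ((x - g) / h) ** 2)
    xb = np.sum(w * x) / np.sum(w)
    yb = np.sum(w * y) / np.sum(w)
    sxx = np.sum(w * (x - xb) ** 2)
    slope = np.sum(w * (x - xb) * (y - yb)) / sxx
    resid = y - (yb + slope * (x - xb))
    sigma2 = np.sum(w * resid ** 2) / np.sum(w)
    se = np.sqrt(sigma2 * np.sum(w ** 2 * (x - xb) ** 2)) / sxx
    return slope, se

def sizer_map(x, y, bandwidths, grid, z=1.96):
    """+1 where the smoothed trend is significantly increasing, -1 where decreasing, 0 otherwise."""
    out = np.zeros((len(bandwidths), len(grid)), dtype=int)
    for i, h in enumerate(bandwidths):
        for j, g in enumerate(grid):
            slope, se = local_linear_slope(x, y, g, h)
            out[i, j] = 1 if slope - z * se > 0 else (-1 if slope + z * se < 0 else 0)
    return out

# Toy example: a single bump plus noise, inspected at several smoothing scales.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
y = np.exp(-((x - 0.5) / 0.1) ** 2) + rng.normal(scale=0.2, size=x.size)
print(sizer_map(x, y, bandwidths=[0.02, 0.05, 0.10], grid=np.linspace(0.1, 0.9, 9)))
```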

  11. Inference Attacks and Control on Database Structures

    Directory of Open Access Journals (Sweden)

    Muhamed Turkanovic

    2015-02-01

    Full Text Available Today’s databases store information with sensitivity levels that range from public to highly sensitive, hence ensuring confidentiality can be highly important, but also requires costly control. This paper focuses on the inference problem on different database structures. It presents possible threats to privacy in relation to inference, and control methods for mitigating these threats. The paper shows that using only access control, without any inference control, is inadequate, since these models are unable to protect against indirect data access. Furthermore, it covers new inference problems which arise from the dimensions of new technologies like XML, semantics, etc.

  12. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Biros, George [Univ. of Texas, Austin, TX (United States)

    2018-01-12

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a

  13. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining

    Science.gov (United States)

    Hero, Alfred O.; Rajaratnam, Bala

    2015-01-01

    When can reliable inference be drawn in the “Big Data” context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for “Big Data”. Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks. PMID:27087700

  14. Foundational Principles for Large-Scale Inference: Illustrations Through Correlation Mining.

    Science.gov (United States)

    Hero, Alfred O; Rajaratnam, Bala

    2016-01-01

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of recent work has focused on understanding the computational complexity of proposed methods for "Big Data". Sample complexity however has received relatively less attention, especially in the setting when the sample size n is fixed, and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche but only the latter regime applies to exascale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that are of interest. Correlation mining arises in numerous applications and subsumes the regression context as a special case. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.

  15. Inferring ontology graph structures using OWL reasoning

    KAUST Repository

    Rodriguez-Garcia, Miguel Angel

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.
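
    Onto2Graph itself drives an OWL automated reasoner over the ontology's axioms; the toy sketch below only mimics one family of implied edges the abstract alludes to — existential restrictions ("R some C") inherited down the asserted subclass hierarchy — using plain Python dictionaries in place of OWL classes and a reasoner. The class and relation names are invented for illustration.

```python
from collections import defaultdict

# Toy "ontology": asserted subclass edges and asserted existential restrictions.
subclass_of = {                 # child -> list of direct parents
    "Hexokinase": ["Kinase"],
    "Kinase": ["Enzyme"],
    "Enzyme": [],
}
asserted = {                    # class -> list of (relation, filler) restrictions
    "Kinase": [("catalyzes", "Phosphorylation")],
    "Enzyme": [("participates_in", "MetabolicProcess")],
}

def ancestors(cls):
    """All superclasses reachable via asserted subclass edges."""
    seen, stack = set(), list(subclass_of.get(cls, []))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(subclass_of.get(parent, []))
    return seen

def inferred_edges():
    """Graph edges that are implied (inherited from superclasses) but not necessarily asserted."""
    edges = defaultdict(set)
    for cls in set(subclass_of) | set(asserted):
        for source in {cls} | ancestors(cls):
            for relation, filler in asserted.get(source, []):
                edges[cls].add((relation, filler))
    return edges

for cls, relations in sorted(inferred_edges().items()):
    for relation, filler in sorted(relations):
        print(f"{cls} --{relation}--> {filler}")   # e.g. Hexokinase --catalyzes--> Phosphorylation
```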

  16. Inferring ontology graph structures using OWL reasoning.

    Science.gov (United States)

    Rodríguez-García, Miguel Ángel; Hoehndorf, Robert

    2018-01-05

    Ontologies are representations of a conceptualization of a domain. Traditionally, ontologies in biology were represented as directed acyclic graphs (DAG) which represent the backbone taxonomy and additional relations between classes. These graphs are widely exploited for data analysis in the form of ontology enrichment or computation of semantic similarity. More recently, ontologies are developed in a formal language such as the Web Ontology Language (OWL) and consist of a set of axioms through which classes are defined or constrained. While the taxonomy of an ontology can be inferred directly from the axioms of an ontology as one of the standard OWL reasoning tasks, creating general graph structures from OWL ontologies that exploit the ontologies' semantic content remains a challenge. We developed a method to transform ontologies into graphs using an automated reasoner while taking into account all relations between classes. Searching for (existential) patterns in the deductive closure of ontologies, we can identify relations between classes that are implied but not asserted and generate graph structures that encode for a large part of the ontologies' semantic content. We demonstrate the advantages of our method by applying it to inference of protein-protein interactions through semantic similarity over the Gene Ontology and demonstrate that performance is increased when graph structures are inferred using deductive inference according to our method. Our software and experiment results are available at http://github.com/bio-ontology-research-group/Onto2Graph . Onto2Graph is a method to generate graph structures from OWL ontologies using automated reasoning. The resulting graphs can be used for improved ontology visualization and ontology-based data analysis.

  17. Inference and Analysis of Population Structure Using Genetic Data and Network Theory.

    Science.gov (United States)

    Greenbaum, Gili; Templeton, Alan R; Bar-David, Shirli

    2016-04-01

    Clustering individuals to subpopulations based on genetic data has become commonplace in many genetic studies. Inference about population structure is most often done by applying model-based approaches, aided by visualization using distance-based approaches such as multidimensional scaling. While existing distance-based approaches suffer from a lack of statistical rigor, model-based approaches entail assumptions of prior conditions such as that the subpopulations are at Hardy-Weinberg equilibria. Here we present a distance-based approach for inference about population structure using genetic data by defining population structure using network theory terminology and methods. A network is constructed from a pairwise genetic-similarity matrix of all sampled individuals. The community partition, a partition of a network to dense subgraphs, is equated with population structure, a partition of the population to genetically related groups. Community-detection algorithms are used to partition the network into communities, interpreted as a partition of the population to subpopulations. The statistical significance of the structure can be estimated by using permutation tests to evaluate the significance of the partition's modularity, a network theory measure indicating the quality of community partitions. To further characterize population structure, a new measure of the strength of association (SA) for an individual to its assigned community is presented. The strength of association distribution (SAD) of the communities is analyzed to provide additional population structure characteristics, such as the relative amount of gene flow experienced by the different subpopulations and identification of hybrid individuals. Human genetic data and simulations are used to demonstrate the applicability of the analyses. The approach presented here provides a novel, computationally efficient model-free method for inference about population structure that does not entail assumption of
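
    A small sketch of the analysis pipeline the abstract outlines — build a network from a pairwise genetic-similarity matrix, detect communities, and assess the partition's modularity with a permutation test — using networkx's greedy modularity routine and degree-preserving edge rewiring as stand-ins. The similarity threshold, community-detection algorithm, permutation scheme, and toy data are assumptions, not the authors' exact choices.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

def similarity_network(sim, threshold):
    """Undirected graph keeping pairs whose genetic similarity exceeds `threshold`."""
    n = sim.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                G.add_edge(i, j)
    return G

def modularity_significance(G, n_perm=50, seed=0):
    """Observed modularity of the detected partition and a permutation p-value
    against degree-preserving rewirings of the graph."""
    q_obs = community.modularity(G, community.greedy_modularity_communities(G))
    rng = np.random.default_rng(seed)
    exceed = 0
    for _ in range(n_perm):
        R = G.copy()
        nx.double_edge_swap(R, nswap=2 * R.number_of_edges(),
                            max_tries=50 * R.number_of_edges(),
                            seed=int(rng.integers(1 << 30)))
        exceed += community.modularity(R, community.greedy_modularity_communities(R)) >= q_obs
    return q_obs, (exceed + 1) / (n_perm + 1)

# Toy similarity matrix with two subpopulations (individuals 0-14 and 15-29).
rng = np.random.default_rng(3)
sim = rng.uniform(0.0, 0.4, size=(30, 30))
sim[:15, :15] += 0.4
sim[15:, 15:] += 0.4
sim = (sim + sim.T) / 2
print(modularity_significance(similarity_network(sim, threshold=0.5)))
```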

  18. Causal inference between bioavailability of heavy metals and environmental factors in a large-scale region

    International Nuclear Information System (INIS)

    Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing

    2017-01-01

    The causation between bioavailability of heavy metals and environmental factors is generally obtained from field experiments at local scales at present, and lacks sufficient evidence from large scales. However, inferring causation between bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessments across large-scale regions can, at the expense of actual causation, result in spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and the potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0–20 cm depth) and vegetable (lettuce) and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors at field experiments is consistent with that on a large scale. The IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management.

  19. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung-Hou; Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-02

    Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred based on the current methods of sequence homology detection to proteins of known functions. Three-dimensional structure can have an important impact in providing inference of molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of the proteins of unknown functions, and possible molecular functions of them have been inferred based on their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of the process of protein structure determination through high throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer molecular functions of proteins of unknown function.

  20. On Bayesian Inference under Sampling from Scale Mixtures of Normals

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1996-01-01

    This paper considers a Bayesian analysis of the linear regression model under independent sampling from general scale mixtures of Normals. Using a common reference prior, we investigate the validity of Bayesian inference and the existence of posterior moments of the regression and precision

  1. Using DNA metabarcoding for simultaneous inference of common vampire bat diet and population structure.

    Science.gov (United States)

    Bohmann, Kristine; Gopalakrishnan, Shyam; Nielsen, Martin; Nielsen, Luisa Dos Santos Bay; Jones, Gareth; Streicker, Daniel G; Gilbert, M Thomas P

    2018-04-19

    Metabarcoding diet analysis has become a valuable tool in animal ecology; however, co-amplified predator sequences are not generally used for anything other than to validate predator identity. Exemplified by the common vampire bat, we demonstrate the use of metabarcoding to infer predator population structure alongside diet assessments. Growing populations of common vampire bats impact human, livestock and wildlife health in Latin America through transmission of pathogens, such as lethal rabies viruses. Techniques to determine large-scale variation in vampire bat diet and bat population structure would empower locality- and species-specific projections of disease transmission risks. However, previously used methods are not cost-effective and efficient for large-scale applications. Using bloodmeal and faecal samples from common vampire bats from coastal, Andean and Amazonian regions of Peru, we showcase metabarcoding as a scalable tool to assess vampire bat population structure and feeding preferences. Dietary metabarcoding was highly effective, detecting vertebrate prey in 93.2% of the samples. Bats predominantly preyed on domestic animals, but fed on tapirs at one Amazonian site. In addition, we identified arthropods in 9.3% of samples, likely reflecting consumption of ectoparasites. Using the same data, we document mitochondrial geographic population structure in the common vampire bat in Peru. Such simultaneous inference of vampire bat diet and population structure can enable new insights into the interplay between vampire bat ecology and disease transmission risks. Importantly, the methodology can be incorporated into metabarcoding diet studies of other animals to couple information on diet and population structure. © 2018 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.

  2. Learning about the internal structure of categories through classification and feature inference.

    Science.gov (United States)

    Jee, Benjamin D; Wiley, Jennifer

    2014-01-01

    Previous research on category learning has found that classification tasks produce representations that are skewed toward diagnostic feature dimensions, whereas feature inference tasks lead to richer representations of within-category structure. Yet, prior studies often measure category knowledge through tasks that involve identifying only the typical features of a category. This neglects an important aspect of a category's internal structure: how typical and atypical features are distributed within a category. The present experiments tested the hypothesis that inference learning results in richer knowledge of internal category structure than classification learning. We introduced several new measures to probe learners' representations of within-category structure. Experiment 1 found that participants in the inference condition learned and used a wider range of feature dimensions than classification learners. Classification learners, however, were more sensitive to the presence of atypical features within categories. Experiment 2 provided converging evidence that classification learners were more likely to incorporate atypical features into their representations. Inference learners were less likely to encode atypical category features, even in a "partial inference" condition that focused learners' attention on the feature dimensions relevant to classification. Overall, these results are contrary to the hypothesis that inference learning produces superior knowledge of within-category structure. Although inference learning promoted representations that included a broad range of category-typical features, classification learning promoted greater sensitivity to the distribution of typical and atypical features within categories.

  3. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.

  4. Probing the Small-scale Structure in Strongly Lensed Systems via Transdimensional Inference

    Science.gov (United States)

    Daylan, Tansu; Cyr-Racine, Francis-Yan; Diaz Rivero, Ana; Dvorkin, Cora; Finkbeiner, Douglas P.

    2018-02-01

    Strong lensing is a sensitive probe of the small-scale density fluctuations in the Universe. We implement a pipeline to model strongly lensed systems using probabilistic cataloging, which is a transdimensional, hierarchical, and Bayesian framework to sample from a metamodel (union of models with different dimensionality) consistent with observed photon count maps. Probabilistic cataloging allows one to robustly characterize modeling covariances within and across lens models with different numbers of subhalos. Unlike traditional cataloging of subhalos, it does not require model subhalos to improve the goodness of fit above the detection threshold. Instead, it allows the exploitation of all information contained in the photon count maps—for instance, when constraining the subhalo mass function. We further show that, by not including these small subhalos in the lens model, fixed-dimensional inference methods can significantly mismodel the data. Using a simulated Hubble Space Telescope data set, we show that the subhalo mass function can be probed even when many subhalos in the sample catalogs are individually below the detection threshold and would be absent in a traditional catalog. The implemented software, Probabilistic Cataloger (PCAT) is made publicly available at https://github.com/tdaylan/pcat.

  5. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  6. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  7. Large scale statistical inference of signaling pathways from RNAi and microarray data

    Directory of Open Access Journals (Sweden)

    Poustka Annemarie

    2007-10-01

    Full Text Available Abstract Background The advent of RNA interference techniques enables the selective silencing of biologically interesting genes in an efficient way. In combination with DNA microarray technology this enables researchers to gain insights into signaling pathways by observing downstream effects of individual knock-downs on gene expression. These secondary effects can be used to computationally reverse engineer features of the upstream signaling pathway. Results In this paper we address this challenging problem by extending previous work by Markowetz et al., who proposed a statistical framework to score network hypotheses in a Bayesian manner. Our extensions go in three directions: First, we introduce a way to omit the data discretization step needed in the original framework via a calculation based on p-values instead. Second, we show how prior assumptions on the network structure can be incorporated into the scoring scheme using regularization techniques. Third and most important, we propose methods to scale up the original approach, which is limited to around 5 genes, to large scale networks. Conclusion Comparisons of these methods on artificial data are conducted. Our proposed module network is employed to infer the signaling network between 13 genes in the ER-α pathway in human MCF-7 breast cancer cells. Using a bootstrapping approach this reconstruction can be found with good statistical stability. The code for the module network inference method is available in the latest version of the R-package nem, which can be obtained from the Bioconductor homepage.

  8. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy; Jun, Mikyoung; Park, Cheolwoo

    2012-01-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests

  9. Inferring network structure in non-normal and mixed discrete-continuous genomic data.

    Science.gov (United States)

    Bhadra, Anindya; Rao, Arvind; Baladandayuthapani, Veerabhadran

    2018-03-01

    Inferring dependence structure through undirected graphs is crucial for uncovering the major modes of multivariate interaction among high-dimensional genomic markers that are potentially associated with cancer. Traditionally, conditional independence has been studied using sparse Gaussian graphical models for continuous data and sparse Ising models for discrete data. However, there are two clear situations when these approaches are inadequate. The first occurs when the data are continuous but display non-normal marginal behavior such as heavy tails or skewness, rendering an assumption of normality inappropriate. The second occurs when a part of the data is ordinal or discrete (e.g., presence or absence of a mutation) and the other part is continuous (e.g., expression levels of genes or proteins). In this case, the existing Bayesian approaches typically employ a latent variable framework for the discrete part that precludes inferring conditional independence among the data that are actually observed. The current article overcomes these two challenges in a unified framework using Gaussian scale mixtures. Our framework is able to handle continuous data that are not normal and data that are of mixed continuous and discrete nature, while still being able to infer a sparse conditional sign independence structure among the observed data. Extensive performance comparison in simulations with alternative techniques and an analysis of a real cancer genomics data set demonstrate the effectiveness of the proposed approach. © 2017, The International Biometric Society.

  10. Causal inference between bioavailability of heavy metals and environmental factors in a large-scale region.

    Science.gov (United States)

    Liu, Yuqiong; Du, Qingyun; Wang, Qi; Yu, Huanyun; Liu, Jianfeng; Tian, Yu; Chang, Chunying; Lei, Jing

    2017-07-01

    The causation between bioavailability of heavy metals and environmental factors is generally obtained from field experiments at local scales at present, and lacks sufficient evidence from large scales. However, inferring causation between bioavailability of heavy metals and environmental factors across large-scale regions is challenging, because the conventional correlation-based approaches used for causation assessments across large-scale regions can, at the expense of actual causation, result in spurious insights. In this study, a general approach framework, Intervention calculus when the directed acyclic graph (DAG) is absent (IDA) combined with the backdoor criterion (BC), was introduced to identify causation between the bioavailability of heavy metals and the potential environmental factors across large-scale regions. We take the Pearl River Delta (PRD) in China as a case study. The causal structures and effects were identified based on the concentrations of heavy metals (Zn, As, Cu, Hg, Pb, Cr, Ni and Cd) in soil (0-20 cm depth) and vegetable (lettuce) and 40 environmental factors (soil properties, extractable heavy metals and weathering indices) in 94 samples across the PRD. Results show that the bioavailability of heavy metals (Cd, Zn, Cr, Ni and As) was causally influenced by soil properties and soil weathering factors, whereas no causal factor impacted the bioavailability of Cu, Hg and Pb. No latent factor was found between the bioavailability of heavy metals and environmental factors. The causation between the bioavailability of heavy metals and environmental factors at field experiments is consistent with that on a large scale. The IDA combined with the BC provides a powerful tool to identify causation between the bioavailability of heavy metals and environmental factors across large-scale regions. Causal inference in a large system with dynamic changes has great implications for system-based risk management. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Decadal- to Centennial-Scale Variations in Anchovy Biomass in the Last 250 Years Inferred From Scales Preserved in Laminated Sediments off the Coast of Pisco, Peru

    Science.gov (United States)

    Salvatteci, R.; Field, D.; Gutierrez, D.; Baumgartner, T.; Ferreira, V.; Velazco, F.; Niquen, M.; Guevara, R.; Sifeddine, A.; Ortlieb, L.

    2005-12-01

    The highly productive upwelling environment off the coast of Peru sustains one of the world's largest fisheries, the Peruvian anchoveta (Engraulis ringens), but variability on interannual to decadal timescales results in dramatic variations in catch. We quantified variations in anchovy scale abundance preserved in laminated sediments collected at 300 m depth on the Peruvian margin (near Pisco, central Peru) to infer decadal- to centennial-scale population variability prior to the development of the fishery. High-resolution subsampling of 2.5 - 8.2 mm was done following the laminated structure of the core. A chronology based on downcore excess 210Pb activities and 14C-AMS ages indicates that samples represent an estimated 1-7 years in time. Anchovy scale deposition is correlated with anchovy landings at Pisco, indicating that scale deposition can be used as a proxy of (at least) local biomass. A small, but significant, reduction in anchovy scale width (0.2 mm) after the development of the fishery suggests a small effect of the fishery on anchovy size distributions. While decadal-scale variability in anchovy scale deposition is persistent throughout the record, a dramatic increase in scale flux occurred around 1860 A.D. and persists for approximately a century. Our results indicate that centennial-scale variability composes a large portion of the variability. However, decadal-scale variability associated with the Pacific Decadal Oscillation is not correlated with the inferred biomass variability prior to the development of the fishery. Shifts in the distribution of the population may account for an additional component of the variability in scale deposition.

  12. Generating inferences from knowledge structures based on general automata

    Energy Technology Data Exchange (ETDEWEB)

    Koenig, E C

    1983-01-01

    The author shows that the model for knowledge structures for computers based on general automata accommodates procedures for establishing inferences. Algorithms are presented which generate inferences as output of a computer when its sentence input names appropriate knowledge elements contained in an associated knowledge structure already stored in the memory of the computer. The inferences are found to have either a single graph tuple or more than one graph tuple of associated knowledge. Six algorithms pertain to a single graph tuple and a seventh pertains to more than one graph tuple of associated knowledge. A named term is either the automaton, environment, auxiliary receptor, principal receptor, auxiliary effector, or principal effector. The algorithm pertaining to more than one graph tuple requires that the input sentence names the automaton, transformation response, and environment of one of the tuples of associated knowledge in a sequence of tuples. Interaction with the computer may be either in a conversation or examination mode. The algorithms are illustrated by an example. 13 references.

  13. Robust Inference of Population Structure for Ancestry Prediction and Correction of Stratification in the Presence of Relatedness

    Science.gov (United States)

    Conomos, Matthew P.; Miller, Mike; Thornton, Timothy

    2016-01-01

    Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multi-dimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using ten (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. PMID:25810074
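
    A linear-algebra caricature of the two-step idea behind PC-AiR (not the published algorithm, which also includes the relatedness-aware selection of the unrelated subset): fit principal components on genotypes from a chosen ancestry-representative unrelated subset, then project every remaining, related individual onto those axes. The standardization, the toy data, and the function name are illustrative assumptions.

```python
import numpy as np

def pcair_like(genotypes, unrelated_idx, n_components=2):
    """genotypes: individuals x SNPs matrix of 0/1/2 allele counts.
    PCA is fit only on the unrelated subset; everyone else is projected onto those axes."""
    X = np.asarray(genotypes, dtype=float)
    # Standardize SNPs with allele frequencies estimated from the unrelated subset only.
    p = np.clip(X[unrelated_idx].mean(axis=0) / 2.0, 1e-3, 1 - 1e-3)
    Z = (X - 2 * p) / np.sqrt(2 * p * (1 - p))
    # Principal axes from the unrelated subset.
    _, _, Vt = np.linalg.svd(Z[unrelated_idx], full_matrices=False)
    return Z @ Vt[:n_components].T      # coordinates for unrelated and related individuals alike

# Toy data: two ancestry groups of 10, plus individual 20 as a close relative (here a copy) of individual 0.
rng = np.random.default_rng(4)
freqs = np.vstack([rng.uniform(0.1, 0.9, 500), rng.uniform(0.1, 0.9, 500)])
geno = np.vstack([rng.binomial(2, freqs[0], size=(10, 500)),
                  rng.binomial(2, freqs[1], size=(10, 500))])
geno = np.vstack([geno, geno[:1]])
coords = pcair_like(geno, unrelated_idx=list(range(20)))
print(coords[[0, 10, 20]])    # the relative (row 20) should land near individual 0 on PC1
```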

  14. Structures and lithofacies of inferred silicic conduits in the Paraná-Etendeka LIP, southernmost Brazil

    Science.gov (United States)

    Simões, M. S.; Lima, E. F.; Sommer, C. A.; Rossetti, L. M. M.

    2018-04-01

    Extensive silicic units in the Paraná-Etendeka LIP have long been interpreted as pyroclastic density currents (rheomorphic ignimbrites) derived from the Messum Complex in Namibia. In recent literature, however, they have been characterized as effusive lava flows and domes. In this paper we describe structures and lithofacies related to postulated silicic lava feeder conduits in the Mato Perso, São Marcos and Jaquirana-Cambará do Sul areas in southern Brazil. Inferred conduits are at least 15-25 m in width and the lithofacies range from variably vesicular monomictic welded and non-welded breccias at the margins to poorly vesicular, banded, spherulitic and microfractured vitrophyres in the central parts. Flat-lying coherent vitrophyres and massive obsidian are considered to be the subaerial equivalents of the conduits. Large-scale, regional tectonic structures in southern Brazil include the NE-SW aligned Porto Alegre Suture, Leão and Açotea faults besides the Antas Lineament, a curved tectonic feature accompanying the bed of the Antas river. South of the Antas Lineament smaller-scale, NW-SE lineaments limit the exposure areas of the inferred conduits. NE-SW and subordinate NW-SE structures within these smaller-scale lineaments are represented by the main postulated conduit outcrops and are parallel to the dominant sub-vertical banding in the widespread banded vitrophyre lithofacies. Upper lava flows display flat-lying foliation, pipe-like and spherical vesicles and have better developed microlites. Petrographic characteristics of the silicic vitrophyres indicate that crystal-poor magmas underwent distinct cooling paths for each inferred conduit area. The vitrophyre chemical composition is defined by the evolution of trachydacitic/dacitic vitrophyres with 62-65 wt% SiO2 to rhyodacite and rhyolite with 66-68 wt% SiO2. The more evolved rocks are assigned to the latest intrusive grey vitrophyre outcropping in the center of the conduits. Degassing pathways formed during

  15. A scale-free structure prior for graphical models with applications in functional genomics.

    Directory of Open Access Journals (Sweden)

    Paul Sheridan

    Full Text Available The problem of reconstructing large-scale, gene regulatory networks from gene expression data has garnered considerable attention in bioinformatics over the past decade with the graphical modeling paradigm having emerged as a popular framework for inference. Analysis in a full Bayesian setting is contingent upon the assignment of a so-called structure prior: a probability distribution on networks, encoding a priori biological knowledge either in the form of supplemental data or high-level topological features. A key topological consideration is that a wide range of cellular networks are approximately scale-free, meaning that the fraction, p(k), of nodes in a network with degree k is roughly described by a power law with exponent between 2 and 3. The standard practice, however, is to utilize a random structure prior, which favors networks with binomially distributed degree distributions. In this paper, we introduce a scale-free structure prior for graphical models based on the formula for the probability of a network under a simple scale-free network model. Unlike the random structure prior, its scale-free counterpart requires a node labeling as a parameter. In order to use this prior for large-scale network inference, we design a novel Metropolis-Hastings sampler for graphical models that includes a node labeling as a state space variable. In a simulation study, we demonstrate that the scale-free structure prior outperforms the random structure prior at recovering scale-free networks while at the same time retaining the ability to recover random networks. We then estimate a gene association network from gene expression data taken from a breast cancer tumor study, showing that the scale-free structure prior recovers hubs, including the previously unknown hub SLC39A6, which is a zinc transporter that has been implicated with the spread of breast cancer to the lymph nodes. Our analysis of the breast cancer expression data underscores the value of the scale
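
    The contrast between a "random" structure prior and a scale-free one can be made concrete with a toy log-prior on graphs. The power-law form below is a schematic stand-in with a hypothetical exponent gamma, not the specific prior formula or node-labeling scheme introduced in the paper.

```python
import numpy as np

def log_prior_random(adj, p_edge=0.05):
    """Erdos-Renyi-style 'random structure' log-prior: each edge is present
    independently with probability p_edge (degrees end up roughly binomial)."""
    A = np.triu(np.asarray(adj), k=1)
    m = A.sum()
    n = A.shape[0]
    pairs = n * (n - 1) / 2
    return m * np.log(p_edge) + (pairs - m) * np.log(1 - p_edge)

def log_prior_scale_free(adj, gamma=2.5):
    """Schematic scale-free structure log-prior: favor degree sequences whose
    tail follows a power law p(k) ~ k^(-gamma); gamma = 2.5 is a hypothetical choice.
    The +1 avoids log(0) for isolated nodes."""
    A = np.asarray(adj)
    degrees = A.sum(axis=0)
    return -gamma * np.sum(np.log(degrees + 1.0))

# Toy comparison: a hub-dominated (star) graph versus a homogeneous ring.
hub = np.zeros((20, 20))
hub[0, 1:] = 1
hub[1:, 0] = 1
ring = np.eye(20, k=1) + np.eye(20, k=-1)
ring[0, -1] = ring[-1, 0] = 1
for name, A in [("star", hub), ("ring", ring)]:
    print(name, round(log_prior_random(A), 1), round(log_prior_scale_free(A), 1))
# The scale-free prior assigns a higher score to the hub-dominated graph.
```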

  16. Inferring infection hazard in wildlife populations by linking data across individual and population scales

    Science.gov (United States)

    Pepin, Kim M.; Kay, Shannon L.; Golas, Ben D.; Shriner, Susan A.; Gilbert, Amy T.; Miller, Ryan S.; Graham, Andrea L.; Riley, Steven; Cross, Paul C.; Samuel, Michael D.; Hooten, Mevin B.; Hoeting, Jennifer A.; Lloyd-Smith, James O.; Webb, Colleen T.; Buhnerkempe, Michael G.

    2017-01-01

    Our ability to infer unobservable disease-dynamic processes such as force of infection (FOI; the infection hazard for susceptible hosts) has transformed our understanding of disease transmission mechanisms and capacity to predict disease dynamics. Conventional methods for inferring FOI estimate a time-averaged value and are based on population-level processes. Because many pathogens exhibit epidemic cycling and FOI is the result of processes acting across the scales of individuals and populations, a flexible framework that extends to epidemic dynamics and links within-host processes to FOI is needed. Specifically, within-host antibody kinetics in wildlife hosts can be short-lived and produce patterns that are repeatable across individuals, suggesting individual-level antibody concentrations could be used to infer time since infection and hence FOI. Using simulations and case studies (influenza A in lesser snow geese and Yersinia pestis in coyotes), we argue that with careful experimental and surveillance design, the population-level FOI signal can be recovered from individual-level antibody kinetics, despite substantial individual-level variation. In addition to improving inference, the cross-scale quantitative antibody approach we describe can reveal insights into drivers of individual-based variation in disease response, and the role of poorly understood processes, such as secondary infections, in population-level dynamics of disease.

  17. Inferring network structure from cascades

    Science.gov (United States)

    Ghonge, Sushrut; Vural, Dervis Can

    2017-07-01

    Many physical, biological, and social phenomena can be described by cascades taking place on a network. Often, the activity can be empirically observed, but not the underlying network of interactions. In this paper we offer three topological methods to infer the structure of any directed network given a set of cascade arrival times. Our formulas hold for a very general class of models where the activation probability of a node is a generic function of its degree and the number of its active neighbors. We report high success rates for synthetic and real networks, for several different cascade models.
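
    The paper's formulas are not reproduced here, but the general idea of scoring candidate edges from cascade arrival times can be sketched: for each ordered pair (i, j), count how often j activates shortly after i across cascades. The lag window, support threshold, and synthetic cascade generator below are all hypothetical choices for illustration only.

```python
import numpy as np

def infer_edges_from_cascades(arrival_times, max_lag=1.0, min_support=0.3):
    """Toy timing-based inference: declare an edge i -> j if, in a sufficient
    fraction of cascades where i activated, j activates within max_lag afterwards.

    arrival_times : (n_cascades, n_nodes) array; np.inf means 'never activated'.
    """
    T = np.asarray(arrival_times, dtype=float)
    n_casc, n_nodes = T.shape
    A = np.zeros((n_nodes, n_nodes), dtype=int)
    for i in range(n_nodes):
        for j in range(n_nodes):
            if i == j:
                continue
            dt = T[:, j] - T[:, i]
            plausible = np.isfinite(dt) & (dt > 0) & (dt <= max_lag)
            active_i = np.isfinite(T[:, i])
            if active_i.sum() and plausible.sum() / active_i.sum() >= min_support:
                A[i, j] = 1
    return A

# Hypothetical cascades on a 3-node chain 0 -> 1 -> 2.
rng = np.random.default_rng(1)
casc = np.zeros((50, 3))
casc[:, 1] = rng.uniform(0.2, 1.0, size=50)
casc[:, 2] = casc[:, 1] + rng.uniform(0.2, 1.0, size=50)
print(infer_edges_from_cascades(casc))
```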

  18. Prewhitening of hydroclimatic time series? Implications for inferred change and variability across time scales

    Science.gov (United States)

    Razavi, Saman; Vogel, Richard

    2018-02-01

    Prewhitening, the process of eliminating or reducing short-term stochastic persistence to enable detection of deterministic change, has been extensively applied to time series analysis of a range of geophysical variables. Despite the controversy around its utility, methodologies for prewhitening time series continue to be a critical feature of a variety of analyses including: trend detection of hydroclimatic variables and reconstruction of climate and/or hydrology through proxy records such as tree rings. With a focus on the latter, this paper presents a generalized approach to exploring the impact of a wide range of stochastic structures of short- and long-term persistence on the variability of hydroclimatic time series. Through this approach, we examine the impact of prewhitening on the inferred variability of time series across time scales. We document how a focus on prewhitened, residual time series can be misleading, as it can drastically distort (or remove) the structure of variability across time scales. Through examples with actual data, we show how such loss of information in prewhitened time series of tree rings (so-called "residual chronologies") can lead to the underestimation of extreme conditions in climate and hydrology, particularly droughts, reconstructed for centuries preceding the historical period.
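
    A minimal numerical illustration of the concern raised here: prewhitening a series with an AR(1) filter removes much of its low-frequency (long time-scale) variability, so statistics computed on the residual series understate variability at multi-step scales. The synthetic series, AR(1) coefficient, and block sizes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic persistent series (AR(1) with phi = 0.7), standing in for a hydroclimatic record.
n, phi = 2000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Classical prewhitening: estimate the lag-1 autocorrelation and remove it.
phi_hat = np.corrcoef(x[:-1], x[1:])[0, 1]
resid = x[1:] - phi_hat * x[:-1]

def variance_of_block_means(series, block):
    """Variance of non-overlapping block means: a crude measure of
    variability at the time scale of 'block' steps."""
    m = len(series) // block
    return np.var(series[:m * block].reshape(m, block).mean(axis=1))

for block in (1, 10, 50):
    print(block,
          round(variance_of_block_means(x, block), 3),
          round(variance_of_block_means(resid, block), 3))
# The prewhitened (residual) series retains far less variance at long blocks,
# which is the loss of cross-scale structure the paper warns about.
```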

  19. RCK: accurate and efficient inference of sequence- and structure-based protein-RNA binding models from RNAcompete data.

    Science.gov (United States)

    Orenstein, Yaron; Wang, Yuhao; Berger, Bonnie

    2016-06-15

    Protein-RNA interactions, which play vital roles in many processes, are mediated through both RNA sequence and structure. CLIP-based methods, which measure protein-RNA binding in vivo, suffer from experimental noise and systematic biases, whereas in vitro experiments capture a clearer signal of protein-RNA binding. Among them, RNAcompete provides binding affinities of a specific protein to more than 240 000 unstructured RNA probes in one experiment. The computational challenge is to infer RNA structure- and sequence-based binding models from these data. The state-of-the-art in sequence models, DeepBind, does not model structural preferences. RNAcontext models both sequence and structure preferences, but is outperformed by GraphProt. Unfortunately, GraphProt cannot detect structural preferences from RNAcompete data due to the unstructured nature of the data, as noted by its developers, nor can it be tractably run on the full RNAcompete dataset. We develop RCK, an efficient, scalable algorithm that infers both sequence and structure preferences based on a new k-mer-based model. Remarkably, even though RNAcompete data is designed to be unstructured, RCK can still learn structural preferences from it. RCK significantly outperforms both RNAcontext and DeepBind in in vitro binding prediction for 244 RNAcompete experiments. Moreover, RCK is also faster and uses less memory, which enables scalability. While currently on par with existing methods in in vivo binding prediction on a small-scale test, we demonstrate that RCK will increasingly benefit from experimentally measured RNA structure profiles as compared to computationally predicted ones. By running RCK on the entire RNAcompete dataset, we generate and provide as a resource a set of protein-RNA structure-based models on an unprecedented scale. Software and models are freely available at http://rck.csail.mit.edu/. Supplementary data are available at Bioinformatics online.

  20. Directed partial correlation: inferring large-scale gene regulatory network through induced topology disruptions.

    Directory of Open Access Journals (Sweden)

    Yinyin Yuan

    Full Text Available Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power since the number of variables far exceeds the number of samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
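
    Partial correlation itself, the conditional dependence between two genes given all others, is simple to compute from the inverse covariance matrix. The sketch below shows only that building block, not the full DPC procedure with its Granger-causal assessment of induced disruptions; the toy data and names are hypothetical.

```python
import numpy as np

def partial_correlation_matrix(data):
    """Partial correlations from the precision (inverse covariance) matrix:
    rho_ij = -P_ij / sqrt(P_ii * P_jj); data is (n_samples, n_variables)."""
    cov = np.cov(np.asarray(data), rowvar=False)
    prec = np.linalg.pinv(cov)          # pseudo-inverse for small-sample stability
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy chain gene0 -> gene1 -> gene2: gene0 and gene2 are marginally correlated,
# but their partial correlation given gene1 is near zero, so only direct links survive.
rng = np.random.default_rng(3)
g0 = rng.normal(size=300)
g1 = g0 + 0.5 * rng.normal(size=300)
g2 = g1 + 0.5 * rng.normal(size=300)
pc = partial_correlation_matrix(np.column_stack([g0, g1, g2]))
print(np.round(pc, 2))
```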

  1. Sparse Bayesian Inference and the Temperature Structure of the Solar Corona

    Energy Technology Data Exchange (ETDEWEB)

    Warren, Harry P. [Space Science Division, Naval Research Laboratory, Washington, DC 20375 (United States); Byers, Jeff M. [Materials Science and Technology Division, Naval Research Laboratory, Washington, DC 20375 (United States); Crump, Nicholas A. [Naval Center for Space Technology, Naval Research Laboratory, Washington, DC 20375 (United States)

    2017-02-20

    Measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures. Unfortunately, the temperature of the upper atmosphere cannot be observed directly, but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures. Such observations are “inverted” to determine the distribution of plasma temperatures along the line of sight. This inversion is ill posed and, in the absence of regularization, tends to produce wildly oscillatory solutions. We introduce the application of sparse Bayesian inference to the problem of inferring the temperature structure of the solar corona. Within a Bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided. We demonstrate the efficacy of the Bayesian approach by considering a test library of 40 assumed temperature distributions.
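
    The flavor of this approach, representing the temperature distribution with a set of basis functions and letting a sparsity-inducing Bayesian prior prune the ones that are not needed, can be illustrated with an off-the-shelf sparse Bayesian regression (ARD). Everything below, including the toy temperature grid, kernels, and basis functions, is a generic stand-in rather than the authors' solar forward model or inference code.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(7)

# Hypothetical setup: observed line intensities are a linear combination of an
# unknown emission distribution evaluated against per-line temperature kernels.
logT = np.linspace(5.5, 7.5, 41)                       # temperature grid (log K)
centers = np.linspace(5.6, 7.4, 12)                    # candidate basis-function centers
basis = np.exp(-0.5 * ((logT[:, None] - centers) / 0.15) ** 2)   # Gaussian basis

# The "true" emission distribution uses only two of the twelve basis functions.
true_w = np.zeros(12)
true_w[[3, 8]] = [2.0, 1.0]
kernels = rng.uniform(0.0, 1.0, size=(40, 41))          # 40 spectral lines, toy kernels
intensities = kernels @ (basis @ true_w) + 0.01 * rng.normal(size=40)

# Sparse Bayesian (ARD) inversion: most basis-function weights are driven toward zero,
# which is the kind of minimal-basis solution a sparsity prior encodes.
model = ARDRegression(fit_intercept=False)
model.fit(kernels @ basis, intensities)
print(np.round(model.coef_, 2))   # weights should concentrate on the two active basis functions
```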

  2. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    Science.gov (United States)

    2016-05-12

    Final report, U.S. Army Research Office (report date 12-05-2016; period of performance 15-May-2014 to 14-Feb-2015). Report title: Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory. Keywords: mathematical statistics; time series; Markov chains; random ... The report addresses three areas.

  3. Eight challenges in phylodynamic inference

    Directory of Open Access Journals (Sweden)

    Simon D.W. Frost

    2015-03-01

    Full Text Available The field of phylodynamics, which attempts to enhance our understanding of infectious disease dynamics using pathogen phylogenies, has made great strides in the past decade. Basic epidemiological and evolutionary models are now well characterized with inferential frameworks in place. However, significant challenges remain in extending phylodynamic inference to more complex systems. These challenges include accounting for evolutionary complexities such as changing mutation rates, selection, reassortment, and recombination, as well as epidemiological complexities such as stochastic population dynamics, host population structure, and different patterns at the within-host and between-host scales. An additional challenge exists in making efficient inferences from an ever increasing corpus of sequence data.

  4. Bayesian Inference using Neural Net Likelihood Models for Protein Secondary Structure Prediction

    Directory of Open Access Journals (Sweden)

    Seong-Gon Kim

    2011-06-01

    Full Text Available Several techniques such as Neural Networks, Genetic Algorithms, Decision Trees and other statistical or heuristic methods have been used in the past to approach the complex non-linear task of predicting the Alpha-helices, Beta-sheets and Turns of a protein's secondary structure. This project introduces a new machine learning method by using offline-trained Multilayered Perceptrons (MLPs) as the likelihood models within a Bayesian Inference framework to predict the secondary structures of proteins. Varying window sizes are used to extract neighboring amino acid information, which is passed back and forth between the Neural Net models and the Bayesian Inference process until there is a convergence of the posterior secondary structure probability.

  5. Searching for signatures of dark matter-dark radiation interaction in observations of large-scale structure

    Science.gov (United States)

    Pan, Zhen; Kaplinghat, Manoj; Knox, Lloyd

    2018-05-01

    In this paper, we conduct a search in the latest large-scale structure measurements for signatures of the dark matter-dark radiation interaction proposed by Buen-Abad et al. (2015). We show that prior claims of an inference of this interaction at ~3σ significance rely on a use of the Sunyaev-Zeldovich cluster mass function that ignores uncertainty in the mass-observable relationship. Including this uncertainty, we find that the inferred level of interaction remains consistent with the data, but so does zero interaction; i.e., there is no longer a preference for nonzero interaction. We also point out that inference of the shape and amplitude of the matter power spectrum from Lyα forest measurements is highly inconsistent with the predictions of the ΛCDM model conditioned on Planck cosmic microwave background temperature, polarization, and lensing power spectra, and that the dark matter-dark radiation model can restore that consistency. We also phenomenologically generalize the model of Buen-Abad et al. (2015) to allow for interaction rates with different scalings with temperature, and find that the original scaling is preferred by the data.

  6. Quantitative DMS mapping for automated RNA secondary structure inference

    OpenAIRE

    Cordero, Pablo; Kladwang, Wipapat; VanLang, Christopher C.; Das, Rhiju

    2012-01-01

    For decades, dimethyl sulfate (DMS) mapping has informed manual modeling of RNA structure in vitro and in vivo. Here, we incorporate DMS data into automated secondary structure inference using a pseudo-energy framework developed for 2'-OH acylation (SHAPE) mapping. On six non-coding RNAs with crystallographic models, DMS-guided modeling achieves overall false negative and false discovery rates of 9.5% and 11.6%, comparable to or better than SHAPE-guided modeling; and non-parametric bootstrapping...

  7. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    Science.gov (United States)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.

  8. The Generator of the Event Structure Lexicon (GESL): Automatic Annotation of Event Structure for Textual Inference Tasks

    Science.gov (United States)

    Im, Seohyun

    2013-01-01

    This dissertation aims to develop the Generator of the Event Structure Lexicon (GESL) which is a tool to automate annotating the event structure of verbs in text to support textual inference tasks related to lexically entailed subevents. The output of the GESL is the Event Structure Lexicon (ESL), which is a lexicon of verbs in text which includes…

  9. Ion-Scale Structure in Mercury's Magnetopause Reconnection Diffusion Region

    Science.gov (United States)

    Gershman, Daniel J.; Dorelli, John C.; DiBraccio, Gina A.; Raines, Jim M.; Slavin, James A.; Poh, Gangkai; Zurbuchen, Thomas H.

    2016-01-01

    The strength and time dependence of the electric field in a magnetopause diffusion region relate to the rate of magnetic reconnection between the solar wind and a planetary magnetic field. Here we use approximately 150-millisecond-resolution measurements of energetic electrons from the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft, observed over Mercury's dayside polar cap boundary (PCB), to infer such small-scale changes in magnetic topology and reconnection rates. We provide the first direct measurement of open magnetic topology in flux transfer events at Mercury, structures thought to account for a significant portion of the open magnetic flux transport throughout the magnetosphere. In addition, variations in PCB latitude likely correspond to intermittent bursts of approximately 0.3 to 3 millivolts per meter reconnection electric fields separated by approximately 5 to 10 seconds, resulting in average and peak normalized dayside reconnection rates of approximately 0.02 and approximately 0.2, respectively. These data demonstrate that structure in the magnetopause diffusion region at Mercury occurs at the smallest ion scales relevant to reconnection physics.

  10. Climate-induced changes in lake ecosystem structure inferred from coupled neo- and paleoecological approaches

    Science.gov (United States)

    Saros, Jasmine E.; Stone, Jeffery R.; Pederson, Gregory T.; Slemmons, Krista; Spanbauer, Trisha; Schliep, Anna; Cahl, Douglas; Williamson, Craig E.; Engstrom, Daniel R.

    2015-01-01

    Over the 20th century, surface water temperatures have increased in many lake ecosystems around the world, but long-term trends in the vertical thermal structure of lakes remain unclear, despite the strong control that thermal stratification exerts on the biological response of lakes to climate change. Here we used both neo- and paleoecological approaches to develop a fossil-based inference model for lake mixing depths and thereby refine understanding of lake thermal structure change. We focused on three common planktonic diatom taxa, the distributions of which previous research suggests might be affected by mixing depth. Comparative lake surveys and growth rate experiments revealed that these species respond to lake thermal structure when nitrogen is sufficient, with species optima ranging from shallower to deeper mixing depths. The diatom-based mixing depth model was applied to sedimentary diatom profiles extending back to 1750 AD in two lakes with moderate nitrate concentrations but differing climate settings. Thermal reconstructions were consistent with expected changes, with shallower mixing depths inferred for an alpine lake where treeline has advanced, and deeper mixing depths inferred for a boreal lake where wind strength has increased. The inference model developed here provides a new tool to expand and refine understanding of climate-induced changes in lake ecosystems.

  11. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    Science.gov (United States)

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.

  12. An Algebraic Approach to Inference in Complex Networked Structures

    Science.gov (United States)

    2015-07-09

    AFRL-AFOSR-VA-TR-2015-0265: An Algebraic Approach to Inference in Complex Networked Structures. Final report, Jose Moura, Carnegie Mellon University. The report develops an algebraic framework in which the shift is the elementary non-trivial filter that generates, under an appropriate notion of shift invariance, all linear filters; applying this elementary filter to a graph signal yields another graph signal whose value at vertex n is given approximately by a weighted linear combination of the signal values at neighboring vertices.
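
    In the graph signal processing literature this "shift" is commonly taken to be multiplication by the adjacency (or weight) matrix, so that the shifted signal's value at a vertex is a weighted combination of its neighbors' values, and any shift-invariant linear filter is a polynomial in the shift. The tiny sketch below illustrates that reading with a hypothetical graph and signal; it is not code from the report.

```python
import numpy as np

# Hypothetical weighted directed graph on 4 vertices: A[m, n] is the weight of edge n -> m.
A = np.array([[0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.3, 0.0, 0.0, 0.7],
              [1.0, 0.0, 0.0, 0.0]])

x = np.array([1.0, -2.0, 0.5, 3.0])        # a graph signal: one value per vertex

# The graph shift: each vertex's value becomes a weighted linear combination
# of the signal values at its (in-)neighbors.
shifted = A @ x

# A shift-invariant linear filter is a polynomial in the shift,
# e.g. h(A) = h0*I + h1*A + h2*A^2 applied to the signal.
h = [1.0, 0.8, 0.3]
filtered = h[0] * x + h[1] * (A @ x) + h[2] * (A @ (A @ x))
print(shifted, filtered)
```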

  13. Wavelet phase analysis of two velocity components to infer the structure of interscale transfers in a turbulent boundary-layer

    Energy Technology Data Exchange (ETDEWEB)

    Keylock, Christopher J [Sheffield Fluid Mechanics Group and Department of Civil and Structural Engineering, University of Sheffield, Mappin Street, Sheffield, S1 3JD (United Kingdom); Nishimura, Kouichi, E-mail: c.keylock@sheffield.ac.uk [Graduate School of Environmental Studies, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan)

    2016-04-15

    Scale-dependent phase analysis of velocity time series measured in a zero pressure gradient boundary layer shows that phase coupling between longitudinal and vertical velocity components is strong at both large and small scales, but minimal in the middle of the inertial regime. The same general pattern is observed at all vertical positions studied, but there is stronger phase coherence as the vertical coordinate, y, increases. The phase difference histograms evolve from a unimodal shape at small scales to the development of significant bimodality at the integral scale and above. The asymmetry in the off-diagonal couplings changes sign at the midpoint of the inertial regime, with the small scale relation consistent with intense ejections followed by a more prolonged sweep motion. These results may be interpreted in a manner that is consistent with the action of low speed streaks and hairpin vortices near the wall, with large scale motions further from the wall, the effect of which penetrates to smaller scales. Hence, a measure of phase coupling, when combined with a scale-by-scale decomposition of perpendicular velocity components, is a useful tool for investigating boundary-layer structure and inferring process from single-point measurements. (paper)

  14. Wavelet phase analysis of two velocity components to infer the structure of interscale transfers in a turbulent boundary-layer

    International Nuclear Information System (INIS)

    Keylock, Christopher J; Nishimura, Kouichi

    2016-01-01

    Scale-dependent phase analysis of velocity time series measured in a zero pressure gradient boundary layer shows that phase coupling between longitudinal and vertical velocity components is strong at both large and small scales, but minimal in the middle of the inertial regime. The same general pattern is observed at all vertical positions studied, but there is stronger phase coherence as the vertical coordinate, y, increases. The phase difference histograms evolve from a unimodal shape at small scales to the development of significant bimodality at the integral scale and above. The asymmetry in the off-diagonal couplings changes sign at the midpoint of the inertial regime, with the small scale relation consistent with intense ejections followed by a more prolonged sweep motion. These results may be interpreted in a manner that is consistent with the action of low speed streaks and hairpin vortices near the wall, with large scale motions further from the wall, the effect of which penetrates to smaller scales. Hence, a measure of phase coupling, when combined with a scale-by-scale decomposition of perpendicular velocity components, is a useful tool for investigating boundary-layer structure and inferring process from single-point measurements. (paper)
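
    A scale-dependent phase difference between two velocity components can be computed from complex (Morlet) wavelet coefficients as the angle of one component's coefficients times the conjugate of the other's. The short sketch below builds the transform directly with numpy convolutions on synthetic signals with arbitrary scales; it illustrates the quantity being discussed and is not the analysis pipeline of these papers.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0):
    """Complex Morlet continuous wavelet transform via direct convolution."""
    x = np.asarray(x, dtype=float)
    out = np.empty((len(scales), len(x)), dtype=complex)
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)
        # Correlation with the conjugate wavelet, written as a convolution.
        out[i] = np.convolve(x, np.conj(wavelet[::-1]), mode="same")
    return out

# Synthetic longitudinal (u) and vertical (v) velocity fluctuations: a common
# oscillation with a 90-degree phase offset plus noise (purely illustrative).
rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
u = np.sin(2 * np.pi * t / 64) + 0.3 * rng.normal(size=n)
v = np.cos(2 * np.pi * t / 64) + 0.3 * rng.normal(size=n)

scales = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
Wu, Wv = morlet_cwt(u, scales), morlet_cwt(v, scales)

# Scale-by-scale phase difference between the two components
# (the quantity whose histograms are analyzed in the paper).
phase_diff = np.angle(Wu * np.conj(Wv))
print(np.round(np.median(phase_diff, axis=1), 2))
# A clear ~90-degree offset appears at the scale matching the forced period.
```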

  15. Inference of neuronal network spike dynamics and topology from calcium imaging data

    Directory of Open Access Journals (Sweden)

    Henry Lütcke

    2013-12-01

    Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ('spike trains') from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.

  16. Inference of Transmission Network Structure from HIV Phylogenetic Trees.

    Science.gov (United States)

    Giardina, Federica; Romero-Severson, Ethan Obie; Albert, Jan; Britton, Tom; Leitner, Thomas

    2017-01-01

    Phylogenetic inference is an attractive means to reconstruct transmission histories and epidemics. However, there is not a perfect correspondence between transmission history and virus phylogeny. Both node height and topological differences may occur, depending on the interaction between within-host evolutionary dynamics and between-host transmission patterns. To investigate these interactions, we added a within-host evolutionary model in epidemiological simulations and examined if the resulting phylogeny could recover different types of contact networks. To further improve realism, we also introduced patient-specific differences in infectivity across disease stages, and on the epidemic level we considered incomplete sampling and the age of the epidemic. Second, we implemented an inference method based on approximate Bayesian computation (ABC) to discriminate among three well-studied network models and jointly estimate both network parameters and key epidemiological quantities such as the infection rate. Our ABC framework used both topological and distance-based tree statistics for comparison between simulated and observed trees. Overall, our simulations showed that a virus time-scaled phylogeny (genealogy) may be substantially different from the between-host transmission tree. This has important implications for the interpretation of what a phylogeny reveals about the underlying epidemic contact network. In particular, we found that while the within-host evolutionary process obscures the transmission tree, the diversification process and infectivity dynamics also add discriminatory power to differentiate between different types of contact networks. We also found that the possibility to differentiate contact networks depends on how far an epidemic has progressed, where distance-based tree statistics have more power early in an epidemic. Finally, we applied our ABC inference on two different outbreaks from the Swedish HIV-1 epidemic.
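
    The ABC logic described here, simulating outbreaks under each candidate contact-network model and keeping parameter draws whose simulated tree statistics fall close to the observed ones, reduces to a few lines of rejection sampling. The summary statistic, simulators, tolerance, and priors below are placeholders, not the authors' phylodynamic machinery.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_summary(model, rate, n_cases=200):
    """Stand-in simulator returning a single toy 'tree statistic' per run. A real
    application would simulate an epidemic on the chosen contact network and compute
    topological and distance-based statistics of the resulting time-scaled tree."""
    if model == "homogeneous":
        return rng.gamma(shape=2.0, scale=1.0 / rate, size=n_cases).mean()
    else:  # 'heterogeneous' contact network: more dispersed generation intervals
        return rng.gamma(shape=0.5, scale=1.0 / rate, size=n_cases).mean()

observed = 1.9          # hypothetical observed summary statistic
tolerance = 0.1
accepted = {"homogeneous": [], "heterogeneous": []}

for _ in range(20000):
    model = "homogeneous" if rng.random() < 0.5 else "heterogeneous"
    rate = rng.uniform(0.2, 3.0)                  # prior on the infection rate
    if abs(simulate_summary(model, rate) - observed) < tolerance:
        accepted[model].append(rate)

# ABC-style posterior model probabilities and infection-rate estimates.
total = sum(len(v) for v in accepted.values())
for model, rates in accepted.items():
    if rates:
        print(model, round(len(rates) / total, 2), round(float(np.mean(rates)), 2))
```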

  17. Using Vertical Structure to Infer the Total Mass Hidden in a Debris Disk

    Science.gov (United States)

    Daley, Cail; Hughes, A. Meredith; Carter, Evan; Flaherty, Kevin; Stafford Lambros, Zachary; Pan, Margaret; Schlichting, Hilke; Chiang, Eugene; Wilner, David; Dent, Bill; Carpenter, John; Andrews, Sean; MacGregor, Meredith Ann; Moor, Attila; Kospal, Agnes

    2018-01-01

    Disks of optically thin debris dust surround ≥ 20% of main sequence stars and mark the final stage of planetary system evolution. The features of debris disks encode dynamical interactions between the dust and any unseen planets embedded in the disk. The vertical distribution of the dust is particularly sensitive to the total mass of planetesimal bodies in the disk, and is therefore well suited for constraining the prevalence of otherwise unobservable Uranus and Neptune analogs. Inferences of mass from debris disk vertical structure have previously been applied to infrared and optical observations of several systems, but the smaller particles traced by short-wavelength observations are ‘puffed up’ by radiation pressure, yielding only upper limits on the total embedded mass. The large grains that dominate the emission at millimeter wavelengths are essentially impervious to the effects of stellar radiation, and therefore trace the underlying mass distribution more directly. Here we present 1.3mm dust continuum observations of the debris disk around the nearby M star AU Mic with the Atacama Large Millimeter/submillimeter Array (ALMA). The 3 au spatial resolution of the observations, combined with the favorable edge-on geometry of the system, allows us to measure the vertical structure of a debris disk at millimeter wavelengths for the first time. We analyze the data using a ray-tracing code that translates a 2-D density and temperature structure into a model sky image of the disk. This model image is then compared directly to the interferometric data in the visibility domain, and the model parameters are explored using a Markov Chain Monte Carlo routine. We measure a scale height-to-radius ratio of 0.03, which we then compare to a theoretical model of steady-state, size-dependent velocity distributions in the collisional cascade to infer a total mass within the disk of ∼ 1.7 Earth masses. These measurements rule out the presence of a gas giant or Neptune

  18. Improving catchment discharge predictions by inferring flow route contributions from a nested-scale monitoring and model setup

    Science.gov (United States)

    van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.

    2011-03-01

    Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for the estimation of flow route volumes and for predictions of catchment discharge. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve that relates spatial variation in groundwater depths to the average groundwater depth. The GDD-curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD-curves from the hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source for nitrates) decreased with increasing scale from 76-79% at the field-site to 34-61% and 25-50% for both catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements improves simulations of nitrate loads and predictions of extreme discharges during validation periods compared to a model that was conditioned on catchment discharge only.

  19. Structure identification in fuzzy inference using reinforcement learning

    Science.gov (United States)

    Berenji, Hamid R.; Khedkar, Pratap

    1993-01-01

    In our previous work on the GARIC architecture, we have shown that the system can start with surface structure of the knowledge base (i.e., the linguistic expression of the rules) and learn the deep structure (i.e., the fuzzy membership functions of the labels used in the rules) by using reinforcement learning. Assuming the surface structure, GARIC refines the fuzzy membership functions used in the consequents of the rules using a gradient descent procedure. This hybrid fuzzy logic and reinforcement learning approach can learn to balance a cart-pole system and to backup a truck to its docking location after a few trials. In this paper, we discuss how to do structure identification using reinforcement learning in fuzzy inference systems. This involves identifying both surface as well as deep structure of the knowledge base. The term set of fuzzy linguistic labels used in describing the values of each control variable must be derived. In this process, splitting a label refers to creating new labels which are more granular than the original label and merging two labels creates a more general label. Splitting and merging of labels directly transform the structure of the action selection network used in GARIC by increasing or decreasing the number of hidden layer nodes.

  20. Critical Zone structure inferred from multiscale near surface geophysical and hydrological data across hillslopes at the Eel River CZO

    Science.gov (United States)

    Lee, S. S.; Rempe, D. M.; Holbrook, W. S.; Schmidt, L.; Hahm, W. J.; Dietrich, W. E.

    2017-12-01

    Except for boreholes and road cut, landslide, and quarry exposures, the subsurface structure of the critical zone (CZ) of weathered bedrock is relatively invisible and unmapped, yet this structure controls the short and long term fluxes of water and solutes. Non-invasive geophysical methods such as seismic refraction are widely applied to image the structure of the CZ at the hillslope scale. However, interpretations of such data are often limited due to heterogeneity and anisotropy contributed from fracturing, moisture content, and mineralogy on the seismic signal. We develop a quantitative framework for using seismic refraction tomography from intersecting geophysical surveys and hydrologic data obtained at the Eel River Critical Zone Observatory (ERCZO) in Northern California to help quantify the nature of subsurface structure across multiple hillslopes of varying topography in the area. To enhance our understanding of modeled velocity gradients and boundaries in relation to lithological properties, we compare refraction tomography results with borehole logs of nuclear magnetic resonance (NMR), gamma and neutron density, standard penetration testing, and observation drilling logs. We also incorporate laboratory scale rock characterization including mineralogical and elemental analyses as well as porosity and density measurements made via pycnometry, helium and mercury porosimetry, and laboratory scale NMR. We evaluate the sensitivity of seismically inferred saprolite-weathered bedrock and weathered-unweathered bedrock boundaries to various velocity and inversion parameters in relation with other macro scale processes such as gravitational and tectonic forces in influencing weathered bedrock velocities. Together, our sensitivity analyses and multi-method data comparison provide insight into the interpretation of seismic refraction tomography for the quantification of CZ structure and hydrologic dynamics.

  1. Implementation of structure-mapping inference by event-file binding and action planning: a model of tool-improvisation analogies.

    Science.gov (United States)

    Fields, Chris

    2011-03-01

    Structure-mapping inferences are generally regarded as dependent upon relational concepts that are understood and expressible in language by subjects capable of analogical reasoning. However, tool-improvisation inferences are executed by members of a variety of non-human primate and other species. Tool improvisation requires correctly inferring the motion and force-transfer affordances of an object; hence tool improvisation requires structure mapping driven by relational properties. Observational and experimental evidence can be interpreted to indicate that structure-mapping analogies in tool improvisation are implemented by multi-step manipulation of event files by binding and action-planning mechanisms that act in a language-independent manner. A functional model of language-independent event-file manipulations that implement structure mapping in the tool-improvisation domain is developed. This model provides a mechanism by which motion and force representations commonly employed in tool-improvisation structure mappings may be sufficiently reinforced to be available to inwardly directed attention and hence conceptualization. Predictions and potential experimental tests of this model are outlined.

  2. Inferring transcriptional compensation interactions in yeast via stepwise structure equation modeling

    Directory of Open Access Journals (Sweden)

    Wang Woei-Fuh

    2008-03-01

    Full Text Available Abstract Background With the abundant information produced by microarray technology, various approaches have been proposed to infer transcriptional regulatory networks. However, few approaches have studied subtle and indirect interaction such as genetic compensation, the existence of which is widely recognized although its mechanism has yet to be clarified. Furthermore, when inferring gene networks most models include only observed variables whereas latent factors, such as proteins and mRNA degradation that are not measured by microarrays, do participate in networks in reality. Results Motivated by inferring transcriptional compensation (TC) interactions in yeast, a stepwise structural equation modeling algorithm (SSEM) is developed. In addition to observed variables, SSEM also incorporates hidden variables to capture interactions (or regulations) from latent factors. Simulated gene networks are used to determine with which of six possible model selection criteria (MSC) SSEM works best. SSEM with Bayesian information criterion (BIC) results in the highest true positive rates, the largest percentage of correctly predicted interactions from all existing interactions, and the highest true negative (non-existing interactions) rates. Next, we apply SSEM using real microarray data to infer TC interactions among (1) small groups of genes that are synthetic sick or lethal (SSL) to SGS1, and (2) a group of SSL pairs of 51 yeast genes involved in DNA synthesis and repair that are of interest. For (1), SSEM with BIC is shown to outperform three Bayesian network algorithms and a multivariate autoregressive model, checked against the results of qRT-PCR experiments. The predictions for (2) are shown to coincide with several known pathways of Sgs1 and its partners that are involved in DNA replication, recombination and repair. In addition, experimentally testable interactions of Rad27 are predicted. Conclusion SSEM is a useful tool for inferring genetic networks, and the

  3. Improving catchment discharge predictions by inferring flow route contributions from a nested-scale monitoring and model setup

    Directory of Open Access Journals (Sweden)

    Y. van der Velde

    2011-03-01

    Full Text Available Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for the estimation of flow route volumes and for predictions of catchment discharge. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve that relates spatial variation in groundwater depths to the average groundwater depth. The GDD-curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD-curves from the hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source for nitrates) decreased with increasing scale from 76–79% at the field-site to 34–61% and 25–50% for both catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements improves simulations of nitrate loads and predictions of extreme discharges during validation periods compared to a model that was conditioned on catchment discharge only.

  4. Inverse Bayesian inference as a key of consciousness featuring a macroscopic quantum logical structure.

    Science.gov (United States)

    Gunji, Yukio-Pegio; Shinohara, Shuji; Haruna, Taichi; Basios, Vasileios

    2017-02-01

    To overcome the dualism between mind and matter and to implement consciousness in science, a physical entity has to be embedded with a measurement process. Although quantum mechanics has been regarded as a candidate for implementing consciousness, nature at its macroscopic level is inconsistent with quantum mechanics. We propose a measurement-oriented inference system comprising Bayesian and inverse Bayesian inferences. While Bayesian inference contracts probability space, the newly defined inverse one relaxes the space. These two inferences allow an agent to make a decision corresponding to an immediate change in its environment. They generate a particular pattern of joint probability for data and hypotheses, comprising multiple diagonal and noisy matrices. This is expressed as a nondistributive orthomodular lattice equivalent to quantum logic. We also show that an orthomodular lattice can reveal information generated by inverse syllogism as well as the solutions to the frame and symbol-grounding problems. Our model is the first to connect macroscopic cognitive processes with the mathematical structure of quantum mechanics with no additional assumptions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Inferring the conservative causal core of gene regulatory networks

    Directory of Open Access Journals (Sweden)

    Emmert-Streib Frank

    2010-09-01

    Full Text Available Abstract Background Inferring gene regulatory networks from large-scale expression data is an important problem that received much attention in recent years. These networks have the potential to gain insights into causal molecular interactions of biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically. Results In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations and demonstrate also for biological expression data from E. coli that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it has also a low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently. Conclusions For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design permits not only a more intuitive and possibly biological interpretation of its working mechanism but can also result in superior results.
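
    The central design choice mentioned here, an estimated mutual-information matrix followed by a per-gene maximization step that keeps only each gene's strongest partner, can be sketched as follows. The discretization, MI estimator, and toy data are simplifications, and the significance test that C3NET applies before the maximization step is omitted; this is not the published implementation.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def c3net_like(expression, n_bins=8):
    """Schematic core of a max-MI network: estimate pairwise mutual information
    on discretized expression, then keep, for each gene, only its maximum-MI edge.

    expression : (n_samples, n_genes) array
    """
    X = np.asarray(expression, dtype=float)
    n_genes = X.shape[1]
    # Equal-width discretization per gene (a crude stand-in for better MI estimators).
    binned = np.stack(
        [np.digitize(X[:, g], np.histogram_bin_edges(X[:, g], n_bins)[1:-1])
         for g in range(n_genes)], axis=1)
    mi = np.zeros((n_genes, n_genes))
    for i in range(n_genes):
        for j in range(i + 1, n_genes):
            mi[i, j] = mi[j, i] = mutual_info_score(binned[:, i], binned[:, j])
    adj = np.zeros_like(mi, dtype=int)
    for g in range(n_genes):
        # Note: the real method also tests each MI value for significance first;
        # without that step, even an unconnected gene gets linked to some partner.
        partner = np.argmax(mi[g])
        adj[g, partner] = adj[partner, g] = 1     # undirected edge to strongest partner
    return adj, mi

# Toy data: gene1 driven by gene0, gene3 by gene2, gene4 independent.
rng = np.random.default_rng(11)
g0, g2, g4 = rng.normal(size=(3, 500))
expr = np.column_stack([g0, g0 + 0.3 * rng.normal(size=500),
                        g2, -g2 + 0.3 * rng.normal(size=500), g4])
print(c3net_like(expr)[0])
```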

  6. Inferring the conservative causal core of gene regulatory networks.

    Science.gov (United States)

    Altay, Gökmen; Emmert-Streib, Frank

    2010-09-28

    Inferring gene regulatory networks from large-scale expression data is an important problem that received much attention in recent years. These networks have the potential to gain insights into causal molecular interactions of biological processes. Hence, from a methodological point of view, reliable estimation methods based on observational data are needed to approach this problem practically. In this paper, we introduce a novel gene regulatory network inference (GRNI) algorithm, called C3NET. We compare C3NET with four well known methods, ARACNE, CLR, MRNET and RN, conducting in-depth numerical ensemble simulations and demonstrate also for biological expression data from E. coli that C3NET performs consistently better than the best known GRNI methods in the literature. In addition, it has also a low computational complexity. Since C3NET is based on estimates of mutual information values in conjunction with a maximization step, our numerical investigations demonstrate that our inference algorithm exploits causal structural information in the data efficiently. For systems biology to succeed in the long run, it is of crucial importance to establish methods that extract large-scale gene networks from high-throughput data that reflect the underlying causal interactions among genes or gene products. Our method can contribute to this endeavor by demonstrating that an inference algorithm with a neat design permits not only a more intuitive and possibly biological interpretation of its working mechanism but can also result in superior results.

  7. Method of fuzzy inference for one class of MISO-structure systems with non-singleton inputs

    Science.gov (United States)

    Sinuk, V. G.; Panchenko, M. V.

    2018-03-01

    In fuzzy modeling, the inputs of the simulated systems can receive both crisp values and non-singleton fuzzy values. The computational complexity of fuzzy inference with non-singleton fuzzy inputs is exponential. This paper describes a new method of inference based on a theorem for the decomposition of a multidimensional fuzzy implication and on a fuzzy truth value. The method handles fuzzy inputs with polynomial complexity, which makes it possible to use it for modeling large-dimensional MISO-structure systems.

  8. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high-dimensional, as they reflect the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources into a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
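
    The η² (eta squared) score mentioned here is the standard effect-size ratio from an analysis of variance: the fraction of target-gene expression variance explained by grouping samples on regulator expression. The sketch below computes a one-factor version on toy data purely to make the quantity concrete; it does not reproduce COGERE's two-way analysis or its prior-knowledge integration.

```python
import numpy as np

def eta_squared(regulator, target, n_bins=4):
    """Fraction of variance in 'target' explained by binning samples on
    'regulator' expression: eta^2 = SS_between / SS_total."""
    reg = np.asarray(regulator, dtype=float)
    tgt = np.asarray(target, dtype=float)
    edges = np.quantile(reg, np.linspace(0, 1, n_bins + 1)[1:-1])
    groups = np.digitize(reg, edges)
    grand_mean = tgt.mean()
    ss_total = np.sum((tgt - grand_mean) ** 2)
    ss_between = sum(len(tgt[groups == g]) * (tgt[groups == g].mean() - grand_mean) ** 2
                     for g in np.unique(groups))
    return ss_between / ss_total

# Toy check: a nonlinear (quadratic) regulation is picked up by eta squared
# even though the linear (Pearson) correlation is near zero.
rng = np.random.default_rng(2)
tf = rng.normal(size=1000)
gene = tf ** 2 + 0.2 * rng.normal(size=1000)
print(round(eta_squared(tf, gene), 2), round(float(np.corrcoef(tf, gene)[0, 1]), 2))
```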

  9. Multimodel inference and adaptive management

    Science.gov (United States)

    Rehme, S.E.; Powell, L.A.; Allen, Craig R.

    2011-01-01

    Ecology is an inherently complex science coping with correlated variables, nonlinear interactions and multiple scales of pattern and process, making it difficult for experiments to result in clear, strong inference. Natural resource managers, policy makers, and stakeholders rely on science to provide timely and accurate management recommendations. However, the time necessary to untangle the complexities of interactions within ecosystems is often far greater than the time available to make management decisions. One method of coping with this problem is multimodel inference. Multimodel inference assesses uncertainty by calculating likelihoods among multiple competing hypotheses, but multimodel inference results are often equivocal. Despite this, there may be pressure for ecologists to provide management recommendations regardless of the strength of their study’s inference. We reviewed papers in the Journal of Wildlife Management (JWM) and the journal Conservation Biology (CB) to quantify the prevalence of multimodel inference approaches, the resulting inference (weak versus strong), and how authors dealt with the uncertainty. Thirty-eight percent and 14%, respectively, of articles in the JWM and CB used multimodel inference approaches. Strong inference was rarely observed, with only 7% of JWM and 20% of CB articles resulting in strong inference. We found the majority of weak inference papers in both journals (59%) gave specific management recommendations. Model selection uncertainty was ignored in most recommendations for management. We suggest that adaptive management is an ideal method to resolve uncertainty when research results in weak inference.
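
    Multimodel inference in this literature is usually operationalized with information-criterion weights (e.g. Akaike weights) across the candidate models; an even spread of weight is exactly the weak, equivocal inference the authors describe. The small sketch below shows how such weights are computed; the AIC values are invented for illustration.

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: relative support for each model given the data,
    w_i = exp(-0.5 * delta_i) / sum_j exp(-0.5 * delta_j), delta_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    rel = np.exp(-0.5 * delta)
    return rel / rel.sum()

# Hypothetical candidate-model AIC scores from two studies.
strong = [210.4, 218.9, 223.1, 230.0]   # one model clearly best -> strong inference
weak = [210.4, 210.9, 211.6, 212.0]     # weights spread out -> weak, equivocal inference
for name, aic in [("strong", strong), ("weak", weak)]:
    print(name, np.round(akaike_weights(aic), 2))
```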

  10. Continuous Record Laplace-based Inference about the Break Date in Structural Change Models

    OpenAIRE

    Casini, Alessandro; Perron, Pierre

    2018-01-01

    Building upon the continuous record asymptotic framework recently introduced by Casini and Perron (2017a) for inference in structural change models, we propose a Laplace-based (Quasi-Bayes) procedure for the construction of the estimate and confidence set for the date of a structural change. The procedure relies on a Laplace-type estimator defined by an integration-based rather than an optimization-based method. A transformation of the least-squares criterion function is evaluated in order to ...

  11. Scaling structure loads for SMA

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam [KEPCO ENC, Yongin (Korea, Republic of)

    2012-10-15

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP 6041 suggests scaling the structure loads. The report recommends this approach for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP 6041 provides two simple methodologies for scaling structure seismic loads: the dominant-frequency scaling methodology and the mode-by-mode scaling methodology. Scaling of the existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper compares the results calculated with the two methodologies.

  12. Scaling structure loads for SMA

    International Nuclear Information System (INIS)

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam

    2012-01-01

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP-6041 suggests scaling of the structure loads. The report recommends this approach for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP-6041 provides two simple methodologies for scaling structure seismic loads: the dominant-frequency scaling methodology and the mode-by-mode scaling methodology. Scaling the existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper compares the results calculated with these two methodologies.
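    As a hedged sketch of the dominant-frequency scaling methodology named in the two records above, the code below rescales an existing design load by the ratio of SME to design spectral acceleration at the structure's dominant modal frequency. The interpolation helper and the example spectra are assumptions for illustration, not values from EPRI NP-6041.

      import numpy as np

      def spectral_accel(freqs, accels, f):
          """Interpolate a response spectrum at frequency f [Hz] (log-frequency interpolation)."""
          return np.interp(np.log(f), np.log(freqs), accels)

      def scale_load_dominant_frequency(design_load, f_dominant, design_spectrum, sme_spectrum):
          """Dominant-frequency scaling: multiply the existing design load by the
          SME/design spectral-acceleration ratio at the dominant modal frequency."""
          sa_design = spectral_accel(*design_spectrum, f_dominant)
          sa_sme = spectral_accel(*sme_spectrum, f_dominant)
          return design_load * sa_sme / sa_design

      # Hypothetical spectra (frequency in Hz, spectral acceleration in g) and a 5 Hz dominant mode
      design = (np.array([1.0, 2.5, 5.0, 10.0, 33.0]), np.array([0.4, 0.8, 1.0, 0.7, 0.3]))
      sme = (np.array([1.0, 2.5, 5.0, 10.0, 33.0]), np.array([0.6, 1.2, 1.5, 1.0, 0.45]))
      print(scale_load_dominant_frequency(1200.0, 5.0, design, sme))   # scaled member load

    The mode-by-mode methodology mentioned in the records would, by analogy, apply the corresponding ratio to each modal response individually before the modal responses are recombined.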

  13. Inferring the Clonal Structure of Viral Populations from Time Series Sequencing.

    Directory of Open Access Journals (Sweden)

    Donatien F Chedom

    2015-11-01

    Full Text Available RNA virus populations will undergo processes of mutation and selection resulting in a mixed population of viral particles. High-throughput sequencing of a viral population subsequently contains a mixed signal of the underlying clones. We would like to identify the underlying evolutionary structures. We utilize two sources of information to attempt this: within-segment linkage information and mutation prevalence. We demonstrate that clone haplotypes, their prevalence, and maximum parsimony reticulate evolutionary structures can be identified, although the solutions may not be unique, even for complete sets of information. This is applied to a chain of influenza infection, where we infer evolutionary structures, including reassortment, and demonstrate some of the difficulties of interpretation that arise from deep sequencing due to artifacts such as template switching during PCR amplification.

  14. Structural Information Inference from Lanthanoid Complexing Systems: Photoluminescence Studies on Isolated Ions

    Science.gov (United States)

    Greisch, Jean Francois; Harding, Michael E.; Chmela, Jiri; Klopper, Willem M.; Schooss, Detlef; Kappes, Manfred M.

    2016-06-01

    The application of lanthanoid complexes ranges from photovoltaics and light-emitting diodes to quantum memories and biological assays. Rationalization of their design requires a thorough understanding of intramolecular processes such as energy transfer, charge transfer, and non-radiative decay involving their subunits. Characterization of the excited states of such complexes considerably benefits from mass spectrometric methods since the associated optical transitions and processes are strongly affected by stoichiometry, symmetry, and overall charge state. We report herein spectroscopic measurements on ensembles of ions trapped in the gas phase and soft-landed in neon matrices. Their interpretation is considerably facilitated by direct comparison with computations. The combination of energy- and time-resolved measurements on isolated species with density functional as well as ligand-field and Franck-Condon computations enables us to infer structural as well as dynamical information about the species studied. The approach is first illustrated for sets of model lanthanoid complexes whose structure and electronic properties are systematically varied via the substitution of one component (lanthanoid or alkali/alkaline-earth ion): (i) systematic dependence of ligand-centered phosphorescence on the lanthanoid(III) promotion energy and its impact on sensitization, and (ii) structural changes induced by the substitution of alkali or alkaline-earth ions, in relation to structures inferred using ion mobility spectroscopy. The temperature dependence of sensitization is briefly discussed. The focus is then shifted to measurements involving europium complexes with doxycycline, an antibiotic of the tetracycline family. Besides discussing the complexes' structural and electronic features, we report on their use to monitor enzymatic processes involving hydrogen peroxide or biologically relevant molecules such as adenosine triphosphate (ATP).

  15. Ancestry inference using principal component analysis and spatial analysis: a distance-based analysis to account for population substructure.

    Science.gov (United States)

    Byun, Jinyoung; Han, Younghun; Gorlov, Ivan P; Busam, Jonathan A; Seldin, Michael F; Amos, Christopher I

    2017-10-16

    Accurate inference of genetic ancestry is of fundamental interest to many biomedical, forensic, and anthropological research areas. Genetic ancestry memberships may relate to genetic disease risks. In a genome association study, failing to account for differences in genetic ancestry between cases and controls may also lead to false-positive results. Although a number of strategies for inferring and taking into account the confounding effects of genetic ancestry are available, applying them to large studies (tens of thousands of samples) is challenging. The goal of this study is to develop an approach for inferring genetic ancestry of samples with unknown ancestry among closely related populations and to provide accurate estimates of ancestry for application to large-scale studies. In this study we developed a novel distance-based approach, Ancestry Inference using Principal component analysis and Spatial analysis (AIPS) that incorporates an Inverse Distance Weighted (IDW) interpolation method from spatial analysis to assign individuals to population memberships. We demonstrate the benefits of AIPS in analyzing population substructure, specifically related to the four most commonly used tools EIGENSTRAT, STRUCTURE, fastSTRUCTURE, and ADMIXTURE using genotype data from various intra-European panels and European-Americans. While the aforementioned commonly used tools performed poorly in inferring ancestry from a large number of subpopulations, AIPS accurately distinguished variations between and within subpopulations. Our results show that AIPS can be applied to large-scale data sets to discriminate the modest variability among intra-continental populations as well as for characterizing inter-continental variation. The method we developed will protect against spurious associations when mapping the genetic basis of a disease. Our approach is a more accurate and computationally efficient method for inferring genetic ancestry in large-scale genetic studies.
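    The AIPS record above describes the method only at a high level; the following is a hedged sketch of its core idea (inverse-distance-weighted assignment in principal-component space), not the published implementation. The data, population labels, and parameter choices are toy placeholders.

      import numpy as np
      from sklearn.decomposition import PCA

      def idw_ancestry(ref_genotypes, ref_labels, query_genotypes, n_pcs=2, power=2.0):
          """Project reference and query samples onto PCs, then give each query sample a
          membership score per population by inverse-distance weighting (IDW)."""
          pca = PCA(n_components=n_pcs).fit(ref_genotypes)
          ref_pc, query_pc = pca.transform(ref_genotypes), pca.transform(query_genotypes)
          labels = np.array(ref_labels)
          pops = sorted(set(ref_labels))
          memberships = []
          for q in query_pc:
              d = np.linalg.norm(ref_pc - q, axis=1) + 1e-9   # avoid division by zero
              w = 1.0 / d**power
              memberships.append({p: w[labels == p].sum() / w.sum() for p in pops})
          return memberships

      # Toy example: two reference populations differing in allele frequencies
      rng = np.random.default_rng(0)
      ref = np.vstack([rng.binomial(2, 0.1, (50, 100)), rng.binomial(2, 0.6, (50, 100))])
      labels = ["POP1"] * 50 + ["POP2"] * 50
      query = rng.binomial(2, 0.55, (3, 100))
      print(idw_ancestry(ref, labels, query))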

  16. On the soft limit of the large scale structure power spectrum. UV dependence

    International Nuclear Information System (INIS)

    Garny, Mathias

    2015-08-01

    We derive a non-perturbative equation for the large scale structure power spectrum of long-wavelength modes. Thereby, we use an operator product expansion together with relations between the three-point function and power spectrum in the soft limit. The resulting equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients, which may be obtained from response functions to (anisotropic) parameters, such as spatial curvature, in a modified cosmology. We argue that both depend weakly on fluctuations deep in the UV. As a byproduct, this implies that the renormalized leading order coefficient(s) in the effective field theory (EFT) of large scale structures receive most of their contribution from modes close to the non-linear scale. Consequently, the UV dependence found in explicit computations within standard perturbation theory stems mostly from counter-term(s). We confront a simplified version of our non-perturbative equation against existing numerical simulations, and find good agreement within the expected uncertainties. Our approach can in principle be used to precisely infer the relevance of the leading order EFT coefficient(s) using small volume simulations in an 'anisotropic separate universe' framework. Our results suggest that the importance of these coefficient(s) is roughly a 10% effect, and plausibly smaller.

  17. Feature Inference Learning and Eyetracking

    Science.gov (United States)

    Rehder, Bob; Colner, Robert M.; Hoffman, Aaron B.

    2009-01-01

    Besides traditional supervised classification learning, people can learn categories by inferring the missing features of category members. It has been proposed that feature inference learning promotes learning a category's internal structure (e.g., its typical features and interfeature correlations) whereas classification promotes the learning of…

  18. Statistical scaling of pore-scale Lagrangian velocities in natural porous media.

    Science.gov (United States)

    Siena, M; Guadagnini, A; Riva, M; Bijeljic, B; Pereira Nunes, J P; Blunt, M J

    2014-08-01

    We investigate the scaling behavior of sample statistics of pore-scale Lagrangian velocities in two different rock samples, Bentheimer sandstone and Estaillades limestone. The samples are imaged using X-ray computed tomography with micron-scale resolution. The scaling analysis relies on the study of the way qth-order sample structure functions (statistical moments of order q of absolute increments) of Lagrangian velocities depend on separation distances, or lags, traveled along the mean flow direction. In the sandstone block, sample structure functions of all orders exhibit a power-law scaling within a clearly identifiable intermediate range of lags. Sample structure functions associated with the limestone block display two diverse power-law regimes, which we infer to be related to two overlapping spatially correlated structures. In both rocks and for all orders q, we observe linear relationships between logarithmic structure functions of successive orders at all lags (a phenomenon that is typically known as extended power scaling, or extended self-similarity). The scaling behavior of Lagrangian velocities is compared with the one exhibited by porosity and specific surface area, which constitute two key pore-scale geometric observables. The statistical scaling of the local velocity field reflects the behavior of these geometric observables, with the occurrence of power-law-scaling regimes within the same range of lags for sample structure functions of Lagrangian velocity, porosity, and specific surface area.
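    A minimal sketch of the qth-order sample structure functions described above, computed for a one-dimensional series as moments of absolute increments over a range of lags; the synthetic velocity series and lag grid are placeholders, and the log-log fit stands in for the power-law scaling analysis.

      import numpy as np

      def structure_functions(velocity, lags, orders=(1, 2, 3, 4)):
          """qth-order sample structure functions: S_q(r) = <|v(x + r) - v(x)|^q>."""
          v = np.asarray(velocity, dtype=float)
          return {q: np.array([np.mean(np.abs(v[r:] - v[:-r]) ** q) for r in lags]) for q in orders}

      # Synthetic series standing in for Lagrangian velocities sampled along the mean flow
      rng = np.random.default_rng(1)
      v = np.cumsum(rng.normal(size=10_000))            # correlated toy signal
      lags = np.unique(np.logspace(0, 3, 20).astype(int))
      S = structure_functions(v, lags)

      # Scaling exponent zeta_q from a log-log fit over the chosen range of lags
      for q, Sq in S.items():
          zeta_q = np.polyfit(np.log(lags), np.log(Sq), 1)[0]
          print(q, round(zeta_q, 2))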

  19. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model by using the prior. The paper also proposes an MCMC method for Bayesian inference in the SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.

  20. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  1. Brittle fracture in structural steels: perspectives at different size-scales.

    Science.gov (United States)

    Knott, John

    2015-03-28

    This paper describes characteristics of transgranular cleavage fracture in structural steel, viewed at different size-scales. Initially, consideration is given to structures and the service duty to which they are exposed at the macroscale, highlighting failure by plastic collapse and failure by brittle fracture. This is followed by sections describing the use of fracture mechanics and materials testing in carrying out assessments of structural integrity. Attention then focuses on the microscale, explaining how values of the local fracture stress in notched bars or of fracture toughness in pre-cracked test-pieces are related to features of the microstructure: carbide thicknesses in wrought material; the sizes of oxide/silicate inclusions in weld metals. Effects of a microstructure that is 'heterogeneous' at the mesoscale are treated briefly, with respect to the extraction of test-pieces from thick sections and to extrapolations of data to low failure probabilities. The values of local fracture stress may be used to infer a local 'work-of-fracture' that is found experimentally to be a few times greater than that of two free surfaces. Reasons for this are discussed in the conclusion section on nano-scale events. It is suggested that, ahead of a sharp crack, it is necessary to increase the compliance by a cooperative movement of atoms (involving extra work) to allow the crack-tip bond to displace sufficiently for the energy of attraction between the atoms to reduce to zero. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  2. FuncPatch: a web server for the fast Bayesian inference of conserved functional patches in protein 3D structures.

    Science.gov (United States)

    Huang, Yi-Fei; Golding, G Brian

    2015-02-15

    A number of statistical phylogenetic methods have been developed to infer conserved functional sites or regions in proteins. Many methods, e.g. Rate4Site, apply the standard phylogenetic models to infer site-specific substitution rates and totally ignore the spatial correlation of substitution rates in protein tertiary structures, which may reduce their power to identify conserved functional patches in protein tertiary structures when the sequences used in the analysis are highly similar. The 3D sliding window method has been proposed to infer conserved functional patches in protein tertiary structures, but the window size, which reflects the strength of the spatial correlation, must be predefined and is not inferred from data. We recently developed GP4Rate to solve these problems under the Bayesian framework. Unfortunately, GP4Rate is computationally slow. Here, we present an intuitive web server, FuncPatch, to perform a fast approximate Bayesian inference of conserved functional patches in protein tertiary structures. Both simulations and four case studies based on empirical data suggest that FuncPatch is a good approximation to GP4Rate. However, FuncPatch is orders of magnitude faster than GP4Rate. In addition, simulations suggest that FuncPatch is potentially a useful tool complementary to Rate4Site, but the 3D sliding window method is less powerful than FuncPatch and Rate4Site. The functional patches predicted by FuncPatch in the four case studies are supported by experimental evidence, which corroborates the usefulness of FuncPatch. The software FuncPatch is freely available at the web site http://info.mcmaster.ca/yifei/FuncPatch. Contact: golding@mcmaster.ca. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Mantle Circulation Models with variational data assimilation: Inferring past mantle flow and structure from plate motion histories and seismic tomography

    Science.gov (United States)

    Bunge, H.; Hagelberg, C.; Travis, B.

    2002-12-01

    EarthScope will deliver data on structure and dynamics of continental North America and the underlying mantle on an unprecedented scale. Indeed, the scope of EarthScope makes its mission comparable to the large remote sensing efforts that are transforming the oceanographic and atmospheric sciences today. Arguably the main impact of new solid Earth observing systems is to transform our use of geodynamic models increasingly from conditions that are data poor to an environment that is data rich. Oceanographers and meteorologists already have made substantial progress in adapting to this environment, by developing new approaches of interpreting oceanographic and atmospheric data objectively through data assimilation methods in their models. However, a similarly rigorous theoretical framework for merging EarthScope derived solid Earth data with geodynamic models has yet to be devised. Here we explore the feasibility of data assimilation in mantle convection studies in an attempt to fit global geodynamic model calculations explicitly to tomographic and tectonic constraints. This is an inverse problem not unlike the inverse problem of finding optimal seismic velocity structures faced by seismologists. We derive the generalized inverse of mantle convection from a variational approach and present the adjoint equations of mantle flow. The substantial computational burden associated with solutions to the generalized inverse problem of mantle convection is made feasible using a highly efficient finite element approach based on the 3-D spherical fully parallelized mantle dynamics code TERRA, implemented on a cost-effective topical PC-cluster (geowulf) dedicated specifically to large-scale geophysical simulations. This dedicated geophysical modeling computer allows us to investigate global inverse convection problems having a spatial discretization of less than 50 km throughout the mantle. We present a synthetic high-resolution modeling experiment to demonstrate that mid

  4. Inferring biome-scale net primary productivity from tree-ring isotopes

    Science.gov (United States)

    Pederson, N.; Levesque, M.; Williams, A. P.; Hobi, M. L.; Smith, W. K.; Andreu-Hayles, L.

    2017-12-01

    Satellite estimates of vegetation growth (net primary productivity; NPP), tree-ring records, and forest inventories indicate that ongoing climate change and rising atmospheric CO2 concentration are altering productivity and carbon storage of forests worldwide. The impact of global change on the trends of NPP, however, remains unknown because of the lack of long-term high-resolution NPP data. For the first time, we tested if annually resolved carbon (δ13C) and oxygen (δ18O) stable isotopes from the cellulose of tree rings from trees in temperate regions could be used as a tool for inferring NPP across spatiotemporal scales. We compared satellite NPP estimates from the moderate-resolution imaging spectroradiometer sensor (MODIS, product MOD17A) and a newly developed global NPP dataset derived from the Global Inventory Modeling and Mapping Studies (GIMMS) dataset to annually resolved tree-ring width and δ13C and δ18O records from four sites along a hydroclimatic gradient in the Eastern and Central United States. We found strong correlations across large geographical regions between satellite-derived NPP and tree-ring isotopes that ranged from -0.40 to -0.91. Notably, tree-ring derived δ18O had the strongest relation to climate. The results were consistent among the studied tree species (Quercus rubra and Liriodendron tulipifera) and along the hydroclimatic conditions of our network. Our study indicates that tree-ring isotopes can potentially be used to reconstruct NPP in time and space. As such, our findings represent an important breakthrough for estimating long-term changes in vegetation productivity at the biome scale.

  5. Statistical inference of seabed sound-speed structure in the Gulf of Oman Basin.

    Science.gov (United States)

    Sagers, Jason D; Knobles, David P

    2014-06-01

    Addressed is the statistical inference of the sound-speed depth profile of a thick soft seabed from broadband sound propagation data recorded in the Gulf of Oman Basin in 1977. The acoustic data are in the form of time series signals recorded on a sparse vertical line array and generated by explosive sources deployed along a 280 km track. The acoustic data offer a unique opportunity to study a deep-water bottom-limited thickly sedimented environment because of the large number of time series measurements, very low seabed attenuation, and auxiliary measurements. A maximum entropy method is employed to obtain a conditional posterior probability distribution (PPD) for the sound-speed ratio and the near-surface sound-speed gradient. The multiple data samples allow for a determination of the average error constraint value required to uniquely specify the PPD for each data sample. Two complicating features of the statistical inference study are addressed: (1) the need to develop an error function that can both utilize the measured multipath arrival structure and mitigate the effects of data errors and (2) the effect of small bathymetric slopes on the structure of the bottom interacting arrivals.

  6. Optimal inference with suboptimal models: Addiction and active Bayesian inference

    Science.gov (United States)

    Schwartenbeck, Philipp; FitzGerald, Thomas H.B.; Mathys, Christoph; Dolan, Ray; Wurst, Friedrich; Kronbichler, Martin; Friston, Karl

    2015-01-01

    When casting behaviour as active (Bayesian) inference, optimal inference is defined with respect to an agent’s beliefs – based on its generative model of the world. This contrasts with normative accounts of choice behaviour, in which optimal actions are considered in relation to the true structure of the environment – as opposed to the agent’s beliefs about worldly states (or the task). This distinction shifts an understanding of suboptimal or pathological behaviour away from aberrant inference as such, to understanding the prior beliefs of a subject that cause them to behave less ‘optimally’ than our prior beliefs suggest they should behave. Put simply, suboptimal or pathological behaviour does not speak against understanding behaviour in terms of (Bayes optimal) inference, but rather calls for a more refined understanding of the subject’s generative model upon which their (optimal) Bayesian inference is based. Here, we discuss this fundamental distinction and its implications for understanding optimality, bounded rationality and pathological (choice) behaviour. We illustrate our argument using addictive choice behaviour in a recently described ‘limited offer’ task. Our simulations of pathological choices and addictive behaviour also generate some clear hypotheses, which we hope to pursue in ongoing empirical work. PMID:25561321

  7. Diagnostic SNPs for inferring population structure in American mink (Neovison vison) identified through RAD sequencing

    DEFF Research Database (Denmark)

    2015-01-01

    Data from: "Diagnostic SNPs for inferring population structure in American mink (Neovison vison) identified through RAD sequencing" in Genomic Resources Notes accepted 1 October 2014 to 30 November 2014....

  8. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  9. Inference in models with adaptive learning

    NARCIS (Netherlands)

    Chevillon, G.; Massmann, M.; Mavroeidis, S.

    2010-01-01

    Identification of structural parameters in models with adaptive learning can be weak, causing standard inference procedures to become unreliable. Learning also induces persistent dynamics, and this makes the distribution of estimators and test statistics non-standard. Valid inference can be

  10. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models.

    Science.gov (United States)

    Gautestad, Arild O; Loe, Leif E; Mysterud, Atle

    2013-05-01

    1. Increased inference regarding underlying behavioural mechanisms of animal movement can be achieved by comparing GPS data with statistical mechanical movement models such as random walk and Lévy walk with known underlying behaviour and statistical properties. 2. GPS data are typically collected with ≥ 1 h intervals not exactly tracking every mechanistic step along the movement path, so a statistical mechanical model approach rather than a mechanistic approach is appropriate. However, comparisons require a coherent framework involving both scaling and memory aspects of the underlying process. Thus, simulation models have recently been extended to include memory-guided returns to previously visited patches, that is, site fidelity. 3. We define four main classes of movement, differing in incorporation of memory and scaling (based on respective intervals of the statistical fractal dimension D and presence/absence of site fidelity). Using three statistical protocols to estimate D and site fidelity, we compare these main movement classes with patterns observed in GPS data from 52 females of red deer (Cervus elaphus). 4. The results show best compliance with a scale-free and memory-enhanced kind of space use; that is, a power law distribution of step lengths, a fractal distribution of the spatial scatter of fixes and site fidelity. 5. Our study thus demonstrates how inference regarding memory effects and a hierarchical pattern of space use can be derived from analysis of GPS data. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
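    Two of the statistical ingredients mentioned above (a power-law distribution of step lengths and a fractal scatter of fixes) can be illustrated with a short sketch; the Hill-type exponent estimator, the box-counting routine, and the synthetic trajectory below are illustrative assumptions, not the authors' protocols.

      import numpy as np

      def powerlaw_exponent(step_lengths, xmin):
          """Maximum-likelihood (Hill) estimate of mu in P(l) ~ l^-mu for l >= xmin."""
          l = np.asarray(step_lengths, dtype=float)
          l = l[l >= xmin]
          return 1.0 + len(l) / np.sum(np.log(l / xmin))

      def box_counting_dimension(xy, box_sizes):
          """Estimate the fractal dimension D of a 2-D scatter of GPS fixes by box counting."""
          xy = np.asarray(xy, dtype=float)
          counts = [len(set(map(tuple, np.floor(xy / s).astype(int)))) for s in box_sizes]
          slope = np.polyfit(np.log(box_sizes), np.log(counts), 1)[0]
          return -slope

      # Toy trajectory with heavy-tailed (Levy-like) step lengths
      rng = np.random.default_rng(2)
      steps = (1.0 - rng.random(5000)) ** -1.0              # Pareto-like draws, mu ~ 2
      angles = rng.uniform(0, 2 * np.pi, 5000)
      xy = np.cumsum(np.column_stack([steps * np.cos(angles), steps * np.sin(angles)]), axis=0)

      print(powerlaw_exponent(steps, xmin=1.0))
      print(box_counting_dimension(xy, box_sizes=[1, 2, 4, 8, 16, 32]))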

  11. Demographic inferences from large-scale NGS data

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov

    .g. human genetics. In this thesis, the three papers presented demonstrate the advantages of NGS data in the framework of population genetics for elucidating demographic inferences, important for understanding conservation efforts, selection and mutational burdens. In the first whole-genome study...... that the demographic history of the Inuit is the most extreme in terms of population size, of any human population. We identify a slight increase in the number of deleterious alleles because of this demographic history and support our results using simulations. We use this to show that the reduction in population size...

  12. Optimization methods for logical inference

    CERN Document Server

    Chandru, Vijay

    2011-01-01

    Merging logic and mathematics in deductive inference: an innovative, cutting-edge approach. Optimization methods for logical inference? Absolutely, say Vijay Chandru and John Hooker, two major contributors to this rapidly expanding field. And even though "solving logical inference problems with optimization methods may seem a bit like eating sauerkraut with chopsticks... it is the mathematical structure of a problem that determines whether an optimization model can help solve it, not the context in which the problem occurs." Presenting powerful, proven optimization techniques for logic in

  13. Small scale structure on cosmic strings

    International Nuclear Information System (INIS)

    Albrecht, A.

    1989-01-01

    I discuss our current understanding of cosmic string evolution, and focus on the question of small scale structure on strings, where most of the disagreements lie. I present a physical picture designed to put the role of the small scale structure into more intuitive terms. In this picture one can see how the small scale structure can feed back in a major way on the overall scaling solution. I also argue that it is easy for small scale numerical errors to feed back in just such a way. The intuitive discussion presented here may form the basis for an analytic treatment of the small structure, which I argue in any case would be extremely valuable in filling the gaps in our present understanding of cosmic string evolution. 24 refs., 8 figs

  14. Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Willcox, Karen [MIT]; Marzouk, Youssef [MIT]

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to

  15. Super learning to hedge against incorrect inference from arbitrary parametric assumptions in marginal structural modeling.

    Science.gov (United States)

    Neugebauer, Romain; Fireman, Bruce; Roy, Jason A; Raebel, Marsha A; Nichols, Gregory A; O'Connor, Patrick J

    2013-08-01

    Clinical trials are unlikely to ever be launched for many comparative effectiveness research (CER) questions. Inferences from hypothetical randomized trials may however be emulated with marginal structural modeling (MSM) using observational data, but success in adjusting for time-dependent confounding and selection bias typically relies on parametric modeling assumptions. If these assumptions are violated, inferences from MSM may be inaccurate. In this article, we motivate the application of a data-adaptive estimation approach called super learning (SL) to avoid reliance on arbitrary parametric assumptions in CER. Using the electronic health records data from adults with new-onset type 2 diabetes, we implemented MSM with inverse probability weighting (IPW) estimation to evaluate the effect of three oral antidiabetic therapies on the worsening of glomerular filtration rate. Inferences from IPW estimation were noticeably sensitive to the parametric assumptions about the associations between both the exposure and censoring processes and the main suspected source of confounding, that is, time-dependent measurements of hemoglobin A1c. SL was successfully implemented to harness flexible confounding and selection bias adjustment from existing machine learning algorithms. Erroneous IPW inference about clinical effectiveness because of arbitrary and incorrect modeling decisions may be avoided with SL. Copyright © 2013 Elsevier Inc. All rights reserved.
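    As a hedged, single-time-point simplification of the MSM/IPW machinery described above, the sketch below builds stabilized inverse-probability-of-treatment weights with a flexible classifier standing in for a super-learner ensemble; the variable names and the simulated cohort are hypothetical.

      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier

      def stabilized_ipw_weights(treatment, covariates):
          """Stabilized inverse-probability-of-treatment weights for a binary exposure.
          A gradient-boosting classifier stands in here for a super-learner ensemble."""
          ps = GradientBoostingClassifier().fit(covariates, treatment).predict_proba(covariates)[:, 1]
          marginal = treatment.mean()
          return np.where(treatment == 1, marginal / ps, (1 - marginal) / (1 - ps))

      # Toy cohort: one confounder (e.g., hemoglobin A1c), a binary exposure, a continuous outcome
      rng = np.random.default_rng(3)
      a1c = rng.normal(7.5, 1.0, 2000)
      treat = rng.binomial(1, 1 / (1 + np.exp(-(a1c - 7.5))))
      outcome = 0.5 * treat + 0.8 * a1c + rng.normal(size=2000)

      w = stabilized_ipw_weights(treat, a1c.reshape(-1, 1))
      # The weighted difference in means approximates the marginal treatment effect (~0.5 here)
      effect = (np.average(outcome[treat == 1], weights=w[treat == 1])
                - np.average(outcome[treat == 0], weights=w[treat == 0]))
      print(round(effect, 2))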

  16. Causal inference in biology networks with integrated belief propagation.

    Science.gov (United States)

    Chang, Rui; Karr, Jonathan R; Schadt, Eric E

    2015-01-01

    Inferring causal relationships among molecular and higher order phenotypes is a critical step in elucidating the complexity of living systems. Here we propose a novel method for inferring causality that is no longer constrained by the conditional dependency arguments that limit the ability of statistical causal inference methods to resolve causal relationships within sets of graphical models that are Markov equivalent. Our method utilizes Bayesian belief propagation to infer the responses of perturbation events on molecular traits given a hypothesized graph structure. A distance measure between the inferred response distribution and the observed data is defined to assess the 'fitness' of the hypothesized causal relationships. To test our algorithm, we infer causal relationships within equivalence classes of gene networks in which the form of the functional interactions that are possible are assumed to be nonlinear, given synthetic microarray and RNA sequencing data. We also apply our method to infer causality in a real metabolic network with a v-structure and a feedback loop. We show that our method can recapitulate the causal structure and recover the feedback loop from steady-state data alone, which conventional methods cannot.

  17. Computational applications of DNA structural scales

    DEFF Research Database (Denmark)

    Baldi, P.; Chauvin, Y.; Brunak, Søren

    1998-01-01

    Studies several different physical scales associated with the structural features of DNA sequences from a computational standpoint, including dinucleotide scales, such as base stacking energy and propeller twist, and trinucleotide scales, such as bendability and nucleosome positioning. We show that these scales provide an alternative or complementary compact representation of DNA sequences. As an example, we construct a strand-invariant representation of DNA sequences. The scales can also be used to analyze and discover new DNA structural patterns, especially in combination with hidden Markov models...

  18. On the criticality of inferred models

    Science.gov (United States)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-10-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality.
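    The link asserted above between the Fisher information of an inferred model and its susceptibility can be checked numerically in the simplest possible case, a single spin in a field; this toy example is an editorial illustration, not taken from the paper.

      import numpy as np

      # For a single spin s in {-1, +1} with P(s) proportional to exp(h*s):
      #   Fisher information I(h) = Var(s) = 1 - tanh(h)^2
      #   susceptibility chi = d<s>/dh = 1 - tanh(h)^2
      # i.e. the two quantities coincide, as the record above argues in general.

      def fisher_information(h, n_samples=200_000, seed=4):
          rng = np.random.default_rng(seed)
          p_up = np.exp(h) / (np.exp(h) + np.exp(-h))
          s = np.where(rng.random(n_samples) < p_up, 1.0, -1.0)
          return s.var()                      # sample variance of the sufficient statistic

      def susceptibility(h, dh=1e-3):
          return (np.tanh(h + dh) - np.tanh(h - dh)) / (2 * dh)   # numerical d<s>/dh

      for h in (0.0, 0.5, 1.5):
          print(h, round(fisher_information(h), 3), round(susceptibility(h), 3))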

  19. On the criticality of inferred models

    International Nuclear Information System (INIS)

    Mastromatteo, Iacopo; Marsili, Matteo

    2011-01-01

    Advanced inference techniques allow one to reconstruct a pattern of interaction from high dimensional data sets, from probing simultaneously thousands of units of extended systems—such as cells, neural tissues and financial markets. We focus here on the statistical properties of inferred models and argue that inference procedures are likely to yield models which are close to singular values of parameters, akin to critical points in physics where phase transitions occur. These are points where the response of physical systems to external perturbations, as measured by the susceptibility, is very large and diverges in the limit of infinite size. We show that the reparameterization invariant metrics in the space of probability distributions of these models (the Fisher information) are directly related to the susceptibility of the inferred model. As a result, distinguishable models tend to accumulate close to critical points, where the susceptibility diverges in infinite systems. This region is the one where the estimate of inferred parameters is most stable. In order to illustrate these points, we discuss inference of interacting point processes with application to financial data and show that sensible choices of observation time scales naturally yield models which are close to criticality

  20. A Bayesian Network Schema for Lessening Database Inference

    National Research Council Canada - National Science Library

    Chang, LiWu; Moskowitz, Ira S

    2001-01-01

    .... The authors introduce a formal schema for database inference analysis, based upon a Bayesian network structure, which identifies critical parameters involved in the inference problem and represents...

  1. Solving the small-scale structure puzzles with dissipative dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Foot, Robert [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, University of Melbourne, Melbourne, Victoria 3010 (Australia); Vagnozzi, Sunny, E-mail: rfoot@unimelb.edu.au, E-mail: sunny.vagnozzi@fysik.su.se [The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, AlbaNova University Center, Roslagstullbacken 21A, SE-106 91 Stockholm (Sweden)

    2016-07-01

    Small-scale structure is studied in the context of dissipative dark matter, arising for instance in models with a hidden unbroken Abelian sector, so that dark matter couples to a massless dark photon. The dark sector interacts with ordinary matter via gravity and photon-dark photon kinetic mixing. Mirror dark matter is a theoretically constrained special case where all parameters are fixed except for the kinetic mixing strength, ε. In these models, the dark matter halo around spiral and irregular galaxies takes the form of a dissipative plasma which evolves in response to various heating and cooling processes. It has been argued previously that such dynamics can account for the inferred cored density profiles of galaxies and other related structural features. Here we focus on the apparent deficit of nearby small galaxies ('missing satellite problem'), which these dissipative models have the potential to address through small-scale power suppression by acoustic and diffusion damping. Using a variant of the extended Press-Schechter formalism, we evaluate the halo mass function for the special case of mirror dark matter. Considering a simplified model where M_baryons ∝ M_halo, we relate the halo mass function to more directly observable quantities, and find that for ε ≈ 2 × 10^-10 such a simplified description is compatible with the measured galaxy luminosity and velocity functions. On scales M_halo ≲ 10^8 M_⊙, diffusion damping exponentially suppresses the halo mass function, suggesting a nonprimordial origin for dwarf spheroidal satellite galaxies, which we speculate were formed via a top-down fragmentation process as the result of nonlinear dissipative collapse of larger density perturbations. This could explain the planar orientation of satellite galaxies around Andromeda and the Milky Way.

  2. Solving the small-scale structure puzzles with dissipative dark matter

    Science.gov (United States)

    Foot, Robert; Vagnozzi, Sunny

    2016-07-01

    Small-scale structure is studied in the context of dissipative dark matter, arising for instance in models with a hidden unbroken Abelian sector, so that dark matter couples to a massless dark photon. The dark sector interacts with ordinary matter via gravity and photon-dark photon kinetic mixing. Mirror dark matter is a theoretically constrained special case where all parameters are fixed except for the kinetic mixing strength, epsilon. In these models, the dark matter halo around spiral and irregular galaxies takes the form of a dissipative plasma which evolves in response to various heating and cooling processes. It has been argued previously that such dynamics can account for the inferred cored density profiles of galaxies and other related structural features. Here we focus on the apparent deficit of nearby small galaxies ("missing satellite problem"), which these dissipative models have the potential to address through small-scale power suppression by acoustic and diffusion damping. Using a variant of the extended Press-Schechter formalism, we evaluate the halo mass function for the special case of mirror dark matter. Considering a simplified model where M_baryons ∝ M_halo, we relate the halo mass function to more directly observable quantities, and find that for epsilon ≈ 2 × 10^-10 such a simplified description is compatible with the measured galaxy luminosity and velocity functions. On scales M_halo ≲ 10^8 M_⊙, diffusion damping exponentially suppresses the halo mass function, suggesting a nonprimordial origin for dwarf spheroidal satellite galaxies, which we speculate were formed via a top-down fragmentation process as the result of nonlinear dissipative collapse of larger density perturbations. This could explain the planar orientation of satellite galaxies around Andromeda and the Milky Way.
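    For orientation, the sketch below evaluates the standard Press-Schechter mass function from an assumed variance sigma(M); the power-law sigma(M), the density normalization, and the mass grid are placeholders, not the damped mirror-dark-matter spectrum used in the records above.

      import numpy as np

      def press_schechter_dn_dlnM(M, sigma_of_M, rho_m=8.5e10, delta_c=1.686):
          """Press-Schechter mass function dn/dlnM for halo masses M [M_sun/h], given a
          callable sigma(M); rho_m is a placeholder comoving matter density [M_sun h^2 / Mpc^3]."""
          sigma = sigma_of_M(M)
          dlnsigma_dlnM = np.gradient(np.log(sigma), np.log(M))
          nu = delta_c / sigma
          return (np.sqrt(2 / np.pi) * (rho_m / M) * nu
                  * np.abs(dlnsigma_dlnM) * np.exp(-0.5 * nu**2))

      # Placeholder variance: a power law normalized to sigma = 1 at 10^13 M_sun/h.
      # A damped (mirror/dissipative) spectrum would instead cut sigma(M) off at low masses.
      sigma = lambda M: (M / 1e13) ** -0.3
      M = np.logspace(7, 15, 50)
      print(press_schechter_dn_dlnM(M, sigma)[:5])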

  3. Optimal structural inference of signaling pathways from unordered and overlapping gene sets.

    Science.gov (United States)

    Acharya, Lipi R; Judeh, Thair; Wang, Guangdi; Zhu, Dongxiao

    2012-02-15

    A plethora of bioinformatics analysis has led to the discovery of numerous gene sets, which can be interpreted as discrete measurements emitted from latent signaling pathways. Their potential to infer signaling pathway structures, however, has not been sufficiently exploited. Existing methods accommodating discrete data do not explicitly consider signal cascading mechanisms that characterize a signaling pathway. Novel computational methods are thus needed to fully utilize gene sets and broaden the scope from focusing only on pairwise interactions to the more general cascading events in the inference of signaling pathway structures. We propose a gene set based simulated annealing (SA) algorithm for the reconstruction of signaling pathway structures. A signaling pathway structure is a directed graph containing up to a few hundred nodes and many overlapping signal cascades, where each cascade represents a chain of molecular interactions from the cell surface to the nucleus. Gene sets in our context refer to discrete sets of genes participating in signal cascades, the basic building blocks of a signaling pathway, with no prior information about gene orderings in the cascades. From a compendium of gene sets related to a pathway, SA aims to search for signal cascades that characterize the optimal signaling pathway structure. In the search process, the extent of overlap among signal cascades is used to measure the optimality of a structure. Throughout, we treat gene sets as random samples from a first-order Markov chain model. We evaluated the performance of SA in three case studies. In the first study conducted on 83 KEGG pathways, SA demonstrated a significantly better performance than Bayesian network methods. Since both SA and Bayesian network methods accommodate discrete data, use a 'search and score' network learning strategy and output a directed network, they can be compared in terms of performance and computational time. In the second study, we compared SA and
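    A generic simulated-annealing skeleton of the kind invoked above is sketched below, applied to a toy score that rewards keeping the genes of each (unordered) gene set close together in a single cascade ordering; both the move set and the score are simplifications, not the published gene-set overlap criterion.

      import math, random

      def simulated_annealing(initial, score, propose, n_steps=5000, t0=1.0, cooling=0.999):
          """Generic SA search: propose a local change and accept worse candidates with
          probability exp(delta / T) under a geometric cooling schedule."""
          current = best = initial
          t = t0
          for _ in range(n_steps):
              candidate = propose(current)
              delta = score(candidate) - score(current)
              if delta >= 0 or random.random() < math.exp(delta / t):
                  current = candidate
                  if score(current) > score(best):
                      best = current
              t *= cooling
          return best

      # Toy problem: order genes so that members of each unordered gene set stay close together
      gene_sets = [{"A", "B", "C"}, {"B", "C", "D"}, {"C", "D", "E"}]
      genes = sorted(set().union(*gene_sets))

      def score(order):
          pos = {g: i for i, g in enumerate(order)}
          return -sum(max(pos[g] for g in s) - min(pos[g] for g in s) for s in gene_sets)

      def propose(order):
          i, j = random.sample(range(len(order)), 2)
          new = list(order)
          new[i], new[j] = new[j], new[i]
          return tuple(new)

      random.seed(0)
      print(simulated_annealing(tuple(genes), score, propose))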

  4. Problem solving and inference mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Furukawa, K; Nakajima, R; Yonezawa, A; Goto, S; Aoyama, A

    1982-01-01

    The heart of the fifth generation computer will be powerful mechanisms for problem solving and inference. A deduction-oriented language is to be designed, which will form the core of the whole computing system. The language is based on predicate logic with the extended features of structuring facilities, meta structures and relational data base interfaces. Parallel computation mechanisms and specialized hardware architectures are being investigated to make possible efficient realization of the language features. The project includes research into an intelligent programming system, a knowledge representation language and system, and a meta inference system to be built on the core. 30 references.

  5. What do transitive inference and class inclusion have in common? Categorical (co)products and cognitive development.

    Directory of Open Access Journals (Sweden)

    Steven Phillips

    2009-12-01

    Full Text Available Transitive inference, class inclusion and a variety of other inferential abilities have strikingly similar developmental profiles: all are acquired around the age of five. Yet, little is known about the reasons for this correspondence. Category theory was invented as a formal means of establishing commonalities between various mathematical structures. We use category theory to show that transitive inference and class inclusion involve dual mathematical structures, called product and coproduct. Other inferential tasks with similar developmental profiles, including matrix completion, cardinality, dimensional change card sorting, balance-scale (weight-distance) integration, and Theory of Mind, also involve these structures. By contrast, (co)products are not involved in the behaviours exhibited by younger children on these tasks, or simplified versions that are within their ability. These results point to a fundamental cognitive principle under development during childhood, namely the capacity to compute (co)products in the categorical sense.

  6. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensively used in mathematical and medicinal chemistry, including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
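    As a hedged illustration of a partition-based graph entropy of the kind discussed above, the sketch groups vertices by degree (a simple stand-in for the orbit partition behind the classical topological information content) and computes the Shannon entropy of that partition; the example graph is a toy.

      import math
      import networkx as nx

      def partition_entropy(graph, key=lambda g, v: g.degree(v)):
          """Shannon entropy (bits) of a vertex partition, vertices grouped by `key`.
          Grouping by degree is only a proxy for the orbit partition used in the
          classical topological information content."""
          classes = {}
          for v in graph.nodes():
              k = key(graph, v)
              classes[k] = classes.get(k, 0) + 1
          n = graph.number_of_nodes()
          return -sum((c / n) * math.log2(c / n) for c in classes.values())

      # A toy "chemical" graph: a six-ring with one substituent (roughly toluene's carbon skeleton)
      g = nx.cycle_graph(6)
      g.add_edge(0, 6)
      print(round(partition_entropy(g), 3))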

  7. Inference of expanded Lrp-like feast/famine transcription factor targets in a non-model organism using protein structure-based prediction.

    Science.gov (United States)

    Ashworth, Justin; Plaisier, Christopher L; Lo, Fang Yin; Reiss, David J; Baliga, Nitin S

    2014-01-01

    Widespread microbial genome sequencing presents an opportunity to understand the gene regulatory networks of non-model organisms. This requires knowledge of the binding sites for transcription factors whose DNA-binding properties are unknown or difficult to infer. We adapted a protein structure-based method to predict the specificities and putative regulons of homologous transcription factors across diverse species. As a proof-of-concept we predicted the specificities and transcriptional target genes of divergent archaeal feast/famine regulatory proteins, several of which are encoded in the genome of Halobacterium salinarum. This was validated by comparison to experimentally determined specificities for transcription factors in distantly related extremophiles, chromatin immunoprecipitation experiments, and cis-regulatory sequence conservation across eighteen related species of halobacteria. Through this analysis we were able to infer that Halobacterium salinarum employs a divergent local trans-regulatory strategy to regulate genes (carA and carB) involved in arginine and pyrimidine metabolism, whereas Escherichia coli employs an operon. The prediction of gene regulatory binding sites using structure-based methods is useful for the inference of gene regulatory relationships in new species that are otherwise difficult to infer.

  8. Structural Inference in the Art of Violin Making.

    Science.gov (United States)

    Morse-Fortier, Leonard Joseph

    The "secrets" of success of early Italian violins have long been sought. Among their many efforts to reproduce the results of Stradiveri, Guarneri, and Amati, luthiers have attempted to order and match natural resonant frequencies in the free violin plates. This tap-tone plate tuning technique is simply an eigenvalue extraction scheme. In the final stages of carving, the violin maker complements considerable intuitive knowledge of violin plate structure and of modal attributes with tap-tone frequency estimates to better understand plate structure and to inform decisions about plate carving and completeness. Examining the modal attributes of violin plates, this work develops and incorporates an impulse-response scheme for modal inference, measures resonant frequencies and modeshapes for a pair of violin plates, and presents modeshapes through a unique computer visualization scheme developed specifically for this purpose. The work explores, through simple examples questions of how plate modal attributes reflect underlying structure, and questions about the so -called evolution of modeshapes and frequencies through assembly of the violin. Separately, the work develops computer code for a carved, anisotropic, plate/shell finite element. Solutions are found to the static displacement and free-vibration eigenvalue problems for an orthotropic plate, and used to verify element accuracy. Finally, a violin back plate is modelled with full consideration of plate thickness and arching. Model estimates for modal attributes compare very well against experimentally acquired values. Finally, the modal synthesis technique is applied to predicting the modal attributes of the violin top plate with ribs attached from those of the top plate alone, and with an estimate of rib mass and stiffness. This last analysis serves to verify the modal synthesis method, and to quantify its limits of applicability in attempting to solve problems with severe structural modification. Conclusions

  9. Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations.

    Directory of Open Access Journals (Sweden)

    Xiaodong Cai

    Full Text Available Integrating genetic perturbations with gene expression data not only improves accuracy of regulatory network topology inference, but also enables learning of causal regulatory relations between genes. Although a number of methods have been developed to integrate both types of data, the need for efficient and powerful algorithms still remains. In this paper, sparse structural equation models (SEMs) are employed to integrate both gene expression data and cis-expression quantitative trait loci (cis-eQTL), for modeling gene regulatory networks in accordance with biological evidence about genes regulating or being regulated by a small number of genes. A systematic inference method named sparsity-aware maximum likelihood (SML) is developed for SEM estimation. Using simulated directed acyclic or cyclic networks, the SML performance is compared with that of two state-of-the-art algorithms: the adaptive Lasso (AL) based scheme, and the QTL-directed dependency graph (QDG) method. Computer simulations demonstrate that the novel SML algorithm offers significantly better performance than the AL-based and QDG algorithms across all sample sizes from 100 to 1,000, in terms of detection power and false discovery rate, in all the cases tested that include acyclic or cyclic networks of 10, 30 and 300 genes. The SML method is further applied to infer a network of 39 human genes that are related to the immune function and are chosen to have a reliable eQTL per gene. The resulting network consists of 9 genes and 13 edges. Most of the edges represent interactions reasonably expected from experimental evidence, while the remainder may indicate the emergence of new interactions. The sparse SEM and efficient SML algorithm provide an effective means of exploiting both gene expression and perturbation data to infer gene regulatory networks. An open-source computer program implementing the SML algorithm is freely available upon request.
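    The SML algorithm itself is not reproduced here; as a rough stand-in for the sparse-SEM idea described above, the sketch regresses each gene on all other genes plus its own cis-eQTL genotype with a lasso penalty and reads nonzero gene-to-gene coefficients as candidate edges. The simulated network and all parameter values are hypothetical, and this stand-in lacks the identifiability guarantees of the published method.

      import numpy as np
      from sklearn.linear_model import Lasso

      def sparse_sem_edges(expression, eqtl, alpha=0.05):
          """Crude sparse-SEM stand-in: regress each gene on all other genes plus its own
          cis-eQTL genotype (the perturbation) and report nonzero gene->gene coefficients
          as candidate directed edges. A lasso approximation, not the SML algorithm."""
          n_samples, n_genes = expression.shape
          edges = []
          for j in range(n_genes):
              others = [k for k in range(n_genes) if k != j]
              X = np.column_stack([expression[:, others], eqtl[:, [j]]])
              coef = Lasso(alpha=alpha).fit(X, expression[:, j]).coef_
              edges += [(others[i], j) for i, c in enumerate(coef[:-1]) if abs(c) > 1e-6]
          return edges

      # Toy network: gene 0 -> gene 1 -> gene 2, each gene also driven by its own eQTL
      rng = np.random.default_rng(5)
      eqtl = rng.binomial(2, 0.5, (500, 3)).astype(float)
      e = np.zeros((500, 3))
      e[:, 0] = 0.8 * eqtl[:, 0] + rng.normal(size=500)
      e[:, 1] = 0.7 * e[:, 0] + 0.8 * eqtl[:, 1] + rng.normal(size=500)
      e[:, 2] = 0.7 * e[:, 1] + 0.8 * eqtl[:, 2] + rng.normal(size=500)
      print(sparse_sem_edges(e, eqtl))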

  10. Inferring Enceladus' ice shell strength and structure from Tiger Stripe formation

    Science.gov (United States)

    Rhoden, A.; Hurford, T., Jr.; Spitale, J.; Henning, W. G.

    2017-12-01

    The tiger stripe fractures (TSFs) of Enceladus are four, roughly parallel, linear fractures that correlate with plume sources and high heat flows measured by Cassini. Diurnal variations of plume eruptions along the TSFs strongly suggest that tides modulate the eruptions. Several attempts have been made to infer Enceladus' ice shell structure, and the mechanical process of plume formation, by matching variations in the plumes' eruptive output with tidal stresses for different interior models. Unfortunately, the many, often degenerate, unknowns make these analyses non-unique. Tidal-interior models that best match the observed plume variability imply very low tidal stresses (<14 kPa), much lower than the 1 MPa tensile strength of ice implied by lab experiments or the 100 kPa threshold inferred for Europa's ice. In addition, the interior models that give the best matches are inconsistent with the constraints from observed librations. To gain more insight into the interior structure and rheology of Enceladus and the role of tidal stress in the development of the south polar terrain, we utilize the orientations of the TSFs themselves as observational constraints on tidal-interior models. While the initial formation of the TSFs has previously been attributed to tidal stress, detailed modeling of their formation has not been performed until now. We compute tidal stresses for a suite of rheologically-layered interior models, consistent with Enceladus' observed librations, and apply a variety of failure conditions. We then compare the measured orientations at 6391 points along the TSFs with the predicted orientations from the tidal models. Ultimately, we compute the likelihood of forming the TSFs with tidal stresses for each model and failure condition. We find that tidal stresses are a good match to the observed orientations of the TSFs and likely led to their formation. We also find that the model with the highest likelihood changes depending on the failure criterion

  11. Scale dependent inference in landscape genetics

    Science.gov (United States)

    Samuel A. Cushman; Erin L. Landguth

    2010-01-01

    Ecological relationships between patterns and processes are highly scale dependent. This paper reports the first formal exploration of how changing scale of research away from the scale of the processes governing gene flow affects the results of landscape genetic analysis. We used an individual-based, spatially explicit simulation model to generate patterns of genetic...

  12. Factors Influencing the Sahelian Paradox at the Local Watershed Scale: Causal Inference Insights

    Science.gov (United States)

    Van Gordon, M.; Groenke, A.; Larsen, L.

    2017-12-01

    While the existence of paradoxical rainfall-runoff and rainfall-groundwater correlations is well established in the West African Sahel, the hydrologic mechanisms involved are poorly understood. In pursuit of mechanistic explanations, we perform a causal inference analysis on hydrologic variables in three watersheds in Benin and Niger. Using an ensemble of techniques, we compute the strength of relationships between observational soil moisture, runoff, precipitation, and temperature data at seasonal and event timescales. Performing the analysis over a range of time lags allows dominant time scales to emerge from the relationships between variables. By determining the time scales of hydrologic connectivity over vertical and lateral space, we show differences in the importance of overland and subsurface flow over the course of the rainy season and between watersheds. While previous work on the paradoxical hydrologic behavior in the Sahel focuses on surface processes and infiltration, our results point toward the importance of subsurface flow to rainfall-runoff relationships in these watersheds. The hypotheses generated from our ensemble approach suggest that subsequent explorations of mechanistic hydrologic processes in the region should include subsurface flow. Further, this work highlights how an ensemble approach to causal analysis can reveal nuanced relationships between variables even in poorly understood hydrologic systems.
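
    As a minimal sketch of the lag-scanning idea described above (illustrative only; the study applies an ensemble of causal inference techniques rather than plain correlation, and the variable names below are hypothetical), one can scan the association between two hydrologic series over a range of time lags and read off the dominant lag:

    ```python
    import numpy as np

    def lagged_correlation(x, y, max_lag):
        """Pearson correlation of x(t) with y(t + lag), for lag = 0..max_lag."""
        corrs = []
        for lag in range(max_lag + 1):
            xs = x if lag == 0 else x[:-lag]
            ys = y if lag == 0 else y[lag:]
            corrs.append(np.corrcoef(xs, ys)[0, 1])
        return np.array(corrs)

    # Toy example: runoff responding to precipitation with a 3-step delay
    rng = np.random.default_rng(0)
    precip = rng.gamma(2.0, 1.0, size=500)
    runoff = 0.6 * np.roll(precip, 3) + rng.normal(0.0, 0.2, size=500)
    corrs = lagged_correlation(precip, runoff, max_lag=10)
    print("dominant lag:", int(np.argmax(corrs)))  # prints 3
    ```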

  13. Adaptive Neuro-Fuzzy Inference System Models for Force Prediction of a Mechatronic Flexible Structure

    DEFF Research Database (Denmark)

    Achiche, S.; Shlechtingen, M.; Raison, M.

    2016-01-01

    This paper presents the results obtained from a research work investigating the performance of different Adaptive Neuro-Fuzzy Inference System (ANFIS) models developed to predict excitation forces on a dynamically loaded flexible structure. For this purpose, a flexible structure is equipped...... obtained from applying a random excitation force on the flexible structure. The performance of the developed models is evaluated by analyzing the prediction capabilities based on a normalized prediction error. The frequency domain is considered to analyze the similarity of the frequencies in the predicted...... of the sampling frequency and sensor location on the model performance is investigated. The results obtained in this paper show that ANFIS models can be used to set up reliable force predictors for dynamically loaded flexible structures, when a certain degree of inaccuracy is accepted. Furthermore, the comparison...

  14. Universal Scaling Relations in Scale-Free Structure Formation

    Science.gov (United States)

    Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.

    2018-04-01

    A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2, which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters and even dark matter halos. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power law tail of dA/d lnΣ ∝ Σ^-1. In the case where structure formation is controlled by gravity, the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM halos) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation and detailed "full physics" hydrodynamical simulations. We find that these power-laws are good first order descriptions in all cases.
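
    One way to read the M^-2 slope quoted above (a standard interpretation, stated here for orientation rather than as the paper's full argument) is that it is exactly the scale-free case in which every logarithmic mass interval contains the same total mass:

        \[ \frac{dN}{dM} \propto M^{-2} \;\Longleftrightarrow\; M\,\frac{dN}{d\ln M} = M^2\,\frac{dN}{dM} = \mathrm{const}. \]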

  15. The impact of mating systems and dispersal on fine-scale genetic structure at maternally, paternally and biparentally inherited markers.

    Science.gov (United States)

    Shaw, Robyn E; Banks, Sam C; Peakall, Rod

    2018-01-01

    For decades, studies have focused on how dispersal and mating systems influence genetic structure across populations or social groups. However, we still lack a thorough understanding of how these processes and their interaction shape spatial genetic patterns over a finer scale (tens-hundreds of metres). Using uniparentally inherited markers may help answer these questions, yet their potential has not been fully explored. Here, we use individual-level simulations to investigate the effects of dispersal and mating system on fine-scale genetic structure at autosomal, mitochondrial and Y chromosome markers. Using genetic spatial autocorrelation analysis, we found that dispersal was the major driver of fine-scale genetic structure across maternally, paternally and biparentally inherited markers. However, when dispersal was restricted (mean distance = 100 m), variation in mating behaviour created strong differences in the comparative level of structure detected at maternally and paternally inherited markers. Promiscuity reduced spatial genetic structure at Y chromosome loci (relative to monogamy), whereas structure increased under polygyny. In contrast, mitochondrial and autosomal markers were robust to differences in the specific mating system, although genetic structure increased across all markers when reproductive success was skewed towards fewer individuals. Comparing males and females at Y chromosome vs. mitochondrial markers, respectively, revealed that some mating systems can generate similar patterns to those expected under sex-biased dispersal. This demonstrates the need for caution when inferring ecological and behavioural processes from genetic results. Comparing patterns between the sexes, across a range of marker types, may help us tease apart the processes shaping fine-scale genetic structure. © 2017 John Wiley & Sons Ltd.

  16. On soft limits of large-scale structure correlation functions

    International Nuclear Information System (INIS)

    Sagunski, Laura

    2016-08-01

    background method to the case of a directional soft mode, being absorbed into a locally curved anisotropic background cosmology. The resulting non-perturbative power spectrum equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients. These can most generally be inferred from response functions to geometrical parameters, such as spatial curvature, in the locally curved anisotropic background cosmology. However, we can determine one coefficient by use of the angular-averaged bispectrum consistency condition together with the generalized VKPR proposal, and we show that the impact of the other one is subleading. Neglecting the latter in consequence, we confront the non-perturbative power spectrum equation against numerical simulations and find indeed a very good agreement within the expected error bars. Moreover, we argue that both coefficients and thus the non-perturbative power spectrum in the soft limit depend only weakly on UV modes deep in the non-linear regime. This non-perturbative finding allows us in turn to derive important implications for perturbative approaches to large-scale structure formation. First, it leads to the conclusion that the UV dependence of the power spectrum found in explicit computations within standard perturbation theory is an artifact. Second, it implies that in the Eulerian (Lagrangian) effective field theory (EFT) approach, where UV divergences are canceled by counter-terms, the renormalized leading-order coefficient(s) receive most contributions from modes close to the non-linear scale. The non-perturbative approach we developed can in principle be used to precisely infer the size of these renormalized leading-order EFT coefficient(s) by performing small-volume numerical simulations within an anisotropic 'separate universe' framework. Our results suggest that the importance of these coefficient(s) is a ∼10% effect at most.

  17. On soft limits of large-scale structure correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Sagunski, Laura

    2016-08-15

    background method to the case of a directional soft mode, being absorbed into a locally curved anisotropic background cosmology. The resulting non-perturbative power spectrum equation encodes the coupling to ultraviolet (UV) modes in two time-dependent coefficients. These can most generally be inferred from response functions to geometrical parameters, such as spatial curvature, in the locally curved anisotropic background cosmology. However, we can determine one coefficient by use of the angular-averaged bispectrum consistency condition together with the generalized VKPR proposal, and we show that the impact of the other one is subleading. Neglecting the latter in consequence, we confront the non-perturbative power spectrum equation against numerical simulations and find indeed a very good agreement within the expected error bars. Moreover, we argue that both coefficients and thus the non-perturbative power spectrum in the soft limit depend only weakly on UV modes deep in the non-linear regime. This non-perturbative finding allows us in turn to derive important implications for perturbative approaches to large-scale structure formation. First, it leads to the conclusion that the UV dependence of the power spectrum found in explicit computations within standard perturbation theory is an artifact. Second, it implies that in the Eulerian (Lagrangian) effective field theory (EFT) approach, where UV divergences are canceled by counter-terms, the renormalized leading-order coefficient(s) receive most contributions from modes close to the non-linear scale. The non-perturbative approach we developed can in principle be used to precisely infer the size of these renormalized leading-order EFT coefficient(s) by performing small-volume numerical simulations within an anisotropic 'separate universe' framework. Our results suggest that the importance of these coefficient(s) is a ∼10% effect at most.

  18. Structural colors from Morpho peleides butterfly wing scales

    KAUST Repository

    Ding, Yong; Xu, Sheng; Wang, Zhong Lin

    2009-01-01

    A male Morpho peleides butterfly wing is decorated by two types of scales, cover and ground scales. We have studied the optical properties of each type of scale in conjunction with the structural information provided by cross-sectional transmission electron microscopy and computer simulation. The shining blue color arises mainly from the Bragg reflection of the one-dimensional photonic structure, e.g., the shelf structure packed regularly in each ridge on the cover scales. A thin-film-like interference effect from the base plate of the cover scale enhances this blue color and further gives extra reflection peaks in the infrared and ultraviolet regions. The similarity between the spectra acquired from the original wing and those from the cover scales suggests that the cover scales play the dominant role in its structural color. This study provides insight into using biotemplates for fabricating smart photonic structures. © 2009 American Institute of Physics.

  19. Optimal causal inference: estimating stored information and approximating causal architecture.

    Science.gov (United States)

    Still, Susanne; Crutchfield, James P; Ellison, Christopher J

    2010-09-01

    We introduce an approach to inferring the causal architecture of stochastic dynamical systems that extends rate-distortion theory to use causal shielding--a natural principle of learning. We study two distinct cases of causal inference: optimal causal filtering and optimal causal estimation. Filtering corresponds to the ideal case in which the probability distribution of measurement sequences is known, giving a principled method to approximate a system's causal structure at a desired level of representation. We show that in the limit in which a model-complexity constraint is relaxed, filtering finds the exact causal architecture of a stochastic dynamical system, known as the causal-state partition. From this, one can estimate the amount of historical information the process stores. More generally, causal filtering finds a graded model-complexity hierarchy of approximations to the causal architecture. Abrupt changes in the hierarchy, as a function of approximation, capture distinct scales of structural organization. For nonideal cases with finite data, we show how the correct number of the underlying causal states can be found by optimal causal estimation. A previously derived model-complexity control term allows us to correct for the effect of statistical fluctuations in probability estimates and thereby avoid overfitting.

  20. The prisoner's dilemma in structured scale-free networks

    International Nuclear Information System (INIS)

    Li Xing; Wu Yonghui; Zhang Zhongzhi; Zhou Shuigeng; Rong Zhihai

    2009-01-01

    The conventional wisdom is that scale-free networks are prone to cooperation spreading. In this paper we investigate cooperative behavior on the structured scale-free network and find that, in contrast to this conventional wisdom, the evolution of cooperation is inhibited when the prisoner's dilemma (PD) game is modeled. First, we demonstrate that neither the scale-free property nor the high clustering coefficient is responsible for the inhibition of cooperation spreading on the structured scale-free network. Then we provide a heuristic argument that the lack of age correlations and the associated 'large-world' behavior in the structured scale-free network inhibit the spread of cooperation. These findings may help enlighten further studies on the evolutionary dynamics of the PD game in scale-free networks.

  1. Hippocampal Structure Predicts Statistical Learning and Associative Inference Abilities during Development.

    Science.gov (United States)

    Schlichting, Margaret L; Guarino, Katharine F; Schapiro, Anna C; Turk-Browne, Nicholas B; Preston, Alison R

    2017-01-01

    Despite the importance of learning and remembering across the lifespan, little is known about how the episodic memory system develops to support the extraction of associative structure from the environment. Here, we relate individual differences in volumes along the hippocampal long axis to performance on statistical learning and associative inference tasks-both of which require encoding associations that span multiple episodes-in a developmental sample ranging from ages 6 to 30 years. Relating age to volume, we found dissociable patterns across the hippocampal long axis, with opposite nonlinear volume changes in the head and body. These structural differences were paralleled by performance gains across the age range on both tasks, suggesting improvements in the cross-episode binding ability from childhood to adulthood. Controlling for age, we also found that smaller hippocampal heads were associated with superior behavioral performance on both tasks, consistent with this region's hypothesized role in forming generalized codes spanning events. Collectively, these results highlight the importance of examining hippocampal development as a function of position along the hippocampal axis and suggest that the hippocampal head is particularly important in encoding associative structure across development.

  2. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  3. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Chacko, Zackaria [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States); Cui, Yanou [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States); Department of Physics and Astronomy, University of California-Riverside,University Ave, Riverside, CA 92521 (United States); Perimeter Institute, 31 Caroline Street, North Waterloo, Ontario N2L 2Y5 (Canada); Hong, Sungwoo [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States); Okui, Takemichi [Department of Physics, Florida State University,College Avenue, Tallahassee, FL 32306 (United States); Tsai, Yuhsinz [Maryland Center for Fundamental Physics, Department of Physics, University of Maryland,Stadium Dr., College Park, MD 20742 (United States)

    2016-12-21

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  4. Partially acoustic dark matter, interacting dark radiation, and large scale structure

    International Nuclear Information System (INIS)

    Chacko, Zackaria; Cui, Yanou; Hong, Sungwoo; Okui, Takemichi; Tsai, Yuhsinz

    2016-01-01

    The standard paradigm of collisionless cold dark matter is in tension with measurements on large scales. In particular, the best fit values of the Hubble rate H_0 and the matter density perturbation σ_8 inferred from the cosmic microwave background seem inconsistent with the results from direct measurements. We show that both problems can be solved in a framework in which dark matter consists of two distinct components, a dominant component and a subdominant component. The primary component is cold and collisionless. The secondary component is also cold, but interacts strongly with dark radiation, which itself forms a tightly coupled fluid. The growth of density perturbations in the subdominant component is inhibited by dark acoustic oscillations due to its coupling to the dark radiation, solving the σ_8 problem, while the presence of tightly coupled dark radiation ameliorates the H_0 problem. The subdominant component of dark matter and dark radiation continue to remain in thermal equilibrium until late times, inhibiting the formation of a dark disk. We present an example of a simple model that naturally realizes this scenario in which both constituents of dark matter are thermal WIMPs. Our scenario can be tested by future stage-IV experiments designed to probe the CMB and large scale structure.

  5. Inference of directional selection and mutation parameters assuming equilibrium.

    Science.gov (United States)

    Vogl, Claus; Bergman, Juraj

    2015-12-01

    In a classical study, Wright (1931) proposed a model for the evolution of a biallelic locus under the influence of mutation, directional selection and drift. He derived the equilibrium distribution of the allelic proportion conditional on the scaled mutation rate, the mutation bias and the scaled strength of directional selection. The equilibrium distribution can be used for inference of these parameters with genome-wide datasets of "site frequency spectra" (SFS). Assuming that the scaled mutation rate is low, Wright's model can be approximated by a boundary-mutation model, where mutations are introduced into the population exclusively from sites fixed for the preferred or unpreferred allelic states. With the boundary-mutation model, inference can be partitioned: (i) the shape of the SFS distribution within the polymorphic region is determined by random drift and directional selection, but not by the mutation parameters, such that inference of the selection parameter relies exclusively on the polymorphic sites in the SFS; (ii) the mutation parameters can be inferred from the amount of polymorphic and monomorphic preferred and unpreferred alleles, conditional on the selection parameter. Herein, we derive maximum likelihood estimators for the mutation and selection parameters in equilibrium and apply the method to simulated SFS data as well as empirical data from a Madagascar population of Drosophila simulans. Copyright © 2015 Elsevier Inc. All rights reserved.
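
    For orientation, Wright's equilibrium density for the allelic proportion x is commonly written (in one standard parameterization; the exact scaling conventions vary between treatments, including this paper's) with scaled mutation rate θ, mutation bias β and scaled directional selection γ as

        \[ \pi(x) \;\propto\; e^{\gamma x}\, x^{\theta\beta - 1}\,(1-x)^{\theta(1-\beta) - 1}, \qquad 0 < x < 1, \]

    which makes explicit why, for small θ, most of the probability mass concentrates near the monomorphic boundaries x = 0 and x = 1, the regime exploited by the boundary-mutation approximation.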

  6. INFERRING THE MAGNETOHYDRODYNAMIC STRUCTURE OF SOLAR FLARE SUPRA-ARCADE PLASMAS FROM A DATA-ASSIMILATED FIELD TRANSPORT MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Scott, Roger B.; McKenzie, David E.; Longcope, Dana W. [Montana State University, P.O. Box 173840, Bozeman, MT 59717-3840 (United States)

    2016-03-01

    Supra-arcade fans are highly dynamic structures that form in the region above post-reconnection flare arcades. In these features the plasma density and temperature evolve on the scale of a few seconds, despite the much slower dynamics of the underlying arcade. Further, the motion of supra-arcade plasma plumes appears to be inconsistent with the low-beta conditions that are often assumed to exist in the solar corona. In order to understand the nature of these highly debated structures, it is, therefore, important to investigate the interplay of the magnetic field with the plasma. Here we present a technique for inferring the underlying magnetohydrodynamic processes that might lead to the types of motions seen in supra-arcade structures. Taking as a case study the 2011 October 22 event, we begin with extreme-ultraviolet observations and develop a time-dependent velocity field that is consistent with both continuity and local correlation tracking. We then assimilate this velocity field into a simplified magnetohydrodynamic simulation, which deals simultaneously with regions of high and low signal-to-noise ratio, thereby allowing the magnetic field to evolve self-consistently with the fluid. Ultimately, we extract the missing contributions from the momentum equation in order to estimate the relative strength of the various forcing terms. In this way we are able to make estimates of the plasma beta, as well as predict the spectral character and total power of Alfvén waves radiated from the supra-arcade region.

  7. LASSIM-A network inference toolbox for genome-wide mechanistic modeling.

    Directory of Open Access Journals (Sweden)

    Rasmus Magnusson

    2017-06-01

    Full Text Available Recent technological advancements have made time-resolved, quantitative, multi-omics data available for many model systems, which could be integrated for systems pharmacokinetic use. Here, we present large-scale simulation modeling (LASSIM), which is a novel mathematical tool for performing large-scale inference using mechanistically defined ordinary differential equations (ODEs) for gene regulatory networks (GRNs). LASSIM integrates structural knowledge about regulatory interactions and non-linear equations with multiple steady state and dynamic response expression datasets. The rationale behind LASSIM is that biological GRNs can be simplified using a limited subset of core genes that are assumed to regulate all other gene transcription events in the network. The LASSIM method is implemented as a general-purpose toolbox using the PyGMO Python package to make the most of multicore computers and high performance clusters, and is available at https://gitlab.com/Gustafsson-lab/lassim. As a method, LASSIM works in two steps: it first infers a non-linear ODE system for the pre-specified core gene expression, and second, it optimizes in parallel the parameters that model the regulation of peripheral genes by core system genes. We showed the usefulness of this method by applying LASSIM to infer a large-scale non-linear model of naïve Th2 cell differentiation, made possible by integrating Th2-specific bindings and time series together with six public and six novel siRNA-mediated knock-down experiments. ChIP-seq showed significant overlap for all tested transcription factors. Next, we performed novel time-series measurements of total T-cells during differentiation towards Th2 and verified that our LASSIM model could monitor those data significantly better than comparable models that used the same Th2 bindings. In summary, the LASSIM toolbox opens the door to a new type of model-based data analysis that combines the strengths of reliable mechanistic models

  8. A scale invariant covariance structure on jet space

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2005-01-01

    This paper considers scale invariance of statistical image models. We study statistical scale invariance of the covariance structure of jet space under scale space blurring and derive the necessary structure and conditions of the jet covariance matrix in order for it to be scale invariant. As par...

  9. Full scale dynamic testing of Kozloduy NPP unit 5 structures

    International Nuclear Information System (INIS)

    Da Rin, E.M.

    1999-01-01

    As described in this report, the Kozloduy NPP western site has been subjected to low-level earthquake-like ground shaking - through appropriately devised underground explosions - and the resulting dynamic response of the important structures of NPP reactor Unit 5 was measured and digitally recorded. In-situ free-field response was measured concurrently more than 100 m away from the main structures of interest. The collected experimental data provide reference information on the actual dynamic characteristics of the Kozloduy NPP's main structures, and give some useful indications of the dynamic soil-structure interaction effects in the case of low-level excitation. The full-scale dynamic structural testing took advantage of the experience gained by ISMES during similar tests recently performed in Italy and abroad (in particular, at the Paks NPP in 1994). The dynamic testing of Kozloduy NPP Unit 5 promoted by the IAEA, carried out by means of purposely designed buried-explosion-induced ground motions, has provided a large amount of data on the dynamic structural response of its major structures. In the present report, the conducted investigation is described and the acquired digital data are presented. A series of preliminary analyses was undertaken to examine in detail the ground excitation levels produced by these weak earthquake simulation experiments, as well as to infer some structural characteristics and behaviour from the collected data. These analyses confirmed the high quality of the collected digital data. Presumably due to dynamic soil-structure interaction effects, reduced excitation levels were observed at the reactor building foundation raft with respect to the concurrent free-field ground motions measured at a 140 m distance from the reactor building centre. Further, more detailed and systematic analyses are worth performing to extract more complete information about the

  10. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
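
    A minimal sketch of the kind of inference SPC embodies, assuming a Shewhart-style individuals chart (one chart type among several; chosen here only for illustration): limits are estimated from in-control data, and new observations outside mean ± 3 standard deviations are flagged as signals of special-cause variation.

    ```python
    import numpy as np

    def shewhart_limits(in_control, k=3.0):
        """Center line and control limits estimated from in-control data."""
        center = np.mean(in_control)
        sigma = np.std(in_control, ddof=1)
        return center, center - k * sigma, center + k * sigma

    def out_of_control(points, limits):
        """Indices of points falling outside the control limits."""
        _, lcl, ucl = limits
        points = np.asarray(points)
        return np.where((points < lcl) | (points > ucl))[0]

    baseline = np.random.default_rng(1).normal(10.0, 0.5, size=100)
    limits = shewhart_limits(baseline)
    print(out_of_control([10.1, 9.8, 12.5, 10.3], limits))  # flags index 2 (the 12.5 reading)
    ```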

  11. Inference of viscosity jump at 670 km depth and lower mantle viscosity structure from GIA observations

    Science.gov (United States)

    Nakada, Masao; Okuno, Jun'ichi; Irie, Yoshiya

    2018-03-01

    A viscosity model with an exponential profile described by temperature (T) and pressure (P) distributions and constant activation energy (E*_um for the upper mantle and E*_lm for the lower mantle) and volume (V*_um and V*_lm) is employed in inferring the viscosity structure of the Earth's mantle from observations of glacial isostatic adjustment (GIA). We first construct standard viscosity models with an average upper-mantle viscosity (η_um) of 2 × 10^20 Pa s, a typical value for the oceanic upper-mantle viscosity, satisfying three observationally derived GIA-related observables: the GIA-induced rate of change of the degree-two zonal harmonic of the geopotential, dJ_2/dt, the differential relative sea level (RSL) changes for the Last Glacial Maximum sea levels at Barbados and Bonaparte Gulf in Australia, and the RSL changes at 6 kyr BP for Karumba and Halifax Bay in Australia. Standard viscosity models inferred from the three GIA-related observables are characterized by a viscosity of ~10^23 Pa s in the deep mantle for an assumed viscosity at 670 km depth, η_lm(670), of (1-50) × 10^21 Pa s. Postglacial RSL changes at Southport, Bermuda and Everglades, in the intermediate region of the North American ice sheet and largely dependent on its gross melting history, have a crucial potential for inference of a viscosity jump at 670 km depth. The analyses of these RSL changes, based on the viscosity models with η_um ≥ 2 × 10^20 Pa s and the lower-mantle viscosity structures of the standard models, yield permissible η_um and η_lm(670) values, although there is a trade-off between the viscosity and ice history models. Our preferred η_um and η_lm(670) values are ~(7-9) × 10^20 and ~10^22 Pa s, respectively, and this η_um is higher than the typical value for the oceanic upper mantle, which may reflect a moderate laterally heterogeneous upper

  12. Inverse Ising inference with correlated samples

    International Nuclear Information System (INIS)

    Obermayer, Benedikt; Levine, Erel

    2014-01-01

    Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)
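
    For reference, the "frequently used mean-field approach" mentioned at the end of the abstract has a particularly compact form (naive mean-field inversion; the paper's own method is an adaptive cluster expansion with corrections for sample correlations). Given the connected correlation matrix C_ij = <s_i s_j> − <s_i><s_j>, the couplings are approximated by

        \[ J_{ij} \;\approx\; -\left(C^{-1}\right)_{ij}, \qquad i \neq j . \]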

  13. Forest fragmentation and bird community dynamics: inference at regional scales

    Science.gov (United States)

    Boulinier, T.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Flather, C.H.; Pollock, K.H.

    2001-01-01

    With increasing fragmentation of natural areas and a dramatic reduction of forest cover in several parts of the world, quantifying the impact of such changes on species richness and community dynamics has been a subject of much concern. Here, we tested whether in more fragmented landscapes there was a lower number of area-sensitive species and higher local extinction and turnover rates, which could explain higher temporal variability in species richness. To investigate such potential landscape effects at a regional scale, we merged two independent, large-scale monitoring efforts: the North American Breeding Bird Survey (BBS) and the Land Use and Land Cover Classification data from the U.S. Geological Survey. We used methods that accounted for heterogeneity in the probability of detecting species to estimate species richness and temporal changes in the bird communities for BBS routes in three mid-Atlantic U.S. states. Forest breeding bird species were grouped prior to the analyses into area-sensitive and non-area-sensitive species according to previous studies. We tested predictions relating measures of forest structure at one point in time (1974) to species richness at that time and to parameters of forest bird community change over the following 22-yr-period (1975-1996). We used the mean size of forest patches to characterize landscape structure, as high correlations among landscape variables did not allow us to disentangle the relative roles of habitat fragmentation per se and habitat loss. As predicted, together with lower species richness for area-sensitive species on routes surrounded by landscapes with lower mean forest-patch size, we found higher mean year-to-year rates of local extinction. Moreover, the mean year-to-year rates of local turnover (proportion of locally new species) for area-sensitive species were also higher in landscapes with lower mean forest-patch size. These associations were not observed for the non-area-sensitive species group. These

  14. Inferring personal economic status from social network location

    Science.gov (United States)

    Luo, Shaojun; Morone, Flaviano; Sarraute, Carlos; Travizano, Matías; Makse, Hernán A.

    2017-05-01

    It is commonly believed that patterns of social ties affect individuals' economic status. Here we translate this concept into an operational definition at the network level, which allows us to infer the economic well-being of individuals through a measure of their location and influence in the social network. We analyse two large-scale sources: telecommunications and financial data of a whole country's population. Our results show that an individual's location, measured as the optimal collective influence to the structural integrity of the social network, is highly correlated with personal economic status. The observed social network patterns of influence mimic the patterns of economic inequality. For pragmatic use and validation, we carry out a marketing campaign that shows a threefold increase in response rate by targeting individuals identified by our social network metrics as compared to random targeting. Our strategy can also be useful in maximizing the effects of large-scale economic stimulus policies.
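
    The "optimal collective influence" measure referred to above follows the collective-influence idea introduced in earlier work by part of this group; a commonly cited form (the paper may use a variant) for a node i with degree k_i and the frontier ∂B(i, ℓ) of the ball of radius ℓ around it is

        \[ \mathrm{CI}_{\ell}(i) \;=\; (k_i - 1) \sum_{j \in \partial B(i,\ell)} (k_j - 1) . \]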

  15. Bayesian inference for partially identified models exploring the limits of limited data

    CERN Document Server

    Gustafson, Paul

    2015-01-01

    Contents: Introduction; Identification; What Is against Us?; What Is for Us?; Some Simple Examples of Partially Identified Models; The Road Ahead; The Structure of Inference in Partially Identified Models; Bayesian Inference; The Structure of Posterior Distributions in PIMs; Computational Strategies; Strength of Bayesian Updating, Revisited; Posterior Moments; Credible Intervals; Evaluating the Worth of Inference; Partial Identification versus Model Misspecification; The Siren Call of Identification; Comp

  16. Inference of functional properties from large-scale analysis of enzyme superfamilies.

    Science.gov (United States)

    Brown, Shoshana D; Babbitt, Patricia C

    2012-01-02

    As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies.

  17. Ecosystem-level water-use efficiency inferred from eddy covariance data: definitions, patterns and spatial up-scaling

    Science.gov (United States)

    Reichstein, M.; Beer, C.; Kuglitsch, F.; Papale, D.; Soussana, J. A.; Janssens, I.; Ciais, P.; Baldocchi, D.; Buchmann, N.; Verbeeck, H.; Ceulemans, R.; Moors, E.; Köstner, B.; Schulze, D.; Knohl, A.; Law, B. E.

    2007-12-01

    In this presentation we discuss ways to infer and to interpret water-use efficiency at the ecosystem level (WUEe) from eddy covariance flux data, and possibilities for scaling these patterns to regional and continental scale. In particular we convey the following. WUEe may be computed as a ratio of integrated fluxes or as the slope of carbon versus water fluxes, offering different opportunities for interpretation; if computed from net ecosystem exchange and evapotranspiration, one has to take account of the confounding effects of respiration and soil evaporation. The WUEe time series at diurnal and seasonal scales is a valuable ecosystem physiological diagnostic, for example of ecosystem-level responses to drought; most often WUEe decreases during dry periods. The mean growing season ecosystem water-use efficiency of gross carbon uptake (WUE_GPP) is highest in temperate broad-leaved deciduous forests, followed by temperate mixed forests, temperate evergreen conifers, Mediterranean broad-leaved deciduous forests, Mediterranean broad-leaved evergreen forests and Mediterranean evergreen conifers, and boreal, grassland and tundra ecosystems. Water-use efficiency exhibits a temporally quite conservative relation with atmospheric water vapor pressure deficit (VPD) that is modified between sites by leaf area index (LAI) and soil quality, such that WUEe increases with LAI and with soil water holding capacity, which is related to texture. This property, and the tight coupling between carbon and water cycles, is used to estimate catchment-scale water-use efficiency and primary productivity by integration of space-borne earth observation and river discharge data.
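
    The two computational conventions mentioned above can be written explicitly (notation introduced here for illustration): over an averaging period, WUEe is either the ratio of integrated fluxes or the slope of a regression of the carbon flux on the water flux,

        \[ \mathrm{WUE}_e \;=\; \frac{\sum_t \mathrm{GPP}(t)}{\sum_t \mathrm{ET}(t)} \qquad \text{or} \qquad \mathrm{GPP}(t) \;=\; \mathrm{WUE}_e\,\mathrm{ET}(t) + \varepsilon(t), \]

    with the caveat from the abstract that replacing GPP and transpiration by net ecosystem exchange and evapotranspiration mixes in respiration and soil evaporation.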

  18. Inference of Causal Relationships between Biomarkers and Outcomes in High Dimensions

    Directory of Open Access Journals (Sweden)

    Felix Agakov

    2011-12-01

    Full Text Available We describe a unified computational framework for learning causal dependencies between genotypes, biomarkers, and phenotypic outcomes from large-scale data. In contrast to previous studies, our framework allows for noisy measurements, hidden confounders, missing data, and pleiotropic effects of genotypes on outcomes. The method exploits the use of genotypes as “instrumental variables” to infer causal associations between phenotypic biomarkers and outcomes, without requiring the assumption that genotypic effects are mediated only through the observed biomarkers. The framework builds on sparse linear methods developed in statistics and machine learning and modified here for inferring structures of richer networks with latent variables. Where the biomarkers are gene transcripts, the method can be used for fine mapping of quantitative trait loci (QTLs) detected in genetic linkage studies. To demonstrate our method, we examined effects of gene transcript levels in the liver on plasma HDL cholesterol levels in a sample of 260 mice from a heterogeneous stock.

  19. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [The University of Texas at Austin

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  20. Cooling pipeline disposing structure for large-scaled cryogenic structure

    International Nuclear Information System (INIS)

    Takahashi, Hiroyuki.

    1996-01-01

    The present invention concerns an electromagnetic force supporting structure for superconductive coils. As the size of a cryogenic structure increases, cooling takes longer, the temperature difference between the cooling pipelines and the cryogenic structure grows over a wide range, and the resulting difference in heat shrinkage increases the thermal stresses. In the cooling pipelines for a large-scale cryogenic structure of the present invention, the pipelines and the structure are therefore connected by way of a bent thin metal plate made of a material whose heat conductivity exceeds that of the structure's material by at least one order of magnitude. The displacement between the cryogenic structure and the cooling pipelines caused by heat shrinkage is absorbed by the elongation and shrinkage of the bent thin metal plate, and the thermal stresses due to this displacement are reduced. In addition, heat from the cryogenic structure is transferred by way of the thin metal plate. The cooling pipelines can thus be secured to the cryogenic structure in such a way that cooling by heat transfer remains possible while the large, three-dimensional displacements arising from the differing temperature distributions of the enlarged, three-dimensionally shaped cryogenic structure and of the cooling pipelines are absorbed. (N.H.)

  1. Approximation and inference methods for stochastic biochemical kinetics—a tutorial review

    International Nuclear Information System (INIS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2017-01-01

    Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics. (topical review)
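
    As a concrete anchor for the simulation methods the review surveys, the following is a minimal sketch of the exact stochastic simulation (Gillespie) algorithm for a single birth-death reaction set (the reactions and rate constants are chosen here purely for illustration):

    ```python
    import numpy as np

    def gillespie_birth_death(k_prod, k_deg, x0, t_max, rng=None):
        """Exact stochastic simulation of: 0 -> X at rate k_prod, X -> 0 at rate k_deg * x."""
        rng = np.random.default_rng() if rng is None else rng
        t, x = 0.0, x0
        times, states = [t], [x]
        while t < t_max:
            a_prod, a_deg = k_prod, k_deg * x    # reaction propensities
            a_total = a_prod + a_deg
            if a_total == 0.0:
                break
            t += rng.exponential(1.0 / a_total)  # waiting time to next reaction
            if rng.random() < a_prod / a_total:  # pick which reaction fires
                x += 1
            else:
                x -= 1
            times.append(t)
            states.append(x)
        return np.array(times), np.array(states)

    # Example: mRNA produced at rate 10 and degraded at per-molecule rate 0.5
    ts, xs = gillespie_birth_death(k_prod=10.0, k_deg=0.5, x0=0, t_max=50.0)
    print(xs[-1])  # copy number fluctuates around k_prod / k_deg = 20
    ```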

  2. Using AFLP markers and the Geneland program for the inference of population genetic structure

    DEFF Research Database (Denmark)

    Guillot, Gilles; Santos, Filipe

    2010-01-01

    the computer program Geneland designed to infer population structure has been adapted to deal with dominant markers; and (ii) we use Geneland for numerical comparison of dominant and codominant markers to perform clustering. AFLP markers lead to less accurate results than bi-allelic codominant markers...... such as single nucleotide polymorphisms (SNP) markers but this difference becomes negligible for data sets of common size (number of individuals n≥100, number of markers L≥200). The latest Geneland version (3.2.1) handling dominant markers is freely available as an R package with a fully clickable graphical...

  3. Small-scale structure of the geodynamo inferred from Ørsted and Magsat satellite data

    DEFF Research Database (Denmark)

    Hulot, G.; Eymin, C.; Langlais, B.

    2002-01-01

    The 'geodynamo' in the Earth's liquid outer core produces a magnetic field that dominates the large and medium length scales of the magnetic field observed at the Earth's surface(1,2). Here we use data from the currently operating Danish Oersted(3) satellite, and from the US Magsat(2) satellite...... that operated in 1979/80, to identify and interpret variations in the magnetic field over the past 20 years, down to length scales previously inaccessible. Projected down to the surface of the Earth's core, we found these variations to be small below the Pacific Ocean, and large at polar latitudes...... and in a region centred below southern Africa. The flow pattern at the surface of the core that we calculate to account for these changes is characterized by a westward flow concentrated in retrograde polar vortices and an asymmetric ring where prograde vortices are correlated with highs (and retrograde vortices...

  4. Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach

    DEFF Research Database (Denmark)

    Hounyo, Ulrich; Varneskov, Rasmus T.

    of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodate issues related to the stochastic scale and jumps as well as account for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first and second-order limit theory from the usual...... empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB sufficiently general to establish asymptotic equivalence between it and and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory....... Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von-Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local...

  5. Feral pig populations are structured at fine spatial scales in tropical Queensland, Australia.

    Science.gov (United States)

    Lopez, Jobina; Hurwood, David; Dryden, Bart; Fuller, Susan

    2014-01-01

    Feral pigs occur throughout tropical far north Queensland, Australia and are a significant threat to biodiversity and World Heritage values, agriculture and are a vector of infectious diseases. One of the constraints on long-lasting, local eradication of feral pigs is the process of reinvasion into recently controlled areas. This study examined the population genetic structure of feral pigs in far north Queensland to identify the extent of movement and the scale at which demographically independent management units exist. Genetic analysis of 328 feral pigs from the Innisfail to Tully region of tropical Queensland was undertaken. Seven microsatellite loci were screened and Bayesian clustering methods used to infer population clusters. Sequence variation at the mitochondrial DNA control region was examined to identify pig breed. Significant population structure was identified in the study area at a scale of 25 to 35 km, corresponding to three demographically independent management units (MUs). Distinct natural or anthropogenic barriers were not found, but environmental features such as topography and land use appear to influence patterns of gene flow. Despite the strong, overall pattern of structure, some feral pigs clearly exhibited ancestry from a MU outside of that from which they were sampled indicating isolated long distance dispersal or translocation events. Furthermore, our results suggest that gene flow is restricted among pigs of domestic Asian and European origin and non-random mating influences management unit boundaries. We conclude that the three MUs identified in this study should be considered as operational units for feral pig control in far north Queensland. Within a MU, coordinated and simultaneous control is required across farms, rainforest areas and National Park Estates to prevent recolonisation from adjacent localities.

  6. Inference of Functional Properties from Large-scale Analysis of Enzyme Superfamilies*

    Science.gov (United States)

    Brown, Shoshana D.; Babbitt, Patricia C.

    2012-01-01

    As increasingly large amounts of data from genome and other sequencing projects become available, new approaches are needed to determine the functions of the proteins these genes encode. We show how large-scale computational analysis can help to address this challenge by linking functional information to sequence and structural similarities using protein similarity networks. Network analyses using three functionally diverse enzyme superfamilies illustrate the use of these approaches for facile updating and comparison of available structures for a large superfamily, for creation of functional hypotheses for metagenomic sequences, and to summarize the limits of our functional knowledge about even well studied superfamilies. PMID:22069325

  7. Statistical inference approach to structural reconstruction of complex networks from binary time series

    Science.gov (United States)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
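
    The record does not give the authors' update equations; the sketch below (all names hypothetical) only illustrates the key step it describes: separating two populations of probability values, one for actual links and one for nonexistent links, with an expectation-maximization loop over per-pair scores summarized from the binary time series.

```python
import numpy as np

def em_two_component(scores, n_iter=200, tol=1e-8):
    """Fit a two-component 1-D Gaussian mixture to pairwise scores with EM.

    The two components stand in for the 'link' and 'non-link' populations of
    probability values; each node pair is then classified by its posterior
    responsibility. This is a generic illustration, not the paper's estimator.
    """
    x = np.asarray(scores, dtype=float)
    # crude initialisation: one component near the lower quartile, one near the upper
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each pair
        dens = np.stack([pi[k] / (np.sqrt(2 * np.pi) * sigma[k])
                         * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                         for k in range(2)], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        ll = np.log(dens.sum(axis=1)).sum()
        if abs(ll - ll_old) < tol:
            break
        ll_old = ll
    labels = resp.argmax(axis=1)   # assign each pair to its most probable component
    return mu, sigma, pi, labels

# toy data: 80% non-links with low scores, 20% links with clearly higher scores
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.05, 0.02, 800).clip(0, 1),
                         rng.normal(0.60, 0.10, 200).clip(0, 1)])
mu, sigma, pi, labels = em_two_component(scores)
print("component means:", mu, "mixing weights:", pi)
```

    With reasonably long time series the two score populations are well separated, which is the regime in which the abstract states the EM procedure distinguishes actual from nonexistent links without ambiguity.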

  8. Probes of large-scale structure in the Universe

    International Nuclear Information System (INIS)

    Suto, Yasushi; Gorski, K.; Juszkiewicz, R.; Silk, J.

    1988-01-01

    Recent progress in observational techniques has made it possible to confront quantitatively various models for the large-scale structure of the Universe with detailed observational data. We develop a general formalism to show that the gravitational instability theory for the origin of large-scale structure is now capable of critically confronting observational results on cosmic microwave background radiation angular anisotropies, large-scale bulk motions and large-scale clumpiness in the galaxy counts. (author)

  9. Large-scale structure of the Universe

    International Nuclear Information System (INIS)

    Doroshkevich, A.G.

    1978-01-01

    The problems discussed at the ''Large-scale Structure of the Universe'' symposium are considered at a popular level. Described are the cell structure of galaxy distribution in the Universe and principles of mathematical modelling of galaxy distribution. Images of cell structures obtained after computer reprocessing are given. Three hypotheses - vortical, entropic and adiabatic - suggesting various processes of galaxy and galaxy-cluster origin are discussed, and a considerable advantage of the adiabatic hypothesis is recognized. Relict radiation is considered as a method of directly studying the processes taking place in the Universe. The large-scale peculiarities and small-scale fluctuations of the relict radiation temperature enable one to estimate the disturbance properties at the pre-galaxy stage. The discussion of problems pertaining to the hot gas contained in galaxy clusters, and to the interactions within galaxy clusters and with the inter-galaxy medium, is recognized as a notable contribution to the development of theoretical and observational cosmology

  10. Topological Privacy: Lattice Structures and Information Bubbles for Inference and Obfuscation

    Science.gov (United States)

    2016-12-19

    The available excerpt illustrates inference limits with worked examples: seeing someone eat one ice-cream cone is not enough to identify any individual, whereas if attributes represent shared dinners, in some cases all the guests at a dinner can be inferred after observing as few as two of them; in other cases, an observer cannot infer additional dinners attended by a guest simply from having observed that guest at one or two particular dinners.

  11. Inferring Stop-Locations from WiFi

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Sapiezynski, Piotr; Furman, Magdalena Anna

    2016-01-01

    methods are based exclusively on WiFi data. We study two months of WiFi data collected every two minutes by a smartphone, and infer stop-locations in the form of labelled time-intervals. For this purpose, we investigate two algorithms, both of which scale to large datasets: a greedy approach to select...

  12. Atmospheric Energy Deposition Modeling and Inference for Varied Meteoroid Structures

    Science.gov (United States)

    Wheeler, Lorien; Mathias, Donovan; Stokan, Edward; Brown, Peter

    2018-01-01

    Asteroid populations are highly diverse, ranging from coherent monoliths to loosely-bound rubble piles with a broad range of material and compositional properties. These different structures and properties could significantly affect how an asteroid breaks up and deposits energy in the atmosphere, and how much ground damage may occur from resulting blast waves. We have previously developed a fragment-cloud model (FCM) for assessing the atmospheric breakup and energy deposition of asteroids striking Earth. The approach represents ranges of breakup characteristics by combining progressive fragmentation with releases of variable fractions of debris and larger discrete fragments. In this work, we have extended the FCM to also represent asteroids with varied initial structures, such as rubble piles or fractured bodies. We have used the extended FCM to model the Chelyabinsk, Benesov, Kosice, and Tagish Lake meteors, and have obtained excellent matches to energy deposition profiles derived from their light curves. These matches provide validation for the FCM approach, help guide further model refinements, and enable inferences about pre-entry structure and breakup behavior. Results highlight differences in the amount of small debris vs. discrete fragments in matching the various flare characteristics of each meteor. The Chelyabinsk flares were best represented using relatively high debris fractions, while the Kosice and Benesov cases were more notably driven by their discrete fragmentation characteristics, perhaps indicating more cohesive initial structures. Tagish Lake exhibited a combination of these characteristics, with lower-debris fragmentation at high altitudes followed by sudden disintegration into small debris in the lower flares. Results from all cases also suggest that lower ablation coefficients and debris spread rates may be more appropriate for the way in which debris clouds are represented in FCM, offering an avenue for future model refinement.

  13. Phylogenetic community structure: temporal variation in fish assemblage

    OpenAIRE

    Santorelli, Sergio; Magnusson, William; Ferreira, Efrem; Caramaschi, Erica; Zuanon, Jansen; Amadio, Sidnéia

    2014-01-01

    Hypotheses about phylogenetic relationships among species allow inferences about the mechanisms that affect species coexistence. Nevertheless, most studies assume that phylogenetic patterns identified are stable over time. We used data on monthly samples of fish from a single lake over 10 years to show that the structure in phylogenetic assemblages varies over time and conclusions depend heavily on the time scale investigated. The data set was organized in guild structures and temporal scales...

  14. Gravity inferred subsurface structure of Gadwal schist belt, Andhra ...

    Indian Academy of Sciences (India)

    Residual gravity profile data were interpreted using 2-D prism models. The results ... (The remaining snippet text consists of figure captions: a geological and geophysical layout map of the Gadwal schist belt area, Andhra Pradesh (after Ananda Murty and ...), and observed gravity (Bouguer) values with regional, residual and inferred gravity models along traverse I of the Gadwal schist belt.)

  15. Role of Utility and Inference in the Evolution of Functional Information

    Science.gov (United States)

    Sharov, Alexei A.

    2009-01-01

    Functional information means an encoded network of functions in living organisms from molecular signaling pathways to an organism’s behavior. It is represented by two components: code and an interpretation system, which together form a self-sustaining semantic closure. Semantic closure allows some freedom between components because small variations of the code are still interpretable. The interpretation system consists of inference rules that control the correspondence between the code and the function (phenotype) and determines the shape of the fitness landscape. The utility factor operates at multiple time scales: short-term selection drives evolution towards higher survival and reproduction rate within a given fitness landscape, and long-term selection favors those fitness landscapes that support adaptability and lead to evolutionary expansion of certain lineages. Inference rules make short-term selection possible by shaping the fitness landscape and defining possible directions of evolution, but they are under control of the long-term selection of lineages. Communication normally occurs within a set of agents with compatible interpretation systems, which I call communication system. Functional information cannot be directly transferred between communication systems with incompatible inference rules. Each biological species is a genetic communication system that carries unique functional information together with inference rules that determine evolutionary directions and constraints. This view of the relation between utility and inference can resolve the conflict between realism/positivism and pragmatism. Realism overemphasizes the role of inference in evolution of human knowledge because it assumes that logic is embedded in reality. Pragmatism substitutes usefulness for truth and therefore ignores the advantage of inference. The proposed concept of evolutionary pragmatism rejects the idea that logic is embedded in reality; instead, inference rules are

  16. Geostatistical inference using crosshole ground-penetrating radar

    DEFF Research Database (Denmark)

    Looms, Majken C; Hansen, Thomas Mejer; Cordua, Knud Skou

    2010-01-01

    of the subsurface are used to evaluate the uncertainty of the inversion estimate. We have explored the full potential of the geostatistical inference method using several synthetic models of varying correlation structures and have tested the influence of different assumptions concerning the choice of covariance...... reflection profile. Furthermore, the inferred values of the subsurface global variance and the mean velocity have been corroborated with moisture-content measurements, obtained gravimetrically from samples collected at the field site....

  17. An empirical Bayesian approach for model-based inference of cellular signaling networks

    Directory of Open Access Journals (Sweden)

    Klinke David J

    2009-11-01

    Full Text Available Background: A common challenge in systems biology is to infer mechanistic descriptions of a biological process given limited observations of a biological system. Mathematical models are frequently used to represent a belief about the causal relationships among proteins within a signaling network. Bayesian methods provide an attractive framework for inferring the validity of those beliefs in the context of the available data. However, efficient sampling of high-dimensional parameter space and appropriate convergence criteria provide barriers for implementing an empirical Bayesian approach. The objective of this study was to apply an Adaptive Markov chain Monte Carlo technique to a typical study of cellular signaling pathways. Results: As an illustrative example, a kinetic model for the early signaling events associated with the epidermal growth factor (EGF) signaling network was calibrated against dynamic measurements observed in primary rat hepatocytes. A convergence criterion, based upon the Gelman-Rubin potential scale reduction factor, was applied to the model predictions. The posterior distributions of the parameters exhibited complicated structure, including significant covariance between specific parameters and a broad range of variance among the parameters. The model predictions, in contrast, were narrowly distributed and were used to identify areas of agreement among a collection of experimental studies. Conclusion: In summary, an empirical Bayesian approach was developed for inferring the confidence that one can place in a particular model that describes signal transduction mechanisms and for inferring inconsistencies in experimental measurements.
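
    The abstract names two generic ingredients, Markov chain Monte Carlo sampling and the Gelman-Rubin potential scale reduction factor. The sketch below is not the adaptive sampler used in the study; it is a minimal illustration on a hypothetical one-parameter toy posterior, with plain random-walk Metropolis chains whose convergence is checked via R-hat.

```python
import numpy as np

def log_posterior(theta, data):
    """Toy log-posterior: Gaussian likelihood for the data mean times a broad Gaussian prior."""
    log_prior = -0.5 * (theta / 10.0) ** 2
    log_lik = -0.5 * np.sum((data - theta) ** 2)
    return log_prior + log_lik

def metropolis_chain(data, n_steps=5000, step=0.5, seed=0):
    """Random-walk Metropolis sampler for the single parameter theta."""
    rng = np.random.default_rng(seed)
    theta = rng.normal()
    lp = log_posterior(theta, data)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = log_posterior(prop, data)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for a list of equal-length chains."""
    m, n = len(chains), len(chains[0])
    means = np.array([c.mean() for c in chains])
    variances = np.array([c.var(ddof=1) for c in chains])
    W = variances.mean()               # within-chain variance
    B = n * means.var(ddof=1)          # between-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

data = np.random.default_rng(42).normal(3.0, 1.0, 20)
chains = [metropolis_chain(data, seed=s)[2500:] for s in range(4)]   # discard burn-in
print("posterior mean:", np.mean(chains), "R-hat:", gelman_rubin(chains))
```

    Values of R-hat close to 1 indicate that the chains have mixed; in the study the same criterion is applied to model predictions rather than to a single scalar parameter.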

  18. Inference of Large Phylogenies Using Neighbour-Joining

    DEFF Research Database (Denmark)

    Simonsen, Martin; Mailund, Thomas; Pedersen, Christian Nørgaard Storm

    2011-01-01

    The neighbour-joining method is a widely used method for phylogenetic reconstruction which scales to thousands of taxa. However, advances in sequencing technology have made data sets with more than 10,000 related taxa widely available. Inference of such large phylogenies takes hours or days using...... the Neighbour-Joining method on a normal desktop computer because of the O(n^3) running time. RapidNJ is a search heuristic which reduces the running time of the Neighbour-Joining method significantly, but at the cost of increased memory consumption, making inference of large phylogenies infeasible. We present...... two extensions for RapidNJ which reduce the memory requirements and allow phylogenies with more than 50,000 taxa to be inferred efficiently on a desktop computer. Furthermore, an improved version of the search heuristic is presented which reduces the running time of RapidNJ on many data...
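
    RapidNJ itself is not reproduced here; as a point of reference, the following sketch implements the plain O(n^3) neighbour-joining criterion (the Q-matrix minimisation) that the heuristic accelerates, applied to a hypothetical four-taxon distance matrix.

```python
import numpy as np

def neighbour_joining(D, names):
    """Plain neighbour-joining on a symmetric distance matrix.

    Returns the joins as nested tuples. RapidNJ-style heuristics speed up the
    argmin over the Q-matrix; the criterion itself is the one shown here.
    """
    D = np.asarray(D, dtype=float)
    nodes = list(names)
    while len(nodes) > 2:
        n = len(nodes)
        r = D.sum(axis=1)
        # Q(i, j) = (n - 2) * d(i, j) - r_i - r_j; join the pair minimising Q
        Q = (n - 2) * D - r[:, None] - r[None, :]
        np.fill_diagonal(Q, np.inf)
        i, j = np.unravel_index(np.argmin(Q), Q.shape)
        # distances from the new internal node to every remaining node
        d_new = 0.5 * (D[i] + D[j] - D[i, j])
        keep = [k for k in range(n) if k not in (i, j)]
        D = np.vstack([np.column_stack([D[np.ix_(keep, keep)], d_new[keep, None]]),
                       np.append(d_new[keep], 0.0)])
        nodes = [nodes[k] for k in keep] + [(nodes[i], nodes[j])]
    return (nodes[0], nodes[1])

D = [[0, 5, 9, 9], [5, 0, 10, 10], [9, 10, 0, 8], [9, 10, 8, 0]]
print(neighbour_joining(D, ["A", "B", "C", "D"]))
```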

  19. A human genome-wide library of local phylogeny predictions for whole-genome inference problems

    Directory of Open Access Journals (Sweden)

    Schwartz Russell

    2008-08-01

    Full Text Available Background: Many common inference problems in computational genetics depend on inferring aspects of the evolutionary history of a data set given a set of observed modern sequences. Detailed predictions of the full phylogenies are therefore of value in improving our ability to make further inferences about population history and sources of genetic variation. Making phylogenetic predictions on the scale needed for whole-genome analysis is, however, extremely computationally demanding. Results: In order to facilitate phylogeny-based predictions on a genomic scale, we develop a library of maximum parsimony phylogenies within local regions spanning all autosomal human chromosomes based on Haplotype Map variation data. We demonstrate the utility of this library for population genetic inferences by examining a tree statistic we call 'imperfection,' which measures the reuse of variant sites within a phylogeny. This statistic is significantly predictive of recombination rate, shows additional regional and population-specific conservation, and allows us to identify outlier genes likely to have experienced unusual amounts of variation in recent human history. Conclusion: Recent theoretical advances in algorithms for phylogenetic tree reconstruction have made it possible to perform large-scale inferences of local maximum parsimony phylogenies from single nucleotide polymorphism (SNP) data. As results from the imperfection statistic demonstrate, phylogeny predictions encode substantial information useful for detecting genomic features and population history. This data set should serve as a platform for many kinds of inferences one may wish to make about human population history and genetic variation.

  20. Multi-scale structural similarity index for motion detection

    Directory of Open Access Journals (Sweden)

    M. Abdel-Salam Nasr

    2017-07-01

    Full Text Available The most recent approach for measuring the image quality is the structural similarity index (SSI). This paper presents a novel algorithm based on the multi-scale structural similarity index for motion detection (MS-SSIM) in videos. The MS-SSIM approach is based on modeling of image luminance, contrast and structure at multiple scales. The MS-SSIM has resulted in much better performance than the single-scale SSI approach, but at the cost of relatively lower processing speed. The major advantages of the presented algorithm are twofold: higher detection accuracy and quasi real-time processing speed.
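
    The paper's exact MS-SSIM pipeline is not given in the record; the sketch below approximates the idea under stated assumptions, using scikit-image's single-scale SSIM averaged over a small dyadic pyramid of greyscale frames, with a hypothetical threshold to flag motion.

```python
import numpy as np
from skimage.metrics import structural_similarity
from skimage.transform import rescale

def multiscale_ssim(frame_a, frame_b, n_scales=3):
    """Average the structural similarity of two greyscale frames over a dyadic pyramid.

    A low score between consecutive frames indicates structural change, i.e. motion.
    (Simplified stand-in for the full luminance/contrast/structure weighting of MS-SSIM.)
    """
    scores = []
    a, b = frame_a.astype(float), frame_b.astype(float)
    for _ in range(n_scales):
        scores.append(structural_similarity(a, b, data_range=a.max() - a.min()))
        a = rescale(a, 0.5, anti_aliasing=True)
        b = rescale(b, 0.5, anti_aliasing=True)
    return float(np.mean(scores))

# toy frames: a bright square that shifts by a few pixels between frames
frame1 = np.zeros((128, 128)); frame1[40:60, 40:60] = 1.0
frame2 = np.zeros((128, 128)); frame2[44:64, 44:64] = 1.0
score = multiscale_ssim(frame1, frame2)
print("motion detected" if score < 0.95 else "static", round(score, 3))
```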

  1. Bootstrapping phylogenies inferred from rearrangement data

    Directory of Open Access Journals (Sweden)

    Lin Yu

    2012-08-01

    Full Text Available Background: Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. Results: We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Conclusions: Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its

  2. Bootstrapping phylogenies inferred from rearrangement data.

    Science.gov (United States)

    Lin, Yu; Rajan, Vaibhav; Moret, Bernard Me

    2012-08-29

    Large-scale sequencing of genomes has enabled the inference of phylogenies based on the evolution of genomic architecture, under such events as rearrangements, duplications, and losses. Many evolutionary models and associated algorithms have been designed over the last few years and have found use in comparative genomics and phylogenetic inference. However, the assessment of phylogenies built from such data has not been properly addressed to date. The standard method used in sequence-based phylogenetic inference is the bootstrap, but it relies on a large number of homologous characters that can be resampled; yet in the case of rearrangements, the entire genome is a single character. Alternatives such as the jackknife suffer from the same problem, while likelihood tests cannot be applied in the absence of well established probabilistic models. We present a new approach to the assessment of distance-based phylogenetic inference from whole-genome data; our approach combines features of the jackknife and the bootstrap and remains nonparametric. For each feature of our method, we give an equivalent feature in the sequence-based framework; we also present the results of extensive experimental testing, in both sequence-based and genome-based frameworks. Through the feature-by-feature comparison and the experimental results, we show that our bootstrapping approach is on par with the classic phylogenetic bootstrap used in sequence-based reconstruction, and we establish the clear superiority of the classic bootstrap for sequence data and of our corresponding new approach for rearrangement data over proposed variants. Finally, we test our approach on a small dataset of mammalian genomes, verifying that the support values match current thinking about the respective branches. Our method is the first to provide a standard of assessment to match that of the classic phylogenetic bootstrap for aligned sequences. Its support values follow a similar scale and its receiver

  3. The Network Completion Problem: Inferring Missing Nodes and Edges in Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, M; Leskovec, J

    2011-11-14

    Network structures, such as social networks, web graphs and networks from systems biology, play important roles in many areas of science and our everyday lives. In order to study the networks one needs to first collect reliable large scale network data. While the social and information networks have become ubiquitous, the challenge of collecting complete network data still persists. Many times the collected network data is incomplete with nodes and edges missing. Commonly, only a part of the network can be observed and we would like to infer the unobserved part of the network. We address this issue by studying the Network Completion Problem: Given a network with missing nodes and edges, can we complete the missing part? We cast the problem in the Expectation Maximization (EM) framework where we use the observed part of the network to fit a model of network structure, and then we estimate the missing part of the network using the model, re-estimate the parameters and so on. We combine the EM with the Kronecker graphs model and design a scalable Metropolized Gibbs sampling approach that allows for the estimation of the model parameters as well as the inference about missing nodes and edges of the network. Experiments on synthetic and several real-world networks show that our approach can effectively recover the network even when about half of the nodes in the network are missing. Our algorithm outperforms not only classical link-prediction approaches but also the state of the art Stochastic block modeling approach. Furthermore, our algorithm easily scales to networks with tens of thousands of nodes.
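
    The KronEM fitting procedure combining EM with Metropolized Gibbs sampling is not reproduced here. The sketch below only shows the generative side of the model being fitted, with assumed initiator values: edge probabilities given by a Kronecker power of a small initiator matrix, from which a graph is sampled.

```python
import numpy as np

def stochastic_kronecker_graph(initiator, k, seed=0):
    """Sample an adjacency matrix whose edge probabilities are the k-th Kronecker
    power of a small initiator matrix (entries in [0, 1])."""
    rng = np.random.default_rng(seed)
    P = np.asarray(initiator, dtype=float)
    probs = P.copy()
    for _ in range(k - 1):
        probs = np.kron(probs, P)      # edge-probability matrix grows to 2^k x 2^k
    return (rng.random(probs.shape) < probs).astype(int)

# initiator values here are illustrative, not the parameters reported in the record
A = stochastic_kronecker_graph([[0.9, 0.5], [0.5, 0.2]], k=8)
print("nodes:", A.shape[0], "edges:", A.sum())
```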

  4. Thermal fluid-structure interaction - a few scaling considerations

    International Nuclear Information System (INIS)

    Dimitrov, B.; Schwan, H.

    1984-01-01

    Scaling laws for modeling of nuclear reactor systems primarily consider relations between thermalhydraulic parameters in the control volumes for the model and the prototype. Usually the influence of structural heat is neglected. This report describes, how scaling criteria are improved by parameters concerning structural heat, because during thermal transients there is a strong coupling between the thermalhydraulic system and the surrounding structures. Volumetric scaling laws are applied to a straight pipe of the primary loop of a pressurized water reactor (PWR). For the prototype pipe data of a KWU standard PWR with four loops are chosen. Theoretical studies and RELAP 5/MOD 1 calculations regarding the influence of structural heat on thermalhydraulic response of the fluid are performed. Recommendations are given for minimization of distortions due to influence of structural heat between model and prototype. (orig.) [de

  5. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    Science.gov (United States)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
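
    Hyperuniformity is diagnosed by the suppression of the structure factor S(k) as k approaches zero. The sketch below is not the molecular simulation analysis of the study; it is a 2-D toy comparison, with assumed parameters, between a Poisson point pattern (S near 1 at small k) and a weakly perturbed lattice, whose large-scale density fluctuations are strongly suppressed.

```python
import numpy as np

def structure_factor(positions, k_vectors):
    """S(k) = |sum_j exp(-i k . r_j)|^2 / N for a set of particle positions."""
    n = len(positions)
    phases = np.exp(-1j * positions @ k_vectors.T)   # shape (N, n_k)
    return (np.abs(phases.sum(axis=0)) ** 2) / n

rng = np.random.default_rng(0)
box, n_side = 50.0, 32
# a Poisson (ideal-gas) pattern versus a weakly perturbed square lattice
poisson = rng.uniform(0.0, box, size=(n_side ** 2, 2))
grid = np.stack(np.meshgrid(np.arange(n_side), np.arange(n_side)),
                axis=-1).reshape(-1, 2) * (box / n_side)
perturbed = grid + rng.normal(0.0, 0.05 * box / n_side, size=grid.shape)

# smallest wavevectors compatible with the periodic box: k = 2*pi*m/box
ms = np.array([[1, 0], [0, 1], [1, 1], [2, 0], [0, 2]], dtype=float)
k_vecs = 2.0 * np.pi * ms / box
print("Poisson   S(k->0):", structure_factor(poisson, k_vecs).mean())
print("perturbed S(k->0):", structure_factor(perturbed, k_vecs).mean())
```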

  6. Detection of multiple damages employing best achievable eigenvectors under Bayesian inference

    Science.gov (United States)

    Prajapat, Kanta; Ray-Chaudhuri, Samit

    2018-05-01

    A novel approach is presented in this work to simultaneously localize multiple damaged elements in a structure along with the estimation of damage severity for each of the damaged elements. For detection of damaged elements, a formulation based on best achievable eigenvectors has been derived. To deal with noisy data, Bayesian inference is employed in the formulation, wherein the likelihood of the Bayesian algorithm is formed on the basis of errors between the best achievable eigenvectors and the measured modes. In this approach, the most probable damage locations are evaluated under Bayesian inference by generating combinations of various possible damaged elements. Once damage locations are identified, damage severities are estimated using a Bayesian inference Markov chain Monte Carlo simulation. The efficiency of the proposed approach has been demonstrated by carrying out a numerical study involving a 12-story shear building. It has been found from this study that damage scenarios involving as little as 10% loss of stiffness in multiple elements are accurately determined (localized and severities quantified) even when 2% noise-contaminated modal data are utilized. Further, this study introduces a term, parameter impact (evaluated based on the sensitivity of modal parameters towards structural parameters), to decide the suitability of selecting a particular mode if some idea about the damaged elements is available. It has been demonstrated here that the accuracy and efficiency of the Bayesian quantification algorithm increase if damage localization is carried out a priori. An experimental study involving a laboratory-scale shear building and different stiffness modification scenarios shows that the proposed approach is efficient enough to localize the stories with stiffness modification.

  7. Trends in large-scale testing of reactor structures

    International Nuclear Information System (INIS)

    Blejwas, T.E.

    2003-01-01

    Large-scale tests of reactor structures have been conducted at Sandia National Laboratories since the late 1970s. This paper describes a number of different large-scale impact tests, pressurization tests of models of containment structures, and thermal-pressure tests of models of reactor pressure vessels. The advantages of large-scale testing are evident, but cost in particular limits its use. As computer models have grown in size (e.g., in the number of degrees of freedom), the advent of computer graphics has made possible very realistic representation of results - results that may not accurately represent reality. A necessary condition for avoiding this pitfall is the validation of the analytical methods and underlying physical representations. Ironically, the immensely larger computer models sometimes increase the need for large-scale testing, because the modeling is applied to increasingly complex structural systems and/or more complex physical phenomena. Unfortunately, the cost of large-scale tests is a disadvantage that will likely severely limit similar testing in the future. International collaborations may provide the best mechanism for funding future programs with large-scale tests. (author)

  8. Scalable inference for stochastic block models

    KAUST Repository

    Peng, Chengbin; Zhang, Zhihua; Wong, Ka-Chun; Zhang, Xiangliang; Keyes, David E.

    2017-01-01

    Community detection in graphs is widely used in social and biological networks, and the stochastic block model is a powerful probabilistic tool for describing graphs with community structures. However, in the era of "big data," traditional inference
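
    The scalable algorithm itself is cut off in the truncated record; the sketch below only illustrates the stochastic block model it targets, sampling a graph from assumed block probabilities and recovering those probabilities by maximum likelihood when the community assignments are known.

```python
import numpy as np

def sample_sbm(block_sizes, B, seed=0):
    """Sample an undirected graph: nodes in blocks a and b are linked with probability B[a, b]."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    n = labels.size
    P = B[labels[:, None], labels[None, :]]
    A = np.triu((rng.random((n, n)) < P).astype(int), 1)
    return A + A.T, labels

def estimate_block_matrix(A, labels):
    """Maximum-likelihood estimate of B given the adjacency matrix and known communities."""
    k = labels.max() + 1
    B_hat = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            mask_a, mask_b = labels == a, labels == b
            edges = A[np.ix_(mask_a, mask_b)].sum()
            pairs = mask_a.sum() * mask_b.sum() if a != b else mask_a.sum() * (mask_a.sum() - 1)
            B_hat[a, b] = edges / pairs
    return B_hat

B_true = np.array([[0.30, 0.02], [0.02, 0.25]])
A, labels = sample_sbm([150, 150], B_true)
print(np.round(estimate_block_matrix(A, labels), 3))
```

    Inference in the "big data" setting discussed in the record is harder precisely because the labels are unknown and must be estimated jointly with the block matrix.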

  9. Specificity of Structural Assessment of Knowledge

    Science.gov (United States)

    Trumpower, David L.; Sharara, Harold; Goldsmith, Timothy E.

    2010-01-01

    This study examines the specificity of information provided by structural assessment of knowledge (SAK). SAK is a technique which uses the Pathfinder scaling algorithm to transform ratings of concept relatedness into network representations (PFnets) of individuals' knowledge. Inferences about individuals' overall domain knowledge based on the…

  10. Baselines and test data for cross-lingual inference

    DEFF Research Database (Denmark)

    Agic, Zeljko; Schluter, Natalie

    2018-01-01

    The recent years have seen a revival of interest in textual entailment, sparked by i) the emergence of powerful deep neural network learners for natural language processing and ii) the timely development of large-scale evaluation datasets such as SNLI. Recast as natural language inference......, the problem now amounts to detecting the relation between pairs of statements: they either contradict or entail one another, or they are mutually neutral. Current research in natural language inference is effectively exclusive to English. In this paper, we propose to advance the research in SNLI-style natural...... language inference toward multilingual evaluation. To that end, we provide test data for four major languages: Arabic, French, Spanish, and Russian. We experiment with a set of baselines. Our systems are based on cross-lingual word embeddings and machine translation. While our best system scores an average...

  11. Causal learning and inference as a rational process: the new synthesis.

    Science.gov (United States)

    Holyoak, Keith J; Cheng, Patricia W

    2011-01-01

    Over the past decade, an active line of research within the field of human causal learning and inference has converged on a general representational framework: causal models integrated with bayesian probabilistic inference. We describe this new synthesis, which views causal learning and inference as a fundamentally rational process, and review a sample of the empirical findings that support the causal framework over associative alternatives. Causal events, like all events in the distal world as opposed to our proximal perceptual input, are inherently unobservable. A central assumption of the causal approach is that humans (and potentially nonhuman animals) have been designed in such a way as to infer the most invariant causal relations for achieving their goals based on observed events. In contrast, the associative approach assumes that learners only acquire associations among important observed events, omitting the representation of the distal relations. By incorporating bayesian inference over distributions of causal strength and causal structures, along with noisy-logical (i.e., causal) functions for integrating the influences of multiple causes on a single effect, human judgments about causal strength and structure can be predicted accurately for relatively simple causal structures. Dynamic models of learning based on the causal framework can explain patterns of acquisition observed with serial presentation of contingency data and are consistent with available neuroimaging data. The approach has been extended to a diverse range of inductive tasks, including category-based and analogical inferences.
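
    For a single generative cause, the noisy-logical integration mentioned here reduces to Cheng's causal power, q = [P(e|c) - P(e|~c)] / [1 - P(e|~c)]. The sketch below computes it from hypothetical contingency counts and contrasts it with the simple deltaP contrast favoured by associative accounts.

```python
def causal_power(n_e_c, n_c, n_e_nc, n_nc):
    """Generative causal power from contingency counts (Cheng's power PC theory):
    q = [P(e|c) - P(e|not c)] / [1 - P(e|not c)], assuming the candidate cause and
    background causes act as independent noisy-OR influences on the effect."""
    p_e_c = n_e_c / n_c            # P(effect | cause present)
    p_e_nc = n_e_nc / n_nc         # P(effect | cause absent)
    delta_p = p_e_c - p_e_nc       # the simple contrast used by associative accounts
    power = delta_p / (1.0 - p_e_nc)
    return delta_p, power

# hypothetical contingency data: effect in 18/20 trials with the cause, 6/20 without
delta_p, power = causal_power(18, 20, 6, 20)
print(f"deltaP = {delta_p:.2f}, causal power = {power:.2f}")
```

    The Bayesian treatment described in the abstract goes further by placing distributions over causal strength and structure rather than reporting a point estimate, but the noisy-OR parameterisation above is the generating function it integrates over.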

  12. STRIDE: Species Tree Root Inference from Gene Duplication Events.

    Science.gov (United States)

    Emms, David M; Kelly, Steven

    2017-12-01

    The correct interpretation of any phylogenetic tree is dependent on that tree being correctly rooted. We present STRIDE, a fast, effective, and outgroup-free method for identification of gene duplication events and species tree root inference in large-scale molecular phylogenetic analyses. STRIDE identifies sets of well-supported in-group gene duplication events from a set of unrooted gene trees, and analyses these events to infer a probability distribution over an unrooted species tree for the location of its root. We show that STRIDE correctly identifies the root of the species tree in multiple large-scale molecular phylogenetic data sets spanning a wide range of timescales and taxonomic groups. We demonstrate that the novel probability model implemented in STRIDE can accurately represent the ambiguity in species tree root assignment for data sets where information is limited. Furthermore, application of STRIDE to outgroup-free inference of the origin of the eukaryotic tree resulted in a root probability distribution that provides additional support for leading hypotheses for the origin of the eukaryotes. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  13. The vertical structure of Jupiter and Saturn zonal winds from nonlinear simulations of major vortices and planetary-scale disturbances

    Science.gov (United States)

    Garcia-Melendo, E.; Legarreta, J.; Sanchez-Lavega, A.

    2012-12-01

    Direct measurements of the structure of the zonal winds of Jupiter and Saturn below the upper cloud layer are very difficult to retrieve. Except for the vertical profile at a Jupiter hot spot obtained from the Galileo probe in 1995 and measurements from cloud tracking by Cassini instruments just below the upper cloud, no other data are available. We present here our inferences of the vertical structure of the Jupiter and Saturn zonal winds across the upper troposphere (deep down to about the 10 bar level), obtained from nonlinear simulations using the EPIC code of the stability and interactions of large-scale vortices and planetary-scale disturbances in both planets.

  14. Inference of domain structure at elevated temperature in fine ...

    African Journals Online (AJOL)

    The thermal variation of the number of domains (nd) for Fe7S8 particles (within the size range 1-30 mm and between 20 and 300°C), has been inferred from the room temperature analytic expression between nd and particle size (L), the temperature dependences of the anisotropy energy constant (K) and the spontaneous ...

  15. Population structure of Atlantic Mackerel inferred from RAD-seq derived SNP markers: effects of sequence clustering parameters and hierarchical SNP selection

    KAUST Repository

    Rodríguez-Ezpeleta, Naiara

    2016-03-03

    Restriction-site associated DNA sequencing (RAD-seq) and related methods are revolutionizing the field of population genomics in non-model organisms as they allow generating an unprecedented number of single nucleotide polymorphisms (SNPs) even when no genomic information is available. Yet, RAD-seq data analyses rely on assumptions on nature and number of nucleotide variants present in a single locus, the choice of which may lead to an under- or overestimated number of SNPs and/or to incorrectly called genotypes. Using the Atlantic mackerel (Scomber scombrus L.) and a close relative, the Atlantic chub mackerel (Scomber colias), as case study, here we explore the sensitivity of population structure inferences to two crucial aspects in RAD-seq data analysis: the maximum number of mismatches allowed to merge reads into a locus and the relatedness of the individuals used for genotype calling and SNP selection. Our study resolves the population structure of the Atlantic mackerel, but, most importantly, provides insights into the effects of alternative RAD-seq data analysis strategies on population structure inferences that are directly applicable to other species.

  16. Revealing less derived nature of cartilaginous fish genomes with their evolutionary time scale inferred with nuclear genes.

    Directory of Open Access Journals (Sweden)

    Adina J Renz

    Full Text Available Cartilaginous fishes, divided into Holocephali (chimaeras) and Elasmobranchii (sharks, rays and skates), occupy a key phylogenetic position among extant vertebrates in reconstructing their evolutionary processes. Their accurate evolutionary time scale is indispensable for better understanding of the relationship between phenotypic and molecular evolution of cartilaginous fishes. However, our current knowledge on the time scale of cartilaginous fish evolution largely relies on estimates using mitochondrial DNA sequences. In this study, making the best use of the still partial, but large-scale sequencing data of cartilaginous fish species, we estimate the divergence times between the major cartilaginous fish lineages employing nuclear genes. By rigorous orthology assessment based on available genomic and transcriptomic sequence resources for cartilaginous fishes, we selected 20 protein-coding genes in the nuclear genome, spanning 2973 amino acid residues. Our analysis based on the Bayesian inference resulted in the mean divergence time of 421 Ma, the late Silurian, for the Holocephali-Elasmobranchii split, and 306 Ma, the late Carboniferous, for the split between sharks and rays/skates. By applying these results and other documented divergence times, we measured the relative evolutionary rate of the Hox A cluster sequences in the cartilaginous fish lineages, which resulted in a substitution rate lower by a factor of at least 2.4 in comparison to tetrapod lineages. The obtained time scale enables mapping phenotypic and molecular changes in a quantitative framework. It is of great interest to corroborate the less derived nature of cartilaginous fish at the molecular level as a genome-wide phenomenon.

  17. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
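
    The record is truncated before any formulas; as a minimal illustration of a hierarchical (random-intercept) linear model of the kind described, the sketch below fits plots nested within sites using statsmodels' MixedLM on simulated data with made-up variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulate plots nested within sites: site-level random intercepts plus a fixed slope
rng = np.random.default_rng(0)
n_sites, n_plots = 20, 15
site = np.repeat(np.arange(n_sites), n_plots)
site_effect = rng.normal(0.0, 2.0, n_sites)[site]     # between-site variance component
x = rng.normal(size=site.size)                         # plot-level predictor
y = 1.0 + 0.8 * x + site_effect + rng.normal(0.0, 1.0, site.size)
df = pd.DataFrame({"y": y, "x": x, "site": site})

# random-intercept hierarchical linear model: y ~ x with sites as the grouping factor
model = smf.mixedlm("y ~ x", df, groups=df["site"]).fit()
print(model.summary())
```

    The fitted output separates the fixed-effect slope from the between-site and within-site variance components, which is the partitioning across scales that the abstract emphasises.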

  18. Local topography shapes fine-scale spatial genetic structure in the Arkansas Valley evening primrose, Oenothera harringtonii (Onagraceae).

    Science.gov (United States)

    Rhodes, Matthew K; Fant, Jeremie B; Skogen, Krissa A

    2014-01-01

    Identifying factors that shape the spatial distribution of genetic variation is crucial to understanding many population- and landscape-level processes. In this study, we explore fine-scale spatial genetic structure in Oenothera harringtonii (Onagraceae), an insect-pollinated, gravity-dispersed herb endemic to the grasslands of south-central and southeastern Colorado, USA. We genotyped 315 individuals with 11 microsatellite markers and utilized a combination of spatial autocorrelation analyses and landscape genetic models to relate life history traits and landscape features to dispersal processes. Spatial genetic structure was consistent with theoretical expectations of isolation by distance, but this pattern was weak (Sp = 0.00374). Anisotropic analyses indicated that spatial genetic structure was markedly directional, in this case consistent with increased dispersal along prominent slopes. Landscape genetic models subsequently confirmed that spatial genetic variation was significantly influenced by local topographic heterogeneity, specifically that geographic distance, elevation and aspect were important predictors of spatial genetic structure. Among these variables, geographic distance was ~68% more important than elevation in describing spatial genetic variation, and elevation was ~42% more important than aspect after removing the effect of geographic distance. From these results, we infer a mechanism of hydrochorous seed dispersal along major drainages aided by seasonal monsoon rains. Our findings suggest that landscape features may shape microevolutionary processes at much finer spatial scales than typically considered, and stress the importance of considering how particular dispersal vectors are influenced by their environmental context. © The American Genetic Association 2014. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Full-scale dynamic structural testing of Paks nuclear power plant

    International Nuclear Information System (INIS)

    Da Rin, E.M.; Muzzi, F.P.

    1995-01-01

    Within the framework of the IAEA-coordinated 'Benchmark Study for the seismic analysis and testing of WWER-type NPPs', in-situ dynamic structural testing activities have been performed at the Paks Nuclear Power Plant in Hungary. The specific objective of the investigation was to obtain experimental data on the actual dynamic structural behaviour of the plant's major constructions and equipment under normal operating conditions, to enable a valid seismic safety review to be made. This paper gives a concise description of the conducted experiments and presents some results, regarding in particular the free-field excitations produced during the earthquake-simulation experiments and the global dynamic soil-structure interaction effects observed at the base of the reactor containment structure. Moreover, a method which can be used for inferring dynamic structural characteristics from the recorded time-histories is briefly described and a simple illustrative example given. (author)

  20. Study of structural colour of Hebomoia glaucippe butterfly wing scales

    Science.gov (United States)

    Shur, V. Ya; Kuznetsov, D. K.; Pryakhina, V. I.; Kosobokov, M. S.; Zubarev, I. V.; Boymuradova, S. K.; Volchetskaya, K. V.

    2017-10-01

    Structural colours of Hebomoia glaucippe butterfly wing scales have been studied experimentally using high-resolution scanning electron microscopy. Visualization of the scale structures, together with computer simulation, allowed the correlation between the nanostructures on the scales and their colour to be identified.

  1. Scale modeling of reinforced concrete structures subjected to seismic loading

    International Nuclear Information System (INIS)

    Dove, R.C.

    1983-01-01

    Reinforced concrete Category I structures are so large that the possibility of seismically testing the prototype structures under controlled conditions is essentially nonexistent. However, experimental data, from which important structural properties can be determined and existing and new methods of seismic analysis benchmarked, are badly needed. As a result, seismic experiments on scaled models are of considerable interest. In this paper, the scaling laws are developed in some detail so that assumptions and choices based on judgement can be clearly recognized and their effects discussed. The scaling laws developed are then used to design a reinforced concrete model of a Category I structure. Finally, how scaling is affected by various types of damping (viscous, structural, and Coulomb) is discussed

  2. Multi-scale structural community organisation of the human genome.

    Science.gov (United States)

    Boulos, Rasha E; Tremblay, Nicolas; Arneodo, Alain; Borgnat, Pierre; Audit, Benjamin

    2017-04-11

    Structural interaction frequency matrices between all genome loci are now experimentally achievable thanks to high-throughput chromosome conformation capture technologies. This gives rise to a new methodological challenge for computational biology: objectively extracting from these data the structural motifs characteristic of genome organisation. We deployed the fast multi-scale community mining algorithm based on spectral graph wavelets to characterise the networks of intra-chromosomal interactions in human cell lines. We observed that there exist structural domains of all sizes up to chromosome length and demonstrated that the set of structural communities forms a hierarchy of chromosome segments. Hence, at all scales, chromosome folding predominantly involves interactions between neighbouring sites rather than the formation of links between distant loci. Multi-scale structural decomposition of human chromosomes provides an original framework to question structural organisation and its relationship to functional regulation across the scales. By construction, the proposed methodology is independent of the precise assembly of the reference genome and is thus directly applicable to genomes whose assembly is not fully determined.

  3. A general Bayes weibull inference model for accelerated life testing

    International Nuclear Information System (INIS)

    Dorp, J. Rene van; Mazzuchi, Thomas A.

    2005-01-01

    This article presents the development of a general Bayes inference model for accelerated life testing. The failure times at a constant stress level are assumed to belong to a Weibull distribution, but the specification of strict adherence to a parametric time-transformation function is not required. Rather, prior information is used to indirectly define a multivariate prior distribution for the scale parameters at the various stress levels and the common shape parameter. Using the approach, Bayes point estimates as well as probability statements for use-stress (and accelerated) life parameters may be inferred from a host of testing scenarios. The inference procedure accommodates both the interval data sampling strategy and type I censored sampling strategy for the collection of ALT test data. The inference procedure uses the well-known MCMC (Markov Chain Monte Carlo) methods to derive posterior approximations. The approach is illustrated with an example
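
    The article's multivariate prior across stress levels is not reproduced here; the sketch below is a single-stress-level illustration, with assumed priors and simulated lifetimes, of the two ingredients named in the abstract: a Weibull likelihood with type I censoring and MCMC (random-walk Metropolis) sampling of the posterior.

```python
import numpy as np

def log_post(log_shape, log_scale, times, censored):
    """Log-posterior for Weibull(shape, scale) lifetimes with type I right-censoring
    and weakly informative log-normal priors on both parameters (assumed here)."""
    k, lam = np.exp(log_shape), np.exp(log_scale)
    z = times / lam
    # observed failures contribute the log pdf, censored units the log survival function
    log_lik = np.sum(np.where(censored,
                              -z ** k,
                              np.log(k / lam) + (k - 1) * np.log(z) - z ** k))
    log_prior = -0.5 * (log_shape ** 2 + (log_scale - np.log(100.0)) ** 2)
    return log_lik + log_prior

def metropolis(times, censored, n_steps=20000, step=0.05, seed=0):
    """Random-walk Metropolis over (log shape, log scale)."""
    rng = np.random.default_rng(seed)
    theta = np.array([0.0, np.log(np.mean(times))])
    lp = log_post(theta[0], theta[1], times, censored)
    samples = np.empty((n_steps, 2))
    for i in range(n_steps):
        prop = theta + step * rng.normal(size=2)
        lp_prop = log_post(prop[0], prop[1], times, censored)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples[i] = theta
    return np.exp(samples[n_steps // 2:])         # shape and scale draws after burn-in

rng = np.random.default_rng(1)
true_times = rng.weibull(2.0, 30) * 120.0         # simulated lifetimes at one stress level
censor_at = 150.0                                 # type I censoring time
times = np.minimum(true_times, censor_at)
censored = true_times > censor_at
draws = metropolis(times, censored)
print("posterior mean shape/scale:", draws.mean(axis=0))
```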

  4. Life-history traits of the Miocene Hipparion concudense (Spain) inferred from bone histological structure.

    Directory of Open Access Journals (Sweden)

    Cayetana Martinez-Maza

    Full Text Available Histological analyses of fossil bones have provided clues on the growth patterns and life history traits of several extinct vertebrates that would be unavailable for classical morphological studies. We analyzed the bone histology of Hipparion to infer features of its life history traits and growth pattern. Microscope analysis of thin sections of a large sample of humeri, femora, tibiae and metapodials of Hipparion concudense from the upper Miocene site of Los Valles de Fuentidueña (Segovia, Spain) has shown that the number of growth marks is similar among the different limb bones, suggesting that equivalent skeletochronological inferences for this Hipparion population might be achieved by means of any of the elements studied. Considering their abundance, we conducted a skeletochronological study based on the large sample of third metapodials from Los Valles de Fuentidueña together with another large sample from the Upper Miocene locality of Concud (Teruel, Spain). The data obtained enabled us to distinguish four age groups in both samples and to determine that Hipparion concudense tended to reach skeletal maturity during its third year of life. Integration of bone microstructure and skeletochronological data allowed us to identify ontogenetic changes in bone structure and growth rate and to distinguish three histologic ontogenetic stages corresponding to immature, subadult and adult individuals. Data on secondary osteon density revealed an increase in bone remodeling throughout the ontogenetic stages and a lesser degree thereof in the Concud population, which indicates different biomechanical stresses in the two populations, likely due to environmental differences. Several individuals showed atypical growth patterns in the Concud sample, which may also reflect environmental differences between the two localities. Finally, classification of the specimens' age within groups enabled us to characterize the age structure of both samples, which is

  5. A BAYESIAN ESTIMATE OF THE CMB–LARGE-SCALE STRUCTURE CROSS-CORRELATION

    Energy Technology Data Exchange (ETDEWEB)

    Moura-Santos, E. [Instituto de Física, Universidade de São Paulo, Rua do Matão trav. R 187, 05508-090, São Paulo—SP (Brazil); Carvalho, F. C. [Departamento de Física, Universidade do Estado do Rio Grande do Norte, 59610-210, Mossoró-RN (Brazil); Penna-Lima, M. [APC, AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Observatoire de Paris, Sorbonne Paris Cité, 10, rue Alice Domon et Léonie Duquet, F-75205 Paris Cedex 13 (France); Novaes, C. P.; Wuensche, C. A., E-mail: emoura@if.usp.br, E-mail: fabiocabral@uern.br, E-mail: pennal@apc.in2p3.fr, E-mail: cawuenschel@das.inpe.br, E-mail: camilanovaes@on.br [Observatório Nacional, Rua General José Cristino 77, São Cristóvão, 20921-400, Rio de Janeiro, RJ (Brazil)

    2016-08-01

    Evidence for late-time acceleration of the universe is provided by multiple probes, such as Type Ia supernovae, the cosmic microwave background (CMB), and large-scale structure (LSS). In this work, we focus on the integrated Sachs–Wolfe (ISW) effect, i.e., secondary CMB fluctuations generated by evolving gravitational potentials due to the transition between, e.g., the matter and dark energy (DE) dominated phases. Therefore, assuming a flat universe, DE properties can be inferred from ISW detections. We present a Bayesian approach to compute the CMB–LSS cross-correlation signal. The method is based on the estimate of the likelihood for measuring a combined set consisting of a CMB temperature and galaxy contrast maps, provided that we have some information on the statistical properties of the fluctuations affecting these maps. The likelihood is estimated by a sampling algorithm, therefore avoiding the computationally demanding techniques of direct evaluation in either pixel or harmonic space. As local tracers of the matter distribution at large scales, we used the Two Micron All Sky Survey galaxy catalog and, for the CMB temperature fluctuations, the ninth-year data release of the Wilkinson Microwave Anisotropy Probe (WMAP 9). The results show a dominance of cosmic variance over the weak recovered signal, due mainly to the shallowness of the catalog used, with systematics associated with the sampling algorithm playing a secondary role as sources of uncertainty. When combined with other complementary probes, the method presented in this paper is expected to be a useful tool for late-time acceleration studies in cosmology.

  6. Hierarchical structure and cytocompatibility of fish scales from Carassius auratus

    International Nuclear Information System (INIS)

    Fang, Zhou; Wang, Yukun; Feng, Qingling; Kienzle, Arne; Müller, Werner E.G.

    2014-01-01

    To study the structure and the cytocompatibility of fish scales from Carassius auratus, scanning electron microscopy (SEM) was used to observe the morphology of fish scales treated with different processing methods. Based on varying morphologies and components, the fish scales can be divided into three regions on the surface and three layers in vertical. The functions of these three individual layers were analyzed. SEM results show that the primary inorganic components are spherical or cubic hydroxyapatite (HA) nanoparticles. The fish scales have an ∼ 60° overlapped plywood structure of lamellas in the fibrillary plate. The plywood structure consists of co-aligned type I collagen fibers, which are parallel to the HA lamellas. X-ray diffraction (XRD), thermogravimetric analysis/differential scanning calorimetry (TGA/DSC) and Fourier transform infrared (FTIR) analysis indicate that the main components are HA and type I collagen fibers. MC3T3-E1 cell culture results show a high cytocompatibility and the ability to guide cell proliferation and migration along the scale ridge channels of the fish scales. This plywood structure provides inspiration for a structure-enhanced composite material. - Highlights: • The Carassius auratus fish scale can be divided into 3 layers rather than 2. • The functions of these three individual layers were firstly analyzed. • The fish scale shows a high cytocompatibility. • The fish scale can guide cells migration along the scale ridge channels

  7. Hierarchical structure and cytocompatibility of fish scales from Carassius auratus

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Zhou [State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084 (China); Wang, Yukun [State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Beijing 100084 (China); Feng, Qingling, E-mail: biomater@mail.tsinghua.edu.cn [State Key Laboratory of New Ceramics and Fine Processing, School of Materials Science and Engineering, Tsinghua University, Beijing 100084 (China); Key Laboratory of Advanced Materials of Ministry of Education of China, School of Materials Science and Engineering, Tsinghua University, Beijing 100084 (China); Kienzle, Arne; Müller, Werner E.G. [Institut für Physiologische Chemie, Abteilung Angewandte Molekularbiologie, Johannes Gutenberg-Universität, Duesbergweg 6, Mainz 55099 (Germany)

    2014-10-01

    To study the structure and the cytocompatibility of fish scales from Carassius auratus, scanning electron microscopy (SEM) was used to observe the morphology of fish scales treated with different processing methods. Based on varying morphologies and components, the fish scales can be divided into three regions on the surface and three layers in vertical. The functions of these three individual layers were analyzed. SEM results show that the primary inorganic components are spherical or cubic hydroxyapatite (HA) nanoparticles. The fish scales have an ∼ 60° overlapped plywood structure of lamellas in the fibrillary plate. The plywood structure consists of co-aligned type I collagen fibers, which are parallel to the HA lamellas. X-ray diffraction (XRD), thermogravimetric analysis/differential scanning calorimetry (TGA/DSC) and Fourier transform infrared (FTIR) analysis indicate that the main components are HA and type I collagen fibers. MC3T3-E1 cell culture results show a high cytocompatibility and the ability to guide cell proliferation and migration along the scale ridge channels of the fish scales. This plywood structure provides inspiration for a structure-enhanced composite material. - Highlights: • The Carassius auratus fish scale can be divided into 3 layers rather than 2. • The functions of these three individual layers were firstly analyzed. • The fish scale shows a high cytocompatibility. • The fish scale can guide cells migration along the scale ridge channels.

  8. Large scale structure and baryogenesis

    International Nuclear Information System (INIS)

    Kirilova, D.P.; Chizhov, M.V.

    2001-08-01

    We discuss a possible connection between large-scale structure formation and baryogenesis in the universe. An updated review of the observational indications for the presence of a very large scale of 120h^-1 Mpc in the distribution of the visible matter of the universe is provided. The possibility of generating a periodic distribution with the characteristic scale 120h^-1 Mpc through a mechanism producing quasi-periodic baryon density perturbations during the inflationary stage is discussed. The evolution of the baryon charge density distribution is explored in the framework of a low-temperature boson condensate baryogenesis scenario. Both the observed very large scale of the visible matter distribution in the universe and the observed baryon asymmetry value could naturally appear as a result of the evolution of a complex scalar field condensate, formed at the inflationary stage. Moreover, for some model parameters a natural separation of matter superclusters from antimatter ones can be achieved. (author)

  9. Estimating mountain basin-mean precipitation from streamflow using Bayesian inference

    Science.gov (United States)

    Henn, Brian; Clark, Martyn P.; Kavetski, Dmitri; Lundquist, Jessica D.

    2015-10-01

    Estimating basin-mean precipitation in complex terrain is difficult due to uncertainty in the topographical representativeness of precipitation gauges relative to the basin. To address this issue, we use Bayesian methodology coupled with a multimodel framework to infer basin-mean precipitation from streamflow observations, and we apply this approach to snow-dominated basins in the Sierra Nevada of California. Using streamflow observations, forcing data from lower-elevation stations, the Bayesian Total Error Analysis (BATEA) methodology and the Framework for Understanding Structural Errors (FUSE), we infer basin-mean precipitation, and compare it to basin-mean precipitation estimated using topographically informed interpolation from gauges (PRISM, the Parameter-elevation Regression on Independent Slopes Model). The BATEA-inferred spatial patterns of precipitation show agreement with PRISM in terms of the rank of basins from wet to dry but differ in absolute values. In some of the basins, these differences may reflect biases in PRISM, because some implied PRISM runoff ratios may be inconsistent with the regional climate. We also infer annual time series of basin precipitation using a two-step calibration approach. Assessment of the precision and robustness of the BATEA approach suggests that uncertainty in the BATEA-inferred precipitation is primarily related to uncertainties in hydrologic model structure. Despite these limitations, time series of inferred annual precipitation under different model and parameter assumptions are strongly correlated with one another, suggesting that this approach is capable of resolving year-to-year variability in basin-mean precipitation.
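
    The general idea of treating basin-mean precipitation as an inferred quantity can be illustrated with a much smaller example than BATEA/FUSE. The sketch below (Python) infers a single precipitation multiplier applied to gauge data by running a toy linear-reservoir model inside a random-walk Metropolis sampler; the reservoir model, noise level and synthetic data are illustrative assumptions, not the models used in the study.

        # Minimal sketch: infer a basin-mean precipitation multiplier from streamflow
        # with a toy linear-reservoir model and random-walk Metropolis sampling.
        # Illustrative only; this is not the BATEA/FUSE implementation.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_flow(precip, k=0.2, s0=50.0):
            """Toy linear reservoir: storage S gains precip, releases Q = k * S per step."""
            s, flows = s0, []
            for p in precip:
                s += p
                q = k * s
                s -= q
                flows.append(q)
            return np.array(flows)

        # Synthetic "truth": the gauge underestimates basin-mean precipitation by 30%.
        gauge_precip = rng.gamma(shape=0.4, scale=10.0, size=365)   # mm/day at a low-elevation station
        true_mult = 1.3
        obs_flow = simulate_flow(true_mult * gauge_precip) + rng.normal(0.0, 2.0, 365)

        def log_posterior(mult, sigma=2.0):
            if mult <= 0:
                return -np.inf                                      # flat prior on mult > 0
            resid = obs_flow - simulate_flow(mult * gauge_precip)
            return -0.5 * np.sum((resid / sigma) ** 2)              # Gaussian likelihood

        # Random-walk Metropolis over the multiplier.
        samples, mult, lp = [], 1.0, log_posterior(1.0)
        for _ in range(5000):
            prop = mult + rng.normal(0.0, 0.05)
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                mult, lp = prop, lp_prop
            samples.append(mult)

        post = np.array(samples[1000:])
        print(f"posterior multiplier: {post.mean():.2f} +/- {post.std():.2f} (true {true_mult})")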

  10. Bayesian inference for spatio-temporal spike-and-slab priors

    DEFF Research Database (Denmark)

    Andersen, Michael Riis; Vehtari, Aki; Winther, Ole

    2017-01-01

    a transformed Gaussian process on the spike-and-slab probabilities. An expectation propagation (EP) algorithm for posterior inference under the proposed model is derived. For large scale problems, the standard EP algorithm can be prohibitively slow. We therefore introduce three different approximation schemes...

  11. Detecting structure of haplotypes and local ancestry

    Science.gov (United States)

    We present a two-layer hidden Markov model to detect the structure of haplotypes for unrelated individuals. This allows us to model two scales of linkage disequilibrium (one within a group of haplotypes and one between groups), thereby taking advantage of rich haplotype information to infer local an...

  12. Functional neuroanatomy of intuitive physical inference.

    Science.gov (United States)

    Fischer, Jason; Mikhael, John G; Tenenbaum, Joshua B; Kanwisher, Nancy

    2016-08-23

    To engage with the world-to understand the scene in front of us, plan actions, and predict what will happen next-we must have an intuitive grasp of the world's physical structure and dynamics. How do the objects in front of us rest on and support each other, how much force would be required to move them, and how will they behave when they fall, roll, or collide? Despite the centrality of physical inferences in daily life, little is known about the brain mechanisms recruited to interpret the physical structure of a scene and predict how physical events will unfold. Here, in a series of fMRI experiments, we identified a set of cortical regions that are selectively engaged when people watch and predict the unfolding of physical events-a "physics engine" in the brain. These brain regions are selective to physical inferences relative to nonphysical but otherwise highly similar scenes and tasks. However, these regions are not exclusively engaged in physical inferences per se or, indeed, even in scene understanding; they overlap with the domain-general "multiple demand" system, especially the parts of that system involved in action planning and tool use, pointing to a close relationship between the cognitive and neural mechanisms involved in parsing the physical content of a scene and preparing an appropriate action.

  13. The Large-scale Coronal Structure of the 2017 August 21 Great American Eclipse: An Assessment of Solar Surface Flux Transport Model Enabled Predictions and Observations

    Science.gov (United States)

    Nandy, Dibyendu; Bhowmik, Prantika; Yeates, Anthony R.; Panda, Suman; Tarafder, Rajashik; Dash, Soumyaranjan

    2018-01-01

    On 2017 August 21, a total solar eclipse swept across the contiguous United States, providing excellent opportunities for diagnostics of the Sun’s corona. The Sun’s coronal structure is notoriously difficult to observe except during solar eclipses; thus, theoretical models must be relied upon for inferring the underlying magnetic structure of the Sun’s outer atmosphere. These models are necessary for understanding the role of magnetic fields in the heating of the corona to a million degrees and the generation of severe space weather. Here we present a methodology for predicting the structure of the coronal field based on model forward runs of a solar surface flux transport model, whose predicted surface field is utilized to extrapolate future coronal magnetic field structures. This prescription was applied to the 2017 August 21 solar eclipse. A post-eclipse analysis shows good agreement between model simulated and observed coronal structures and their locations on the limb. We demonstrate that slow changes in the Sun’s surface magnetic field distribution, driven by long-term flux emergence and its evolution, govern large-scale coronal structures with a (plausibly cycle-phase dependent) dynamical memory timescale on the order of a few solar rotations, opening up the possibility for large-scale, global corona predictions at least a month in advance.

  14. Macroecological Patterns of Resilience Inferred from a Multinational, Synchronized Experiment

    Directory of Open Access Journals (Sweden)

    Didier L. Baho

    2015-01-01

    The likelihood of an ecological system to undergo undesired regime shifts is expected to increase as climate change effects unfold. To understand how regional climate settings can affect resilience, i.e., the ability of an ecosystem to tolerate disturbances without changing its original structure and processes, we used a synchronized mesocosm experiment (representative of shallow lakes) along a latitudinal gradient. We manipulated nutrient concentrations and water levels in a synchronized mesocosm experiment in different climate zones across Europe, involving Sweden, Estonia, Germany, the Czech Republic, Turkey and Greece. We assessed attributes of zooplankton communities that might contribute to resilience under different ecological configurations. We assessed four indicators of relative ecological resilience (cross-scale and within-scale structures, aggregation length and gap size) of zooplankton communities, inferred from discontinuity analysis. Similar resilience attributes were found across experimental treatments and countries, except Greece, which experienced severe drought conditions during the experiment. These conditions apparently led to a lower relative resilience in the Greek mesocosms. Our results indicate that zooplankton community resilience in shallow lakes is marginally affected by water level and the studied nutrient range unless extreme drought occurs. In practice, this means that drought mitigation could be especially challenging in semi-arid countries in the future.

  15. Evaporation characteristics of a hydrophilic surface with micro-scale and/or nano-scale structures fabricated by sandblasting and aluminum anodization

    International Nuclear Information System (INIS)

    Kim, Hyungmo; Kim, Joonwon

    2010-01-01

    This paper presents the results of evaporation experiments using water droplets on aluminum sheets that were either smooth or had surface structures at the micro-scale, at the nano-scale or at both micro- and nano-scales (dual-scale). The smooth surface was a polished aluminum sheet; the surface with micro-scale structures was obtained by sandblasting; the surface with nano-scale structures was obtained using conventional aluminum anodization and the surface with dual-scale structures was prepared using sandblasting and anodization sequentially. The wetting properties and evaporation rates were measured for each surface. The evaporation rates were affected by their static and dynamic wetting properties. Evaporation on the surface with dual-scale structures was fastest and the evaporation rate was analyzed quantitatively.

  16. Nonlinear Analysis and Scaling Laws for Noncircular Composite Structures Subjected to Combined Loads

    Science.gov (United States)

    Hilburger, Mark W.; Rose, Cheryl A.; Starnes, James H., Jr.

    2001-01-01

    Results from an analytical study of the response of a built-up, multi-cell noncircular composite structure subjected to combined internal pressure and mechanical loads are presented. Nondimensional parameters and scaling laws based on a first-order shear-deformation plate theory are derived for this noncircular composite structure. The scaling laws are used to design sub-scale structural models for predicting the structural response of a full-scale structure representative of a portion of a blended-wing-body transport aircraft. Because of the complexity of the full-scale structure, some of the similitude conditions are relaxed for the sub-scale structural models. Results from a systematic parametric study are used to determine the effects of relaxing selected similitude conditions on the sensitivity of the effectiveness of using the sub-scale structural model response characteristics for predicting the full-scale structure response characteristics.

  17. Similitude and scaling of large structural elements: Case study

    Directory of Open Access Journals (Sweden)

    M. Shehadeh

    2015-06-01

    Scaled down models are widely used for experimental investigations of large structures due to the limited capacities of testing facilities and the expense of experimentation. The modeling accuracy depends upon the model material properties, fabrication accuracy and loading techniques. In the present work the Buckingham π theorem is used to develop the relations (i.e. geometry, loading and properties) between the model and a large structural element such as those present in existing large petroleum oil drilling rigs. The model is to be designed, loaded and treated according to a set of similitude requirements that relate the model to the large structural element. Three independent scale factors, which represent the three fundamental dimensions of mass, length and time, need to be selected for designing the scaled down model. Numerical predictions of the stress distribution within the model and of its elastic deformation under steady loading are made, and the results are compared with those obtained from numerical computations on the full scale structure. The effect of scaled down model size and material on the accuracy of the modeling technique is thoroughly examined.
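
    To illustrate how the three independent scale factors fix the scaling of every derived quantity, the following sketch computes model-to-prototype ratios from assumed mass, length and time factors; the numerical values (a 1:10 geometric scale with the same material density and gravity, i.e. Froude-type time scaling) and the list of quantities are illustrative, not those of the case study.

        # Sketch: derive scale factors for secondary quantities from the three
        # independent scale factors (mass M, length L, time T) used in similitude design.
        # The chosen values are illustrative: 1:10 length scale, same density (M ~ L^3),
        # and Froude-type time scaling (T ~ sqrt(L)).

        scale = {"M": 1.0e-3, "L": 0.1, "T": 0.1 ** 0.5}

        dimensions = {
            "velocity":     (0, 1, -1),    # L T^-1
            "acceleration": (0, 1, -2),    # L T^-2
            "force":        (1, 1, -2),    # M L T^-2
            "stress":       (1, -1, -2),   # M L^-1 T^-2
            "moment":       (1, 2, -2),    # M L^2 T^-2
        }

        def scale_factor(m_exp, l_exp, t_exp):
            """Model-to-prototype ratio for a quantity with dimensions M^m L^l T^t."""
            return scale["M"] ** m_exp * scale["L"] ** l_exp * scale["T"] ** t_exp

        for name, (m, l, t) in dimensions.items():
            print(f"{name:>13}: model/prototype ratio = {scale_factor(m, l, t):.4g}")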

  18. Cortical hierarchies perform Bayesian causal inference in multisensory perception.

    Directory of Open Access Journals (Sweden)

    Tim Rohe

    2015-02-01

    To form a veridical percept of the environment, the brain needs to integrate sensory signals from a common source but segregate those from independent sources. Thus, perception inherently relies on solving the "causal inference problem." Behaviorally, humans solve this problem optimally as predicted by Bayesian Causal Inference; yet, the underlying neural mechanisms are unexplored. Combining psychophysics, Bayesian modeling, functional magnetic resonance imaging (fMRI), and multivariate decoding in an audiovisual spatial localization task, we demonstrate that Bayesian Causal Inference is performed by a hierarchy of multisensory processes in the human brain. At the bottom of the hierarchy, in auditory and visual areas, location is represented on the basis that the two signals are generated by independent sources (= segregation). At the next stage, in posterior intraparietal sulcus, location is estimated under the assumption that the two signals are from a common source (= forced fusion). Only at the top of the hierarchy, in anterior intraparietal sulcus, the uncertainty about the causal structure of the world is taken into account and sensory signals are combined as predicted by Bayesian Causal Inference. Characterizing the computational operations of signal interactions reveals the hierarchical nature of multisensory perception in human neocortex. It unravels how the brain accomplishes Bayesian Causal Inference, a statistical computation fundamental for perception and cognition. Our results demonstrate how the brain combines information in the face of uncertainty about the underlying causal structure of the world.
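
    The behavioral model referred to here can be illustrated with a small numerical sketch of Bayesian Causal Inference for audiovisual localization: the posterior probability of a common cause is obtained by comparing the marginal likelihoods of the common-source and independent-source hypotheses. The sensory noise levels, prior width and prior probability of a common cause below are illustrative assumptions, not the values fitted in the study.

        # Sketch of Bayesian Causal Inference for audiovisual localization:
        # posterior probability that auditory and visual samples share a common cause.
        # Parameter values are illustrative only.
        import numpy as np

        sigma_a, sigma_v, sigma_p = 8.0, 2.0, 15.0   # auditory noise, visual noise, spatial prior width (deg)
        p_common = 0.5                               # prior probability of a common cause
        s_grid = np.linspace(-60.0, 60.0, 2001)      # grid over source location for numerical integration
        ds = s_grid[1] - s_grid[0]

        def gauss(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

        def posterior_common(x_a, x_v):
            """P(common cause | auditory sample x_a, visual sample x_v)."""
            prior_s = gauss(s_grid, 0.0, sigma_p)
            # C = 1: a single source s generates both samples (integrate s out).
            like_c1 = np.sum(gauss(x_a, s_grid, sigma_a) * gauss(x_v, s_grid, sigma_v) * prior_s) * ds
            # C = 2: independent sources generate each sample.
            like_c2 = (np.sum(gauss(x_a, s_grid, sigma_a) * prior_s) * ds *
                       np.sum(gauss(x_v, s_grid, sigma_v) * prior_s) * ds)
            return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1.0 - p_common))

        for disparity in (0.0, 5.0, 15.0, 30.0):
            print(f"audiovisual disparity {disparity:>4} deg -> P(common cause) = {posterior_common(0.0, disparity):.2f}")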

  19. Classification of Farmland Landscape Structure in Multiple Scales

    Science.gov (United States)

    Jiang, P.; Cheng, Q.; Li, M.

    2017-12-01

    Farmland is one of the basic terrestrial resources that support the development and survival of human beings and thus plays a crucial role in the national security of every country. Pattern change is the intuitive spatial expression of variations in the scale and quality of farmland. Through the development of characteristic spatial shapes, as well as changes in system structure and function, farmland landscape patterns may indicate the level of landscape health. Currently, it is still difficult to perform positional analyses of landscape pattern changes that reflect variations in farmland landscape structure using an index model. Drawing on a number of spatial properties, such as location and adjacency relations, distance decay, and fringe effects, and on the patch-corridor-matrix model, this study defines a type system of farmland landscape structure at the national, provincial, and city levels. Based on this definition, a classification model of farmland landscape-structure type at the pixel scale is developed and validated using mathematical-morphology concepts and spatial-analysis methods. The laws that govern farmland landscape-pattern change at multiple scales are then analyzed from the perspectives of spatial heterogeneity, spatio-temporal evolution, and function transformation. The results show that the classification model of farmland landscape-structure type can reflect farmland landscape-pattern change and its effects on farmland production function. Moreover, farmland landscape change at different scales displays significant disparities in zonality, both within specific regions and between urban and rural areas.
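
    A pixel-scale classification based on mathematical morphology can be sketched very simply: farmland pixels are split into core and edge classes by binary erosion, in the spirit of morphological spatial pattern analysis. The sketch below is not the authors' index model; the toy map and the 3-pixel edge width are illustrative assumptions.

        # Minimal sketch of a morphology-based classification of farmland pixels into
        # core and edge classes. Illustrative only; not the model described in the abstract.
        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(1)
        farmland = ndimage.binary_closing(rng.random((200, 200)) > 0.55)   # toy binary farmland map

        core = ndimage.binary_erosion(farmland, iterations=3)              # farmland far from the boundary
        edge = farmland & ~core                                            # farmland within 3 pixels of non-farmland

        labels = np.zeros(farmland.shape, dtype=np.uint8)                  # 0 = non-farmland, 1 = edge, 2 = core
        labels[edge] = 1
        labels[core] = 2

        n_patches = ndimage.label(farmland)[1]
        print(f"farmland pixels: {farmland.sum()}, core: {core.sum()}, edge: {edge.sum()}, patches: {n_patches}")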

  20. Geographic population structure analysis of worldwide human populations infers their biogeographical origins

    Science.gov (United States)

    Elhaik, Eran; Tatarinova, Tatiana; Chebotarev, Dmitri; Piras, Ignazio S.; Maria Calò, Carla; De Montis, Antonella; Atzori, Manuela; Marini, Monica; Tofanelli, Sergio; Francalacci, Paolo; Pagani, Luca; Tyler-Smith, Chris; Xue, Yali; Cucca, Francesco; Schurr, Theodore G.; Gaieski, Jill B.; Melendez, Carlalynne; Vilar, Miguel G.; Owings, Amanda C.; Gómez, Rocío; Fujita, Ricardo; Santos, Fabrício R.; Comas, David; Balanovsky, Oleg; Balanovska, Elena; Zalloua, Pierre; Soodyall, Himla; Pitchappan, Ramasamy; GaneshPrasad, ArunKumar; Hammer, Michael; Matisoo-Smith, Lisa; Wells, R. Spencer; Acosta, Oscar; Adhikarla, Syama; Adler, Christina J.; Bertranpetit, Jaume; Clarke, Andrew C.; Cooper, Alan; Der Sarkissian, Clio S. I.; Haak, Wolfgang; Haber, Marc; Jin, Li; Kaplan, Matthew E.; Li, Hui; Li, Shilin; Martínez-Cruz, Begoña; Merchant, Nirav C.; Mitchell, John R.; Parida, Laxmi; Platt, Daniel E.; Quintana-Murci, Lluis; Renfrew, Colin; Lacerda, Daniela R.; Royyuru, Ajay K.; Sandoval, Jose Raul; Santhakumari, Arun Varatharajan; Soria Hernanz, David F.; Swamikrishnan, Pandikumar; Ziegle, Janet S.

    2014-01-01

    The search for a method that utilizes biological information to predict humans’ place of origin has occupied scientists for millennia. Over the past four decades, scientists have employed genetic data in an effort to achieve this goal but with limited success. While biogeographical algorithms using next-generation sequencing data have achieved an accuracy of 700 km in Europe, they were inaccurate elsewhere. Here we describe the Geographic Population Structure (GPS) algorithm and demonstrate its accuracy with three data sets using 40,000–130,000 SNPs. GPS placed 83% of worldwide individuals in their country of origin. Applied to over 200 Sardinian villagers, GPS placed a quarter of them in their villages and most of the rest within 50 km of their villages. GPS’s accuracy and power to infer the biogeography of worldwide individuals down to their country or, in some cases, village, of origin, underscores the promise of admixture-based methods for biogeography and has ramifications for genetic ancestry testing. PMID:24781250

  1. Double inflation: A possible resolution of the large-scale structure problem

    International Nuclear Information System (INIS)

    Turner, M.S.; Villumsen, J.V.; Vittorio, N.; Silk, J.; Juszkiewicz, R.

    1986-11-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Ω = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of ∼100 Mpc, while the small-scale structure over ≤ 10 Mpc resembles that in a low density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations. 38 refs., 6 figs

  2. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  3. A hierarchy of time-scales and the brain.

    Science.gov (United States)

    Kiebel, Stefan J; Daunizeau, Jean; Friston, Karl J

    2008-11-01

    In this paper, we suggest that cortical anatomy recapitulates the temporal hierarchy that is inherent in the dynamics of environmental states. Many aspects of brain function can be understood in terms of a hierarchy of temporal scales at which representations of the environment evolve. The lowest level of this hierarchy corresponds to fast fluctuations associated with sensory processing, whereas the highest levels encode slow contextual changes in the environment, under which faster representations unfold. First, we describe a mathematical model that exploits the temporal structure of fast sensory input to track the slower trajectories of their underlying causes. This model of sensory encoding or perceptual inference establishes a proof of concept that slowly changing neuronal states can encode the paths or trajectories of faster sensory states. We then review empirical evidence that suggests that a temporal hierarchy is recapitulated in the macroscopic organization of the cortex. This anatomic-temporal hierarchy provides a comprehensive framework for understanding cortical function: the specific time-scale that engages a cortical area can be inferred by its location along a rostro-caudal gradient, which reflects the anatomical distance from primary sensory areas. This is most evident in the prefrontal cortex, where complex functions can be explained as operations on representations of the environment that change slowly. The framework provides predictions about, and principled constraints on, cortical structure-function relationships, which can be tested by manipulating the time-scales of sensory input.

  4. A hierarchy of time-scales and the brain.

    Directory of Open Access Journals (Sweden)

    Stefan J Kiebel

    2008-11-01

    Full Text Available In this paper, we suggest that cortical anatomy recapitulates the temporal hierarchy that is inherent in the dynamics of environmental states. Many aspects of brain function can be understood in terms of a hierarchy of temporal scales at which representations of the environment evolve. The lowest level of this hierarchy corresponds to fast fluctuations associated with sensory processing, whereas the highest levels encode slow contextual changes in the environment, under which faster representations unfold. First, we describe a mathematical model that exploits the temporal structure of fast sensory input to track the slower trajectories of their underlying causes. This model of sensory encoding or perceptual inference establishes a proof of concept that slowly changing neuronal states can encode the paths or trajectories of faster sensory states. We then review empirical evidence that suggests that a temporal hierarchy is recapitulated in the macroscopic organization of the cortex. This anatomic-temporal hierarchy provides a comprehensive framework for understanding cortical function: the specific time-scale that engages a cortical area can be inferred by its location along a rostro-caudal gradient, which reflects the anatomical distance from primary sensory areas. This is most evident in the prefrontal cortex, where complex functions can be explained as operations on representations of the environment that change slowly. The framework provides predictions about, and principled constraints on, cortical structure-function relationships, which can be tested by manipulating the time-scales of sensory input.

  5. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2016-08-31

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  6. Decoupling local mechanics from large-scale structure in modular metamaterials

    Science.gov (United States)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  7. Large-scale structure observables in general relativity

    International Nuclear Information System (INIS)

    Jeong, Donghui; Schmidt, Fabian

    2015-01-01

    We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider (i) redshift perturbation of cosmic clock events; (ii) distortion of cosmic rulers, including weak lensing shear and magnification; and (iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann–Robertson–Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order. (paper)

  8. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    Science.gov (United States)

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
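
    The core of the kernel method is easy to sketch: the intensity of the point process is estimated by summing a kernel placed at each observed event time. The sketch below simulates a time-varying Poisson process by thinning and applies a Gaussian kernel with a fixed bandwidth; the paper's regression-based bandwidth selector is not reproduced here, and all parameter values are illustrative.

        # Sketch of kernel intensity estimation for event-time (Cox/Poisson process) data.
        import numpy as np

        rng = np.random.default_rng(2)

        def true_intensity(t):
            return 5.0 + 4.0 * np.sin(2.0 * np.pi * t / 25.0)

        # Simulate an inhomogeneous Poisson process on [0, T] by thinning.
        T, lam_max = 100.0, 10.0
        n_cand = rng.poisson(lam_max * T)
        candidates = np.sort(rng.uniform(0.0, T, n_cand))
        events = candidates[rng.uniform(0.0, lam_max, n_cand) < true_intensity(candidates)]

        def kernel_intensity(t, events, h=2.0):
            """lambda_hat(t) = sum_i K_h(t - t_i) with a Gaussian kernel of bandwidth h."""
            z = (t[:, None] - events[None, :]) / h
            return np.exp(-0.5 * z ** 2).sum(axis=1) / (h * np.sqrt(2.0 * np.pi))

        grid = np.linspace(0.0, T, 401)
        est = kernel_intensity(grid, events)
        i25 = np.argmin(np.abs(grid - 25.0))
        print(f"{events.size} events; intensity at t = 25: true {true_intensity(25.0):.2f}, estimated {est[i25]:.2f}")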

  9. Mathematical inference and control of molecular networks from perturbation experiments

    Science.gov (United States)

    Mohammed-Rasheed, Mohammed

    in order to affect the time evolution of molecular activity in a desirable manner. In this proposal, we address both the inference and control problems of GRNs. In the first part of the thesis, we consider the control problem. We assume that we are given a general topology network structure, whose dynamics follow a discrete-time Markov chain model. We subsequently develop a comprehensive framework for optimal perturbation control of the network. The aim of the perturbation is to drive the network away from undesirable steady-states and to force it to converge to a unique desirable steady-state. The proposed framework does not make any assumptions about the topology of the initial network (e.g., ergodicity, weak and strong connectivity), and is thus applicable to general topology networks. We define the optimal perturbation as the minimum-energy perturbation measured in terms of the Frobenius norm between the initial and perturbed networks. We subsequently demonstrate that there exists at most one optimal perturbation that forces the network into the desirable steady-state. In the event where the optimal perturbation does not exist, we construct a family of sub-optimal perturbations that approximate the optimal solution arbitrarily closely. In the second part of the thesis, we address the inference problem of GRNs from time series data. We model the dynamics of the molecules using a system of ordinary differential equations corrupted by additive white noise. For large-scale networks, we formulate the inference problem as a constrained maximum likelihood estimation problem. We derive the molecular interactions that maximize the likelihood function while constraining the network to be sparse. We further propose a procedure to recover weak interactions based on the Bayesian information criterion. For small-size networks, we investigated the inference of a globally stable 7-gene melanoma genetic regulatory network from genetic perturbation experiments. We considered five

  10. Implementing and analyzing the multi-threaded LP-inference

    Science.gov (United States)

    Bolotova, S. Yu; Trofimenko, E. V.; Leschinskaya, M. V.

    2018-03-01

    Logical production equations provide new possibilities for optimizing backward inference in intelligent production-type systems. The strategy of relevant backward inference is aimed at minimizing the number of queries to an external information source (either a database or an interactive user). The idea of the method is based on computing the set of initial preimages and searching for the true preimage. The execution of each stage can be organized independently and in parallel, and the actual work at a given stage can also be distributed between parallel computers. This paper is devoted to parallel algorithms for relevant inference based on an advanced “pipeline” scheme of parallel computation, which increases the degree of parallelism. The authors also provide some details of the LP-structures implementation.

  11. Classification versus inference learning contrasted with real-world categories.

    Science.gov (United States)

    Jones, Erin L; Ross, Brian H

    2011-07-01

    Categories are learned and used in a variety of ways, but the research focus has been on classification learning. Recent work contrasting classification with inference learning of categories found important later differences in category performance. However, theoretical accounts differ on whether this is due to an inherent difference between the tasks or to the implementation decisions. The inherent-difference explanation argues that inference learners focus on the internal structure of the categories--what each category is like--while classification learners focus on diagnostic information to predict category membership. In two experiments, using real-world categories and controlling for earlier methodological differences, inference learners learned more about what each category was like than did classification learners, as evidenced by higher performance on a novel classification test. These results suggest that there is an inherent difference between learning new categories by classifying an item versus inferring a feature.

  12. Entropic Inference

    Science.gov (United States)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme.
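
    The ME update has a simple computational core: the posterior is an exponentially tilted version of the prior, with the Lagrange multiplier fixed by the imposed expectation constraint. The sketch below works this out for the classic example of a die constrained to a given mean; the prior, constraint function and target value are illustrative.

        # Sketch of a Maximum relative Entropy (ME) update on a discrete space:
        # given prior q and constraint E_p[f] = F, the update is p_i ~ q_i * exp(lambda * f_i),
        # with lambda chosen so that the constraint holds.
        import numpy as np
        from scipy.optimize import brentq

        x = np.arange(1, 7)                     # a six-sided die
        q = np.full(6, 1.0 / 6.0)               # uniform prior
        f, F = x.astype(float), 4.5             # constrain the mean number of spots to 4.5

        def tilted(lam):
            w = q * np.exp(lam * f)
            return w / w.sum()

        def constraint_gap(lam):
            return tilted(lam) @ f - F

        lam_star = brentq(constraint_gap, -10.0, 10.0)   # solve E_p[f] = F for lambda
        p = tilted(lam_star)

        rel_entropy = np.sum(p * np.log(p / q))
        print("posterior:", np.round(p, 3),
              f"| mean = {p @ f:.2f}, lambda = {lam_star:.3f}, S[p|q] = {rel_entropy:.3f}")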

  13. Inferring properties of disordered chains from FRET transfer efficiencies

    Science.gov (United States)

    Zheng, Wenwei; Zerze, Gül H.; Borgia, Alessandro; Mittal, Jeetain; Schuler, Benjamin; Best, Robert B.

    2018-03-01

    Förster resonance energy transfer (FRET) is a powerful tool for elucidating both structural and dynamic properties of unfolded or disordered biomolecules, especially in single-molecule experiments. However, the key observables, namely, the mean transfer efficiency and fluorescence lifetimes of the donor and acceptor chromophores, are averaged over a broad distribution of donor-acceptor distances. The inferred average properties of the ensemble therefore depend on the form of the model distribution chosen to describe the distance, as has been widely recognized. In addition, while the distribution for one type of polymer model may be appropriate for a chain under a given set of physico-chemical conditions, it may not be suitable for the same chain in a different environment so that even an apparently consistent application of the same model over all conditions may distort the apparent changes in chain dimensions with variation of temperature or solution composition. Here, we present an alternative and straightforward approach to determining ensemble properties from FRET data, in which the polymer scaling exponent is allowed to vary with solution conditions. In its simplest form, it requires either the mean FRET efficiency or fluorescence lifetime information. In order to test the accuracy of the method, we have utilized both synthetic FRET data from implicit and explicit solvent simulations for 30 different protein sequences, and experimental single-molecule FRET data for an intrinsically disordered and a denatured protein. In all cases, we find that the inferred radii of gyration are within 10% of the true values, thus providing higher accuracy than simpler polymer models. In addition, the scaling exponents obtained by our procedure are in good agreement with those determined directly from the molecular ensemble. Our approach can in principle be generalized to treating other ensemble-averaged functions of intramolecular distances from experimental data.
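
    The forward relation between chain dimensions and the mean transfer efficiency, and its numerical inversion, can be sketched as follows. For simplicity the sketch assumes a Gaussian-chain distance distribution rather than the variable-scaling-exponent distribution proposed in the paper, and the Förster radius and measured efficiency are illustrative values.

        # Sketch: relate a mean FRET efficiency to chain dimensions via
        # <E> = integral P(r) E(r) dr with E(r) = 1 / (1 + (r/R0)^6),
        # using a Gaussian-chain P(r) as a simplifying assumption.
        import numpy as np
        from scipy.optimize import brentq

        R0 = 5.4        # Foerster radius (nm), illustrative
        E_obs = 0.55    # measured mean transfer efficiency, illustrative

        r = np.linspace(1e-3, 40.0, 4000)
        dr = r[1] - r[0]

        def mean_efficiency(r2_mean):
            p = r ** 2 * np.exp(-1.5 * r ** 2 / r2_mean)      # Gaussian-chain end-to-end distribution
            p /= p.sum() * dr
            eff = 1.0 / (1.0 + (r / R0) ** 6)
            return np.sum(p * eff) * dr

        # Invert <E> for the mean-square donor-acceptor distance.
        r2_star = brentq(lambda r2: mean_efficiency(r2) - E_obs, 1.0, 1000.0)
        print(f"rms distance = {np.sqrt(r2_star):.2f} nm, "
              f"radius of gyration ~ {np.sqrt(r2_star / 6.0):.2f} nm (Gaussian-chain relation)")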

  14. Large-scale inference of gene function through phylogenetic annotation of Gene Ontology terms: case study of the apoptosis and autophagy cellular processes.

    Science.gov (United States)

    Feuermann, Marc; Gaudet, Pascale; Mi, Huaiyu; Lewis, Suzanna E; Thomas, Paul D

    2016-01-01

    We previously reported a paradigm for large-scale phylogenomic analysis of gene families that takes advantage of the large corpus of experimentally supported Gene Ontology (GO) annotations. This 'GO Phylogenetic Annotation' approach integrates GO annotations from evolutionarily related genes across ∼100 different organisms in the context of a gene family tree, in which curators build an explicit model of the evolution of gene functions. GO Phylogenetic Annotation models the gain and loss of functions in a gene family tree, which is used to infer the functions of uncharacterized (or incompletely characterized) gene products, even for human proteins that are relatively well studied. Here, we report our results from applying this paradigm to two well-characterized cellular processes, apoptosis and autophagy. This revealed several important observations with respect to GO annotations and how they can be used for function inference. Notably, we applied only a small fraction of the experimentally supported GO annotations to infer function in other family members. The majority of other annotations describe indirect effects, phenotypes or results from high throughput experiments. In addition, we show here how feedback from phylogenetic annotation leads to significant improvements in the PANTHER trees, the GO annotations and GO itself. Thus GO phylogenetic annotation both increases the quantity and improves the accuracy of the GO annotations provided to the research community. We expect these phylogenetically based annotations to be of broad use in gene enrichment analysis as well as other applications of GO annotations.Database URL: http://amigo.geneontology.org/amigo. © The Author(s) 2016. Published by Oxford University Press.

  15. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation.

    Science.gov (United States)

    Yang, Ye; Christensen, Ole F; Sorensen, Daniel

    2011-02-01

    Over recent years, statistical support for the presence of genetic factors operating at the level of the environmental variance has come from fitting a genetically structured heterogeneous variance model to field or experimental data in various species. Misleading results may arise due to skewness of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box-Cox transformations. Litter size data in rabbits and pigs that had previously been analysed in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box-Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected by the presence of asymmetry in the distribution of data. We recommend that to avoid one important source of spurious inferences, future work seeking support for a genetic component acting on environmental variation using a parametric approach based on normality assumptions confirms that these are met.
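
    As a stand-alone illustration of the role of the transformation, the sketch below selects a Box-Cox parameter for skewed data by maximizing the standard profile log-likelihood; in the paper the parameter is instead inferred jointly within the genetically structured model, so this is only a simplified analogue with synthetic data.

        # Sketch: select a Box-Cox transformation parameter by profile log-likelihood.
        import numpy as np

        rng = np.random.default_rng(3)
        y = rng.lognormal(mean=2.0, sigma=0.6, size=500)       # positively skewed synthetic data

        def boxcox(y, lam):
            return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

        def profile_loglik(y, lam):
            z = boxcox(y, lam)
            n = y.size
            # Normal log-likelihood of the transformed data plus the Jacobian of the transform.
            return -0.5 * n * np.log(np.var(z)) + (lam - 1.0) * np.sum(np.log(y))

        lams = np.linspace(-1.0, 1.5, 251)
        ll = np.array([profile_loglik(y, l) for l in lams])
        lam_hat = lams[np.argmax(ll)]
        print(f"estimated Box-Cox parameter: {lam_hat:.2f} (0 corresponds to a log transform)")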

  16. Inference of reactive transport model parameters using a Bayesian multivariate approach

    Science.gov (United States)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.

  17. High genetic diversity and fine-scale spatial structure in the marine flagellate Oxyrrhis marina (Dinophyceae) uncovered by microsatellite loci.

    Directory of Open Access Journals (Sweden)

    Chris D Lowe

    2010-12-01

    Free-living marine protists are often assumed to be broadly distributed and genetically homogeneous on large spatial scales. However, an increasing application of highly polymorphic genetic markers (e.g., microsatellites) has provided evidence for high genetic diversity and population structuring on small spatial scales in many free-living protists. Here we characterise a panel of new microsatellite markers for the common marine flagellate Oxyrrhis marina. Nine microsatellite loci were used to assess genotypic diversity at two spatial scales by genotyping 200 isolates of O. marina from 6 broad geographic regions around Great Britain and Ireland; in one region, a single 2 km shore line was sampled intensively to assess fine-scale genetic diversity. Microsatellite loci resolved between 1-6 and 7-23 distinct alleles per region in the least and most variable loci respectively, with corresponding variation in expected heterozygosities, H(e), of 0.00-0.30 and 0.81-0.93. Across the dataset, genotypic diversity was high with 183 genotypes detected from 200 isolates. Bayesian analysis of population structure supported two model populations. One population was distributed across all sampled regions; the other was confined to the intensively sampled shore, and thus two distinct populations co-occurred at this site. Whilst model-based analysis inferred a single UK-wide population, pairwise regional F(ST) values indicated weak to moderate population sub-division (0.01-0.12), but no clear correlation between spatial and genetic distance was evident. Data presented in this study highlight extensive genetic diversity for O. marina; however, it remains a substantial challenge to uncover the mechanisms that drive genetic diversity in free-living microorganisms.

  18. Some Statistics for Measuring Large-Scale Structure

    OpenAIRE

    Brandenberger, Robert H.; Kaplan, David M.; Ramsey, Stephen A.

    1993-01-01

    Good statistics for measuring large-scale structure in the Universe must be able to distinguish between different models of structure formation. In this paper, two and three dimensional "counts in cell" statistics and a new "discrete genus statistic" are applied to toy versions of several popular theories of structure formation: random phase cold dark matter model, cosmic string models, and global texture scenario. All three statistics appear quite promising in terms of differentiating betw...
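
    A counts-in-cells statistic is straightforward to compute: the point set is binned into cells of a given size and the excess variance of the counts (zero for a Poisson distribution) measures clustering at that scale. The sketch below compares a synthetic clustered catalogue with a Poisson one; the toy catalogues are illustrative, not the structure-formation models analyzed in the paper.

        # Sketch of a 2D counts-in-cells statistic: excess variance of cell counts vs cell size.
        import numpy as np

        rng = np.random.default_rng(7)
        box = 1.0

        # Clustered points: Gaussian blobs around random "halo" centres, plus a Poisson catalogue.
        centres = rng.uniform(0.0, box, size=(60, 2))
        clustered = (centres[rng.integers(0, 60, 3000)] + 0.02 * rng.normal(size=(3000, 2))) % box
        poisson = rng.uniform(0.0, box, size=(3000, 2))

        def excess_variance(points, n_cells):
            counts, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                          bins=n_cells, range=[[0.0, box], [0.0, box]])
            mean = counts.mean()
            return (counts.var() - mean) / mean ** 2     # 0 for a Poisson distribution

        for n_cells in (4, 8, 16, 32):
            print(f"cell size {box / n_cells:.3f}: clustered {excess_variance(clustered, n_cells):+.3f}, "
                  f"Poisson {excess_variance(poisson, n_cells):+.3f}")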

  19. Probing cosmology with the homogeneity scale of the Universe through large scale structure surveys

    International Nuclear Information System (INIS)

    Ntelis, Pierros

    2017-01-01

    . It is thus possible to reconstruct the distribution of matter in 3 dimensions in gigantic volumes. We can then extract various statistical observables to measure the BAO scale and the scale of homogeneity of the universe. Using Data Release 12 CMASS galaxy catalogs, we obtained a precision on the homogeneity scale improved by a factor of 5 compared to the WiggleZ measurement. At large scales, the universe is remarkably well described in linear order by the ΛCDM-model, the standard model of cosmology. In general, it is not necessary to take into account the nonlinear effects which complicate the model at small scales. On the other hand, at large scales, the measurement of our observables becomes very sensitive to systematic effects. This is particularly true for the analysis of cosmic homogeneity, which requires an observational method so as not to bias the measurement. In order to study the homogeneity principle in a model independent way, we explore a new way to infer distances using cosmic clocks and type Ia Supernovae. This establishes the Cosmological Principle using only a small number of a priori assumptions, i.e. the theory of General Relativity and astrophysical assumptions that are independent of Friedmann Universes and, by extension, of the homogeneity assumption. This manuscript is organized as follows. After a short presentation of the knowledge in cosmology necessary for the understanding of this manuscript, presented in Chapter 1, Chapter 2 deals with the challenges of the Cosmological Principle as well as how to overcome them. In Chapter 3, we discuss the technical characteristics of large scale structure surveys, focusing in particular on the BOSS and eBOSS galaxy surveys. Chapter 4 presents the detailed analysis of the measurement of cosmic homogeneity and the various systematic effects likely to impact our observables. Chapter 5 discusses how to use cosmic homogeneity as a standard ruler to constrain dark energy models from current and future surveys. In

  20. Inference of financial networks using the normalised mutual information rate

    Science.gov (United States)

    2018-01-01

    In this paper, we study data from financial markets, using the normalised Mutual Information Rate. We show how to use it to infer the underlying network structure of interrelations in the foreign currency exchange rates and stock indices of 15 currency areas. We first present the mathematical method and discuss its computational aspects, and apply it to artificial data from chaotic dynamics and to correlated normal-variates data. We then apply the method to infer the structure of the financial system from the time-series of currency exchange rates and stock indices. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, of which we also study their structural properties. Our results show that both inferred networks are small-world networks, sharing similar properties and having differences in terms of assortativity. Importantly, our work shows that global economies tend to connect with other economies world-wide, rather than creating small groups of local economies. Finally, the consistent interrelations depicted among the 15 currency areas are further supported by a discussion from the viewpoint of economics. PMID:29420644

  1. Inference of financial networks using the normalised mutual information rate.

    Science.gov (United States)

    Goh, Yong Kheng; Hasim, Haslifah M; Antonopoulos, Chris G

    2018-01-01

    In this paper, we study data from financial markets, using the normalised Mutual Information Rate. We show how to use it to infer the underlying network structure of interrelations in the foreign currency exchange rates and stock indices of 15 currency areas. We first present the mathematical method and discuss its computational aspects, and apply it to artificial data from chaotic dynamics and to correlated normal-variates data. We then apply the method to infer the structure of the financial system from the time-series of currency exchange rates and stock indices. In particular, we study and reveal the interrelations among the various foreign currency exchange rates and stock indices in two separate networks, of which we also study their structural properties. Our results show that both inferred networks are small-world networks, sharing similar properties and having differences in terms of assortativity. Importantly, our work shows that global economies tend to connect with other economies world-wide, rather than creating small groups of local economies. Finally, the consistent interrelations depicted among the 15 currency areas are further supported by a discussion from the viewpoint of economics.
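
    The network-construction step can be illustrated with plain pairwise mutual information estimated from binned returns and thresholded into an adjacency matrix. Note that the paper uses the normalised Mutual Information Rate, which additionally accounts for temporal structure, so the sketch below (with synthetic correlated returns) only illustrates the general pipeline; the factor structure and threshold are illustrative.

        # Sketch: infer a financial network from pairwise mutual information of binned returns.
        import numpy as np

        rng = np.random.default_rng(4)
        n_assets, n_days = 8, 2000
        common = rng.normal(size=n_days)
        returns = 0.6 * common + 0.8 * rng.normal(size=(n_assets, n_days))   # shared "market" factor
        returns[:4] += 0.5 * rng.normal(size=n_days)                          # extra factor for a sub-group

        def mutual_information(x, y, bins=16):
            """Histogram-based mutual information (in nats) between two series."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        mi = np.zeros((n_assets, n_assets))
        for i in range(n_assets):
            for j in range(i + 1, n_assets):
                mi[i, j] = mi[j, i] = mutual_information(returns[i], returns[j])

        adjacency = mi > np.percentile(mi[mi > 0], 70)        # keep the strongest 30% of links
        print("inferred links per node:", adjacency.sum(axis=1))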

  2. SELECTION OF SCALE OF PICTURE OF STRUCTURE FOR ITS MULTIFRACTAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    VOLCHUK V. N.

    2015-11-01

    Problem statement. Each scale level reveals new features of the structure of the material that describe its quality. For example, features of the grain structure are revealed in different kinds of steel at the microstructure level, and their parameters greatly influence the strength properties of the metal. Thus, to select the scale of representation of a fractal object, for instance the structural elements of roll iron or steel, it is necessary to determine the interval (1) over which its self-similarity is observed, and within this interval to select the scale that allows an adequate fractal dimension to be chosen. The optimal scale of representation of the structure is taken to be the one at which the difference in fractal dimension between at least two adjacent points of the series (2) is minimal. This is explained by the fact that the property of structural self-similarity is best observed at this scale. An example of selecting the scale of representation of the structure of cast iron rolls of the SPHN (a) and SSHN (b) types is shown over magnifications ranging from ×100 to ×1000 with a predetermined step Δl = 100. The implementation of this phase of the research made it possible to determine experimentally the optimal scale of representation of the iron roll structure, a magnification of ×200, for multifractal analysis of its elements: plate and nodular graphite inclusions and carbides. Purpose. To determine the optimal scale of structure representation of the iron roll for multifractal analysis of its elements: plate and nodular graphite inclusions and carbides. Conclusion. It was found that the fractal dimensions of the structural elements studied varied within the experimental error of 5÷7%, which testifies to the universality of this assessment, and therefore to its reliability and economic benefit with respect to equipping laboratories with expensive, higher-resolution metallurgical microscopes.
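
    The fractal dimension whose stability across magnifications motivates the choice of scale can be estimated, for a binary structure image, with a simple box-counting procedure such as the sketch below; the synthetic blob image stands in for a micrograph and the box sizes are illustrative.

        # Sketch of a box-counting estimate of the fractal dimension of a binary structure image.
        import numpy as np

        rng = np.random.default_rng(5)
        img = np.zeros((512, 512), dtype=bool)
        yy, xx = np.ogrid[:512, :512]
        for _ in range(60):                                   # scatter irregular blobs as a toy "micrograph"
            cx, cy = rng.integers(0, 512, 2)
            rad = rng.integers(4, 20)
            img |= (xx - cx) ** 2 + (yy - cy) ** 2 < rad ** 2

        def box_count(img, size):
            """Number of boxes of the given size containing at least one foreground pixel."""
            s = (img.shape[0] // size) * size
            blocks = img[:s, :s].reshape(s // size, size, s // size, size)
            return np.count_nonzero(blocks.any(axis=(1, 3)))

        sizes = np.array([2, 4, 8, 16, 32, 64])
        counts = np.array([box_count(img, s) for s in sizes])
        slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)   # slope of log N vs log(1/size)
        print(f"box-counting dimension ~ {slope:.2f}")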

  3. Small scale structure formation in chameleon cosmology

    International Nuclear Information System (INIS)

    Brax, Ph.; Bruck, C. van de; Davis, A.C.; Green, A.M.

    2006-01-01

    Chameleon fields are scalar fields whose mass depends on the ambient matter density. We investigate the effects of these fields on the growth of density perturbations on sub-galactic scales and the formation of the first dark matter halos. Density perturbations on comoving scales R<1 pc go non-linear and collapse to form structure much earlier than in standard ΛCDM cosmology. The resulting mini-halos are hence more dense and resilient to disruption. We therefore expect (provided that the density perturbations on these scales have not been erased by damping processes) that the dark matter distribution on small scales would be more clumpy in chameleon cosmology than in the ΛCDM model

  4. Hierarchical Cantor set in the large scale structure with torus geometry

    Energy Technology Data Exchange (ETDEWEB)

    Murdzek, R. [Physics Department, ' Al. I. Cuza' University, Blvd. Carol I, Nr. 11, Iassy 700506 (Romania)], E-mail: rmurdzek@yahoo.com

    2008-12-15

    The formation of large scale structures is considered within a model with string on toroidal space-time. Firstly, the space-time geometry is presented. In this geometry, the Universe is represented by a string describing a torus surface. Thereafter, the large scale structure of the Universe is derived from the string oscillations. The results are in agreement with the cellular structure of the large scale distribution and with the theory of a Cantorian space-time.

  5. More than one kind of inference: re-examining what's learned in feature inference and classification.

    Science.gov (United States)

    Sweller, Naomi; Hayes, Brett K

    2010-08-01

    Three studies examined how task demands that impact on attention to typical or atypical category features shape the category representations formed through classification learning and inference learning. During training categories were learned via exemplar classification or by inferring missing exemplar features. In the latter condition inferences were made about missing typical features alone (typical feature inference) or about both missing typical and atypical features (mixed feature inference). Classification and mixed feature inference led to the incorporation of typical and atypical features into category representations, with both kinds of features influencing inferences about familiar (Experiments 1 and 2) and novel (Experiment 3) test items. Those in the typical inference condition focused primarily on typical features. Together with formal modelling, these results challenge previous accounts that have characterized inference learning as producing a focus on typical category features. The results show that two different kinds of inference learning are possible and that these are subserved by different kinds of category representations.

  6. Perceptual inference.

    Science.gov (United States)

    Aggelopoulos, Nikolaos C

    2015-08-01

    Perceptual inference refers to the ability to infer sensory stimuli from predictions that result from internal neural representations built through prior experience. Methods of Bayesian statistical inference and decision theory model cognition adequately by using error sensing either in guiding action or in "generative" models that predict the sensory information. In this framework, perception can be seen as a process qualitatively distinct from sensation, a process of information evaluation using previously acquired and stored representations (memories) that is guided by sensory feedback. The stored representations can be utilised as internal models of sensory stimuli enabling long term associations, for example in operant conditioning. Evidence for perceptual inference is contributed by such phenomena as the cortical co-localisation of object perception with object memory, the response invariance in the responses of some neurons to variations in the stimulus, as well as from situations in which perception can be dissociated from sensation. In the context of perceptual inference, sensory areas of the cerebral cortex that have been facilitated by a priming signal may be regarded as comparators in a closed feedback loop, similar to the better known motor reflexes in the sensorimotor system. The adult cerebral cortex can be regarded as similar to a servomechanism, in using sensory feedback to correct internal models, producing predictions of the outside world on the basis of past experience. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. SEMANTIC PATCH INFERENCE

    DEFF Research Database (Denmark)

    Andersen, Jesper

    2009-01-01

    Collateral evolution is the problem of updating several library-using programs in response to API changes in the library they use. In this dissertation we address the issue of understanding collateral evolutions by automatically inferring a high-level specification of the changes evident in a given set ...... specifications inferred by spdiff in Linux are shown. We find that the inferred specifications concisely capture the actual collateral evolution performed in the examples.

  8. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
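
    The record above describes the strategy at a high level. Its core idea can be illustrated compactly: treat the sparsity pattern of the Hamiltonian as a graph, and assemble the density matrix from independent diagonalizations of small core-plus-halo submatrices defined by that graph. The Python sketch below is only a toy illustration of this idea under strong simplifying assumptions (zero temperature, a global chemical potential taken from a full reference diagonalization, one core vertex per partition); it is not the authors' implementation.

      import numpy as np

      def density_matrix_graph_partitioned(H, n_occ, threshold=1e-3):
          """Toy graph-partitioned density matrix: one core vertex per partition,
          with its halo given by the sparsity graph of the Hamiltonian H."""
          n = H.shape[0]
          adjacency = np.abs(H) > threshold            # sparsity graph of H
          eigvals = np.linalg.eigvalsh(H)              # reference spectrum, used only for mu
          mu = 0.5 * (eigvals[n_occ - 1] + eigvals[n_occ])
          D = np.zeros_like(H)
          for i in range(n):
              part = np.where(adjacency[i])[0]         # core vertex i plus its halo
              w, V = np.linalg.eigh(H[np.ix_(part, part)])
              occ = (w < mu).astype(float)             # zero-temperature occupations
              D_local = (V * occ) @ V.T
              local_i = int(np.where(part == i)[0][0])
              D[i, part] = D_local[local_i]            # keep only the core row
          return 0.5 * (D + D.T)                       # symmetrize the assembled estimate

      # Minimal usage: a banded toy Hamiltonian with 5 occupied states.
      H = np.diag(np.linspace(-1.0, 1.0, 10)) + 0.1 * np.eye(10, k=1) + 0.1 * np.eye(10, k=-1)
      print(np.trace(density_matrix_graph_partitioned(H, n_occ=5)))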

  9. Approximate Inference and Deep Generative Models

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Advances in deep generative models are at the forefront of deep learning research because of the promise they offer for allowing data-efficient learning, and for model-based reinforcement learning. In this talk I'll review a few standard methods for approximate inference and introduce modern approximations which allow for efficient large-scale training of a wide variety of generative models. Finally, I'll demonstrate several important applications of these models to density estimation, missing data imputation, data compression and planning.

  10. Detecting Multi-scale Structures in Chandra Images of Centaurus A

    Science.gov (United States)

    Karovska, M.; Fabbiano, G.; Elvis, M. S.; Evans, I. N.; Kim, D. W.; Prestwich, A. H.; Schwartz, D. A.; Murray, S. S.; Forman, W.; Jones, C.; Kraft, R. P.; Isobe, T.; Cui, W.; Schreier, E. J.

    1999-12-01

    Centaurus A (NGC 5128) is a giant early-type galaxy with a merger history, containing the nearest radio-bright AGN. Recent Chandra High Resolution Camera (HRC) observations of Cen A reveal X-ray multi-scale structures in this object with unprecedented detail and clarity. We show the results of an analysis of the Chandra data with smoothing and edge enhancement techniques that allow us to enhance and quantify the multi-scale structures present in the HRC images. These techniques include an adaptive smoothing algorithm (Ebeling et al 1999), and a multi-directional gradient detection algorithm (Karovska et al 1994). The Ebeling et al adaptive smoothing algorithm, which is incorporated in the CXC analysis software package, is a powerful tool for smoothing images containing complex structures at various spatial scales. The adaptively smoothed images of Centaurus A show simultaneously the high-angular resolution bright structures at scales as small as an arcsecond and the extended faint structures as large as several arcminutes. The large scale structures suggest complex symmetry, including a component possibly associated with the inner radio lobes (as suggested by the ROSAT HRI data, Dobereiner et al 1996), and a separate component with an orthogonal symmetry that may be associated with the galaxy as a whole. The dust lane and the X-ray ridges are very clearly visible. The adaptively smoothed images and the edge-enhanced images also suggest several filamentary features including a large filament-like structure extending as far as about 5 arcminutes to the north-west.

  11. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    Full Text Available A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing will be involved in this framework: multiple local distributed computing environments connected by local network to form a grid-based cluster-to-cluster distributed computing environment. To successfully perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters and connected together in a multi-level hierarchy and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to perform the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable to perform the simulation of the multi-scale structural analysis.

  12. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  13. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  14. Spatial structure and scaling of macropores in hydrological process at small catchment scale

    Science.gov (United States)

    Silasari, Rasmiaditya; Broer, Martine; Blöschl, Günter

    2013-04-01

    During rainfall events, overland flow can form under conditions of saturation excess and/or infiltration excess. These conditions are affected by the soil moisture state, which represents the soil water content in micropores and macropores. Macropores act as pathways for preferential flow and have been widely studied locally. However, very little is known about the spatial structure and conductivity of macropores and other flow characteristics at the catchment scale. This study will analyze these characteristics to better understand their importance in hydrological processes. The research will be conducted in the Petzenkirchen Hydrological Open Air Laboratory (HOAL), a 64 ha catchment located 100 km west of Vienna. The land use is divided between arable land (87%), pasture (5%), forest (6%) and paved surfaces (2%). Video cameras will be installed on an agricultural field to monitor the overland flow pattern during rainfall events. A wireless soil moisture network is also installed within the monitored area. These field data will be combined to analyze the soil moisture state and the corresponding surface runoff occurrence. The variability of the macropore spatial structure of the observed area (field scale) will then be assessed based on topography and soil data. Soil characteristics will be supported with laboratory experiments on soil matrix flow to obtain proper definitions of the spatial structure of macropores and its variability. A coupled, physically based, distributed model of surface and subsurface flow will be used to simulate the variability of the macropore spatial structure and its effect on flow behaviour. This model will be validated by simulating the observed rainfall events. Upscaling from field scale to catchment scale will be done to understand the effect of macropore variability on larger scales by applying spatial stochastic methods. The first phase of this study is the installation and monitoring configuration of video cameras.

  15. Large-scale structures in turbulent Couette flow

    Science.gov (United States)

    Kim, Jung Hoon; Lee, Jae Hwa

    2016-11-01

    Direct numerical simulation of fully developed turbulent Couette flow is performed with a large computational domain in the streamwise and spanwise directions (40 πh and 6 πh) to investigate streamwise-scale growth mechanism of the streamwise velocity fluctuating structures in the core region, where h is the channel half height. It is shown that long streamwise-scale structures (> 3 h) are highly energetic and they contribute to more than 80% of the turbulent kinetic energy and Reynolds shear stress, compared to previous studies in canonical Poiseuille flows. Instantaneous and statistical analysis show that negative-u' structures on the bottom wall in the Couette flow continuously grow in the streamwise direction due to mean shear, and they penetrate to the opposite moving wall. The geometric center of the log layer is observed in the centerline with a dominant outer peak in streamwise spectrum, and the maximum streamwise extent for structure is found in the centerline, similar to previous observation in turbulent Poiseuille flows at high Reynolds number. Further inspection of time-evolving instantaneous fields clearly exhibits that adjacent long structures combine to form a longer structure in the centerline. This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2014R1A1A2057031).

  16. Genetic structure of earthworm populations at a regional scale: inferences from mitochondrial and microsatellite molecular markers in Aporrectodea icterica (Savigny 1826).

    Directory of Open Access Journals (Sweden)

    Magally Torres-Leguizamon

    Full Text Available Despite the fundamental role that soil invertebrates (e.g. earthworms) play in soil ecosystems, the magnitude of their spatial genetic variation is still largely unknown and only a few studies have investigated the population genetic structure of these organisms. Here, we investigated the genetic structure of seven populations of a common endogeic earthworm (Aporrectodea icterica) sampled in northern France to explore how historical species range changes, microevolutionary processes and human activities interact in shaping genetic variation at a regional scale. Because combining markers with distinct modes of inheritance can provide extra, complementary information on gene flow, we compared the patterns of genetic structure revealed using nuclear (7 microsatellite loci) and mitochondrial markers (COI). Both types of markers indicated low genetic polymorphism compared to other earthworm species, a result that can be attributed to ancient bottlenecks, for instance due to species isolation in southern refugia during the ice ages with subsequent expansion toward northern Europe. Historical events can also be responsible for the existence of two divergent, but randomly interbreeding mitochondrial lineages within all study populations. In addition, the comparison of observed heterozygosity among microsatellite loci and heterozygosity expected under mutation-drift equilibrium suggested a recent decrease in effective size in some populations that could be due to contemporary events such as habitat fragmentation. The absence of a relationship between geographic and genetic distances estimated from microsatellite allele frequency data also suggested that dispersal is haphazard and that human activities favour passive dispersal among geographically distant populations.

  17. Inferring Population Genetic Structure in Widely and Continuously Distributed Carnivores: The Stone Marten (Martes foina) as a Case Study.

    Science.gov (United States)

    Vergara, María; Basto, Mafalda P; Madeira, María José; Gómez-Moliner, Benjamín J; Santos-Reis, Margarida; Fernandes, Carlos; Ruiz-González, Aritz

    2015-01-01

    The stone marten is a widely distributed mustelid in the Palaearctic region that exhibits variable habitat preferences in different parts of its range. The species is a Holocene immigrant from southwest Asia which, according to fossil remains, followed the expansion of the Neolithic farming cultures into Europe and possibly colonized the Iberian Peninsula during the Early Neolithic (ca. 7,000 years BP). However, the population genetic structure and historical biogeography of this generalist carnivore remains essentially unknown. In this study we have combined mitochondrial DNA (mtDNA) sequencing (621 bp) and microsatellite genotyping (23 polymorphic markers) to infer the population genetic structure of the stone marten within the Iberian Peninsula. The mtDNA data revealed low haplotype and nucleotide diversities and a lack of phylogeographic structure, most likely due to a recent colonization of the Iberian Peninsula by a few mtDNA lineages during the Early Neolithic. The microsatellite data set was analysed with a) spatial and non-spatial Bayesian individual-based clustering (IBC) approaches (STRUCTURE, TESS, BAPS and GENELAND), and b) multivariate methods [discriminant analysis of principal components (DAPC) and spatial principal component analysis (sPCA)]. Additionally, because isolation by distance (IBD) is a common spatial genetic pattern in mobile and continuously distributed species and it may represent a challenge to the performance of the above methods, the microsatellite data set was tested for its presence. Overall, the genetic structure of the stone marten in the Iberian Peninsula was characterized by a NE-SW spatial pattern of IBD, and this may explain the observed disagreement between clustering solutions obtained by the different IBC methods. However, there was significant indication for contemporary genetic structuring, albeit weak, into at least three different subpopulations. The detected subdivision could be attributed to the influence of the

  18. Inferring Population Genetic Structure in Widely and Continuously Distributed Carnivores: The Stone Marten (Martes foina as a Case Study.

    Directory of Open Access Journals (Sweden)

    María Vergara

    Full Text Available The stone marten is a widely distributed mustelid in the Palaearctic region that exhibits variable habitat preferences in different parts of its range. The species is a Holocene immigrant from southwest Asia which, according to fossil remains, followed the expansion of the Neolithic farming cultures into Europe and possibly colonized the Iberian Peninsula during the Early Neolithic (ca. 7,000 years BP). However, the population genetic structure and historical biogeography of this generalist carnivore remains essentially unknown. In this study we have combined mitochondrial DNA (mtDNA) sequencing (621 bp) and microsatellite genotyping (23 polymorphic markers) to infer the population genetic structure of the stone marten within the Iberian Peninsula. The mtDNA data revealed low haplotype and nucleotide diversities and a lack of phylogeographic structure, most likely due to a recent colonization of the Iberian Peninsula by a few mtDNA lineages during the Early Neolithic. The microsatellite data set was analysed with a) spatial and non-spatial Bayesian individual-based clustering (IBC) approaches (STRUCTURE, TESS, BAPS and GENELAND), and b) multivariate methods [discriminant analysis of principal components (DAPC) and spatial principal component analysis (sPCA)]. Additionally, because isolation by distance (IBD) is a common spatial genetic pattern in mobile and continuously distributed species and it may represent a challenge to the performance of the above methods, the microsatellite data set was tested for its presence. Overall, the genetic structure of the stone marten in the Iberian Peninsula was characterized by a NE-SW spatial pattern of IBD, and this may explain the observed disagreement between clustering solutions obtained by the different IBC methods. However, there was significant indication for contemporary genetic structuring, albeit weak, into at least three different subpopulations. The detected subdivision could be attributed to the influence

  19. Inference of segmented color and texture description by tensor voting.

    Science.gov (United States)

    Jia, Jiaya; Tang, Chi-Keung

    2004-06-01

    A robust synthesis method is proposed to automatically infer missing color and texture information from a damaged 2D image by (N)D tensor voting (N > 3). The same approach is generalized to range and 3D data in the presence of occlusion, missing data and noise. Our method translates texture information into an adaptive (N)D tensor, followed by a voting process that infers noniteratively the optimal color values in the (N)D texture space. A two-step method is proposed. First, we perform segmentation based on insufficient geometry, color, and texture information in the input, and extrapolate partitioning boundaries by either 2D or 3D tensor voting to generate a complete segmentation for the input. Missing colors are synthesized using (N)D tensor voting in each segment. Different feature scales in the input are automatically adapted by our tensor scale analysis. Results on a variety of difficult inputs demonstrate the effectiveness of our tensor voting approach.

  20. Emergence of scale-free close-knit friendship structure in online social networks.

    Directory of Open Access Journals (Sweden)

    Ai-Xiang Cui

    Full Text Available Although the structural properties of online social networks have attracted much attention, the properties of the close-knit friendship structures remain an important question. Here, we mainly focus on how these mesoscale structures are affected by the local and global structural properties. Analyzing the data of four large-scale online social networks reveals several common structural properties. It is found that not only the local structures given by the indegree, outdegree, and reciprocal degree distributions follow a similar scaling behavior, the mesoscale structures represented by the distributions of close-knit friendship structures also exhibit a similar scaling law. The degree correlation is very weak over a wide range of the degrees. We propose a simple directed network model that captures the observed properties. The model incorporates two mechanisms: reciprocation and preferential attachment. Through rate equation analysis of our model, the local-scale and mesoscale structural properties are derived. In the local-scale, the same scaling behavior of indegree and outdegree distributions stems from indegree and outdegree of nodes both growing as the same function of the introduction time, and the reciprocal degree distribution also shows the same power-law due to the linear relationship between the reciprocal degree and in/outdegree of nodes. In the mesoscale, the distributions of four closed triples representing close-knit friendship structures are found to exhibit identical power-laws, a behavior attributed to the negligible degree correlations. Intriguingly, all the power-law exponents of the distributions in the local-scale and mesoscale depend only on one global parameter, the mean in/outdegree, while both the mean in/outdegree and the reciprocity together determine the ratio of the reciprocal degree of a node to its in/outdegree. Structural properties of numerical simulated networks are analyzed and compared with each of the four

  1. Emergence of scale-free close-knit friendship structure in online social networks.

    Science.gov (United States)

    Cui, Ai-Xiang; Zhang, Zi-Ke; Tang, Ming; Hui, Pak Ming; Fu, Yan

    2012-01-01

    Although the structural properties of online social networks have attracted much attention, the properties of the close-knit friendship structures remain an important question. Here, we mainly focus on how these mesoscale structures are affected by the local and global structural properties. Analyzing the data of four large-scale online social networks reveals several common structural properties. It is found that not only the local structures given by the indegree, outdegree, and reciprocal degree distributions follow a similar scaling behavior, the mesoscale structures represented by the distributions of close-knit friendship structures also exhibit a similar scaling law. The degree correlation is very weak over a wide range of the degrees. We propose a simple directed network model that captures the observed properties. The model incorporates two mechanisms: reciprocation and preferential attachment. Through rate equation analysis of our model, the local-scale and mesoscale structural properties are derived. In the local-scale, the same scaling behavior of indegree and outdegree distributions stems from indegree and outdegree of nodes both growing as the same function of the introduction time, and the reciprocal degree distribution also shows the same power-law due to the linear relationship between the reciprocal degree and in/outdegree of nodes. In the mesoscale, the distributions of four closed triples representing close-knit friendship structures are found to exhibit identical power-laws, a behavior attributed to the negligible degree correlations. Intriguingly, all the power-law exponents of the distributions in the local-scale and mesoscale depend only on one global parameter, the mean in/outdegree, while both the mean in/outdegree and the reciprocity together determine the ratio of the reciprocal degree of a node to its in/outdegree. Structural properties of numerical simulated networks are analyzed and compared with each of the four real networks. This
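
    As a rough illustration of the two mechanisms named in the two records above, the toy model below grows a directed network one node at a time, attaches each new out-link preferentially to nodes that already receive many links, and reciprocates every new link with a fixed probability. The attachment rule, the smoothing term, and all parameter values are assumptions made for this sketch; they are not taken from the paper.

      import random
      from collections import defaultdict

      def grow_directed_network(n_nodes=2000, m=3, p_reciprocate=0.4, seed=1):
          """Toy growth model combining preferential attachment and reciprocation."""
          random.seed(seed)
          out_edges, in_edges = defaultdict(set), defaultdict(set)
          targets = [0, 1]                       # nodes repeated once per in-link received
          out_edges[0].add(1); in_edges[1].add(0)
          for new in range(2, n_nodes):
              chosen = set()
              while len(chosen) < min(m, new):   # preferential choice of distinct targets
                  chosen.add(random.choice(targets))
              for t in chosen:
                  out_edges[new].add(t); in_edges[t].add(new)
                  targets.append(t)
                  if random.random() < p_reciprocate:
                      out_edges[t].add(new); in_edges[new].add(t)
                      targets.append(new)
              targets.append(new)                # smoothing: newcomers stay reachable
          return out_edges, in_edges

      out_e, _ = grow_directed_network()
      mutual = sum(1 for u in list(out_e) for v in out_e[u] if u in out_e.get(v, set()))
      total = sum(len(v) for v in out_e.values())
      print("reciprocity:", round(mutual / total, 3))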

  2. Analysis of a genetically structured variance heterogeneity model using the Box-Cox transformation

    DEFF Research Database (Denmark)

    Yang, Ye; Christensen, Ole Fredslund; Sorensen, Daniel

    2011-01-01

    of the marginal distribution of the data. To investigate how the scale of measurement affects inferences, the genetically structured heterogeneous variance model is extended to accommodate the family of Box–Cox transformations. Litter size data in rabbits and pigs that had previously been analysed...... in the untransformed scale were reanalysed in a scale equal to the mode of the marginal posterior distribution of the Box–Cox parameter. In the rabbit data, the statistical evidence for a genetic component at the level of the environmental variance is considerably weaker than that resulting from an analysis...... in the original metric. In the pig data, the statistical evidence is stronger, but the coefficient of correlation between additive genetic effects affecting mean and variance changes sign, compared to the results in the untransformed scale. The study confirms that inferences on variances can be strongly affected...
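
    For readers unfamiliar with the transformation itself: the Box-Cox family maps a positive response y to (y^lambda - 1)/lambda, with log(y) as the limiting case lambda = 0, and the analysis above is carried out on the scale given by the posterior mode of lambda. The snippet below only illustrates the transformation, using profile maximum likelihood via SciPy instead of a posterior mode, on synthetic count data that merely stands in for litter size records.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      litter_size = rng.poisson(lam=9, size=500) + 1          # synthetic, strictly positive counts
      transformed, lam_hat = stats.boxcox(litter_size.astype(float))
      print("profile-likelihood estimate of lambda:", round(float(lam_hat), 3))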

  3. Inferred vs realized patterns of gene flow: an analysis of population structure in the Andros Island Rock Iguana.

    Science.gov (United States)

    Colosimo, Giuliano; Knapp, Charles R; Wallace, Lisa E; Welch, Mark E

    2014-01-01

    Ecological data, the primary source of information on patterns and rates of migration, can be integrated with genetic data to more accurately describe the realized connectivity between geographically isolated demes. In this paper we implement this approach and discuss its implications for managing populations of the endangered Andros Island Rock Iguana, Cyclura cychlura cychlura. This iguana is endemic to Andros, a highly fragmented landmass of large islands and smaller cays. Field observations suggest that geographically isolated demes were panmictic due to high, inferred rates of gene flow. We expand on these observations using 16 polymorphic microsatellites to investigate the genetic structure and rates of gene flow from 188 Andros Iguanas collected across 23 island sites. Bayesian clustering of specimens assigned individuals to three distinct genotypic clusters. An analysis of molecular variance (AMOVA) indicates that allele frequency differences are responsible for a significant portion of the genetic variance across the three defined clusters (Fst =  0.117, p<0.01). These clusters are associated with larger islands and satellite cays isolated by broad water channels with strong currents. These findings imply that broad water channels present greater obstacles to gene flow than was inferred from field observation alone. Additionally, rates of gene flow were indirectly estimated using BAYESASS 3.0. The proportion of individuals originating from within each identified cluster varied from 94.5 to 98.7%, providing further support for local isolation. Our assessment reveals a major disparity between inferred and realized gene flow. We discuss our results in a conservation perspective for species inhabiting highly fragmented landscapes.

  4. Inferred vs Realized Patterns of Gene Flow: An Analysis of Population Structure in the Andros Island Rock Iguana

    Science.gov (United States)

    Colosimo, Giuliano; Knapp, Charles R.; Wallace, Lisa E.; Welch, Mark E.

    2014-01-01

    Ecological data, the primary source of information on patterns and rates of migration, can be integrated with genetic data to more accurately describe the realized connectivity between geographically isolated demes. In this paper we implement this approach and discuss its implications for managing populations of the endangered Andros Island Rock Iguana, Cyclura cychlura cychlura. This iguana is endemic to Andros, a highly fragmented landmass of large islands and smaller cays. Field observations suggest that geographically isolated demes were panmictic due to high, inferred rates of gene flow. We expand on these observations using 16 polymorphic microsatellites to investigate the genetic structure and rates of gene flow from 188 Andros Iguanas collected across 23 island sites. Bayesian clustering of specimens assigned individuals to three distinct genotypic clusters. An analysis of molecular variance (AMOVA) indicates that allele frequency differences are responsible for a significant portion of the genetic variance across the three defined clusters (Fst = 0.117, p<0.01). These clusters are associated with larger islands and satellite cays isolated by broad water channels with strong currents. These findings imply that broad water channels present greater obstacles to gene flow than was inferred from field observation alone. Additionally, rates of gene flow were indirectly estimated using BAYESASS 3.0. The proportion of individuals originating from within each identified cluster varied from 94.5 to 98.7%, providing further support for local isolation. Our assessment reveals a major disparity between inferred and realized gene flow. We discuss our results in a conservation perspective for species inhabiting highly fragmented landscapes. PMID:25229344

  5. Inferred vs realized patterns of gene flow: an analysis of population structure in the Andros Island Rock Iguana.

    Directory of Open Access Journals (Sweden)

    Giuliano Colosimo

    Full Text Available Ecological data, the primary source of information on patterns and rates of migration, can be integrated with genetic data to more accurately describe the realized connectivity between geographically isolated demes. In this paper we implement this approach and discuss its implications for managing populations of the endangered Andros Island Rock Iguana, Cyclura cychlura cychlura. This iguana is endemic to Andros, a highly fragmented landmass of large islands and smaller cays. Field observations suggest that geographically isolated demes were panmictic due to high, inferred rates of gene flow. We expand on these observations using 16 polymorphic microsatellites to investigate the genetic structure and rates of gene flow from 188 Andros Iguanas collected across 23 island sites. Bayesian clustering of specimens assigned individuals to three distinct genotypic clusters. An analysis of molecular variance (AMOVA) indicates that allele frequency differences are responsible for a significant portion of the genetic variance across the three defined clusters (Fst = 0.117, p<0.01). These clusters are associated with larger islands and satellite cays isolated by broad water channels with strong currents. These findings imply that broad water channels present greater obstacles to gene flow than was inferred from field observation alone. Additionally, rates of gene flow were indirectly estimated using BAYESASS 3.0. The proportion of individuals originating from within each identified cluster varied from 94.5 to 98.7%, providing further support for local isolation. Our assessment reveals a major disparity between inferred and realized gene flow. We discuss our results in a conservation perspective for species inhabiting highly fragmented landscapes.

  6. Congested Link Inference Algorithms in Dynamic Routing IP Network

    Directory of Open Access Journals (Sweden)

    Yu Chen

    2017-01-01

    Full Text Available The performance of current congested link inference algorithms, such as the classical CLINK algorithm, degrades noticeably in dynamic routing IP networks. To overcome this problem, based on the assumptions of the Markov property and time homogeneity, we build a Variable Structure Discrete Dynamic Bayesian (VSDDB) network as a simplified model of a dynamic routing IP network. Under the simplified VSDDB model, based on the Bayesian Maximum A Posteriori (BMAP) and Rest Bayesian Network Model (RBNM), we propose an Improved CLINK (ICLINK) algorithm. Considering that multiple links are often congested concurrently, we also propose CLILRS (Congested Link Inference algorithm based on Lagrangian Relaxation Subgradient) to infer the set of congested links. We validated our results through experiments based on analogy, simulation, and the actual Internet.
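
    The record above does not give enough detail to reproduce ICLINK or CLILRS, but the underlying inference problem is easy to state: given which end-to-end paths appear congested and which links each path traverses, find a small set of links that explains the observations. The greedy set-cover heuristic below is a generic stand-in for that problem, not a re-implementation of either proposed algorithm.

      def greedy_congested_links(routes, congested_paths):
          """Pick a small set of links whose union covers every congested path.
          routes[p] is the set of link ids traversed by path p."""
          uncovered = {p for p, bad in enumerate(congested_paths) if bad}
          links = set().union(*routes)
          chosen = []
          while uncovered:
              best = max(links, key=lambda l: sum(1 for p in uncovered if l in routes[p]))
              covered = {p for p in uncovered if best in routes[p]}
              if not covered:
                  break                         # remaining congestion cannot be explained
              chosen.append(best)
              uncovered -= covered
          return chosen

      # Four paths over five links; paths 0 and 2 are observed as congested.
      routes = [{0, 1}, {1, 2}, {0, 3}, {2, 4}]
      print(greedy_congested_links(routes, [True, False, True, False]))   # -> [0]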

  7. Landslide Fissure Inference Assessment by ANFIS and Logistic Regression Using UAS-Based Photogrammetry

    Directory of Open Access Journals (Sweden)

    Ozgun Akcay

    2015-10-01

    Full Text Available Unmanned Aerial Systems (UAS) are now capable of gathering high-resolution data, so landslides can be explored in detail at larger scales. In this research, 132 aerial photographs were captured, and 85,456 features were detected and matched automatically using UAS photogrammetry. The root mean square (RMS) values of the image coordinates of the Ground Control Points (GCPs) varied from 0.521 to 2.293 pixels, whereas the maximum RMS value of the automatically matched features was calculated as 2.921 pixels. Using the 3D point cloud acquired by aerial photogrammetry, raster datasets of aspect, slope, and maximally stable extremal regions (MSER), which detect visual uniformity, were defined as three variables in order to infer fissure structures on the landslide surface. An Adaptive Neuro Fuzzy Inference System (ANFIS) and a Logistic Regression (LR) were implemented using training datasets to infer fissure data appropriately. The accuracy of the predictive models was evaluated by drawing receiver operating characteristic (ROC) curves and by calculating the area under the ROC curve (AUC). The experiments showed that high-resolution imagery is an indispensable data source for modelling and validating landslide fissures appropriately.
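
    Of the two predictive models above, the logistic regression half and the ROC/AUC evaluation are straightforward to sketch with scikit-learn. The example below uses purely synthetic stand-ins for the three predictors named in the abstract (aspect, slope, MSER response) and an arbitrary generating model, so the resulting AUC is illustrative only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(42)
      n = 5000
      X = np.column_stack([
          rng.uniform(0, 360, n),        # aspect (degrees), synthetic
          rng.uniform(0, 60, n),         # slope (degrees), synthetic
          rng.uniform(0, 1, n),          # MSER-based uniformity score, synthetic
      ])
      logit = 0.05 * X[:, 1] + 2.0 * X[:, 2] - 2.5        # arbitrary generating model
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # fissure / no-fissure labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))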

  8. Process Inference from High Frequency Temporal Variations in Dissolved Organic Carbon (DOC) Dynamics Across Nested Spatial Scales

    Science.gov (United States)

    Tunaley, C.; Tetzlaff, D.; Lessels, J. S.; Soulsby, C.

    2014-12-01

    In order to understand aquatic ecosystem functioning it is critical to understand the processes that control the spatial and temporal variations in DOC. DOC concentrations are highly dynamic, however, our understanding at short, high frequency timescales is still limited. Optical sensors which act as a proxy for DOC provide the opportunity to investigate near-continuous DOC variations in order to understand the hydrological and biogeochemical processes that control concentrations at short temporal scales. Here we present inferred 15 minute stream water DOC data for a 12 month period at three nested scales (1km2, 3km2 and 31km2) for the Bruntland Burn, a headwater catchment in NE Scotland. High frequency data were measured using FDOM and CDOM probes which work by measuring the fluorescent component and coloured component, respectively, of DOC when exposed to ultraviolet light. Both FDOM and CDOM were strongly correlated (r2 >0.8) with DOC allowing high frequency estimations. Results show the close coupling of DOC with discharge throughout the sampling period at all three spatial scales. However, analysis at the event scale highlights anticlockwise hysteresis relationships between DOC and discharge due to the delay in DOC being flushed from the increasingly large areas of peaty soils as saturation zones expand and increase hydrological connectivity. Lag times vary between events dependent on antecedent conditions. During a 10 year drought period in late summer 2013 it was apparent that very small changes in discharge on a 15 minute timescale result in high increases in DOC. This suggests transport limitation during this period where DOC builds up in the soil and is not flushed regularly, therefore any subsequent increase in discharge results in large DOC peaks. The high frequency sensors also reveal diurnal variability during summer months related to the photo-oxidation, evaporative and biological influences of DOC during the day. This relationship is less

  9. An analysis of line-drawings based upon automatically inferred grammar and its application to chest x-ray images

    International Nuclear Information System (INIS)

    Nakayama, Akira; Yoshida, Yuuji; Fukumura, Teruo

    1984-01-01

    One approach to image-structure analysis is based on grammatical inference. Applying this approach to naturally obtained images raises several problems, because no practical grammatical technique for two-dimensional images has been established. The authors developed a technique that solves these problems, mainly for the automated structure analysis of naturally obtained images. The first half of this paper describes the automatic inference of a line-drawing generation grammar and line-drawing analysis based on the inferred grammar. The second half of the paper reports on an actual analysis. The proposed technique extracts object line drawings from line drawings containing noise. The technique was evaluated for its effectiveness on the example of extracting rib center lines from thin-line chest X-ray images of practical scale and complexity. In this example, the total number of characteristic points (ends, branch points and intersections) composing the line drawings was 377 per image, and the total number of line segments composing the line drawings was 566 on average per sheet. The extraction ratio was 86.6%, which seems reasonable given the complexity of the input line drawings. Further, the result was compared with the rib center lines identified by the automatic screening system AISCR-V3, as a comparison with a conventional processing technique, and it was satisfactory considering the versatility of this method. (Wakatsuki, Y.)

  10. Geometrical scaling in charm structure function ratios

    International Nuclear Information System (INIS)

    Boroun, G.R.; Rezaei, B.

    2014-01-01

    By using a Laplace-transform technique, we solve the next-to-leading-order master equation for charm production and derive a compact formula for the ratio $R^{c}=F_{L}^{c\bar{c}}/F_{2}^{c\bar{c}}$, which is useful for extracting the charm structure function from the reduced charm cross section, in particular at DESY HERA, at small $x$. Our results show that this ratio is independent of $x$ at small $x$. In this method of determining the ratios, we apply geometrical scaling in charm production in deep inelastic scattering (DIS). Our analysis shows that the renormalization scales have a sizable impact on the ratio $R^{c}$ at high $Q^{2}$. Our results for the ratio of the charm structure functions are in good agreement with some phenomenological models.

  11. Total meltwater volume since the Last Glacial Maximum and viscosity structure of Earth's mantle inferred from relative sea level changes at Barbados and Bonaparte Gulf and GIA-induced $\dot{J}_{2}$

    Science.gov (United States)

    Nakada, Masao; Okuno, Jun'ichi; Yokoyama, Yusuke

    2016-02-01

    Inference of globally averaged eustatic sea level (ESL) rise since the Last Glacial Maximum (LGM) highly depends on the interpretation of relative sea level (RSL) observations at Barbados and Bonaparte Gulf, Australia, which are sensitive to the viscosity structure of Earth's mantle. Here we examine the RSL changes at the LGM for Barbados and Bonaparte Gulf ($RSL_{L}^{Bar}$ and $RSL_{L}^{Bon}$), the differential RSL for both sites ($\Delta RSL_{L}^{Bar,Bon}$) and the rate of change of the degree-two harmonics of Earth's geopotential due to the glacial isostatic adjustment (GIA) process (GIA-induced $\dot{J}_{2}$) to infer the ESL component and the viscosity structure of Earth's mantle. The differential RSL, $\Delta RSL_{L}^{Bar,Bon}$, and the GIA-induced $\dot{J}_{2}$ are dominantly sensitive to the lower-mantle viscosity, and nearly insensitive to the upper-mantle rheological structure and to GIA ice models with an ESL component of about (120-130) m. The comparison between the predicted and observationally derived $\Delta RSL_{L}^{Bar,Bon}$ indicates a lower-mantle viscosity higher than ~2 × 10^22 Pa s, and the observationally derived GIA-induced $\dot{J}_{2}$ of -(6.0-6.5) × 10^-11 yr^-1 indicates two permissible solutions for the lower mantle, ~10^22 and (5-10) × 10^22 Pa s. That is, the effective lower-mantle viscosity inferred from these two observational constraints is (5-10) × 10^22 Pa s. The LGM RSL changes at both sites, $RSL_{L}^{Bar}$ and $RSL_{L}^{Bon}$, are also sensitive to the ESL component and the upper-mantle viscosity as well as the lower-mantle viscosity. The permissible upper-mantle viscosity increases with decreasing ESL component due to the sensitivity of the LGM sea level at Bonaparte Gulf ($RSL_{L}^{Bon}$) to the upper-mantle viscosity, and the inferred upper-mantle viscosity for adopted lithospheric thicknesses of 65 and 100 km is (1-3) × 10^20 Pa s for ESL ~130 m and (4-10) × 10^20 Pa s for ESL ~125 m. The former solution of (1-3) × 10^20

  12. Bayesian inference of chemical kinetic models from proposed reactions

    KAUST Repository

    Galagali, Nikhil

    2015-02-01

    © 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure, such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
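
    A faithful re-implementation with point-mass mixture priors over many candidate reactions is beyond a short example, but the basic ingredient, Markov chain Monte Carlo over a rate constant given noisy data and a prior, can be shown on a one-reaction toy problem. Everything below (the first-order decay model, the noise level, the prior, and the proposal width) is an assumption made for illustration only.

      import numpy as np

      rng = np.random.default_rng(7)
      t = np.linspace(0.0, 5.0, 20)
      k_true, A0, sigma = 0.8, 1.0, 0.02
      y = A0 * np.exp(-k_true * t) + rng.normal(0.0, sigma, size=t.size)   # synthetic data

      def log_post(k):
          """Gaussian likelihood for A(t) = A0*exp(-k*t) plus a 1/k prior on k > 0."""
          if k <= 0:
              return -np.inf
          resid = y - A0 * np.exp(-k * t)
          return -0.5 * np.sum(resid ** 2) / sigma ** 2 - np.log(k)

      k, lp, samples = 0.5, log_post(0.5), []
      for _ in range(20000):                      # random-walk Metropolis
          prop = k + rng.normal(0.0, 0.05)
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              k, lp = prop, lp_prop
          samples.append(k)
      print("posterior mean of k:", round(float(np.mean(samples[5000:])), 3))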

  13. Inference rule and problem solving

    Energy Technology Data Exchange (ETDEWEB)

    Goto, S

    1982-04-01

    Intelligent information processing refers to having human intellectual activity carried out on a computer, with inference, in place of ordinary calculation, as the basic operational mechanism of such information processing. Many inference rules are derived from syllogisms in formal logic. The problem of programming this inference function is referred to as problem solving. Although inference and problem solving are logically closely related, the calculation ability of current computers remains at a low level for inference. For clarifying the relation between inference and computers, nonmonotonic logic has been considered. The paper deals with the above topics. 16 references.

  14. Alignment-free genome tree inference by learning group-specific distance metrics.

    Science.gov (United States)

    Patil, Kaustubh R; McHardy, Alice C

    2013-01-01

    Understanding the evolutionary relationships between organisms is vital for their in-depth study. Gene-based methods are often used to infer such relationships, which are not without drawbacks. One can now attempt to use genome-scale information, because of the ever increasing number of genomes available. This opportunity also presents a challenge in terms of computational efficiency. Two fundamentally different methods are often employed for sequence comparisons, namely alignment-based and alignment-free methods. Alignment-free methods rely on the genome signature concept and provide a computationally efficient way that is also applicable to nonhomologous sequences. The genome signature contains evolutionary signal as it is more similar for closely related organisms than for distantly related ones. We used genome-scale sequence information to infer taxonomic distances between organisms without additional information such as gene annotations. We propose a method to improve genome tree inference by learning specific distance metrics over the genome signature for groups of organisms with similar phylogenetic, genomic, or ecological properties. Specifically, our method learns a Mahalanobis metric for a set of genomes and a reference taxonomy to guide the learning process. By applying this method to more than a thousand prokaryotic genomes, we showed that, indeed, better distance metrics could be learned for most of the 18 groups of organisms tested here. Once a group-specific metric is available, it can be used to estimate the taxonomic distances for other sequenced organisms from the group. This study also presents a large-scale comparison of 10 methods (9 alignment-free and 1 alignment-based).
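
    To make the terms concrete: a genome signature is typically a k-mer frequency vector, and a Mahalanobis distance between two signatures x and y is sqrt((x - y)^T M (x - y)) for a learned positive semi-definite matrix M. The sketch below computes tetranucleotide signatures for two random sequences and evaluates that distance; learning M against a reference taxonomy, which is the actual contribution of the paper, is not reproduced here, so the identity matrix is used as a placeholder (reducing the distance to the Euclidean one).

      import numpy as np
      from itertools import product

      def kmer_signature(seq, k=4):
          """Frequency vector over all 4**k DNA k-mers (the 'genome signature')."""
          kmers = ["".join(p) for p in product("ACGT", repeat=k)]
          index = {km: i for i, km in enumerate(kmers)}
          counts = np.zeros(len(kmers))
          for i in range(len(seq) - k + 1):
              j = index.get(seq[i:i + k])
              if j is not None:                  # skip windows with ambiguous bases
                  counts[j] += 1
          return counts / max(counts.sum(), 1.0)

      def mahalanobis(x, y, M):
          d = x - y
          return float(np.sqrt(d @ M @ d))

      rng = np.random.default_rng(0)
      genome_a, genome_b = ("".join(rng.choice(list("ACGT"), size=5000)) for _ in range(2))
      sig_a, sig_b = kmer_signature(genome_a), kmer_signature(genome_b)
      print(mahalanobis(sig_a, sig_b, np.eye(sig_a.size)))   # identity M as a placeholder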

  15. Inference algorithms and learning theory for Bayesian sparse factor analysis

    International Nuclear Information System (INIS)

    Rattray, Magnus; Sharp, Kevin; Stegle, Oliver; Winn, John

    2009-01-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.

  16. Inference algorithms and learning theory for Bayesian sparse factor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rattray, Magnus; Sharp, Kevin [School of Computer Science, University of Manchester, Manchester M13 9PL (United Kingdom); Stegle, Oliver [Max-Planck-Institute for Biological Cybernetics, Tuebingen (Germany); Winn, John, E-mail: magnus.rattray@manchester.ac.u [Microsoft Research Cambridge, Roger Needham Building, Cambridge, CB3 0FB (United Kingdom)

    2009-12-01

    Bayesian sparse factor analysis has many applications; for example, it has been applied to the problem of inferring a sparse regulatory network from gene expression data. We describe a number of inference algorithms for Bayesian sparse factor analysis using a slab and spike mixture prior. These include well-established Markov chain Monte Carlo (MCMC) and variational Bayes (VB) algorithms as well as a novel hybrid of VB and Expectation Propagation (EP). For the case of a single latent factor we derive a theory for learning performance using the replica method. We compare the MCMC and VB/EP algorithm results with simulated data to the theoretical prediction. The results for MCMC agree closely with the theory as expected. Results for VB/EP are slightly sub-optimal but show that the new algorithm is effective for sparse inference. In large-scale problems MCMC is infeasible due to computational limitations and the VB/EP algorithm then provides a very useful computationally efficient alternative.
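
    The slab and spike mixture prior mentioned in the two records above is simple to write down even though the inference algorithms are not: each factor loading is exactly zero (the spike) with some probability and is drawn from a broad Gaussian (the slab) otherwise. The generative sketch below simulates expression-like data from such a sparse factor model with illustrative settings; it does not implement the MCMC, VB, or hybrid VB/EP inference discussed in the abstract.

      import numpy as np

      rng = np.random.default_rng(3)
      n_genes, n_factors, n_samples = 200, 5, 50
      pi_slab, slab_sd, noise_sd = 0.1, 1.0, 0.5       # illustrative hyperparameters

      slab_mask = rng.random((n_genes, n_factors)) < pi_slab      # False -> spike at zero
      loadings = slab_mask * rng.normal(0.0, slab_sd, size=(n_genes, n_factors))
      factors = rng.normal(size=(n_factors, n_samples))
      data = loadings @ factors + rng.normal(0.0, noise_sd, size=(n_genes, n_samples))

      print("fraction of non-zero loadings:", float(slab_mask.mean()))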

  17. Learning Additional Languages as Hierarchical Probabilistic Inference: Insights From First Language Processing.

    Science.gov (United States)

    Pajak, Bozena; Fine, Alex B; Kleinschmidt, Dave F; Jaeger, T Florian

    2016-12-01

    We present a framework of second and additional language (L2/Ln) acquisition motivated by recent work on socio-indexical knowledge in first language (L1) processing. The distribution of linguistic categories covaries with socio-indexical variables (e.g., talker identity, gender, dialects). We summarize evidence that implicit probabilistic knowledge of this covariance is critical to L1 processing, and propose that L2/Ln learning uses the same type of socio-indexical information to probabilistically infer latent hierarchical structure over previously learned and new languages. This structure guides the acquisition of new languages based on their inferred place within that hierarchy, and is itself continuously revised based on new input from any language. This proposal unifies L1 processing and L2/Ln acquisition as probabilistic inference under uncertainty over socio-indexical structure. It also offers a new perspective on crosslinguistic influences during L2/Ln learning, accommodating gradient and continued transfer (both negative and positive) from previously learned to novel languages, and vice versa.

  18. Modulated modularity clustering as an exploratory tool for functional genomic inference.

    Directory of Open Access Journals (Sweden)

    Eric A Stone

    2009-05-01

    Full Text Available In recent years, the advent of high-throughput assays, coupled with their diminishing cost, has facilitated a systems approach to biology. As a consequence, massive amounts of data are currently being generated, requiring efficient methodology aimed at the reduction of scale. Whole-genome transcriptional profiling is a standard component of systems-level analyses, and to reduce scale and improve inference clustering genes is common. Since clustering is often the first step toward generating hypotheses, cluster quality is critical. Conversely, because the validation of cluster-driven hypotheses is indirect, it is critical that quality clusters not be obtained by subjective means. In this paper, we present a new objective-based clustering method and demonstrate that it yields high-quality results. Our method, modulated modularity clustering (MMC, seeks community structure in graphical data. MMC modulates the connection strengths of edges in a weighted graph to maximize an objective function (called modularity that quantifies community structure. The result of this maximization is a clustering through which tightly-connected groups of vertices emerge. Our application is to systems genetics, and we quantitatively compare MMC both to the hierarchical clustering method most commonly employed and to three popular spectral clustering approaches. We further validate MMC through analyses of human and Drosophila melanogaster expression data, demonstrating that the clusters we obtain are biologically meaningful. We show MMC to be effective and suitable to applications of large scale. In light of these features, we advocate MMC as a standard tool for exploration and hypothesis generation.
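
    MMC itself modulates edge weights before maximizing modularity, and that modulation step is not reproduced here, but the objective being maximized is the standard Newman-Girvan modularity of a weighted, undirected graph. The snippet below evaluates that objective for a hand-built toy graph with two obvious communities.

      import numpy as np

      def modularity(adjacency, labels):
          """Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j) for an
          undirected, weighted adjacency matrix A."""
          A = np.asarray(adjacency, dtype=float)
          k = A.sum(axis=1)                      # weighted degrees
          two_m = A.sum()                        # equals 2m for an undirected graph
          same = np.equal.outer(labels, labels)  # delta(c_i, c_j)
          return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

      # Two triangles joined by a single weak bridge edge.
      A = np.zeros((6, 6))
      for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
          A[i, j] = A[j, i] = 1.0
      print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))   # about 0.357 for this toy graph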

  19. Knowledge and inference

    CERN Document Server

    Nagao, Makoto

    1990-01-01

    Knowledge and Inference discusses an important problem for software systems: How do we treat knowledge and ideas on a computer and how do we use inference to solve problems on a computer? The book talks about the problems of knowledge and inference for the purpose of merging artificial intelligence and library science. The book begins by clarifying the concept of "knowledge" from many points of view, followed by a chapter on the current state of library science and the place of artificial intelligence in library science. Subsequent chapters cover central topics in the artificial intellig

  20. A Network Inference Workflow Applied to Virulence-Related Processes in Salmonella typhimurium

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Ronald C.; Singhal, Mudita; Weller, Jennifer B.; Khoshnevis, Saeed; Shi, Liang; McDermott, Jason E.

    2009-04-20

    Inference of the structure of mRNA transcriptional regulatory networks, protein regulatory or interaction networks, and protein activation/inactivation-based signal transduction networks are critical tasks in systems biology. In this article we discuss a workflow for the reconstruction of parts of the transcriptional regulatory network of the pathogenic bacterium Salmonella typhimurium based on the information contained in sets of microarray gene expression data now available for that organism, and describe our results obtained by following this workflow. The primary tool is one of the network inference algorithms deployed in the Software Environment for BIological Network Inference (SEBINI). Specifically, we selected the algorithm called Context Likelihood of Relatedness (CLR), which uses the mutual information contained in the gene expression data to infer regulatory connections. The associated analysis pipeline automatically stores the inferred edges from the CLR runs within SEBINI and, upon request, transfers the inferred edges into either Cytoscape or the plug-in Collective Analysis of Biological of Biological Interaction Networks (CABIN) tool for further post-analysis of the inferred regulatory edges. The following article presents the outcome of this workflow, as well as the protocols followed for microarray data collection, data cleansing, and network inference. Our analysis revealed several interesting interactions, functional groups, metabolic pathways, and regulons in S. typhimurium.
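
    The CLR step of the workflow is compact enough to sketch: estimate the mutual information (MI) between every pair of expression profiles, z-score each value against the background distribution of MI scores involving either gene, and keep the combined z-score as the edge weight. The code below is a simplified re-implementation of that idea on synthetic data, not the SEBINI deployment used in the paper; the histogram MI estimator and the bin count are arbitrary choices.

      import numpy as np

      def mutual_information(x, y, bins=8):
          """Histogram estimate of the mutual information between two profiles."""
          joint, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = joint / joint.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      def clr_scores(expression):
          """Context Likelihood of Relatedness: background-corrected pairwise MI."""
          n = expression.shape[0]
          mi = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  mi[i, j] = mi[j, i] = mutual_information(expression[i], expression[j])
          z = (mi - mi.mean(axis=1, keepdims=True)) / (mi.std(axis=1, keepdims=True) + 1e-12)
          z = np.clip(z, 0.0, None)
          return np.sqrt(z ** 2 + z.T ** 2)

      rng = np.random.default_rng(0)
      expr = rng.normal(size=(10, 40))               # 10 genes, 40 conditions
      expr[1] = expr[0] + 0.1 * rng.normal(size=40)  # gene 1 driven by gene 0
      scores = clr_scores(expr)
      print("top-scoring pair:", np.unravel_index(np.argmax(scores), scores.shape))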

  1. On the inference of function from structure using biomechanical modelling and simulation of extinct organisms

    Science.gov (United States)

    Hutchinson, John R.

    2012-01-01

    Biomechanical modelling and simulation techniques offer some hope for unravelling the complex inter-relationships of structure and function perhaps even for extinct organisms, but have their limitations owing to this complexity and the many unknown parameters for fossil taxa. Validation and sensitivity analysis are two indispensable approaches for quantifying the accuracy and reliability of such models or simulations. But there are other subtleties in biomechanical modelling that include investigator judgements about the level of simplicity versus complexity in model design or how uncertainty and subjectivity are dealt with. Furthermore, investigator attitudes toward models encompass a broad spectrum between extreme credulity and nihilism, influencing how modelling is conducted and perceived. Fundamentally, more data and more testing of methodology are required for the field to mature and build confidence in its inferences. PMID:21666064

  2. Geometric statistical inference

    International Nuclear Information System (INIS)

    Periwal, Vipul

    1999-01-01

    A reparametrization-covariant formulation of the inverse problem of probability is explicitly solved for finite sample sizes. The inferred distribution is explicitly continuous for finite sample size. A geometric solution of the statistical inference problem in higher dimensions is outlined

  3. TOPOLOGY OF A LARGE-SCALE STRUCTURE AS A TEST OF MODIFIED GRAVITY

    International Nuclear Information System (INIS)

    Wang Xin; Chen Xuelei; Park, Changbom

    2012-01-01

    The genus of the isodensity contours is a robust measure of the topology of a large-scale structure, and it is relatively insensitive to nonlinear gravitational evolution, galaxy bias, and redshift-space distortion. We show that the growth of density fluctuations is scale dependent even in the linear regime in some modified gravity theories, which opens a new possibility of testing the theories observationally. We propose to use the genus of the isodensity contours, an intrinsic measure of the topology of the large-scale structure, as a statistic to be used in such tests. In Einstein's general theory of relativity, density fluctuations grow at the same rate on all scales in the linear regime, and the genus per comoving volume is almost conserved as structures grow homologously, so we expect that the genus-smoothing-scale relation is basically time independent. However, in some modified gravity models where structures grow with different rates on different scales, the genus-smoothing-scale relation should change over time. This can be used to test the gravity models with large-scale structure observations. We study the cases of the f(R) theory, DGP braneworld theory as well as the parameterized post-Friedmann models. We also forecast how the modified gravity models can be constrained with optical/IR or redshifted 21 cm radio surveys in the near future.
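
    For context, a standard result that the proposed test relies on (not stated explicitly in the record above): for a Gaussian random field the genus per unit volume of the isodensity contours at threshold nu, measured in units of the rms fluctuation of the smoothed field, takes the analytic form

        g(\nu) \;=\; \frac{1}{(2\pi)^{2}} \left( \frac{\langle k^{2} \rangle}{3} \right)^{3/2} \left( 1 - \nu^{2} \right) e^{-\nu^{2}/2},

    where \langle k^{2} \rangle is the power-spectrum-weighted mean square wavenumber of the smoothed field. Modified-gravity tests of the kind proposed here look for departures of the measured genus-smoothing-scale relation from this essentially time-independent expectation.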

  4. Anisotropy of the Cosmic Microwave Background Radiation on Large and Medium Angular Scales

    Science.gov (United States)

    Houghton, Anthony; Timbie, Peter

    1998-01-01

    This grant has supported work at Brown University on measurements of the 2.7 K Cosmic Microwave Background Radiation (CMB). The goal has been to characterize the spatial variations in the temperature of the CMB in order to understand the formation of large-scale structure in the universe. We have concurrently pursued two measurements using millimeter-wave telescopes carried aloft by scientific balloons. Both systems operate over a range of wavelengths, chosen to allow spectral removal of foreground sources such as the atmosphere, Galaxy, etc. The angular resolution of approx. 25 arcminutes is near the angular scale at which the most structure is predicted by current models to be visible in the CMB angular power spectrum. The main goal is to determine the angular scale of this structure; in turn we can infer the density parameter, Omega, for the universe as well as other cosmological parameters, such as the Hubble constant.

  5. Inferring Pairwise Interactions from Biological Data Using Maximum-Entropy Probability Models.

    Directory of Open Access Journals (Sweden)

    Richard R Stein

    2015-07-01

    Full Text Available Maximum entropy-based inference methods have been successfully used to infer direct interactions from biological datasets such as gene expression data or sequence ensembles. Here, we review undirected pairwise maximum-entropy probability models in two categories of data types, those with continuous and categorical random variables. As a concrete example, we present recently developed inference methods from the field of protein contact prediction and show that a basic set of assumptions leads to similar solution strategies for inferring the model parameters in both variable types. These parameters reflect interactive couplings between observables, which can be used to predict global properties of the biological system. Such methods are applicable to the important problems of protein 3-D structure prediction and association of gene-gene networks, and they enable potential applications to the analysis of gene alteration patterns and to protein design.
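
    For the continuous-variable case reviewed above, the pairwise maximum-entropy model with first- and second-moment constraints is a multivariate Gaussian, so the couplings can be read off the precision (inverse covariance) matrix. The hedged sketch below illustrates this on toy data with one planted chain of direct interactions; it is not a protein contact prediction pipeline (those use categorical Potts-type models with regularization).

```python
# Minimal sketch: pairwise maximum-entropy model for continuous data.
# With continuous variables and first/second-moment constraints, the max-ent
# distribution is a multivariate Gaussian, so the pairwise couplings are read
# off the precision (inverse covariance) matrix. Toy data, not a protein pipeline.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                     # 500 samples x 6 variables
X[:, 1] += 0.8 * X[:, 0]                          # plant a direct interaction 0 -> 1
X[:, 2] += 0.8 * X[:, 1]                          # and 1 -> 2 (0-2 only indirect)

cov = np.cov(X, rowvar=False)
precision = np.linalg.inv(cov)

# Couplings J_ij are proportional to -precision_ij; partial correlations normalize them.
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

print(np.round(partial_corr[:3, :3], 2))          # 0-1 and 1-2 strong, 0-2 weak
```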

  6. INVESTIGATING THE FACTOR STRUCTURE OF THE BLOG ATTITUDE SCALE

    Directory of Open Access Journals (Sweden)

    Zahra SHAHSAVAR

    2010-10-01

    Full Text Available Due to the wide application of advanced technology in education, many attitude scales have been developed to evaluate learners’ attitudes toward educational tools. However, with the rapid development of emerging technologies, using blogs as one of the Web 2.0 tools is still in its infancy and few blog attitude scales have been developed yet. In view of this need, many researchers prefer to design a new scale based on the conceptual and theoretical framework of their own study rather than using available scales. The present study reports the design and development of a blog attitude scale (BAS). The researchers developed a pool of items to capture the complexity of the blog attitude trait, selected 29 items through content analysis, and administered the 29-item scale to 216 undergraduate students to explore the underlying structure of the BAS. In exploratory factor analysis, three factors were discovered: blog anxiety, blog desirability, and blog self-efficacy; 14 items were excluded. The extracted items were subjected to a confirmatory factor analysis, which lent further support to the underlying structure of the BAS.
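
    For readers unfamiliar with the exploratory-factor-analysis step described above, the hedged sketch below fits a three-factor model to simulated Likert-style responses. It is only loosely analogous to the BAS analysis, which used an oblique rotation followed by a CFA; the item counts, planted loadings, and varimax rotation here are assumptions for illustration.

```python
# Minimal sketch of an exploratory factor analysis step, loosely analogous to
# the BAS development (the study itself used an oblique rotation and a CFA).
# Simulated Likert-style responses; requires scikit-learn >= 0.24.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_respondents, n_items, n_factors = 216, 15, 3

# Simulate three latent traits (e.g., anxiety, desirability, self-efficacy)
latent = rng.normal(size=(n_respondents, n_factors))
loadings = np.zeros((n_items, n_factors))
for f in range(n_factors):                       # 5 items load on each factor
    loadings[f * 5:(f + 1) * 5, f] = rng.uniform(0.6, 0.9, size=5)
responses = latent @ loadings.T + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(responses)
print(np.round(fa.components_.T, 2))             # item-by-factor loading matrix
```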

  7. DAMPING OF ELECTRON DENSITY STRUCTURES AND IMPLICATIONS FOR INTERSTELLAR SCINTILLATION

    International Nuclear Information System (INIS)

    Smith, K. W.; Terry, P. W.

    2011-01-01

    The forms of electron density structures in kinetic Alfven wave (KAW) turbulence are studied in connection with scintillation. The focus is on small scales L ∼ 10^8-10^10 cm where the KAW regime is active in the interstellar medium, principally within turbulent H II regions. Scales at 10 times the ion gyroradius and smaller are inferred to dominate scintillation in the theory of Boldyrev et al. From numerical solutions of a decaying KAW turbulence model, structure morphology reveals two types of localized structures, filaments and sheets, and shows that they arise in different regimes of resistive and diffusive damping. Minimal resistive damping yields localized current filaments that form out of Gaussian-distributed initial conditions. When resistive damping is large relative to diffusive damping, sheet-like structures form. In the filamentary regime, each filament is associated with a non-localized magnetic and density structure, circularly symmetric in cross section. Density and magnetic fields have Gaussian statistics (as inferred from Gaussian-valued kurtosis) while density gradients are strongly non-Gaussian, more so than current. This enhancement of non-Gaussian statistics in a derivative field is expected since gradient operations enhance small-scale fluctuations. The enhancement of density gradient kurtosis over current kurtosis is not obvious, yet it suggests that modest density fluctuations may yield large scintillation events during pulsar signal propagation. In the sheet regime the same statistical observations hold, despite the absence of localized filamentary structures. Probability density functions are constructed from statistical ensembles in both regimes, showing clear formation of long, highly non-Gaussian tails.

  8. Quantum Enhanced Inference in Markov Logic Networks.

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-19

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.
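
    To make the classical baseline concrete, the hedged sketch below runs Gibbs sampling on a small pairwise binary Markov network, the kind of ground network an MLN template would generate. The potentials and ring topology are toy assumptions; it does not implement lifting or any quantum-enhanced state preparation.

```python
# Minimal sketch of the classical baseline discussed above: Gibbs sampling of a
# small pairwise binary Markov network (the kind of ground network an MLN
# template generates). Toy potentials; not a full MLN grounding.
import numpy as np

rng = np.random.default_rng(3)
n = 5
J = np.zeros((n, n))                      # pairwise couplings (symmetric)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
for i, j in edges:
    J[i, j] = J[j, i] = 0.8
h = np.full(n, 0.2)                       # unary biases

x = rng.integers(0, 2, size=n)            # random initial assignment in {0, 1}
samples = []
for sweep in range(5000):
    for i in range(n):
        # Conditional log-odds of x_i = 1 given its neighbours (Markov blanket).
        logit = h[i] + J[i] @ x
        p1 = 1.0 / (1.0 + np.exp(-logit))
        x[i] = rng.random() < p1
    if sweep >= 500:                      # discard burn-in
        samples.append(x.copy())

print(np.mean(samples, axis=0))           # estimated marginals P(x_i = 1)
```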

  9. Quantum Enhanced Inference in Markov Logic Networks

    Science.gov (United States)

    Wittek, Peter; Gogolin, Christian

    2017-04-01

    Markov logic networks (MLNs) reconcile two opposing schools in machine learning and artificial intelligence: causal networks, which account for uncertainty extremely well, and first-order logic, which allows for formal deduction. An MLN is essentially a first-order logic template to generate Markov networks. Inference in MLNs is probabilistic and it is often performed by approximate methods such as Markov chain Monte Carlo (MCMC) Gibbs sampling. An MLN has many regular, symmetric structures that can be exploited at both first-order level and in the generated Markov network. We analyze the graph structures that are produced by various lifting methods and investigate the extent to which quantum protocols can be used to speed up Gibbs sampling with state preparation and measurement schemes. We review different such approaches, discuss their advantages, theoretical limitations, and their appeal to implementations. We find that a straightforward application of a recent result yields exponential speedup compared to classical heuristics in approximate probabilistic inference, thereby demonstrating another example where advanced quantum resources can potentially prove useful in machine learning.

  10. The large-scale environment from cosmological simulations - I. The baryonic cosmic web

    Science.gov (United States)

    Cui, Weiguang; Knebe, Alexander; Yepes, Gustavo; Yang, Xiaohu; Borgani, Stefano; Kang, Xi; Power, Chris; Staveley-Smith, Lister

    2018-01-01

    Using a series of cosmological simulations that includes one dark-matter-only (DM-only) run, one gas cooling-star formation-supernova feedback (CSF) run and one that additionally includes feedback from active galactic nuclei (AGNs), we classify the large-scale structures with both a velocity-shear-tensor code (VWEB) and a tidal-tensor code (PWEB). We find that the baryonic processes have almost no impact on large-scale structures - at least not when classified using the aforementioned techniques. More importantly, our results confirm that the gas component alone can be used to infer the filamentary structure of the universe practically unbiased, which could be applied to cosmological constraints. In addition, the gas filaments are classified with their velocity (VWEB) and density (PWEB) fields, which can theoretically connect to the radio observations, such as H I surveys. This will help us link the radio observations with the dark matter distribution on large scales in an unbiased way.

  11. The origin of large scale cosmic structure

    International Nuclear Information System (INIS)

    Jones, B.J.T.; Palmer, P.L.

    1985-01-01

    The paper concerns the origin of large scale cosmic structure. The evolution of density perturbations, the nonlinear regime (Zel'dovich's solution and others), the Gott and Rees clustering hierarchy, the spectrum of condensations, and biassed galaxy formation, are all discussed. (UK)

  12. On the universal character of the large scale structure of the universe

    International Nuclear Information System (INIS)

    Demianski, M.; International Center for Relativistic Astrophysics; Rome Univ.; Doroshkevich, A.G.

    1991-01-01

    We review different theories of formation of the large scale structure of the Universe. Special emphasis is put on the theory of inertial instability. We show that for a large class of initial spectra the resulting two point correlation functions are similar. We also discuss the adhesion theory, which uses the Burgers equation, Navier-Stokes equation or coagulation process. We review the Zeldovich theory of gravitational instability and discuss the internal structure of pancakes. Finally we discuss the role of the velocity potential in determining the global characteristics of large scale structures (distribution of caustics, scale of voids, etc.). In the last chapter we list the main unsolved problems and main successes of the theory of formation of large scale structure. (orig.)

  13. Inferring genome-wide patterns of admixture in Qataris using fifty-five ancestral populations

    Directory of Open Access Journals (Sweden)

    Omberg Larsson

    2012-06-01

    Full Text Available Background: Populations of the Arabian Peninsula have a complex genetic structure that reflects waves of migrations including the earliest human migrations from Africa and eastern Asia, migrations along ancient civilization trading routes and colonization history of recent centuries. Results: Here, we present a study of genome-wide admixture in this region, using 156 genotyped individuals from Qatar, a country located at the crossroads of these migration patterns. Since haplotypes of these individuals could have originated from many different populations across the world, we have developed a machine learning method "SupportMix" to infer loci-specific genomic ancestry when simultaneously analyzing many possible ancestral populations. Simulations show that SupportMix is not only more accurate than other popular admixture discovery tools but is the first admixture inference method that can efficiently scale for simultaneous analysis of 50-100 putative ancestral populations while being independent of prior demographic information. Conclusions: By simultaneously using the 55 world populations from the Human Genome Diversity Panel, SupportMix was able to extract the fine-scale ancestry of the Qatar population, providing many new observations concerning the ancestry of the region. For example, as well as recapitulating the three major sub-populations in Qatar, composed of mainly Arabic, Persian, and African ancestry, SupportMix additionally identifies the specific ancestry of the Persian group to populations sampled in Greater Persia rather than from China and the ancestry of the African group to sub-Saharan origin and not Southern African Bantu origin as previously thought.

  14. An algebra-based method for inferring gene regulatory networks.

    Science.gov (United States)

    Vera-Licona, Paola; Jarrah, Abdul; Garcia-Puente, Luis David; McGee, John; Laubenbacher, Reinhard

    2014-03-26

    The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process, prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also predicting several of the
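
    To illustrate the Boolean polynomial dynamical systems framework mentioned above, the hedged sketch below iterates a toy three-gene BPDS over F2, where AND is multiplication, XOR is addition, and OR is x + y + xy. The particular update polynomials are invented for illustration and are unrelated to the segment polarity or IRMA networks.

```python
# Minimal sketch of a Boolean polynomial dynamical system (BPDS): each gene's
# next state is a polynomial over F2 in the current states (AND = *, XOR = +,
# OR = x + y + x*y, NOT = 1 + x). A toy three-gene system, not the IRMA or
# segment-polarity models referenced above.
def step(state):
    x1, x2, x3 = state
    f1 = x3                          # gene 1 activated by gene 3
    f2 = (x1 + x1 * x3) % 2          # gene 2: x1 AND (NOT x3)  ->  x1*(1 + x3)
    f3 = (x1 + x2 + x1 * x2) % 2     # gene 3: x1 OR x2
    return (f1, f2, f3)

state = (1, 0, 0)
trajectory = [state]
for _ in range(6):
    state = step(state)
    trajectory.append(state)
print(trajectory)                     # settles into an attractor of the toy BPDS
```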

  15. A large-scale soil-structure interaction experiment: Design and construction

    International Nuclear Information System (INIS)

    Tang, H.T.; Tang, Y.K.; Stepp, J.C.; Wall, I.B.; Lin, E.; Cheng, S.C.; Lee, S.K.

    1989-01-01

    This paper describes the design and construction phase of the Large-Scale Soil-Structure Interaction Experiment project jointly sponsored by EPRI and Taipower. The project has two objectives: 1. to obtain an earthquake database which can be used to substantiate soil-structure interaction (SSI) models and analysis methods; and 2. to quantify nuclear power plant reactor containment and internal components seismic margin based on earthquake experience data. These objectives were accomplished by recording and analyzing data from two instrumented, scaled down, reinforced concrete containment structures during seismic events. The two model structures are sited in a high seismic region in Taiwan (SMART-1). A strong-motion seismic array network is located at the site. The containment models (1/4- and 1/12-scale) were constructed and instrumented specially for this experiment. Construction was completed and data recording began in September 1985. By November 1986, 18 strong motion earthquakes ranging from Richter magnitude 4.5 to 7.0 were recorded. (orig./HP)

  16. Epidemic spreading in weighted scale-free networks with community structure

    International Nuclear Information System (INIS)

    Chu, Xiangwei; Guan, Jihong; Zhang, Zhongzhi; Zhou, Shuigeng

    2009-01-01

    Many empirical studies reveal that the weights and community structure are ubiquitous in various natural and artificial networks. In this paper, based on the SI disease model, we investigate the epidemic spreading in weighted scale-free networks with community structure. Two exponents, α and β, are introduced to weight the internal edges and external edges, respectively, and a tunable probability parameter q is also introduced to adjust the strength of community structure. We find that the external weighting exponent β plays a much more important role in slackening the epidemic spreading and reducing the danger brought by the epidemic than the internal weighting exponent α. Moreover, a novel result we find is that the strong community structure is no longer helpful for slackening the danger brought by the epidemic in the weighted cases. In addition, we show the hierarchical dynamics of the epidemic spreading in the weighted scale-free networks with communities, which is also displayed in the famous BA scale-free networks.
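
    A hedged sketch of the kind of simulation involved: SI spreading on a weighted scale-free substrate, where edge weights modulate the per-contact transmission probability. The degree-based weighting ansatz and all parameters below are assumptions and do not reproduce the paper's internal/external weighting exponents or community model.

```python
# Minimal sketch of SI spreading on a weighted scale-free network. Edge weights
# modulate the per-contact transmission probability, loosely mirroring the
# weighting exponents discussed above (this toy version does not implement the
# paper's community/weighting scheme exactly). Requires networkx.
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.barabasi_albert_graph(500, m=3, seed=4)          # scale-free substrate
for u, v in G.edges():
    # weight ~ (k_u * k_v)^alpha, a common degree-based weighting ansatz
    G[u][v]["weight"] = (G.degree(u) * G.degree(v)) ** 0.3

beta = 0.02                                             # base transmission rate
infected = {0}                                          # seed node
prevalence = []
for t in range(60):
    new_cases = set()
    for i in infected:
        for j in G.neighbors(i):
            if j not in infected and rng.random() < beta * G[i][j]["weight"]:
                new_cases.add(j)
    infected |= new_cases
    prevalence.append(len(infected) / G.number_of_nodes())

print(prevalence[::10])                                 # fraction infected over time
```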

  17. Inferring Large-Scale Terrestrial Water Storage Through GRACE and GPS Data Fusion in Cloud Computing Environments

    Science.gov (United States)

    Rude, C. M.; Li, J. D.; Gowanlock, M.; Herring, T.; Pankratius, V.

    2016-12-01

    Surface subsidence due to depletion of groundwater can lead to permanent compaction of aquifers and damaged infrastructure. However, studies of such effects on a large scale are challenging and compute intensive because they involve fusing a variety of data sets beyond direct measurements from groundwater wells, such as gravity change measurements from the Gravity Recovery and Climate Experiment (GRACE) or surface displacements measured by GPS receivers. Our work therefore leverages Amazon cloud computing to enable these types of analyses spanning the entire continental US. Changes in groundwater storage are inferred from surface displacements measured by GPS receivers stationed throughout the country. Receivers located on bedrock are anti-correlated with changes in water levels from elastic deformation due to loading, while stations on aquifers correlate with groundwater changes due to poroelastic expansion and compaction. Correlating linearly detrended equivalent water thickness measurements from GRACE with linearly detrended and Kalman filtered vertical displacements of GPS stations located throughout the United States helps compensate for the spatial and temporal limitations of GRACE. Our results show that the majority of GPS stations are negatively correlated with GRACE in a statistically relevant way, as most GPS stations are located on bedrock in order to provide stable reference locations and measure geophysical processes such as tectonic deformations. Additionally, stations located on the Central Valley California aquifer show statistically significant positive correlations. Through the identification of positive and negative correlations, deformation phenomena can be classified as loading or poroelastic expansion due to changes in groundwater. This method facilitates further studies of terrestrial water storage on a global scale. This work is supported by NASA AIST-NNX15AG84G (PI: V. Pankratius) and Amazon.
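
    The core correlation step described above can be sketched simply: linearly detrend a GRACE-style equivalent-water-thickness series and a GPS vertical-displacement series, then compute their Pearson correlation. The synthetic monthly series and the anti-correlated "bedrock" response below are assumptions for illustration; this is not the cloud pipeline used in the study.

```python
# Minimal sketch of the correlation step described above: linearly detrend a
# GRACE-style equivalent-water-thickness series and a GPS vertical-displacement
# series, then correlate them. Synthetic monthly series; not the AIST pipeline.
import numpy as np
from scipy.signal import detrend
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
t = np.arange(120)                                   # 10 years of monthly samples
water = 5 * np.sin(2 * np.pi * t / 12) - 0.05 * t + rng.normal(0, 0.5, t.size)
# Bedrock GPS: elastic loading -> uplift when water storage drops (anti-correlated)
gps_up = -0.4 * water + 0.01 * t + rng.normal(0, 0.3, t.size)

r, p = pearsonr(detrend(water), detrend(gps_up))
print(f"r = {r:.2f}, p = {p:.1e}")                   # strongly negative, as for bedrock sites
```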

  18. Electron Scale Structures and Magnetic Reconnection Signatures in the Turbulent Magnetosheath

    Science.gov (United States)

    Yordanova, E.; Voros, Z.; Varsani, A.; Graham, D. B.; Norgren, C.; Khotyaintsev, Yu. V.; Vaivads, A.; Eriksson, E.; Nakamura, R.; Lindqvist, P.-A.; hide

    2016-01-01

    Collisionless space plasma turbulence can generate reconnecting thin current sheets as suggested by recent results of numerical magnetohydrodynamic simulations. The Magnetospheric Multiscale (MMS) mission provides the first serious opportunity to verify whether small ion-electron-scale reconnection, generated by turbulence, resembles the reconnection events frequently observed in the magnetotail or at the magnetopause. Here we investigate field and particle observations obtained by the MMS fleet in the turbulent terrestrial magnetosheath behind quasi-parallel bow shock geometry. We observe multiple small-scale current sheets during the event and present a detailed look of one of the detected structures. The emergence of thin current sheets can lead to electron scale structures. Within these structures, we see signatures of ion demagnetization, electron jets, electron heating, and agyrotropy suggesting that MMS spacecraft observe reconnection at these scales.

  19. Coupling Fine-Scale Root and Canopy Structure Using Ground-Based Remote Sensing

    Directory of Open Access Journals (Sweden)

    Brady S. Hardiman

    2017-02-01

    Full Text Available Ecosystem physical structure, defined by the quantity and spatial distribution of biomass, influences a range of ecosystem functions. Remote sensing tools permit the non-destructive characterization of canopy and root features, potentially providing opportunities to link above- and belowground structure at fine spatial resolution in functionally meaningful ways. To test this possibility, we employed ground-based portable canopy LiDAR (PCL) and ground penetrating radar (GPR) along co-located transects in forested sites spanning multiple stages of ecosystem development and, consequently, of structural complexity. We examined canopy and root structural data for coherence (i.e., correlation in the frequency of spatial variation) at multiple spatial scales ≤10 m within each site using wavelet analysis. Forest sites varied substantially in vertical canopy and root structure, with leaf area index and root mass becoming more evenly distributed vertically as forests aged. In all sites, above- and belowground structure, characterized as mean maximum canopy height and root mass, exhibited significant coherence at a scale of 3.5–4 m, and results suggest that the scale of coherence may increase with stand age. Our findings demonstrate that canopy and root structure are linked at characteristic spatial scales, which provides the basis to optimize scales of observation. Our study highlights the potential, and limitations, for fusing LiDAR and radar technologies to quantitatively couple above- and belowground ecosystem structure.
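
    As a simplified stand-in for the wavelet coherence analysis described above, the hedged sketch below computes Welch magnitude-squared coherence between a synthetic canopy-height transect and a synthetic root-mass transect sampled every 0.5 m, and reports the spatial scale of peak coherence. The transect spacing, shared 4 m structure, and noise levels are assumptions.

```python
# Minimal sketch of checking above-/belowground coherence along a transect. The
# study used wavelet coherence; as a simpler stand-in this computes Welch
# magnitude-squared coherence between a canopy-height series and a root-mass
# series sampled every 0.5 m. Synthetic transect, assumed spacing.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
dx = 0.5                                       # metres between samples
x = np.arange(0, 200, dx)
shared = np.sin(2 * np.pi * x / 4.0)           # shared structure at ~4 m scale
canopy = 20 + 3 * shared + rng.normal(0, 1, x.size)
roots = 1 + 0.2 * shared + rng.normal(0, 0.1, x.size)

f, Cxy = coherence(canopy, roots, fs=1 / dx, nperseg=128)
peak = f[np.argmax(Cxy[1:]) + 1]               # skip the zero-frequency bin
print(f"peak coherence near a spatial scale of {1 / peak:.1f} m")
```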

  20. Goal inferences about robot behavior : goal inferences and human response behaviors

    NARCIS (Netherlands)

    Broers, H.A.T.; Ham, J.R.C.; Broeders, R.; De Silva, P.; Okada, M.

    2014-01-01

    This explorative research focused on the goal inferences human observers draw based on a robot's behavior, and the extent to which those inferences predict people's behavior in response to that robot. Results show that different robot behaviors cause different response behavior from people.

  1. Multiscale properties of DNA primary structure: cross-scale correlations

    International Nuclear Information System (INIS)

    Altajskij, M.V.; Ivanov, V.V.; Polozov, R.V.

    2000-01-01

    Cross-scale correlations of wavelet coefficients of the DNA coding sequences are calculated and compared to those of a generated random sequence of the same length. The coding sequences are shown to have strong correlation between large and small scale structures, while random sequences do not.

  2. How CMB and large-scale structure constrain chameleon interacting dark energy

    International Nuclear Information System (INIS)

    Boriero, Daniel; Das, Subinoy; Wong, Yvonne Y.Y.

    2015-01-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H_0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H_0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  3. How CMB and large-scale structure constrain chameleon interacting dark energy

    Energy Technology Data Exchange (ETDEWEB)

    Boriero, Daniel [Fakultät für Physik, Universität Bielefeld, Universitätstr. 25, Bielefeld (Germany); Das, Subinoy [Indian Institute of Astrophisics, Bangalore, 560034 (India); Wong, Yvonne Y.Y., E-mail: boriero@physik.uni-bielefeld.de, E-mail: subinoy@iiap.res.in, E-mail: yvonne.y.wong@unsw.edu.au [School of Physics, The University of New South Wales, Sydney NSW 2052 (Australia)

    2015-07-01

    We explore a chameleon type of interacting dark matter-dark energy scenario in which a scalar field adiabatically traces the minimum of an effective potential sourced by the dark matter density. We discuss extensively the effect of this coupling on cosmological observables, especially the parameter degeneracies expected to arise between the model parameters and other cosmological parameters, and then test the model against observations of the cosmic microwave background (CMB) anisotropies and other cosmological probes. We find that the chameleon parameters α and β, which determine respectively the slope of the scalar field potential and the dark matter-dark energy coupling strength, can be constrained to α < 0.17 and β < 0.19 using CMB data and measurements of baryon acoustic oscillations. The latter parameter in particular is constrained only by the late Integrated Sachs-Wolfe effect. Adding measurements of the local Hubble expansion rate H_0 tightens the bound on α by a factor of two, although this apparent improvement is arguably an artefact of the tension between the local measurement and the H_0 value inferred from Planck data in the minimal ΛCDM model. The same argument also precludes chameleon models from mimicking a dark radiation component, despite a passing similarity between the two scenarios in that they both delay the epoch of matter-radiation equality. Based on the derived parameter constraints, we discuss possible signatures of the model for ongoing and future large-scale structure surveys.

  4. Impact of small-scale structures on estuarine circulation

    Science.gov (United States)

    Liu, Zhuo; Zhang, Yinglong J.; Wang, Harry V.; Huang, Hai; Wang, Zhengui; Ye, Fei; Sisson, Mac

    2018-05-01

    We present a novel and challenging application of a 3D estuary-shelf model to the study of the collective impact of many small-scale structures (bridge pilings of 1 m × 2 m in size) on larger-scale circulation in a tributary (James River) of Chesapeake Bay. We first demonstrate that the model is capable of effectively transitioning grid resolution from 400 m down to 1 m near the pilings without introducing undue numerical artifact. We then show that despite their small sizes and collectively small area as compared to the total channel cross-sectional area, the pilings exert a noticeable impact on the large-scale circulation, and also create a rich structure of vortices and wakes around the pilings. As a result, the water quality and local sedimentation patterns near the bridge piling area are likely to be affected as well. However, when evaluating over the entire waterbody of the project area, the near field effects are weighed with the areal percentage which is small compared to that for the larger unaffected area, and therefore the impact on the lower James River as a whole becomes relatively insignificant. The study highlights the importance of the use of high resolution in assessing the near-field impact of structures.

  5. The effects of incomplete protein interaction data on structural and evolutionary inferences

    DEFF Research Database (Denmark)

    de Silva, E; Thorne, T; Ingram, P

    2006-01-01

    of the inherent noise in protein interaction data. The effects of the incomplete nature of network data become very noticeable, especially for so-called network motifs. We also consider the effect of incomplete network data on functional and evolutionary inferences. Conclusion Crucially, when only small, partial...

  6. Entropic Inference

    OpenAIRE

    Caticha, Ariel

    2010-01-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt...
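
    A minimal worked example of the ME update described above, using Jaynes' die: start from a uniform prior over the six faces, impose a single expectation constraint, and minimize relative entropy. The exponential-family form of the solution follows from the constraint; the specific target mean below is an arbitrary assumption.

```python
# Minimal sketch of an ME-style update: start from a uniform prior over die
# faces, impose a single expectation constraint (mean = 4.5), and minimize
# relative entropy. The solution is exponential in the constraint function,
# with the multiplier fixed by root finding (Jaynes' classic example).
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)
prior = np.full(6, 1 / 6)
target_mean = 4.5

def mean_given_lambda(lam):
    w = prior * np.exp(lam * faces)
    p = w / w.sum()
    return p @ faces - target_mean

lam = brentq(mean_given_lambda, -5, 5)        # solve <x> = 4.5
posterior = prior * np.exp(lam * faces)
posterior /= posterior.sum()
print(np.round(posterior, 3))                 # tilted toward high faces
```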

  7. Macroscopic High-Temperature Structural Analysis Model of Small-Scale PCHE Prototype (II)

    International Nuclear Information System (INIS)

    Song, Kee Nam; Lee, Heong Yeon; Hong, Sung Deok; Park, Hong Yoon

    2011-01-01

    The IHX (intermediate heat exchanger) of a VHTR (very high-temperature reactor) is a core component that transfers the high heat generated by the VHTR at 950 °C to a hydrogen production plant. Korea Atomic Energy Research Institute manufactured a small-scale prototype of a PCHE (printed circuit heat exchanger) that was being considered as a candidate for the IHX. In this study, as a part of high-temperature structural integrity evaluation of the small-scale PCHE prototype, we carried out high-temperature structural analysis modeling and macroscopic thermal and elastic structural analysis for the small-scale PCHE prototype under small-scale gas-loop test conditions. The modeling and analysis were performed as a precedent study prior to the performance test in the small-scale gas loop. The results obtained in this study will be compared with the test results for the small-scale PCHE. Moreover, these results will be used in the design of a medium-scale PCHE prototype.

  8. Functional nanometer-scale structures

    Science.gov (United States)

    Chan, Tsz On Mario

    Nanometer-scale structures have properties that are fundamentally different from their bulk counterparts. Much research effort has been devoted in the past decades to explore new fabrication techniques, model the physical properties of these structures, and construct functional devices. The ability to manipulate and control the structure of matter at the nanoscale has made many new classes of materials available for the study of fundamental physical processes and potential applications. The interplay between fabrication techniques and physical understanding of the nanostructures and processes has revolutionized the physical and material sciences, providing far superior properties in materials for novel applications that benefit society. This thesis consists of two major aspects of my graduate research in nano-scale materials. In the first part (Chapters 3–6), a comprehensive study on the nanostructures based on electrospinning and thermal treatment is presented. Electrospinning is a well-established method for producing high-aspect-ratio fibrous structures, with fiber diameter ranging from 1 nm to 1 μm. A polymeric solution is typically used as a precursor in electrospinning. In our study, the functionality of the nanostructure relies on both the nanostructure and material constituents. Precursors containing metallic ions were added to the polymeric precursor following a sol-gel process to prepare the solution suitable for electrospinning. A typical electrospinning process produces as-spun fibers containing both polymer and metallic salt precursors. Subsequent thermal treatments of the as-spun fibers were carried out in various conditions to produce desired structures. In most cases, polymer in the solution and the as-spun fibers acted as a backbone for the structure formation during the subsequent heat treatment, and were thermally removed in the final stage. Polymers were also designed to react with the metallic ion precursors during heat treatment in some

  9. Bayesian pedigree inference with small numbers of single nucleotide polymorphisms via a factor-graph representation.

    Science.gov (United States)

    Anderson, Eric C; Ng, Thomas C

    2016-02-01

    We develop a computational framework for addressing pedigree inference problems using small numbers (80-400) of single nucleotide polymorphisms (SNPs). Our approach relaxes the assumptions, which are commonly made, that sampling is complete with respect to the pedigree and that there is no genotyping error. It relies on representing the inferred pedigree as a factor graph and invoking the Sum-Product algorithm to compute and store quantities that allow the joint probability of the data to be rapidly computed under a large class of rearrangements of the pedigree structure. This allows efficient MCMC sampling over the space of pedigrees, and, hence, Bayesian inference of pedigree structure. In this paper we restrict ourselves to inference of pedigrees without loops using SNPs assumed to be unlinked. We present the methodology in general for multigenerational inference, and we illustrate the method by applying it to the inference of full sibling groups in a large sample (n=1157) of Chinook salmon typed at 95 SNPs. The results show that our method provides a better point estimate and estimate of uncertainty than the currently best-available maximum-likelihood sibling reconstruction method. Extensions of this work to more complex scenarios are briefly discussed. Published by Elsevier Inc.
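
    To show the Sum-Product mechanics in the simplest loop-free setting, the hedged sketch below passes messages along a chain of three binary variables with pairwise and unary factors and reads off exact marginals. A real pedigree factor graph would use genotype-valued variables, transmission factors, and genotyping-error factors; the factors here are toy assumptions.

```python
# Minimal sketch of the Sum-Product idea on a loop-free graph: a chain of three
# binary variables with pairwise factors and unary evidence factors. Messages
# are passed forward and backward, and marginals come from their products. Toy
# factors only; a pedigree factor graph has genotype-valued nodes instead.
import numpy as np

pair = np.array([[0.9, 0.1],          # pairwise factor psi(x_t, x_{t+1})
                 [0.1, 0.9]])
unary = [np.array([0.6, 0.4]),        # evidence factors phi_t(x_t)
         np.array([0.3, 0.7]),
         np.array([0.5, 0.5])]
T = len(unary)

# Forward messages alpha_t(x_t) and backward messages beta_t(x_t)
alpha = [unary[0]]
for t in range(1, T):
    alpha.append(unary[t] * (pair.T @ alpha[t - 1]))
beta = [np.ones(2) for _ in range(T)]
for t in range(T - 2, -1, -1):
    beta[t] = pair @ (unary[t + 1] * beta[t + 1])

for t in range(T):
    marginal = alpha[t] * beta[t]
    print(t, np.round(marginal / marginal.sum(), 3))
```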

  10. Structural Plasticity Denoises Responses and Improves Learning Speed

    Directory of Open Access Journals (Sweden)

    Robin Spiess

    2016-09-01

    Full Text Available Despite an abundance of computational models for learning of synaptic weights, there has been relatively little research on structural plasticity, i.e. the creation and elimination of synapses. In particular, it is not clear how structural plasticity works in concert with spike-timing-dependent plasticity (STDP) and what advantages their combination offers. Here we present a fairly large-scale functional model that uses leaky integrate-and-fire neurons, STDP, homeostasis, recurrent connections, and structural plasticity to learn the input encoding, the relation between inputs, and to infer missing inputs. Using this model, we compare the error and the amount of noise in the network's responses with and without structural plasticity and the influence of structural plasticity on the learning speed of the network. Using structural plasticity during learning shows good results for learning the representation of input values, i.e. structural plasticity strongly reduces the noise of the response by preventing spikes with a high error. For inferring missing inputs we see similar results, with responses having less noise if the network was trained using structural plasticity. Additionally, using structural plasticity with pruning significantly decreased the time to learn weights suitable for inference. Presumably, this is due to the clearer signal containing fewer spikes that misrepresent the desired value. Therefore, this work shows that structural plasticity is not only able to improve upon the performance using STDP without structural plasticity but also speeds up learning. Additionally, it addresses the practical problem of limited resources for connectivity that is not only apparent in the mammalian neocortex but also in computer hardware or neuromorphic (brain-inspired) hardware by efficiently pruning synapses without losing performance.

  11. Processing of Scalar Inferences by Mandarin Learners of English: An Online Measure.

    Directory of Open Access Journals (Sweden)

    Yowyu Lin

    Full Text Available Scalar inferences represent the condition when a speaker uses a weaker expression such as some in a pragmatic scale like ⟨some, all⟩, and s/he has the intention to reject the stronger use of the other word like all in the utterance. Considerable disagreement has arisen concerning how interlocutors derive the inferences. The study presented here tries to address this issue by examining online scalar inferences among Mandarin learners of English. To date, Default Inference and Relevance Theory have made different predictions regarding how people process scalar inferences. Findings from recently emerging first language studies did not fully resolve the debate but led to even more heated debates. The current three online psycholinguistic experiments reported here tried to address the processing of scalar inferences from a second-language perspective. Results showed that Mandarin learners of English showed faster reaction times and a higher acceptance rate when interpreting some as some but not all and this was true even when subjects were under time pressure, which was manifested in Experiment 2. Overall, the results of the experiments supported Default Theory. In addition, Experiment 3 also found that working memory capacity plays a critical role during scalar inference processing. High span readers were faster in accepting the some but not all interpretation than low span readers. However, compared with low span readers, high span readers were more likely to accept the some and possibly all condition, possibly due to their working memory capacity to generate scenarios to fit the interpretation.

  12. Evaluating Hierarchical Structure in Music Annotations.

    Science.gov (United States)

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M; Bello, Juan Pablo

    2017-01-01

    Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for "flat" descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  13. Evaluating Hierarchical Structure in Music Annotations

    Directory of Open Access Journals (Sweden)

    Brian McFee

    2017-08-01

    Full Text Available Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement.

  14. Inferring the gene network underlying the branching of tomato inflorescence.

    Directory of Open Access Journals (Sweden)

    Laura Astola

    Full Text Available The architecture of tomato inflorescence strongly affects flower production and subsequent crop yield. To understand the genetic activities involved, insight into the underlying network of genes that initiate and control the sympodial growth in the tomato is essential. In this paper, we show how the structure of this network can be derived from available data of the expressions of the involved genes. Our approach starts from employing biological expert knowledge to select the most probable gene candidates behind branching behavior. To find how these genes interact, we develop a stepwise procedure for computational inference of the network structure. Our data consists of expression levels from primary shoot meristems, measured at different developmental stages on three different genotypes of tomato. With the network inferred by our algorithm, we can explain the dynamics corresponding to all three genotypes simultaneously, despite their apparent dissimilarities. We also correctly predict the chronological order of expression peaks for the main hubs in the network. Based on the inferred network, using optimal experimental design criteria, we are able to suggest an informative set of experiments for further investigation of the mechanisms underlying branching behavior.

  15. Learning Convex Inference of Marginals

    OpenAIRE

    Domke, Justin

    2012-01-01

    Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach suffers difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined to be the minimization of a convex function, inspired by free energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main ...

  16. Orientation Encoding and Viewpoint Invariance in Face Recognition: Inferring Neural Properties from Large-Scale Signals.

    Science.gov (United States)

    Ramírez, Fernando M

    2018-05-01

    Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons-including neurons bimodally tuned to mirror-symmetric face-views-followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face-identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance is stressed of explicit models relating neural properties to large-scale signals.

  17. Population genetic structure of the cotton bollworm Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae) in India as inferred from EPIC-PCR DNA markers.

    Science.gov (United States)

    Behere, Gajanan Tryambak; Tay, Wee Tek; Russell, Derek Alan; Kranthi, Keshav Raj; Batterham, Philip

    2013-01-01

    Helicoverpa armigera is an important pest of cotton and other agricultural crops in the Old World. Its wide host range, high mobility and fecundity, and the ability to adapt and develop resistance against all common groups of insecticides used for its management have exacerbated its pest status. An understanding of the population genetic structure in H. armigera under Indian agricultural conditions will help ascertain gene flow patterns across different agricultural zones. This study inferred the population genetic structure of Indian H. armigera using five Exon-Primed Intron-Crossing (EPIC)-PCR markers. Nested alternative EPIC markers detected moderate null allele frequencies (4.3% to 9.4%) in loci used to infer population genetic structure but the apparently genome-wide heterozygote deficit suggests in-breeding or a Wahlund effect rather than a null allele effect. Population genetic analysis of the 26 populations suggested significant genetic differentiation within India but especially in cotton-feeding populations in the 2006-07 cropping season. In contrast, overall pair-wise F_ST estimates from populations feeding on food crops indicated no significant population substructure irrespective of cropping seasons. A Bayesian cluster analysis was used to assign the genetic make-up of individuals to likely membership of population clusters. Some evidence was found for four major clusters with individuals in two populations from cotton in one year (from two populations in northern India) showing especially high homogeneity. Taken as a whole, this study found evidence of population substructure at host crop, temporal and spatial levels in Indian H. armigera, without, however, a clear biological rationale for these structures being evident.

  18. Implicit structural inversion of gravity data using linear programming, a validation study

    NARCIS (Netherlands)

    Zon, A.T. van; Roy Chowdhury, K.

    2010-01-01

    In this study, a regional scale gravity data set has been inverted to infer the structure (topography) of the top of the basement underlying sub-horizontal strata. We apply our method to this real data set for further proof of concept, validation and benchmarking against results from an earlier

  19. Challenges for Large Scale Structure Theory

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    I will describe some of the outstanding questions in Cosmology where answers could be provided by observations of the Large Scale Structure of the Universe at late times. I will discuss some of the theoretical challenges which will have to be overcome to extract this information from the observations. I will describe some of the theoretical tools that might be useful to achieve this goal.

  20. The factor structure of the Social Interaction Anxiety Scale and the Social Phobia Scale.

    Science.gov (United States)

    Heidenreich, Thomas; Schermelleh-Engel, Karin; Schramm, Elisabeth; Hofmann, Stefan G; Stangier, Ulrich

    2011-05-01

    The Social Interaction Anxiety Scale (SIAS) and the Social Phobia Scale (SPS) are two compendium measures that have become some of the most popular self-report scales of social anxiety. Despite their popularity, it remains unclear whether it is necessary to maintain two separate scales of social anxiety. The primary objective of the present study was to examine the factor analytic structure of both measures to determine the factorial validity of each scale. For this purpose, we administered both scales to 577 patients at the beginning of outpatient treatment. Analyzing both scales simultaneously, a CFA with two correlated factors showed a better fit to the data than a single factor model. An additional EFA with an oblique rotation on all 40 items using the WLSMV estimator further supported the two factor solution. These results suggest that the SIAS and SPS measure similar, but not identical facets of social anxiety. Thus, our findings provide support to retain the SIAS and SPS as two separate scales. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
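
    A bare-bones illustration of likelihood-free inference with a one-number-per-parameter summary: the unknown is the mean of a Gaussian, the compressed statistic is the sample mean, and plain rejection sampling stands in for the density-estimation (DELFI) machinery of the paper. The prior, tolerance, and simulation budget below are arbitrary assumptions.

```python
# Minimal sketch of likelihood-free inference with a one-number-per-parameter
# summary. Here the unknown is the mean of a Gaussian, the compressed summary
# is the sample mean, and plain rejection sampling stands in for the density
# estimation (DELFI) machinery described above.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.normal(loc=1.3, scale=1.0, size=200)     # "data" with true mean 1.3
s_obs = observed.mean()                                 # compressed summary

def simulate(theta):
    return rng.normal(loc=theta, scale=1.0, size=200).mean()

n_sims, eps = 20000, 0.05
theta_prior = rng.uniform(-5, 5, size=n_sims)           # flat prior draws
accepted = [th for th in theta_prior if abs(simulate(th) - s_obs) < eps]

print(len(accepted), np.mean(accepted), np.std(accepted))  # approximate posterior
```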

  2. Recent Trends in Local-Scale Marine Biodiversity Reflect Community Structure and Human Impacts.

    Science.gov (United States)

    Elahi, Robin; O'Connor, Mary I; Byrnes, Jarrett E K; Dunic, Jillian; Eriksson, Britas Klemens; Hensel, Marc J S; Kearns, Patrick J

    2015-07-20

    The modern biodiversity crisis reflects global extinctions and local introductions. Human activities have dramatically altered rates and scales of processes that regulate biodiversity at local scales. Reconciling the threat of global biodiversity loss with recent evidence of stability at fine spatial scales is a major challenge and requires a nuanced approach to biodiversity change that integrates ecological understanding. With a new dataset of 471 diversity time series spanning from 1962 to 2015 from marine coastal ecosystems, we tested (1) whether biodiversity changed at local scales in recent decades, and (2) whether we can ignore ecological context (e.g., proximate human impacts, trophic level, spatial scale) and still make informative inferences regarding local change. We detected a predominant signal of increasing species richness in coastal systems since 1962 in our dataset, though net species loss was associated with localized effects of anthropogenic impacts. Our geographically extensive dataset is unlikely to be a random sample of marine coastal habitats; impacted sites (3% of our time series) were underrepresented relative to their global presence. These local-scale patterns do not contradict the prospect of accelerating global extinctions but are consistent with local species loss in areas with direct human impacts and increases in diversity due to invasions and range expansions in lower impact areas. Attempts to detect and understand local biodiversity trends are incomplete without information on local human activities and ecological context. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Neutrinos and large-scale structure

    International Nuclear Information System (INIS)

    Eisenstein, Daniel J.

    2015-01-01

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  4. Neutrinos and large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, Daniel J. [Daniel J. Eisenstein, Harvard-Smithsonian Center for Astrophysics, 60 Garden St., MS #20, Cambridge, MA 02138 (United States)

    2015-07-15

    I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we now have securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  5. Mirror dark matter and large scale structure

    International Nuclear Information System (INIS)

    Ignatiev, A.Yu.; Volkas, R.R.

    2003-01-01

    Mirror matter is a dark matter candidate. In this paper, we reexamine the linear regime of density perturbation growth in a universe containing mirror dark matter. Taking adiabatic scale-invariant perturbations as the input, we confirm that the resulting processed power spectrum is richer than for the more familiar cases of cold, warm and hot dark matter. The new features include a maximum at a certain scale λ_max, collisional damping below a smaller characteristic scale λ_S′, with oscillatory perturbations between the two. These scales are functions of the fundamental parameters of the theory. In particular, they decrease for decreasing x, the ratio of the mirror plasma temperature to that of the ordinary. For x∼0.2, the scale λ_max becomes galactic. Mirror dark matter therefore leads to bottom-up large scale structure formation, similar to conventional cold dark matter, for x ≲ 0.2. Indeed, the smaller the value of x, the closer mirror dark matter resembles standard cold dark matter during the linear regime. The differences pertain to scales smaller than λ_S′ in the linear regime, and generally in the nonlinear regime because mirror dark matter is chemically complex and to some extent dissipative. Lyman-α forest data and the early reionization epoch established by WMAP may hold the key to distinguishing mirror dark matter from WIMP-style cold dark matter.

  6. Puzzles of large scale structure and gravitation

    International Nuclear Information System (INIS)

    Sidharth, B.G.

    2006-01-01

    We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters, as also a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'Scaled' Quantum Mechanical-like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small world, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4)).

  7. Dark matter self-interactions and small scale structure

    Science.gov (United States)

    Tulin, Sean; Yu, Hai-Bo

    2018-02-01

    We review theories of dark matter (DM) beyond the collisionless paradigm, known as self-interacting dark matter (SIDM), and their observable implications for astrophysical structure in the Universe. Self-interactions are motivated, in part, due to the potential to explain long-standing (and more recent) small scale structure observations that are in tension with collisionless cold DM (CDM) predictions. Simple particle physics models for SIDM can provide a universal explanation for these observations across a wide range of mass scales spanning dwarf galaxies, low and high surface brightness spiral galaxies, and clusters of galaxies. At the same time, SIDM leaves intact the success of ΛCDM cosmology on large scales. This report covers the following topics: (1) small scale structure issues, including the core-cusp problem, the diversity problem for rotation curves, the missing satellites problem, and the too-big-to-fail problem, as well as recent progress in hydrodynamical simulations of galaxy formation; (2) N-body simulations for SIDM, including implications for density profiles, halo shapes, substructure, and the interplay between baryons and self-interactions; (3) semi-analytic Jeans-based methods that provide a complementary approach for connecting particle models with observations; (4) merging systems, such as cluster mergers (e.g., the Bullet Cluster) and minor infalls, along with recent simulation results for mergers; (5) particle physics models, including light mediator models and composite DM models; and (6) complementary probes for SIDM, including indirect and direct detection experiments, particle collider searches, and cosmological observations. We provide a summary and critical look for all current constraints on DM self-interactions and an outline for future directions.

  8. Probabilistic inductive inference: a survey

    OpenAIRE

    Ambainis, Andris

    2001-01-01

    Inductive inference is a recursion-theoretic theory of learning, first developed by E. M. Gold (1967). This paper surveys developments in probabilistic inductive inference. We mainly focus on finite inference of recursive functions, since this simple paradigm has produced the most interesting (and most complex) results.

  9. Gene regulatory network inference by point-based Gaussian approximation filters incorporating the prior information.

    Science.gov (United States)

    Jia, Bin; Wang, Xiaodong

    2013-12-17

    The extended Kalman filter (EKF) has been applied to inferring gene regulatory networks. However, it is well known that the EKF becomes less accurate when the system exhibits high nonlinearity. In addition, certain prior information about the gene regulatory network exists in practice, and no systematic approach has been developed to incorporate such prior information into the Kalman-type filter for inferring the structure of the gene regulatory network. In this paper, an inference framework based on point-based Gaussian approximation filters that can exploit the prior information is developed to solve the gene regulatory network inference problem. Different point-based Gaussian approximation filters, including the unscented Kalman filter (UKF), the third-degree cubature Kalman filter (CKF3), and the fifth-degree cubature Kalman filter (CKF5) are employed. Several types of network prior information, including the existing network structure information, sparsity assumption, and the range constraint of parameters, are considered, and the corresponding filters incorporating the prior information are developed. Experiments on a synthetic network of eight genes and the yeast protein synthesis network of five genes are carried out to demonstrate the performance of the proposed framework. The results show that the proposed methods provide more accurate inference results than existing methods, such as the EKF and the traditional UKF.
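
    The core step shared by the UKF and cubature-type filters named in this record is the propagation of a Gaussian state through a nonlinear model via deterministically chosen sigma points. Below is a minimal sketch of that unscented transform; the Hill-type regulation step and all parameter values are illustrative assumptions, not the paper's model.

      import numpy as np

      def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
          """Propagate a Gaussian (mean, cov) through a nonlinear map f
          using sigma points, as in UKF-style point-based Gaussian filters."""
          n = mean.size
          lam = alpha**2 * (n + kappa) - n
          sqrt_cov = np.linalg.cholesky((n + lam) * cov)      # scaled matrix square root
          sigma = np.vstack([mean, mean + sqrt_cov.T, mean - sqrt_cov.T])
          wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))    # mean weights
          wc = wm.copy()                                      # covariance weights
          wm[0] = lam / (n + lam)
          wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
          y = np.array([f(s) for s in sigma])                 # propagate each sigma point
          y_mean = wm @ y
          y_cov = (wc[:, None] * (y - y_mean)).T @ (y - y_mean)
          return y_mean, y_cov

      # Hypothetical Hill-type regulation step acting on two expression levels
      step = lambda x: x + 0.1 * (1.0 / (1.0 + x**2) - 0.3 * x)
      m, P = unscented_transform(np.array([0.5, 1.0]), 0.05 * np.eye(2), step)

    In a full filter this transform is applied at every time step to the predicted state and to the predicted measurement, and the prior information discussed above (known edges, sparsity, parameter ranges) enters as constraints on the state estimate.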

  10. LAIT: a local ancestry inference toolkit.

    Science.gov (United States)

    Hui, Daniel; Fang, Zhou; Lin, Jerome; Duan, Qing; Li, Yun; Hu, Ming; Chen, Wei

    2017-09-06

    Inferring local ancestry in individuals of mixed ancestry has many applications, most notably in identifying disease-susceptible loci that vary among different ethnic groups. Many software packages are available for inferring local ancestry in admixed individuals. However, most of these existing software packages require specific formatted input files and generate output files in various types, yielding practical inconvenience. We developed a tool set, Local Ancestry Inference Toolkit (LAIT), which can convert standardized files into software-specific input file formats as well as standardize and summarize inference results for four popular local ancestry inference software: HAPMIX, LAMP, LAMP-LD, and ELAI. We tested LAIT using both simulated and real data sets and demonstrated that LAIT provides convenience to run multiple local ancestry inference software. In addition, we evaluated the performance of local ancestry software among different supported software packages, mainly focusing on inference accuracy and computational resources used. We provided a toolkit to facilitate the use of local ancestry inference software, especially for users with limited bioinformatics background.

  11. PERSISTENT ASYMMETRIC STRUCTURE OF SAGITTARIUS A* ON EVENT HORIZON SCALES

    International Nuclear Information System (INIS)

    Fish, Vincent L.; Doeleman, Sheperd S.; Lu, Ru-Sen; Akiyama, Kazunori; Beaudoin, Christopher; Cappallo, Roger; Johnson, Michael D.; Blackburn, Lindy; Blundell, Ray; Chael, Andrew A.; Broderick, Avery E.; Psaltis, Dimitrios; Chan, Chi-Kwan; Alef, Walter; Bertarini, Alessandra; Algaba, Juan Carlos; Asada, Keiichi; Bower, Geoffrey C.; Brinkerink, Christiaan; Chamberlin, Richard

    2016-01-01

    The Galactic Center black hole Sagittarius A* (Sgr A*) is a prime observing target for the Event Horizon Telescope (EHT), which can resolve the 1.3 mm emission from this source on angular scales comparable to that of the general relativistic shadow. Previous EHT observations have used visibility amplitudes to infer the morphology of the millimeter-wavelength emission. Potentially much richer source information is contained in the phases. We report on 1.3 mm phase information on Sgr A* obtained with the EHT on a total of 13 observing nights over four years. Closure phases, which are the sum of visibility phases along a closed triangle of interferometer baselines, are used because they are robust against phase corruptions introduced by instrumentation and the rapidly variable atmosphere. The median closure phase on a triangle including telescopes in California, Hawaii, and Arizona is nonzero. This result conclusively demonstrates that the millimeter emission is asymmetric on scales of a few Schwarzschild radii and can be used to break 180° rotational ambiguities inherent from amplitude data alone. The stability of the sign of the closure phase over most observing nights indicates persistent asymmetry in the image of Sgr A* that is not obscured by refraction due to interstellar electrons along the line of sight
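
    For reference, the closure phase on a baseline triangle (i, j, k) is the sum of the three visibility phases; because a station-based phase error θ enters each baseline as a difference, it cancels in the sum (up to thermal noise):

      \Phi_{ijk} = \phi_{ij} + \phi_{jk} + \phi_{ki},
      \qquad
      \phi^{\mathrm{meas}}_{ij} = \phi^{\mathrm{true}}_{ij} + \theta_i - \theta_j
      \;\Longrightarrow\;
      \Phi^{\mathrm{meas}}_{ijk} = \Phi^{\mathrm{true}}_{ijk}.

    This is the standard interferometric identity behind the robustness claim in the abstract; a point-symmetric source would give closure phases of 0° or 180°, so a persistently nonzero median value implies asymmetric emission.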

  12. PERSISTENT ASYMMETRIC STRUCTURE OF SAGITTARIUS A* ON EVENT HORIZON SCALES

    Energy Technology Data Exchange (ETDEWEB)

    Fish, Vincent L.; Doeleman, Sheperd S.; Lu, Ru-Sen; Akiyama, Kazunori; Beaudoin, Christopher; Cappallo, Roger [Massachusetts Institute of Technology, Haystack Observatory, Route 40, Westford, MA 01886 (United States); Johnson, Michael D.; Blackburn, Lindy; Blundell, Ray; Chael, Andrew A. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Broderick, Avery E. [Perimeter Institute for Theoretical Physics, 31 Caroline Street North, Waterloo, ON N2L 2Y5 (Canada); Psaltis, Dimitrios; Chan, Chi-Kwan [Steward Observatory and Department of Astronomy, University of Arizona, 933 North Cherry Ave., Tucson, AZ 85721-0065 (United States); Alef, Walter; Bertarini, Alessandra [Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn (Germany); Algaba, Juan Carlos [Korea Astronomy and Space Science Institute, 776 Daedeokdae-ro, Yuseong-gu, Daejeon 305-348 (Korea, Republic of); Asada, Keiichi [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Bower, Geoffrey C. [Academia Sinica Institute for Astronomy and Astrophysics, 645 N. A‘ohōkū Place, Hilo, HI 96720 (United States); Brinkerink, Christiaan [Department of Astrophysics/IMAPP, Radboud University Nijmegen, P.O. Box 9010, 6500 GL, Nijmegen (Netherlands); Chamberlin, Richard, E-mail: vfish@haystack.mit.edu [Caltech Submillimeter Observatory, 111 Nowelo Street, Hilo, HI 96720 (United States); and others

    2016-04-01

    The Galactic Center black hole Sagittarius A* (Sgr A*) is a prime observing target for the Event Horizon Telescope (EHT), which can resolve the 1.3 mm emission from this source on angular scales comparable to that of the general relativistic shadow. Previous EHT observations have used visibility amplitudes to infer the morphology of the millimeter-wavelength emission. Potentially much richer source information is contained in the phases. We report on 1.3 mm phase information on Sgr A* obtained with the EHT on a total of 13 observing nights over four years. Closure phases, which are the sum of visibility phases along a closed triangle of interferometer baselines, are used because they are robust against phase corruptions introduced by instrumentation and the rapidly variable atmosphere. The median closure phase on a triangle including telescopes in California, Hawaii, and Arizona is nonzero. This result conclusively demonstrates that the millimeter emission is asymmetric on scales of a few Schwarzschild radii and can be used to break 180° rotational ambiguities inherent from amplitude data alone. The stability of the sign of the closure phase over most observing nights indicates persistent asymmetry in the image of Sgr A* that is not obscured by refraction due to interstellar electrons along the line of sight.

  13. Bayesian statistical inference

    Directory of Open Access Journals (Sweden)

    Bruno De Finetti

    2017-04-01

    Full Text Available This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. Bayesian statistical Inference is one of the last fundamental philosophical papers in which we can find the essence of De Finetti's approach to statistical inference.

  14. Inferring Stop-Locations from WiFi.

    Directory of Open Access Journals (Sweden)

    David Kofoed Wind

    Full Text Available Human mobility patterns are inherently complex. In terms of understanding these patterns, the process of converting raw data into series of stop-locations and transitions is an important first step which greatly reduces the volume of data, thus simplifying the subsequent analyses. Previous research into the mobility of individuals has focused on inferring 'stop locations' (places of stationarity) from GPS or CDR data, or on detection of state (static/active). In this paper we bridge the gap between the two approaches: we introduce methods for detecting both mobility state and stop-locations. In addition, our methods are based exclusively on WiFi data. We study two months of WiFi data collected every two minutes by a smartphone, and infer stop-locations in the form of labelled time-intervals. For this purpose, we investigate two algorithms, both of which scale to large datasets: a greedy approach to select the most important routers and one which uses a density-based clustering algorithm to detect router fingerprints. We validate our results using participants' GPS data as well as ground truth data collected during a two month period.
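
    As a sketch of the second algorithm family described above (density-based clustering of router fingerprints), the code below clusters WiFi scans by the Jaccard distance between their visible-router sets; the scan data, thresholds and the use of scikit-learn's DBSCAN are illustrative assumptions rather than the authors' implementation.

      import numpy as np
      from sklearn.cluster import DBSCAN

      def jaccard_distance_matrix(scans):
          """Pairwise Jaccard distance between WiFi scans, each a set of router IDs."""
          n = len(scans)
          d = np.zeros((n, n))
          for i in range(n):
              for j in range(i + 1, n):
                  union = len(scans[i] | scans[j])
                  dist = 1.0 - len(scans[i] & scans[j]) / union if union else 1.0
                  d[i, j] = d[j, i] = dist
          return d

      # Hypothetical scans: the routers seen in successive two-minute windows
      scans = [{"r1", "r2"}, {"r1", "r2", "r3"}, {"r7"}, {"r7", "r8"}, {"r1"}]
      labels = DBSCAN(eps=0.5, min_samples=2, metric="precomputed").fit_predict(
          jaccard_distance_matrix(scans))
      # Scans sharing a non-negative label form one candidate stop-location fingerprint;
      # -1 marks windows treated as noise (e.g., recorded while moving).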

  15. Phylogenetic Inference of HIV Transmission Clusters

    Directory of Open Access Journals (Sweden)

    Vlad Novitsky

    2017-10-01

    Full Text Available Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, it could both inform public-health strategies and be clinically relevant (e.g., drug-resistance testing).

  16. Lagrangian space consistency relation for large scale structure

    International Nuclear Information System (INIS)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-01-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space

  17. Inflation, large scale structure and particle physics

    Indian Academy of Sciences (India)

    Keywords: hybrid inflation; Higgs scalar field; structure formation; curvaton. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which ...

  18. Is there a hierarchy of social inferences? The likelihood and speed of inferring intentionality, mind, and personality.

    Science.gov (United States)

    Malle, Bertram F; Holbrook, Jess

    2012-04-01

    People interpret behavior by making inferences about agents' intentionality, mind, and personality. Past research studied such inferences 1 at a time; in real life, people make these inferences simultaneously. The present studies therefore examined whether 4 major inferences (intentionality, desire, belief, and personality), elicited simultaneously in response to an observed behavior, might be ordered in a hierarchy of likelihood and speed. To achieve generalizability, the studies included a wide range of stimulus behaviors, presented them verbally and as dynamic videos, and assessed inferences both in a retrieval paradigm (measuring the likelihood and speed of accessing inferences immediately after they were made) and in an online processing paradigm (measuring the speed of forming inferences during behavior observation). Five studies provide evidence for a hierarchy of social inferences-from intentionality and desire to belief to personality-that is stable across verbal and visual presentations and that parallels the order found in developmental and primate research. (c) 2012 APA, all rights reserved.

  19. Signatures of non-universal large scales in conditional structure functions from various turbulent flows

    International Nuclear Information System (INIS)

    Blum, Daniel B; Voth, Greg A; Bewley, Gregory P; Bodenschatz, Eberhard; Gibert, Mathieu; Xu Haitao; Gylfason, Ármann; Mydlarski, Laurent; Yeung, P K

    2011-01-01

    We present a systematic comparison of conditional structure functions in nine turbulent flows. The flows studied include forced isotropic turbulence simulated on a periodic domain, passive grid wind tunnel turbulence in air and in pressurized SF6, active grid wind tunnel turbulence (in both synchronous and random driving modes), the flow between counter-rotating discs, oscillating grid turbulence and the flow in the Lagrangian exploration module (in both constant and random driving modes). We compare longitudinal Eulerian second-order structure functions conditioned on the instantaneous large-scale velocity in each flow to assess the ways in which the large scales affect the small scales in a variety of turbulent flows. Structure functions are shown to have larger values when the large-scale velocity significantly deviates from the mean in most flows, suggesting that dependence on the large scales is typical in many turbulent flows. The effects of the large-scale velocity on the structure functions can be quite strong, with the structure function varying by up to a factor of 2 when the large-scale velocity deviates from the mean by ±2 standard deviations. In several flows, the effects of the large-scale velocity are similar at all the length scales we measured, indicating that the large-scale effects are scale independent. In a few flows, the effects of the large-scale velocity are larger on the smallest length scales. (paper)
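
    As a rough sketch of the quantity compared across these flows, the code below estimates a second-order structure function of a one-dimensional velocity record conditioned on a proxy for the instantaneous large-scale velocity; the midpoint-average proxy and the quantile binning are illustrative assumptions, not the authors' exact estimator.

      import numpy as np

      def conditional_structure_function(u, r, n_bins=5):
          """Second-order structure function of a 1D velocity record u at lag r,
          conditioned on a crude proxy for the instantaneous large-scale velocity."""
          du2 = (u[r:] - u[:-r]) ** 2                 # squared velocity increments
          u_large = 0.5 * (u[r:] + u[:-r])            # proxy for the large-scale velocity
          edges = np.quantile(u_large, np.linspace(0.0, 1.0, n_bins + 1))
          bins = np.clip(np.digitize(u_large, edges) - 1, 0, n_bins - 1)
          return np.array([du2[bins == b].mean() for b in range(n_bins)])

      # Usage on a synthetic record; real data would come from hot-wire or DNS series
      rng = np.random.default_rng(0)
      u = 0.01 * np.cumsum(rng.standard_normal(10000))
      d2_conditional = conditional_structure_function(u, r=50)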

  20. Indirect inference with time series observed with error

    DEFF Research Database (Denmark)

    Rossi, Eduardo; Santucci de Magistris, Paolo

    We analyze the properties of the indirect inference estimator when the observed series are contaminated by measurement error. We show that the indirect inference estimates are asymptotically biased when the nuisance parameters of the measurement error distribution are neglected in the indirect estimation. We propose to solve this inconsistency by jointly estimating the nuisance and the structural parameters. Under standard assumptions, this estimator is consistent and asymptotically normal. A condition for the identification of ARMA plus noise is obtained. The proposed methodology is used to estimate the parameters of continuous-time stochastic volatility models with auxiliary specifications based on realized volatility measures. Monte Carlo simulations show the bias reduction of the indirect estimates obtained when the microstructure noise is explicitly modeled. Finally, an empirical...
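
    A compact sketch of the joint-estimation idea, using an AR(1) latent series observed with additive measurement error and simple moment-style auxiliary statistics; the model, the auxiliary statistics and the optimizer are illustrative assumptions, not the paper's specification.

      import numpy as np
      from scipy.optimize import minimize

      def simulate(phi, sigma_e, n, rng):
          """AR(1) latent series observed with additive measurement error (the nuisance)."""
          x = np.zeros(n)
          for t in range(1, n):
              x[t] = phi * x[t - 1] + rng.standard_normal()
          return x + sigma_e * rng.standard_normal(n)

      def auxiliary_stats(y):
          """Auxiliary statistics: variance and the first two autocovariances."""
          y = y - y.mean()
          return np.array([np.mean(y * y), np.mean(y[1:] * y[:-1]), np.mean(y[2:] * y[:-2])])

      def indirect_inference(y_obs, n_sim=5, seed=0):
          """Jointly estimate the structural parameter phi and the nuisance sigma_e
          by matching auxiliary statistics of the observed and simulated series."""
          target = auxiliary_stats(y_obs)
          def loss(theta):
              phi, sigma_e = theta
              rng = np.random.default_rng(seed)       # common random numbers across calls
              sims = [auxiliary_stats(simulate(phi, sigma_e, len(y_obs), rng))
                      for _ in range(n_sim)]
              return np.sum((np.mean(sims, axis=0) - target) ** 2)
          return minimize(loss, x0=[0.5, 0.5], method="Nelder-Mead").x

      rng = np.random.default_rng(1)
      phi_hat, sigma_hat = indirect_inference(simulate(0.8, 0.3, 2000, rng))

    Fixing sigma_e at zero reproduces the bias described above, because the auxiliary statistics of the noisy series are then matched by a distorted phi.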

  1. Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer

    Science.gov (United States)

    Schnieders, Jana; Garbe, Christoph

    2014-05-01

    The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves which in turn generate near surface turbulence. Near surface turbulence is a main driver of surface divergence which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about size and intensity of a turbulent process. We use infrared imagery to visualize near surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these types of measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger-scale cold streaks that are consistent with the footprints of Langmuir Circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic features of scales. 2. The structure of these patterns changes with increasing wind stress and surface conditions. In [2] turbulent cell sizes have been shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. Results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution a new model to estimate the heat flux is applied which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed. In this contribution, we will quantify the effect of small scale

  2. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.
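
    For reference, a minimal sketch of the conditional-intensity route for an exponential-kernel Hawkes process; the log-likelihood below could sit inside a Metropolis-Hastings sampler over (mu, alpha, beta). The kernel choice and parameter values are assumptions for illustration.

      import numpy as np

      def hawkes_loglik(times, T, mu, alpha, beta):
          """Log-likelihood of a Hawkes process on [0, T] with conditional intensity
          lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))."""
          times = np.asarray(times, dtype=float)
          loglik, decay, prev = 0.0, 0.0, 0.0
          for t in times:
              decay = decay * np.exp(-beta * (t - prev))   # recursive excitation sum
              loglik += np.log(mu + alpha * decay)
              decay += 1.0
              prev = t
          # Compensator: integral of lambda(t) over [0, T]
          compensator = mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
          return loglik - compensator

      # Example evaluation at hypothetical parameters and event times
      ll = hawkes_loglik([0.5, 1.2, 1.3, 4.0], T=5.0, mu=0.5, alpha=0.8, beta=2.0)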

  3. Bayesian inference for Hawkes processes

    DEFF Research Database (Denmark)

    Rasmussen, Jakob Gulddahl

    2013-01-01

    The Hawkes process is a practically and theoretically important class of point processes, but parameter-estimation for such a process can pose various problems. In this paper we explore and compare two approaches to Bayesian inference. The first approach is based on the so-called conditional intensity function, while the second approach is based on an underlying clustering and branching structure in the Hawkes process. For practical use, MCMC (Markov chain Monte Carlo) methods are employed. The two approaches are compared numerically using three examples of the Hawkes process.

  4. Dynamic Arrest in Charged Colloidal Systems Exhibiting Large-Scale Structural Heterogeneities

    International Nuclear Information System (INIS)

    Haro-Perez, C.; Callejas-Fernandez, J.; Hidalgo-Alvarez, R.; Rojas-Ochoa, L. F.; Castaneda-Priego, R.; Quesada-Perez, M.; Trappe, V.

    2009-01-01

    Suspensions of charged liposomes are found to exhibit typical features of strongly repulsive fluid systems at short length scales, while exhibiting structural heterogeneities at larger length scales that are characteristic of attractive systems. We model the static structure factor of these systems using effective pair interaction potentials composed of a long-range attraction and a shorter range repulsion. Our modeling of the static structure yields conditions for dynamically arrested states at larger volume fractions, which we find to agree with the experimentally observed dynamics

  5. Human brain lesion-deficit inference remapped.

    Science.gov (United States)

    Mah, Yee-Haur; Husain, Masud; Rees, Geraint; Nachev, Parashkev

    2014-09-01

    Our knowledge of the anatomical organization of the human brain in health and disease draws heavily on the study of patients with focal brain lesions. Historically the first method of mapping brain function, it is still potentially the most powerful, establishing the necessity of any putative neural substrate for a given function or deficit. Great inferential power, however, carries a crucial vulnerability: without stronger alternatives any consistent error cannot be easily detected. A hitherto unexamined source of such error is the structure of the high-dimensional distribution of patterns of focal damage, especially in ischaemic injury-the commonest aetiology in lesion-deficit studies-where the anatomy is naturally shaped by the architecture of the vascular tree. This distribution is so complex that analysis of lesion data sets of conventional size cannot illuminate its structure, leaving us in the dark about the presence or absence of such error. To examine this crucial question we assembled the largest known set of focal brain lesions (n = 581), derived from unselected patients with acute ischaemic injury (mean age = 62.3 years, standard deviation = 17.8, male:female ratio = 0.547), visualized with diffusion-weighted magnetic resonance imaging, and processed with validated automated lesion segmentation routines. High-dimensional analysis of this data revealed a hidden bias within the multivariate patterns of damage that will consistently distort lesion-deficit maps, displacing inferred critical regions from their true locations, in a manner opaque to replication. Quantifying the size of this mislocalization demonstrates that past lesion-deficit relationships estimated with conventional inferential methodology are likely to be significantly displaced, by a magnitude dependent on the unknown underlying lesion-deficit relationship itself. Past studies therefore cannot be retrospectively corrected, except by new knowledge that would render them redundant

  6. Autonomous smart sensor network for full-scale structural health monitoring

    Science.gov (United States)

    Rice, Jennifer A.; Mechitov, Kirill A.; Spencer, B. F., Jr.; Agha, Gul A.

    2010-04-01

    The demands of aging infrastructure require effective methods for structural monitoring and maintenance. Wireless smart sensor networks offer the ability to enhance structural health monitoring (SHM) practices through the utilization of onboard computation to achieve distributed data management. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While smart sensor technology is not new, the number of full-scale SHM applications has been limited. This slow progress is due, in part, to the complex network management issues that arise when moving from a laboratory setting to a full-scale monitoring implementation. This paper presents flexible network management software that enables continuous and autonomous operation of wireless smart sensor networks for full-scale SHM applications. The software components combine sleep/wake cycling for enhanced power management with threshold detection for triggering network wide tasks, such as synchronized sensing or decentralized modal analysis, during periods of critical structural response.

  7. INFERENCE BUILDING BLOCKS

    Science.gov (United States)

    2018-02-15

    expressed a variety of inference techniques on discrete and continuous distributions: exact inference, importance sampling, Metropolis-Hastings (MH...without redoing any math or rewriting any code. And although our main goal is composable reuse, our performance is also good because we can use...control paths. • The Hakaru language can express mixtures of discrete and continuous distributions, but the current disintegration transformation

  8. Practical Bayesian Inference

    Science.gov (United States)

    Bailer-Jones, Coryn A. L.

    2017-04-01

    Preface; 1. Probability basics; 2. Estimation and uncertainty; 3. Statistical models and inference; 4. Linear models, least squares, and maximum likelihood; 5. Parameter estimation: single parameter; 6. Parameter estimation: multiple parameters; 7. Approximating distributions; 8. Monte Carlo methods for inference; 9. Parameter estimation: Markov chain Monte Carlo; 10. Frequentist hypothesis testing; 11. Model comparison; 12. Dealing with more complicated problems; References; Index.

  9. The Large-Scale Structure of Scientific Method

    Science.gov (United States)

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  10. Dynamical Mechanism of Scaling Behaviors in Multifractal Structure

    Science.gov (United States)

    Kim, Kyungsik; Jung, Jae Won; Kim, Soo Yong

    2010-03-01

    The pattern of stone distribution in the game of Go (Baduk, Weiqi, or Igo) can be treated in the mathematical and physical languages of multifractals. The concepts of fractals and multifractals have relevance to many fields of science and even arts. A significant and fascinating feature of this approach is that it provides a proper interpretation for the pattern of the two-colored (black and white) stones in terms of the numerical values of the generalized dimension and the scaling exponent. For our case, these statistical quantities can be estimated numerically from the black, white, and mixed stones, assuming the excluded edge effect that the cell form of the Go game has the self-similar structure. The result from the multifractal structure allows us to find a definite and reliable fractal dimension, and it precisely verifies that the fractal dimension becomes larger, as the cell of grids increases. We also find the strength of multifractal structures from the difference in the scaling exponents in the black, white, and mixed stones.

  11. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
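
    The rank-based step can be illustrated as follows: Kendall's tau is mapped to a latent Gaussian correlation, and a sparse precision matrix is then estimated from it; the use of scikit-learn's graphical lasso, the penalty value and the synthetic returns are assumptions for illustration, not the authors' estimator.

      import numpy as np
      from scipy.stats import kendalltau
      from sklearn.covariance import graphical_lasso

      def rank_based_correlation(returns):
          """Kendall's tau transformed to a latent Gaussian correlation matrix,
          a standard rank-based estimator for elliptical-copula models."""
          p = returns.shape[1]
          corr = np.eye(p)
          for i in range(p):
              for j in range(i + 1, p):
                  tau, _ = kendalltau(returns[:, i], returns[:, j])
                  corr[i, j] = corr[j, i] = np.sin(np.pi * tau / 2.0)
          return corr

      # Hypothetical daily returns for 10 stocks; real data would replace this
      rng = np.random.default_rng(0)
      returns = rng.multivariate_normal(np.zeros(10), np.eye(10), size=500)
      cov, precision = graphical_lasso(rank_based_correlation(returns), alpha=0.1)
      # Zero off-diagonal entries of `precision` mark (conditionally) independent pairs;
      # stocks with few graph neighbours are candidates for a near-independent selection.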

  12. Design of uav robust autopilot based on adaptive neuro-fuzzy inference system

    Directory of Open Access Journals (Sweden)

    Mohand Achour Touat

    2008-04-01

    Full Text Available This paper is devoted to the application of adaptive neuro-fuzzy inference systems to the robust control of the UAV longitudinal motion. The adaptive neuro-fuzzy inference system model needs to be trained by input/output data. These data were obtained from the modeling of a "crisp" robust control system. The synthesis of this system is based on the separation theorem, which defines the structure and parameters of the LQG-optimal controller, and, further, robust optimization of this controller based on the genetic algorithm. Such a design procedure can define the rule base and parameters of the fuzzification and defuzzification algorithms of the adaptive neuro-fuzzy inference system controller, which ensure the robust properties of the control system. Simulation of the closed-loop control system of UAV longitudinal motion with the adaptive neuro-fuzzy inference system controller demonstrates the high efficiency of the proposed design procedure.

  13. Spatial structure of ion-scale plasma turbulence

    Directory of Open Access Journals (Sweden)

    Yasuhito Narita

    2014-03-01

    Full Text Available Spatial structure of small-scale plasma turbulence is studied under different conditions of plasma parameter beta directly in the three-dimensional wave vector domain. Two independent approaches are taken: observations of turbulent magnetic field fluctuations in the solar wind measured by four Cluster spacecraft, and direct numerical simulations of plasma turbulence using the hybrid code AIKEF, both resolving turbulence on the ion kinetic scales. The two methods provide independently evidence of wave vector anisotropy as a function of beta. Wave vector anisotropy is characterized primarily by an extension of the energy spectrum in the direction perpendicular to the large-scale magnetic field. The spectrum is strongly anisotropic at lower values of beta, and is more isotropic at higher values of beta. Cluster magnetic field data analysis also provides evidence of axial asymmetry of the spectrum in the directions around the large-scale field. Anisotropy is interpreted as filament formation as plasma evolves into turbulence. Axial asymmetry is interpreted as the effect of radial expansion of the solar wind from the corona.

  14. Studies in the extensively automatic construction of large odds-based inference networks from structured data. Examples from medical, bioinformatics, and health insurance claims data.

    Science.gov (United States)

    Robson, B; Boray, S

    2018-04-01

    Theoretical and methodological principles are presented for the construction of very large inference nets for odds calculations, composed of hundreds or many thousands or more of elements, in this paper generated by structured data mining. It is argued that the usual small inference nets can sometimes represent rather simple, arbitrary estimates. Examples of applications in clinical and public health data analysis, medical claims data and detection of irregular entries, and bioinformatics data, are presented. Construction of large nets benefits from application of a theory of expected information for sparse data and the Dirac notation and algebra. The extent to which these are important here is briefly discussed. Purposes of the study include (a) exploration of the properties of large inference nets and a perturbation and tacit conditionality models, (b) using these to propose simpler models including one that a physician could use routinely, analogous to a "risk score", (c) examination of the merit of describing optimal performance in a single measure that combines accuracy, specificity, and sensitivity in place of a ROC curve, and (d) relationship to methods for detecting anomalous and potentially fraudulent data. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Angular ellipticity correlations in a composite alignment model for elliptical and spiral galaxies and inference from weak lensing

    Science.gov (United States)

    Tugendhat, Tim M.; Schäfer, Björn Malte

    2018-05-01

    We investigate a physical, composite alignment model for both spiral and elliptical galaxies and its impact on cosmological parameter estimation from weak lensing for a tomographic survey. Ellipticity correlation functions and angular ellipticity spectra for spiral and elliptical galaxies are derived on the basis of tidal interactions with the cosmic large-scale structure and compared to the tomographic weak-lensing signal. We find that elliptical galaxies cause a contribution to the weak-lensing dominated ellipticity correlation on intermediate angular scales between ℓ ≃ 40 and ℓ ≃ 400 before that of spiral galaxies dominates on higher multipoles. The predominant term on intermediate scales is the negative cross-correlation between intrinsic alignments and weak gravitational lensing (GI-alignment). We simulate parameter inference from weak gravitational lensing with intrinsic alignments unaccounted; the bias induced by ignoring intrinsic alignments in a survey like Euclid is shown to be several times larger than the statistical error and can lead to faulty conclusions when comparing to other observations. The biases generally point into different directions in parameter space, such that in some cases one can observe a partial cancellation effect. Furthermore, it is shown that the biases increase with the number of tomographic bins used for the parameter estimation process. We quantify this parameter estimation bias in units of the statistical error and compute the loss of Bayesian evidence for a model due to the presence of systematic errors as well as the Kullback-Leibler divergence to quantify the distance between the true model and the wrongly inferred one.

  16. Logical inference and evaluation

    International Nuclear Information System (INIS)

    Perey, F.G.

    1981-01-01

    Most methodologies of evaluation currently used are based upon the theory of statistical inference. It is generally perceived that this theory is not capable of dealing satisfactorily with what are called systematic errors. Theories of logical inference should be capable of treating all of the information available, including that not involving frequency data. A theory of logical inference is presented as an extension of deductive logic via the concept of plausibility and the application of group theory. Some conclusions, based upon the application of this theory to evaluation of data, are also given

  17. Seeing Scale: Richard Dunn’s Structuralism

    Directory of Open Access Journals (Sweden)

    Keith Broadfoot

    2012-11-01

    Full Text Available Writing on the occasion of a retrospective of Richard Dunn’s work, Terence Maloon argued that ‘structuralism had an important bearing on virtually all of Richard Dunn’s mature works’, with ‘his modular, “crossed” formats’ being the most obvious manifestation of this. In this article I wish to reconsider this relation, withdrawing from a broad consideration of the framework of structuralism to focus on some of the quite particular ideas that Lacan proposed in response to structuralism. Beginning from a pivotal painting in the 1960s that developed out of Dunn’s experience of viewing the work of Barnett Newman, I wish to suggest a relation between the ongoing exploration of the thematic of scale in Dunn’s work and the idea of the symbolic that Lacan derives from structuralist thought. This relation, I argue, opens up a different way of understanding the art historical transition from Minimalism to Conceptual art.

  18. The limiting layer of fish scales: Structure and properties.

    Science.gov (United States)

    Arola, D; Murcia, S; Stossel, M; Pahuja, R; Linley, T; Devaraj, Arun; Ramulu, M; Ossa, E A; Wang, J

    2018-02-01

    Fish scales serve as a flexible natural armor that have received increasing attention across the materials community. Most efforts in this area have focused on the composite structure of the predominately organic elasmodine, and limited work addresses the highly mineralized external portion known as the Limiting Layer (LL). This coating serves as the first barrier to external threats and plays an important role in resisting puncture. In this investigation the structure, composition and mechanical behavior of the LL were explored for three different fish, including the arapaima (Arapaima gigas), the tarpon (Megalops atlanticus) and the carp (Cyprinus carpio). The scales of these three fish have received the most attention within the materials community. Features of the LL were evaluated with respect to anatomical position to distinguish site-specific functional differences. Results show that there are significant differences in the surface morphology of the LL from posterior and anterior regions in the scales, and between the three fish species. The calcium to phosphorus ratio and the mineral to collagen ratios of the LL are not equivalent among the three fish. Results from nanoindentation showed that the LL of tarpon scales is the hardest, followed by the carp and the arapaima and the differences in hardness are related to the apatite structure, possibly induced by the growth rate and environment of each fish. The natural armor of fish, turtles and other animals, has become a topic of substantial scientific interest. The majority of investigations have focused on the more highly organic layer known as the elasmodine. The present study addresses the highly mineralized external portion known as the Limiting Layer (LL). Specifically, the structure, composition and mechanical behavior of the LL were explored for three different fish, including the arapaima (Arapaima gigas), the tarpon (Megalops atlanticus) and the carp (Cyprinus carpio). Results show that there are

  19. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables, that often are different/complementary to the similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes, that genes used together within a rule-based machine learning model to classify the samples, might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in a context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The
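
    In the spirit of the protocol (not FuNeL's exact construction), a sketch that turns the attribute sets used together in rules into a weighted co-occurrence network; the rule sets and gene names are hypothetical.

      import itertools
      from collections import Counter

      def rules_to_network(rules):
          """Build a co-occurrence network from a rule-based model: attributes used
          together in one rule become connected nodes, with edge weight equal to the
          number of rules in which they co-occur."""
          edges = Counter()
          for rule in rules:                          # each rule: set of antecedent attributes
              for a, b in itertools.combinations(sorted(rule), 2):
                  edges[(a, b)] += 1
          return edges

      # Hypothetical rules extracted from a trained rule-based classifier
      rules = [{"TP53", "BRCA1"}, {"TP53", "EGFR", "BRCA1"}, {"EGFR", "KRAS"}]
      network = rules_to_network(rules)               # e.g. {('BRCA1', 'TP53'): 2, ...}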

  20. Finding Community Structures In Social Activity Data

    KAUST Repository

    Peng, Chengbin

    2015-05-19

    Social activity data sets are increasing in number and volume. Finding community structure in such data is valuable in many applications. For example, understanding the community structure of social networks may reduce the spread of epidemics or boost advertising revenue; discovering partitions in traffic networks can help to optimize routing and to reduce congestion; finding a group of users with common interests can allow a system to recommend useful items. Among many aspects, quality of inference and efficiency in finding community structures in such data sets are of paramount concern. In this thesis, we propose several approaches to improve community detection in these aspects. The first approach utilizes the concept of K-cores to reduce the size of the problem. The K-core of a graph is the largest subgraph within which each node has at least K connections. We propose a framework that accelerates community detection. It first applies a traditional algorithm that is relatively slow to the K-core, and then uses a fast heuristic to infer community labels for the remaining nodes. The second approach is to scale the algorithm to multi-processor systems. We devise a scalable community detection algorithm for large networks based on stochastic block models. It is an alternating iterative algorithm using a maximum likelihood approach. Compared with traditional inference algorithms for stochastic block models, our algorithm can scale to large networks and run on multi-processor systems. The time complexity is linear in the number of edges of the input network. The third approach is to improve the quality. We propose a framework for non-negative matrix factorization that allows the imposition of linear or approximately linear constraints on each factor. An example of the applications is to find community structures in bipartite networks, which is useful in recommender systems. Our algorithms are compared with the results in recent papers and their quality and e
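
    A small sketch of the first approach under stated assumptions: run a relatively expensive community detection method on the K-core only, then label the remaining nodes by a neighbour-majority heuristic; the networkx calls and the toy graph are illustrative, not the thesis implementation.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.karate_club_graph()                      # toy stand-in for a social-activity network

      # Step 1: restrict to the K-core, where every node has at least K connections
      core = nx.k_core(G, k=3)

      # Step 2: run the slower community detection method on the core only
      communities = greedy_modularity_communities(core)
      label = {n: i for i, c in enumerate(communities) for n in c}

      # Step 3: fast heuristic - give each remaining node its neighbours' majority label
      for n in G.nodes:
          if n not in label:
              votes = [label[m] for m in G.neighbors(n) if m in label]
              label[n] = max(set(votes), key=votes.count) if votes else -1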

  1. A rational inference approach to group and individual-level sentence comprehension performance in aphasia.

    Science.gov (United States)

    Warren, Tessa; Dickey, Michael Walsh; Liburd, Teljer L

    2017-07-01

    The rational inference, or noisy channel, account of language comprehension predicts that comprehenders are sensitive to the probabilities of different interpretations for a given sentence and adapt as these probabilities change (Gibson, Bergen & Piantadosi, 2013). This account provides an important new perspective on aphasic sentence comprehension: aphasia may increase the likelihood of sentence distortion, leading people with aphasia (PWA) to rely more on the prior probability of an interpretation and less on the form or structure of the sentence (Gibson, Sandberg, Fedorenko, Bergen & Kiran, 2015). We report the results of a sentence-picture matching experiment that tested the predictions of the rational inference account and other current models of aphasic sentence comprehension across a variety of sentence structures. Consistent with the rational inference account, PWA showed similar sensitivity to the probability of particular kinds of form distortions as age-matched controls, yet overall their interpretations relied more on prior probability and less on sentence form. As predicted by rational inference, but not by other models of sentence comprehension in aphasia, PWA's interpretations were more faithful to the form for active and passive sentences than for direct object and prepositional object sentences. However contra rational inference, there was no evidence that individual PWA's severity of syntactic or semantic impairment predicted their sensitivity to form versus the prior probability of a sentence, as cued by semantics. These findings confirm and extend previous findings that suggest the rational inference account holds promise for explaining aphasic and neurotypical comprehension, but they also raise new challenges for the account. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Inference

    DEFF Research Database (Denmark)

    Møller, Jesper

    (This text written by Jesper Møller, Aalborg University, is submitted for the collection ‘Stochastic Geometry: Highlights, Interactions and New Perspectives', edited by Wilfrid S. Kendall and Ilya Molchanov, to be published by Clarendon Press, Oxford, and planned to appear as Section 4.1 with the title ‘Inference'.) This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation-free procedures as well as more comprehensive methods using Markov chain Monte Carlo (MCMC) simulations. Due to space limitations the focus...

  3. Lower complexity bounds for lifted inference

    DEFF Research Database (Denmark)

    Jaeger, Manfred

    2015-01-01

    ... instances of the model. Numerous approaches for such “lifted inference” techniques have been proposed. While it has been demonstrated that these techniques will lead to significantly more efficient inference on some specific models, there are only very recent and still quite restricted results that show the feasibility of lifted inference on certain syntactically defined classes of models. Lower complexity bounds that imply some limitations for the feasibility of lifted inference on more expressive model classes were established earlier in Jaeger (2000; Jaeger, M. 2000. On the complexity of inference about ...). We show that under the assumption that NETIME≠ETIME, there is no polynomial lifted inference algorithm for knowledge bases of weighted, quantifier-, and function-free formulas. Further strengthening earlier results, this is also shown to hold for approximate inference and for knowledge bases not containing...

  4. Hierarchical modeling and inference in ecology: The analysis of data from populations, metapopulations and communities

    Science.gov (United States)

    Royle, J. Andrew; Dorazio, Robert M.

    2008-01-01

    A guide to data collection, modeling and inference strategies for biological survey data using Bayesian and classical statistical methods. This book describes a general and flexible framework for modeling and inference in ecological systems based on hierarchical models, with a strict focus on the use of probability models and parametric inference. Hierarchical models represent a paradigm shift in the application of statistics to ecological inference problems because they combine explicit models of ecological system structure or dynamics with models of how ecological systems are observed. The principles of hierarchical modeling are developed and applied to problems in population, metapopulation, community, and metacommunity systems. The book provides the first synthetic treatment of many recent methodological advances in ecological modeling and unifies disparate methods and procedures. The authors apply principles of hierarchical modeling to ecological problems, including * occurrence or occupancy models for estimating species distribution * abundance models based on many sampling protocols, including distance sampling * capture-recapture models with individual effects * spatial capture-recapture models based on camera trapping and related methods * population and metapopulation dynamic models * models of biodiversity, community structure and dynamics.
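
    As one concrete example of the hierarchical structure described here (an ecological state model coupled to an observation model), a sketch of the basic single-season occupancy likelihood; the detection histories and the maximum-likelihood fit are illustrative, and the book also treats Bayesian versions of such models.

      import numpy as np
      from scipy.optimize import minimize

      def occupancy_neg_loglik(params, detections):
          """Negative log-likelihood of the basic occupancy model: a site is occupied
          with probability psi; given occupancy, each visit detects with probability p."""
          psi, p = np.clip(1.0 / (1.0 + np.exp(-np.asarray(params))), 1e-6, 1 - 1e-6)
          ll = 0.0
          for y in detections:                        # y: 0/1 detection history for one site
              d, J = y.sum(), len(y)
              if d > 0:
                  ll += np.log(psi) + d * np.log(p) + (J - d) * np.log(1.0 - p)
              else:
                  # all-zero histories mix "occupied but never detected" with "truly absent"
                  ll += np.log(psi * (1.0 - p) ** J + (1.0 - psi))
          return -ll

      # Hypothetical detection histories for four sites, three visits each
      detections = [np.array(h) for h in ([1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0])]
      fit = minimize(occupancy_neg_loglik, x0=[0.0, 0.0], args=(detections,),
                     method="Nelder-Mead")            # fit.x holds logit(psi), logit(p)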

  5. The FRIGG project: From intermediate galactic scales to self-gravitating cores

    Science.gov (United States)

    Hennebelle, Patrick

    2018-03-01

    Context. Understanding the detailed structure of the interstellar gas is essential for our knowledge of the star formation process. Aims. The small-scale structure of the interstellar medium (ISM) is a direct consequence of the galactic scales, and making the link between the two is essential. Methods. We perform adaptive mesh simulations that aim to bridge the gap between the intermediate galactic scales and the self-gravitating prestellar cores. For this purpose we use stratified supernova-regulated ISM magneto-hydrodynamical simulations at the kpc scale to set up the initial conditions. We then zoom, performing a series of concentric uniform refinements and then refining on the Jeans length for the last levels. This allows us to reach a spatial resolution of a few 10^-3 pc. The cores are identified using a clump finder and various criteria based on virial analysis. Their most relevant properties are computed and, due to the large number of objects formed in the simulations, reliable statistics are obtained. Results. The cores' properties show encouraging agreement with observations. The mass spectrum presents a clear power law at high masses with an exponent close to ≃-1.3 and a peak at about 1-2 M⊙. The velocity dispersion and the angular momentum distributions are respectively a few times the local sound speed and a few 10^-2 pc km s^-1. We also find that the distribution of thermally supercritical cores presents a range of magnetic mass-to-flux over critical mass-to-flux ratios, typically between ≃0.3 and 3, indicating that they are significantly magnetized. Investigating the time and spatial dependence of these statistical properties, we conclude that they are not significantly affected by the zooming procedure and that they do not present very large fluctuations. The most severe issue appears to be the dependence of the core mass function (CMF) on the numerical resolution. While the core definition process may possibly introduce some biases, the peak tends to

  6. Scaling exponents of the velocity structure functions in the interplanetary medium

    Directory of Open Access Journals (Sweden)

    V. Carbone

    Full Text Available We analyze the scaling exponents of the velocity structure functions, obtained from the velocity fluctuations measured in the interplanetary space plasma. Using the expression for the energy transfer rate which seems the most relevant in describing the evolution of the pseudo-energy densities in the interplanetary medium, we introduce an energy cascade model derived from a simple fragmentation process, which takes into account the intermittency effect. In the absence and in the presence of the large-scale magnetic field decorrelation effect the model reduces to the fluid and the hydromagnetic p-model, respectively. We show that the scaling exponents of the q-th power of the velocity structure functions, as obtained by the model in the absence of the decorrelation effect, furnishes the best-fit to the data analyzed from the Voyager 2 velocity field measurements at 8.5 AU. Our results allow us to hypothesize a new kind of scale-similarity for magnetohydrodynamic turbulence when the decorrelation effect is at work, related to the fourth-order velocity structure function.
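
    For reference, one standard form of the p-model predictions for the structure-function scaling exponents in the fluid and hydromagnetic cases (the assumption here is that this is the model class the abstract refers to):

      \zeta_q^{\mathrm{fluid}} = 1 - \log_2\!\left[p^{q/3} + (1-p)^{q/3}\right],
      \qquad
      \zeta_q^{\mathrm{MHD}} = 1 - \log_2\!\left[p^{q/4} + (1-p)^{q/4}\right],

    where p is the fragmentation parameter; the non-intermittent limits ζ_q = q/3 (Kolmogorov) and ζ_q = q/4 (Kraichnan) are recovered at p = 1/2, and the two forms satisfy the exact constraints ζ_3 = 1 and ζ_4 = 1, respectively.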

  7. Poly aniline synthesized in pilot scale: structural and morphological characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Mazzeu, Maria Alice Carvalho; Goncalves, Emerson Sarmento, E-mail: aie.mzz@hotmail.com [Instituto Tecnologico de Aeronautica (ITA), Sao Jose dos Campos, SP (Brazil); Gama, Adriana Medeiros [Instituto de Aeronautica e Espaco (IAE), Sao Jose dos Campos, SP (Brazil); Baldan, Mauricio Ribeiro [Instituto Nacional de Pesquisas Espaciais (INPE), Sao Jose dos Campos, SP (Brazil); Faria, Lohana Komorek [Universidade do Vale do Paraiba (UNIVAP), Sao Jose dos Campos, SP (Brazil)

    2016-07-01

    Full text: Among various conducting polymers, polyaniline (PAni) has received widespread attention because of its outstanding properties including simple and reversible doping–dedoping chemistry, stable electrical conduction mechanisms, high environmental stability and ease of synthesis [1]. Increasing applications require PAni at industrial scale, and optimization of manufacturing processes is essential for this purpose. Since the pilot scale influences the hydrodynamics of the polymerization system [2], it is an important instrument for evaluating changes in the process. In this work, polyaniline was synthesized on pilot scale, with variation of the reaction time for every synthesis, keeping the other parameters unchanged. The PAni salt obtained first was dedoped, and the resulting PAni-B (PAni in the base form, nonconductive) was redoped with dodecylbenzenesulfonic acid (DBSA) to obtain PAni-DBSA (PAni in the salt form, conductive). The effects of synthesis conditions on the structural and morphological characteristics of PAni-B and PAni-DBSA are investigated by Raman spectroscopy, XRD (X-ray diffraction) and SEM (scanning electron microscopy). Electrical conductivity was determined for the redoped samples. Results were analyzed and the PAni forms were compared to identify the doping structure of PAni-DBSA by Raman spectroscopy. It was also found that the reaction time can influence the conductivity. The XRD results showed differences in the crystalline peaks of PAni-B and PAni-DBSA, and this difference could be attributed mainly to the redoping process. Whereas the formation of crystals on a pilot scale may change because of effects caused by water flow, the speed of polymerization could affect the formation of crystals too. The SEM images of PAni-B showed tiny coral-reef-like globular structures, while PAni-DBSA showed a multilayer structure. References: 1 - Fratoddia I. et al. Sensors and Actuators B 220: 534–548 (2015); 2 - Roichman Y et al. Synthetic Metals 98

  8. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many concurrent methods, such as TM-align and Fr-TM-align, into its parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massive parallel computing power of GPUs.

  9. A neuro-fuzzy inference system for sensor monitoring

    International Nuclear Information System (INIS)

    Na, Man Gyun

    2001-01-01

    A neuro-fuzzy inference system combined with wavelet denoising, PCA (principal component analysis) and SPRT (sequential probability ratio test) methods has been developed to monitor a relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system, which estimates the relevant sensor signal, are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space without losing a significant amount of information, PCA was used to shorten the time needed to train the neuro-fuzzy system, to simplify the structure of the neuro-fuzzy inference system, and to ease the selection of its input signals. Using the residuals between the estimated and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
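    As a minimal sketch of the final decision step only (the wavelet, PCA, genetic-algorithm and neuro-fuzzy components are not shown), the following applies a Wald sequential probability ratio test to the residuals between estimated and measured signals; the Gaussian mean-shift hypotheses, thresholds and parameter values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sprt_mean_shift(residuals, sigma, m0=0.0, m1=1.0, alpha=0.01, beta=0.01):
    """Sequential probability ratio test for a shift of the residual mean
    from m0 (healthy sensor) to m1 (degraded sensor), assuming Gaussian
    noise with known standard deviation sigma."""
    upper = np.log((1.0 - beta) / alpha)   # cross upward -> accept H1 (degraded)
    lower = np.log(beta / (1.0 - alpha))   # cross downward -> accept H0 (healthy)
    llr = 0.0
    for i, r in enumerate(residuals):
        # Gaussian log-likelihood-ratio increment for one residual sample
        llr += (m1 - m0) * (r - 0.5 * (m0 + m1)) / sigma**2
        if llr >= upper:
            return "degraded", i
        if llr <= lower:
            return "healthy", i
    return "undecided", len(residuals) - 1

rng = np.random.default_rng(1)
print(sprt_mean_shift(rng.normal(0.0, 0.2, 500), sigma=0.2))  # expected: healthy
print(sprt_mean_shift(rng.normal(1.0, 0.2, 500), sigma=0.2))  # expected: degraded
```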

  10. Mechanical properties and the laminate structure of Arapaima gigas scales.

    Science.gov (United States)

    Lin, Y S; Wei, C T; Olevsky, E A; Meyers, Marc A

    2011-10-01

    The Arapaima gigas scales play an important role in protecting this large Amazon basin fish against predators such as the piranha. They have a laminate composite structure composed of an external mineralized layer and internal lamellae with thickness of 50-60 μm each and composed of collagen fibers with ~1 μm diameter. The alignment of collagen fibers is consistent in each individual layer but varies from layer to layer, forming a non-orthogonal plywood structure, known as Bouligand stacking. X-ray diffraction revealed that the external surface of the scale contains calcium-deficient hydroxyapatite. EDS results confirm that the percentage of calcium is higher in the external layer. The micro-indentation hardness of the external layer (550 MPa) is considerably higher than that of the internal layer (200 MPa), consistent with its higher degree of mineralization. Tensile testing of the scales carried out in the dry and wet conditions shows that the strength and stiffness are hydration dependent. As is the case of most biological materials, the elastic modulus of the scale is strain-rate dependent. The strain-rate dependence of the elastic modulus, as expressed by the Ramberg-Osgood equation, is equal to 0.26, approximately ten times higher than that of bone. This is attributed to the higher fraction of collagen in the scales and to the high degree of hydration (30% H(2)O). Deproteinization of the scale reveals the structure of the mineral component consisting of an interconnected network of platelets with a thickness of ~50 nm and diameter of ~500 nm. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Variations on Bayesian Prediction and Inference

    Science.gov (United States)

    2016-05-09

    There are a number of statistical inference problems that are not generally formulated via a full probability model. For the problem of inference about an unknown parameter, the Bayesian approach requires a full probability model/likelihood, which can be an obstacle.

  12. A cross-cultural investigation into the dimensional structure and stability of the Barriers to Research and Utilization Scale (BARRIERS Scale).

    Science.gov (United States)

    Williams, Brett; Brown, Ted; Costello, Shane

    2015-10-24

    It is important that scales exhibit strong measurement properties, including those related to the investigation of issues that impact evidence-based practice. The validity of the Barriers to Research Utilization Scale (BARRIERS Scale) has recently been questioned in a systematic review. This study investigated the dimensional structure and stability of the 28-item BARRIERS Scale when completed by three groups of participants from three different cross-cultural environments. Data from the BARRIERS Scale completed by 696 occupational therapists from Australia (n = 137), Taiwan (n = 413), and the United Kingdom (n = 144) were analysed using principal components analysis, followed by Procrustes Transformation. Poorly fitting items were identified by low communalities, cross-loading, and theoretically inconsistent primary loadings, and were systematically removed until good fit was achieved. The cross-cultural stability of the component structure of the BARRIERS Scale was examined. A four-component, 19-item version of the BARRIERS Scale emerged that demonstrated an improved dimensional fit and stability across the three participant groups. The resulting four components were consistent with the BARRIERS Scale as originally conceptualised. Findings from the study suggest that the four-component, 19-item version of the BARRIERS Scale is a robust and valid measure for identifying barriers to research utilization for occupational therapists in paediatric health care settings across Australia, the United Kingdom, and Taiwan. The four-component, 19-item version of the BARRIERS Scale exhibited good dimensional structure, internal consistency, and stability.
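    For readers unfamiliar with the rotation step, the following is a generic sketch of an orthogonal Procrustes comparison of two factor-loading matrices (one common way to implement a Procrustes transformation); the matrices and the misfit index below are hypothetical and are not taken from the study.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def compare_loadings(target, source):
    """Rotate one loading matrix towards another (orthogonal Procrustes)
    and report the relative residual misfit as a rough similarity index."""
    rotation, _ = orthogonal_procrustes(source, target)
    rotated = source @ rotation
    misfit = np.linalg.norm(rotated - target) / np.linalg.norm(target)
    return rotated, misfit

# Hypothetical example: 19 items x 4 components, second group slightly perturbed
rng = np.random.default_rng(0)
target = rng.normal(size=(19, 4))
source = target @ np.linalg.qr(rng.normal(size=(4, 4)))[0] + 0.05 * rng.normal(size=(19, 4))
_, misfit = compare_loadings(target, source)
print(misfit)
```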

  13. Inferred rheological structure and mantle conditions from postseismic deformation following the 2010 Mw 7.2 El Mayor-Cucapah Earthquake

    Science.gov (United States)

    Dickinson-Lovell, Haylee; Huang, Mong-Han; Freed, Andrew M.; Fielding, Eric; Bürgmann, Roland; Andronicos, Christopher

    2018-06-01

    The 2010 Mw7.2 El Mayor-Cucapah earthquake provides a unique target of postseismic study as deformation extends across several distinct geological provinces, including the cold Mesozoic arc crust of the Peninsular Ranges and newly formed, hot, extending lithosphere within the Salton Trough. We use five years of global positioning system measurements to invert for afterslip and constrain a 3-D finite-element model that simulates viscoelastic relaxation. We find that afterslip cannot readily explain far-field displacements (more than 50 km from the epicentre). These displacements are best explained by viscoelastic relaxation of a horizontally and vertically heterogeneous lower crust and upper mantle. Lower viscosities beneath the Salton Trough compared to the Peninsular Ranges and other surrounding regions are consistent with inferred differences in the respective geotherms. Our inferred viscosity structure suggests that the depth of the Lithosphere/Asthenosphere Boundary (LAB) is ˜65 km below the Peninsular Ranges and ˜32 km beneath the Salton Trough. These depths are shallower than the corresponding seismic LAB. This suggests that the onset of partial melting in peridotite may control the depth to the base of the mechanical lithosphere. In contrast, the seismic LAB may correspond to an increase in the partial melt percentage associated with the change from a conductive to an adiabatic geotherm.

  14. The scale of population structure in Arabidopsis thaliana.

    Directory of Open Access Journals (Sweden)

    Alexander Platt

    2010-02-01

    Full Text Available The population structure of an organism reflects its evolutionary history and influences its evolutionary trajectory. It constrains the combination of genetic diversity and reveals patterns of past gene flow. Understanding it is a prerequisite for detecting genomic regions under selection, predicting the effect of population disturbances, or modeling gene flow. This paper examines the detailed global population structure of Arabidopsis thaliana. Using a set of 5,707 plants collected from around the globe and genotyped at 149 SNPs, we show that while A. thaliana as a species self-fertilizes 97% of the time, there is considerable variation among local groups. This level of outcrossing greatly limits observed heterozygosity but is sufficient to generate considerable local haplotypic diversity. We also find that in its native Eurasian range A. thaliana exhibits continuous isolation by distance at every geographic scale without natural breaks corresponding to classical notions of populations. By contrast, in North America, where it exists as an exotic species, A. thaliana exhibits little or no population structure at a continental scale but local isolation by distance that extends hundreds of km. This suggests a pattern for the development of isolation by distance that can establish itself shortly after an organism fills a new habitat range. It also raises questions about the general applicability of many standard population genetics models. Any model based on discrete clusters of interchangeable individuals will be an uneasy fit to organisms like A. thaliana which exhibit continuous isolation by distance on many scales.

  15. Factor structure of the Body Appreciation Scale among Malaysian women.

    Science.gov (United States)

    Swami, Viren; Chamorro-Premuzic, Tomas

    2008-12-01

    The present study examined the factor structure of a Malay version of the Body Appreciation Scale (BAS), a recently developed scale for the assessment of positive body image that has been shown to have a unidimensional structure in Western settings. Results of exploratory and confirmatory factor analyses based on data from a community sample of 591 women in Kuala Lumpur, Malaysia, failed to support a unidimensional structure for the Malay BAS. Results of a confirmatory factor analysis suggested two stable factors, which were labelled 'General Body Appreciation' and 'Body Image Investment'. Multi-group analysis showed that the two-factor structure was invariant for both Malaysian Malay and Chinese women, and that there were no significant ethnic differences on either factor. Results also showed that General Body Appreciation was significantly negatively correlated with participants' body mass index. These results are discussed in relation to possible cross-cultural differences in positive body image.

  16. KILOPARSEC-SCALE RADIO STRUCTURES IN NARROW-LINE SEYFERT 1 GALAXIES

    Energy Technology Data Exchange (ETDEWEB)

    Doi, Akihiro; Kino, Motoki [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuou-ku, Sagamihara, Kanagawa 252-5210 (Japan); Nagira, Hiroshi [Graduate School of Science and Engineering, Yamaguchi University, 1677-1 Yoshida, Yamaguchi, Yamaguchi 753-8512 (Japan); Kawakatu, Nozomu [Graduate School of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan); Nagai, Hiroshi [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Asada, Keiichi, E-mail: akihiro.doi@vsop.isas.jaxa.jp [Academia Sinica Institute of Astronomy and Astrophysics, P.O. Box 23-141, Taipei 10617, Taiwan (China)

    2012-11-20

    We report the finding of kiloparsec (kpc)-scale radio structures in three radio-loud narrow-line Seyfert 1 (NLS1) galaxies from the Faint Images of the Radio Sky at Twenty-centimeters survey of the Very Large Array, which increases the number of known radio-loud NLS1s with kpc-scale structures to six, including two γ-ray-emitting NLS1s (PMN J0948+0022 and 1H 0323+342) detected by the Fermi Gamma-ray Space Telescope. The detection rate of extended radio emission in NLS1s is lower than that in broad-line active galactic nuclei (AGNs) with statistical significance. We found both core-dominated (blazar-like) and lobe-dominated (radio-galaxy-like) radio structures in these six NLS1s, which can be understood in the framework of the unified scheme of radio-loud AGNs that considers radio galaxies as the non-beamed parent population of blazars. Five of the six NLS1s have (1) extended radio luminosities suggesting jet kinetic powers of ≳10^44 erg s^-1, sufficient for the jets to escape from the hosts' dense environments; (2) black holes of ≳10^7 M_Sun, which can generate the necessary jet powers from near-Eddington mass accretion; and (3) two-sided radio structures at kpc scales, requiring expansion rates of ~0.01c-0.3c and kinematic ages of ≳10^7 years. On the other hand, most typical NLS1s would be driven by black holes of ≲10^7 M_Sun within a limited lifetime of ~10^7 years. Hence, the kpc-scale radio structures may originate in a small window of opportunity during the final stage of the NLS1 phase, just before growing into broad-line AGNs.

  17. An improved method to characterise the modulation of small-scale turbulence by large-scale structures

    Science.gov (United States)

    Agostini, Lionel; Leschziner, Michael; Gaitonde, Datta

    2015-11-01

    A key aspect of turbulent boundary layer dynamics is "modulation," which refers to the degree to which the intensity of coherent large-scale structures (LS) amplifies or attenuates the intensity of the small-scale structures (SS) through scale linkage. In order to identify the variation of the amplitude of the SS motion, the envelope of the fluctuations needs to be determined. Mathis et al. (2009) proposed to define the latter by low-pass filtering the modulus of the analytic signal built from the Hilbert transform of the SS. The validity of this definition, as a basis for quantifying the modulated SS signal, is re-examined on the basis of DNS data for a channel flow. The analysis shows that the modulus of the analytic signal is very sensitive to the skewness of its PDF, which depends, in turn, on the sign of the LS fluctuation and thus on whether these fluctuations are associated with sweeps or ejections. The conclusion is that generating an envelope by means of a low-pass filtering step leads to a significant loss of information associated with the effects of the local skewness of the SS PDF on the modulation process. An improved Hilbert-transform-based method is proposed to characterize the modulation of SS turbulence by LS structures.
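    A minimal sketch of the envelope definition being re-examined, assuming a uniformly sampled small-scale signal: the modulus of the analytic signal (Hilbert transform) followed by a low-pass filter, as in Mathis et al. (2009). The filter order, sampling rate and cutoff below are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def ss_envelope(u_small, fs, cutoff_hz):
    """Envelope of the small-scale fluctuation signal: modulus of the
    analytic signal, then a zero-phase low-pass filter."""
    analytic = hilbert(u_small)                 # analytic signal u + i*H[u]
    envelope = np.abs(analytic)                 # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (0.5 * fs))    # 4th-order Butterworth low-pass
    return filtfilt(b, a, envelope)             # zero-phase filtering

# Hypothetical usage: envelope = ss_envelope(u_small, fs=10_000.0, cutoff_hz=50.0)
```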

  18. Multi-scale, multi-method geophysical investigations of the Valles Caldera

    Science.gov (United States)

    Barker, J. E.; Daneshvar, S.; Langhans, A.; Okorie, C.; Parapuzha, A.; Perez, N.; Turner, A.; Smith, E.; Carchedi, C. J. W.; Creighton, A.; Folsom, M.; Bedrosian, P.; Pellerin, L.; Feucht, D. W.; Kelly, S.; Ferguson, J. F.; McPhee, D.

    2017-12-01

    In 2016, the Summer of Applied Geophysical Experience (SAGE) program, in cooperation with the National Park Service, began a multi-year investigation into the structure and evolution of the Valles Caldera in northern New Mexico. The Valles Caldera is a 20-km wide topographic depression in the Jemez Mountains volcanic complex that formed during two massive ignimbrite eruptions at 1.65 and 1.26 Ma. Post-collapse volcanic activity in the caldera includes the rise of Redondo peak, a 1 km high resurgent dome, periodic eruptions of the Valles rhyolite along an inferred ring fracture zone, and the presence of a geothermal reservoir beneath the western caldera with temperatures in excess of 300°C at a mere 2 km depth. Broad sediment-filled valleys associated with lava-dammed Pleistocene lakes occupy much of the northern and southeastern caldera. SAGE activities to date have included collection of new gravity data (>120 stations) throughout the caldera, a transient electromagnetic (TEM) survey of Valle Grande, reprocessing of industrial magnetotelluric (MT) data collected in the 1980s, and new MT data collection both within and outside of the caldera. Gravity modeling provides constraints on the pre-Caldera structure, estimates of the thickness of Caldera fill, and reveals regional structural trends reflected in the geometry of post-Caldera collapse. At a more local scale, TEM-derived resistivity models image rhyolite flows radiating outward from nearby vents into the lacustrine sediments filling Valle Grande. Resistivity models along a 6-km long profile also provide hints of structural dismemberment along the inferred Valles and Toledo ring fracture zones. Preliminary MT modeling at the caldera scale reveals conductive caldera fill, the resistive crystalline basement, and an enigmatic mid-crustal conductor likely related to magmatic activity that post-dates caldera formation.

  19. A Structural Equation Modelling of the Academic Self-Concept Scale

    Science.gov (United States)

    Matovu, Musa

    2014-01-01

    The study aimed at validating the academic self-concept scale by Liu and Wang (2005) in measuring academic self-concept among university students. Structural equation modelling was used to validate the scale which was composed of two subscales; academic confidence and academic effort. The study was conducted on university students; males and…

  20. Inference problems in structural biology

    DEFF Research Database (Denmark)

    Olsson, Simon

    The structure and dynamics of biological molecules are essential for their function. Consequently, a wealth of experimental techniques have been developed to study these features. However, while experiments yield detailed information about geometrical features of molecules, this information is of...

  1. Adaptive Inference on General Graphical Models

    OpenAIRE

    Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur

    2012-01-01

    Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...

  2. Validity and factor structure of the bodybuilding dependence scale

    OpenAIRE

    Smith, D; Hale, B

    2004-01-01

    Objectives: To investigate the factor structure, validity, and reliability of the bodybuilding dependence scale and to investigate differences in bodybuilding dependence between men and women and competitive and non-competitive bodybuilders.

  3. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.

    Science.gov (United States)

    Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail

    2018-04-15

    This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully CNNs. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate a state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
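    As a toy sketch of the general idea (small 3x3x3 kernels stacked into a fully convolutional 3D network with dense per-voxel outputs), and not the authors' architecture, one might write in PyTorch (library choice, layer sizes and class count are assumptions):

```python
import torch
import torch.nn as nn

class Small3DFCN(nn.Module):
    """Illustrative fully convolutional 3D network built from small kernels.
    A toy sketch only; not the network of the study above."""
    def __init__(self, in_channels=1, n_classes=15):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1x1 convolution produces dense, per-voxel class scores
        self.classifier = nn.Conv3d(64, n_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.features(x))

# A single-channel 32^3 patch -> per-voxel logits of shape (1, n_classes, 32, 32, 32)
logits = Small3DFCN()(torch.randn(1, 1, 32, 32, 32))
```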

  4. Reliability of Multi-Category Rating Scales

    Science.gov (United States)

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2013-01-01

    The use of multi-category scales is increasing for the monitoring of IEP goals, classroom and school rules, and Behavior Improvement Plans (BIPs). Although they require greater inference than traditional data counting, little is known about the inter-rater reliability of these scales. This simulation study examined the performance of nine…

  5. Factor Structure of Child Behavior Scale Scores in Peruvian Preschoolers

    Science.gov (United States)

    Meyer, Erin L.; Schaefer, Barbara A.; Soto, Cesar Merino; Simmons, Crystal S.; Anguiano, Rebecca; Brett, Jeremy; Holman, Alea; Martin, Justin F.; Hata, Heidi K.; Roberts, Kimberly J.; Mello, Zena R.; Worrell, Frank C.

    2011-01-01

    Behavior rating scales aid in the identification of problem behaviors, as well as the development of interventions to reduce such behavior. Although scores on many behavior rating scales have been validated in the United States, there have been few such studies in other cultural contexts. In this study, the structural validity of scores on a…

  6. Mantle viscosity structure constrained by joint inversions of seismic velocities and density

    Science.gov (United States)

    Rudolph, M. L.; Moulik, P.; Lekic, V.

    2017-12-01

    The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.

  7. Reliability of the factor structure of the Multidimensional Scale of Interpersonal Reactivity (EMRI

    Directory of Open Access Journals (Sweden)

    Nilton S. Formiga

    2013-10-01

    Full Text Available This study aims to assess the internal consistency and the factor structure of the empathy scale in a high school and college sample from the state of Minas Gerais. Instruments that measure empathy are easy to find; however, among those available, the Multidimensional Scale of Interpersonal Reactivity (EMRI) has the best organized theoretical framework and is the scale most commonly used to assess this construct. The sample comprised 488 subjects, male and female, aged 14-54 years, enrolled at primary and college levels in Patrocínio-MG. The subjects answered the Multidimensional Scale of Interpersonal Reactivity and provided socio-demographic data. A structural equation modeling analysis yielded psychometric indicators that supported the structural consistency of the scale, securing the measurement of the theoretical construct of empathy.
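    For reference, the internal consistency of such a scale is commonly summarised by Cronbach's alpha; a minimal sketch, assuming a respondents-by-items score matrix, is given below (not code from the study).

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix,
    a standard index of a scale's internal consistency."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1.0 - item_vars / total_var)
```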

  8. Identification of the underlying factor structure of the Derriford Appearance Scale 24

    Directory of Open Access Journals (Sweden)

    Timothy P. Moss

    2015-07-01

    Full Text Available Background. The Derriford Appearance Scale 24 (DAS24) is a widely used measure of distress and dysfunction in relation to self-consciousness of appearance. It has been used in clinical and research settings, and translated into numerous European and Asian languages. Hitherto, no study has conducted an analysis to determine the underlying factor structure of the scale. Methods. A large (n = 1,265) sample of community and hospital patients with a visible difference were recruited face to face or by post, and completed the DAS24. Results. A two-factor solution was generated. An evaluation of the congruence of the factor solutions in the hospital and the community samples, using Tucker's coefficient of congruence (rc = .979) and confirmatory factor analysis, demonstrated a consistent factor structure. A main factor, general self-consciousness (GSC), was represented by 18 items. Six items comprised a second factor, sexual and body self-consciousness (SBSC). The SBSC scale demonstrated greater sensitivity and specificity in identifying distress for sexually significant areas of the body. Discussion. The factor structure of the DAS24 facilitates a more nuanced interpretation of scores using this scale. Two conceptually and statistically coherent sub-scales were identified. The SBSC sub-scale offers a means of identifying distress and dysfunction around sexually significant areas of the body not previously possible with this scale.
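    Tucker's coefficient of congruence reported above has a simple closed form; a minimal sketch (with hypothetical loading vectors, not the study's data) is:

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's coefficient of congruence between two factor-loading
    vectors (or matching columns of two loading matrices)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))

# Hypothetical loadings for the same factor in two samples
print(tucker_congruence([0.70, 0.62, 0.55, 0.48], [0.68, 0.64, 0.52, 0.50]))
```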

  9. Nonlinear evolution of large-scale structure in the universe

    International Nuclear Information System (INIS)

    Frenk, C.S.; White, S.D.M.; Davis, M.

    1983-01-01

    Using N-body simulations we study the nonlinear development of primordial density perturbation in an Einstein--de Sitter universe. We compare the evolution of an initial distribution without small-scale density fluctuations to evolution from a random Poisson distribution. These initial conditions mimic the assumptions of the adiabatic and isothermal theories of galaxy formation. The large-scale structures which form in the two cases are markedly dissimilar. In particular, the correlation function ξ(r) and the visual appearance of our adiabatic (or ''pancake'') models match better the observed distribution of galaxies. This distribution is characterized by large-scale filamentary structure. Because the pancake models do not evolve in a self-similar fashion, the slope of ξ(r) steepens with time; as a result there is a unique epoch at which these models fit the galaxy observations. We find the ratio of cutoff length to correlation length at this time to be λ_min/r_0 = 5.1; its expected value in a neutrino dominated universe is 4(Ωh)^-1 (H_0 = 100h km s^-1 Mpc^-1). At early epochs these models predict a negligible amplitude for ξ(r) and could explain the lack of measurable clustering in the Lyα absorption lines of high-redshift quasars. However, large-scale structure in our models collapses after z = 2. If this collapse precedes galaxy formation as in the usual pancake theory, galaxies formed uncomfortably recently. The extent of this problem may depend on the cosmological model used; the present series of experiments should be extended in the future to include models with Ω < 1

  10. Responses in large-scale structure

    Energy Technology Data Exchange (ETDEWEB)

    Barreira, Alexandre; Schmidt, Fabian, E-mail: barreira@MPA-Garching.MPG.DE, E-mail: fabians@MPA-Garching.MPG.DE [Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching (Germany)

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a ''bias'' expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{ℓ=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc^-1, and for any k_1 ≳ 2k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.

  11. Responses in large-scale structure

    Science.gov (United States)

    Barreira, Alexandre; Schmidt, Fabian

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a ''bias'' expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^NG_{ℓ=0}(k_1, k_2), in the limit where one of the modes, say k_2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k_2 ≲ 0.06 h Mpc^-1, and for any k_1 ≳ 2k_2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.

  12. x- and ξ-scaling of the Nuclear Structure Function at Large x

    International Nuclear Information System (INIS)

    Arrington, J.; Armstrong, C. S.; Averett, T.; Baker, O. K.; Bever, L. de; Bochna, C. W.; Boeglin, W.; Bray, B.; Carlini, R. D.; Collins, G.; Cothran, C.; Crabb, D.; Day, D.; Dunne, J. A.; Dutta, D.; Ent, R.; Filippone, B. W.; Honegger, A.; Hughes, E. W.; Jensen, J.; Jourdan, J.; Keppel, C. E.; Koltenuk, D. M.; Lindgren, R.; Lung, A.; Mack, D. J.; McCarthy, J.; McKeown, R. D.; Meekins, D.; Mitchell, J. H.; Mkrtchyan, H. G.; Niculescu, G.; Niculescu, I.; Petitjean, T.; Rondon, O.; Sick, I.; Smith, C.; Terburg, B.; Vulcan, W. F.; Wood, S. A.; Yan, C.; Zhao, J.; Zihlmann, B.

    2001-01-01

    Inclusive electron scattering data are presented for ²H and Fe targets at an incident electron energy of 4.045 GeV for a range of momentum transfers from Q² = 1 to 7 (GeV/c)². Data were taken at Jefferson Laboratory for low values of energy loss, corresponding to values of Bjorken x greater than or near 1. The structure functions do not show scaling in x in this range, where inelastic scattering is not expected to dominate the cross section. The data do show scaling, however, in the Nachtmann variable ξ. This scaling may be the result of Bloom-Gilman duality in the nucleon structure function combined with the Fermi motion of the nucleons in the nucleus. The resulting extension of scaling to larger values of ξ opens up the possibility of accessing nuclear structure functions in the high-x region at lower values of Q² than previously believed

  13. The inference from a single case: moral versus scientific inferences in implementing new biotechnologies.

    Science.gov (United States)

    Hofmann, B

    2008-06-01

    Are there similarities between scientific and moral inference? This is the key question in this article. It takes as its point of departure an instance of one person's story in the media changing both Norwegian public opinion and a brand-new Norwegian law prohibiting the use of saviour siblings. The case appears to falsify existing norms and to establish new ones. The analysis of this case reveals similarities in the modes of inference in science and morals, inasmuch as (a) a single case functions as a counter-example to an existing rule; (b) there is a common presupposition of stability, similarity and order, which makes it possible to reason from a few cases to a general rule; and (c) this makes it possible to hold things together and retain order. In science, these modes of inference are referred to as falsification, induction and consistency. In morals, they have a variety of other names. Hence, even without abandoning the fact-value divide, there appear to be similarities between inference in science and inference in morals, which may encourage communication across the boundaries between "the two cultures" and which are relevant to medical humanities.

  14. Algorithms for MDC-Based Multi-locus Phylogeny Inference

    Science.gov (United States)

    Yu, Yun; Warnow, Tandy; Nakhleh, Luay

    One of the criteria for inferring a species tree from a collection of gene trees, when gene tree incongruence is assumed to be due to incomplete lineage sorting (ILS), is minimize deep coalescence, or MDC. Exact algorithms for inferring the species tree from rooted, binary trees under MDC were recently introduced. Nevertheless, in phylogenetic analyses of biological data sets, estimated gene trees may differ from true gene trees, be incompletely resolved, and not necessarily rooted. In this paper, we propose new MDC formulations for the cases where the gene trees are unrooted/binary, rooted/non-binary, and unrooted/non-binary. Further, we prove structural theorems that allow us to extend the algorithms for the rooted/binary gene tree case to these cases in a straightforward manner. Finally, we study the performance of these methods in coalescent-based computer simulations.

  15. Small Scales Structure of MHD Turbulence, Tubes or Ribbons?

    Science.gov (United States)

    Verdini, A.; Grappin, R.; Alexandrova, O.; Lion, S.

    2017-12-01

    Observations in the solar wind indicate that turbulent eddies change their anisotropy with scale [1]. At large scales, eddies are elongated in the direction perpendicular to the mean-field axis. This is the result of solar wind expansion, which affects both the anisotropy and single-spacecraft measurements [2,3]. At small scales one recovers the anisotropy expected in strong MHD turbulence and constrained by the so-called critical balance: eddies are elongated along the mean-field axis. However, the actual eddy shape is intermediate between tubes and ribbons, preventing us from discriminating between two concurrent theories that predict 2D axisymmetric anisotropy [4] or full 3D anisotropy [5]. We analyse 10 years of WIND data and apply a numerically derived criterion to select intervals in which solar wind expansion is expected to be negligible. By computing the anisotropy of structure functions with respect to the local mean field, we obtain for the first time scaling relations that are in agreement with full 3D anisotropy, i.e. ribbon-like structures. However, we cannot obtain the expected scaling relations for the alignment angle which, according to the theory, is physically responsible for the departure from axisymmetry. In addition, a further change of anisotropy occurs well above the proton scales. We discuss the implications of our findings and how numerical simulations can help in interpreting the observed spectral anisotropy. [1] Chen et al., ApJ, 768:120, 2012 [2] Verdini & Grappin, ApJL, 808:L34, 2015 [3] Vech & Chen, ApJL, 832:L16, 2016 [4] Goldreich & Sridhar, ApJ, 438:763, 1995 [5] Boldyrev, ApJL, 626:L37, 2005

  16. A comparison of algorithms for inference and learning in probabilistic graphical models.

    Science.gov (United States)

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
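    As one concrete example of the sum-product algorithm mentioned above, the sketch below computes exact node marginals on a small chain-structured model by message passing; the potentials are hypothetical and this is not the vision model discussed in the paper.

```python
import numpy as np

def chain_marginals(unary, pairwise):
    """Sum-product (belief propagation) marginals for a chain MRF.
    unary: list of length-K potential vectors, one per node.
    pairwise: list of KxK potential matrices between consecutive nodes."""
    n = len(unary)
    fwd = [None] * n          # messages passed left -> right
    bwd = [None] * n          # messages passed right -> left
    fwd[0] = np.ones_like(unary[0])
    bwd[-1] = np.ones_like(unary[-1])
    for i in range(1, n):
        fwd[i] = pairwise[i - 1].T @ (fwd[i - 1] * unary[i - 1])
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise[i] @ (bwd[i + 1] * unary[i + 1])
    beliefs = [f * u * b for f, u, b in zip(fwd, unary, bwd)]
    return [b / b.sum() for b in beliefs]

# Hypothetical three-node, binary-state chain
u = [np.array([0.9, 0.1]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
p = [np.array([[0.8, 0.2], [0.2, 0.8]])] * 2
print(chain_marginals(u, p))
```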

  17. Mapping the MMPI-2-RF Specific Problems Scales Onto Extant Psychopathology Structures.

    Science.gov (United States)

    Sellbom, Martin

    2017-01-01

    A main objective in developing the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008 ) was to link the hierarchical structure of the instrument's scales to contemporary psychopathology and personality models for greater enhancement of construct validity. Initial evidence published with the Restructured Clinical scales has indicated promising results in that the higher order structure of these measures maps onto those reported in the extant psychopathology literature. This study focused on evaluating the internal structure of the Specific Problems and Interest scales, which have not yet been examined in this manner. Two large, mixed-gender outpatient and correctional samples were used. Exploratory factor analyses revealed consistent evidence for a 4-factor structure representing somatization, negative affect, externalizing, and social detachment. Convergent and discriminant validity analyses in the outpatient sample yielded a pattern of results consistent with expectations. These findings add further evidence to indicate that the MMPI-2-RF hierarchy of scales map onto extant psychopathology literature, and also add support to the notion that somatization and detachment should be considered important higher order domains in the psychopathology literature.

  18. Understanding the Functionality of Human Activity Hotspots from Their Scaling Pattern Using Trajectory Data

    Directory of Open Access Journals (Sweden)

    Tao Jia

    2017-11-01

    Full Text Available Human activity hotspots are clusters of activity locations in space and time, and a better understanding of their functionality would be useful for urban land use planning and transportation. In this article, using trajectory data, we aim to infer the functionality of human activity hotspots from their scaling pattern in a reliable way. Specifically, a large number of stopping locations are extracted from trajectory data and then aggregated into activity hotspots. Activity hotspots are found to display scaling patterns in terms of sublinear scaling relationships between the number of stopping locations and the number of points of interest (POIs), which indicates economies of scale in human interactions with urban land use. Importantly, this scaling pattern remains stable over time. This finding inspires us to devise an allometric ruler to identify the activity hotspots whose functionality can be reliably estimated using the stopping locations. Thereafter, a novel Bayesian inference model is proposed to infer their urban functionality, examining the spatial and temporal information of stopping locations covering 75 days. Experimental results suggest that the functionality of the identified activity hotspots, such as the railway station, is reliably inferred from the stopping locations.
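    A minimal sketch of how such a sublinear scaling exponent could be estimated (hypothetical variable names; this is neither the article's allometric ruler nor its Bayesian model):

```python
import numpy as np

def fit_scaling_exponent(n_pois, n_stops):
    """Fit the scaling relation n_stops ~ c * n_pois**beta on log-log axes;
    beta < 1 indicates the sublinear (economies-of-scale) regime."""
    x = np.log(np.asarray(n_pois, dtype=float))
    y = np.log(np.asarray(n_stops, dtype=float))
    beta, log_c = np.polyfit(x, y, 1)
    return beta, np.exp(log_c)

# Hypothetical hotspot counts
beta, c = fit_scaling_exponent([10, 30, 100, 300, 1000], [40, 90, 220, 480, 1100])
print(beta, c)
```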

  19. Introductory statistical inference

    CERN Document Server

    Mukhopadhyay, Nitis

    2014-01-01

    This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist

  20. Multi-scale habitat selection modeling: A review and outlook

    Science.gov (United States)

    Kevin McGarigal; Ho Yi Wan; Kathy A. Zeller; Brad C. Timm; Samuel A. Cushman

    2016-01-01

    Scale is the lens that focuses ecological relationships. Organisms select habitat at multiple hierarchical levels and at different spatial and/or temporal scales within each level. Failure to properly address scale dependence can result in incorrect inferences in multi-scale habitat selection modeling studies.

  1. Simulation and Statistical Inference of Stochastic Reaction Networks with Applications to Epidemic Models

    KAUST Repository

    Moraes, Alvaro

    2015-01-01

    Epidemics have shaped, sometimes more than wars and natural disasters, demographic aspects of human populations around the world, their health habits and their economies. Ebola and the Middle East Respiratory Syndrome (MERS) are clear and current examples of potential hazards at planetary scale. During the spread of an epidemic disease, there are phenomena, like the sudden extinction of the epidemic, that cannot be captured by deterministic models. As a consequence, stochastic models have been proposed during the last decades. A typical forward problem in the stochastic setting could be the approximation of the expected number of infected individuals found one month from now. On the other hand, a typical inverse problem could be, given a discretely observed set of epidemiological data, to infer the transmission rate of the epidemic or its basic reproduction number. Markovian epidemic models are stochastic models belonging to a wide class of pure jump processes known as Stochastic Reaction Networks (SRNs), which are intended to describe the time evolution of interacting particle systems where one particle interacts with the others through a finite set of reaction channels. SRNs have been mainly developed to model biochemical reactions, but they also have applications in neural networks, virus kinetics, and the dynamics of social networks, among others. This PhD thesis is focused on novel fast simulation algorithms and statistical inference methods for SRNs. Our novel Multi-level Monte Carlo (MLMC) hybrid simulation algorithms provide accurate estimates of expected values of a given observable of SRNs at a prescribed final time. They are designed to control the global approximation error up to a user-selected accuracy and up to a certain confidence level, and with near optimal computational work. We also present novel dual-weighted residual expansions for fast estimation of weak and strong errors arising from the MLMC methodology. Regarding the statistical inference
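    As a single-level baseline for the forward problem described above (not the thesis's MLMC hybrid algorithms), a Markovian SIR epidemic can be simulated exactly with the Gillespie stochastic simulation algorithm; the rates, population sizes and horizon below are illustrative assumptions.

```python
import numpy as np

def gillespie_sir(s0, i0, r0, beta, gamma, t_end, rng=None):
    """Exact stochastic simulation (Gillespie SSA) of a Markovian SIR model
    with infection propensity beta*S*I/N and recovery propensity gamma*I."""
    rng = rng or np.random.default_rng()
    s, i, r, t = s0, i0, r0, 0.0
    n = s0 + i0 + r0
    times, infected = [t], [i]
    while t < t_end and i > 0:
        a_inf = beta * s * i / n
        a_rec = gamma * i
        a_tot = a_inf + a_rec
        t += rng.exponential(1.0 / a_tot)       # waiting time to the next reaction
        if rng.random() < a_inf / a_tot:        # infection event
            s, i = s - 1, i + 1
        else:                                   # recovery event
            i, r = i - 1, r + 1
        times.append(t)
        infected.append(i)
    return np.array(times), np.array(infected)

# Crude Monte Carlo estimate of the expected number infected after 30 days
runs = [gillespie_sir(990, 10, 0, beta=0.3, gamma=0.1, t_end=30.0)[1][-1]
        for _ in range(200)]
print(np.mean(runs))
```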

  2. Active inference, communication and hermeneutics.

    Science.gov (United States)

    Friston, Karl J; Frith, Christopher D

    2015-07-01

    Hermeneutics refers to interpretation and translation of text (typically ancient scriptures) but also applies to verbal and non-verbal communication. In a psychological setting it nicely frames the problem of inferring the intended content of a communication. In this paper, we offer a solution to the problem of neural hermeneutics based upon active inference. In active inference, action fulfils predictions about how we will behave (e.g., predicting we will speak). Crucially, these predictions can be used to predict both self and others--during speaking and listening respectively. Active inference mandates the suppression of prediction errors by updating an internal model that generates predictions--both at fast timescales (through perceptual inference) and slower timescales (through perceptual learning). If two agents adopt the same model, then--in principle--they can predict each other and minimise their mutual prediction errors. Heuristically, this ensures they are singing from the same hymn sheet. This paper builds upon recent work on active inference and communication to illustrate perceptual learning using simulated birdsongs. Our focus here is the neural hermeneutics implicit in learning, where communication facilitates long-term changes in generative models that are trying to predict each other. In other words, communication induces perceptual learning and enables others to (literally) change our minds and vice versa. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  3. Inference of Transcriptional Network for Pluripotency in Mouse Embryonic Stem Cells

    International Nuclear Information System (INIS)

    Aburatani, S

    2015-01-01

    In embryonic stem cells, various transcription factors (TFs) maintain pluripotency. To gain insights into the regulatory system controlling pluripotency, I inferred the regulatory relationships between the TFs expressed in ES cells. In this study, I applied a method based on structural equation modeling (SEM), combined with factor analysis, to 649 expression profiles of 19 TF genes measured in mouse Embryonic Stem Cells (ESCs). The factor analysis identified 19 TF genes that were regulated by several unmeasured factors. Since the known cell reprogramming TF genes (Pou5f1, Sox2 and Nanog) are regulated by different factors, each estimated factor is considered to be an input for signal transduction to control pluripotency in mouse ESCs. In the inferred network model, TF proteins were also arranged as unmeasured factors that control other TFs. The interpretation of the inferred network model revealed the regulatory mechanism for controlling pluripotency in ES cells

  4. Abductive Inference using Array-Based Logic

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert L.

    The notion of abduction has found its usage within a wide variety of AI fields. Computing abductive solutions has, however, shown to be highly intractable in logic programming. To avoid this intractability we present a new approach to logic-based abduction; through the geometrical view of data employed in array-based logic we embrace abduction in a simple structural operation. We argue that a theory of abduction in this form allows for an implementation which, at runtime, can perform abductive inference quite efficiently on arbitrary rules of logic representing knowledge of finite domains....

  5. An analysis pipeline for the inference of protein-protein interaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Ronald C.; Singhal, Mudita; Daly, Don S.; Gilmore, Jason M.; Cannon, William R.; Domico, Kelly O.; White, Amanda M.; Auberry, Deanna L.; Auberry, Kenneth J.; Hooker, Brian S.; Hurst, G. B.; McDermott, Jason E.; McDonald, W. H.; Pelletier, Dale A.; Schmoyer, Denise A.; Wiley, H. S.

    2009-12-01

    An analysis pipeline has been created for deployment of a novel algorithm, the Bayesian Estimator of Protein-Protein Association Probabilities (BEPro), for use in the reconstruction of protein-protein interaction networks. We have combined the Software Environment for BIological Network Inference (SEBINI), an interactive environment for the deployment and testing of network inference algorithms that use high-throughput data, and the Collective Analysis of Biological Interaction Networks (CABIN), software that allows integration and analysis of protein-protein interaction and gene-to-gene regulatory evidence obtained from multiple sources, to allow interactions computed by BEPro to be stored, visualized, and further analyzed. Incorporating BEPro into SEBINI and automatically feeding the resulting inferred network into CABIN, we have created a structured workflow for protein-protein network inference and supplemental analysis from sets of mass spectrometry bait-prey experiment data. SEBINI demo site: https://www.emsl.pnl.gov/SEBINI/ Contact: ronald.taylor@pnl.gov. BEPro is available at http://www.pnl.gov/statistics/BEPro3/index.htm. Contact: ds.daly@pnl.gov. CABIN is available at http://www.sysbio.org/dataresources/cabin.stm. Contact: mudita.singhal@pnl.gov.

  6. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.
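    The record above uses a random-walk-based method on voxel-level fMRI networks; purely as a generic illustration of extracting modules and counting a node's intra- versus inter-modular links, one could apply modularity-based community detection to a synthetic graph (the library, graph and parameters below are assumptions, not the paper's method).

```python
import networkx as nx

# Hypothetical modular graph: 4 planted blocks of 25 nodes each
g = nx.planted_partition_graph(4, 25, p_in=0.3, p_out=0.01, seed=1)

# Detect modules and look at one node's pattern of intra-/inter-module links
modules = nx.algorithms.community.greedy_modularity_communities(g)
membership = {node: idx for idx, block in enumerate(modules) for node in block}
node = 0
intra = sum(membership[nbr] == membership[node] for nbr in g[node])
inter = g.degree[node] - intra
print(f"node {node}: {intra} intra-module links, {inter} inter-module links")
```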

  7. A four-scale homogenization analysis of creep of a nuclear containment structure

    Energy Technology Data Exchange (ETDEWEB)

    Tran, A.B. [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Échelle, MSME UMR 8208 CNRS, 5 bd Descartes, F-77454 Marne-la-Vallée (France); EDF R and D – Département MMC Site des Renardières – Avenue des Renardières - Ecuelles, 77818 Moret sur Loing Cedex (France); Department of Applied Informatics in Construction, National University of Civil Engineering, 55 Giai Phong Road, Hai Ba Trung District, Hanoi (Viet Nam); Yvonnet, J., E-mail: julien.yvonnet@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Échelle, MSME UMR 8208 CNRS, 5 bd Descartes, F-77454 Marne-la-Vallée (France); He, Q.-C. [Université Paris-Est, Laboratoire Modélisation et Simulation Multi Échelle, MSME UMR 8208 CNRS, 5 bd Descartes, F-77454 Marne-la-Vallée (France); Toulemonde, C.; Sanahuja, J. [EDF R and D – Département MMC Site des Renardières – Avenue des Renardières - Ecuelles, 77818 Moret sur Loing Cedex (France)

    2013-12-15

    A four-scale approach is proposed to predict the creep behavior of a concrete structure. The behavior of concrete is modeled through a numerical multiscale methodology, by successively homogenizing the viscoelastic behavior at different scales, starting from the cement paste. The homogenization is carried out by numerically constructing an effective relaxation tensor at each scale. In this framework, the impact of modifying the microstructural parameters can be directly observed on the structure response, like the interaction of the creep of concrete with the prestressing tendons network, and the effects of an internal pressure which might occur during a nuclear accident.

  8. A four-scale homogenization analysis of creep of a nuclear containment structure

    International Nuclear Information System (INIS)

    Tran, A.B.; Yvonnet, J.; He, Q.-C.; Toulemonde, C.; Sanahuja, J.

    2013-01-01

    A four-scale approach is proposed to predict the creep behavior of a concrete structure. The behavior of concrete is modeled through a numerical multiscale methodology, by successively homogenizing the viscoelastic behavior at different scales, starting from the cement paste. The homogenization is carried out by numerically constructing an effective relaxation tensor at each scale. In this framework, the impact of modifying the microstructural parameters can be directly observed on the structure response, like the interaction of the creep of concrete with the prestressing tendons network, and the effects of an internal pressure which might occur during a nuclear accident

  9. Structures and Intermittency in Small Scales Solar Wind Turbulence

    International Nuclear Information System (INIS)

    Sahraoui, Fouad; Goldstein, Melvyn

    2010-01-01

    Several observations in space plasmas have reported the presence of coherent structures at different plasma scales. Structure formation is believed to result from nonlinear interactions between the plasma modes, which depend strongly on their phase synchronization. Despite this important role of the phases in turbulence, very limited work has been devoted to studying the phases as potential tracers of nonlinearities, in comparison with the wealth of literature on power spectra of turbulence, where phase information is discarded entirely. The reason why the phases are seldom used is probably that they usually appear to be completely mixed (owing to their dependence on an arbitrary time origin and to 2π periodicity). To handle the phases properly, a new method based on surrogate data has been developed recently to detect coherent structures in magnetized plasmas [Sahraoui, PRE, 2008]. Here, we show new applications of the technique to study the nature (weak vs strong, self-similar vs intermittent) of small-scale turbulence in the solar wind using Cluster observations.

  10. Probabilistic Inference of Biological Networks via Data Integration

    Directory of Open Access Journals (Sweden)

    Mark F. Rogers

    2015-01-01

    Full Text Available There is significant interest in inferring the structure of subcellular networks of interaction. Here we consider supervised interactive network inference in which a reference set of known network links and nonlinks is used to train a classifier for predicting new links. Many types of data are relevant to inferring functional links between genes, motivating the use of data integration. We use pairwise kernels to predict novel links, along with multiple kernel learning to integrate distinct sources of data into a decision function. We evaluate various pairwise kernels to establish which are most informative and compare individual kernel accuracies with accuracies for weighted combinations. By associating a probability measure with classifier predictions, we enable cautious classification, which can increase accuracy by restricting predictions to high-confidence instances, and data cleaning that can mitigate the influence of mislabeled training instances. Although one pairwise kernel (the tensor product pairwise kernel) appears to work best, different kernels may contribute complementary information about interactions: experiments in S. cerevisiae (yeast) reveal that a weighted combination of pairwise kernels applied to different types of data yields the highest predictive accuracy. Combined with cautious classification and data cleaning, we can achieve predictive accuracies of up to 99.6%.
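
    The abstract above singles out the tensor product pairwise kernel (TPPK). As a hedged illustration, the sketch below builds the symmetrised TPPK between two gene pairs from an arbitrary base kernel; the RBF base kernel, the feature vectors and the function names are placeholders, not the paper's actual implementation.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) base kernel between two feature vectors."""
    d = x - y
    return np.exp(-gamma * np.dot(d, d))

def tppk(pair1, pair2, base=rbf):
    """Tensor product pairwise kernel between gene pairs (a, b) and (c, d),
    symmetrised so it does not depend on the order of genes inside a pair."""
    (a, b), (c, d) = pair1, pair2
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

# toy usage: feature vectors standing in for per-gene expression profiles
rng = np.random.default_rng(0)
g1, g2, g3, g4 = rng.normal(size=(4, 5))
print(tppk((g1, g2), (g3, g4)))
```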

  11. Lack of sex-biased dispersal promotes fine-scale genetic structure in alpine ungulates

    Science.gov (United States)

    Gretchen H. Roffler; Sandra L. Talbot; Gordon Luikart; George K. Sage; Kristy L. Pilgrim; Layne G. Adams; Michael K. Schwartz

    2014-01-01

    Identifying patterns of fine-scale genetic structure in natural populations can advance understanding of critical ecological processes such as dispersal and gene flow across heterogeneous landscapes. Alpine ungulates generally exhibit high levels of genetic structure due to female philopatry and patchy configuration of mountain habitats. We assessed the spatial scale...

  12. Relationships between avian richness and landscape structure at multiple scales using multiple landscapes

    Science.gov (United States)

    Michael S. Mitchell; Scott H. Rutzmoser; T. Bently Wigley; Craig Loehle; John A. Gerwin; Patrick D. Keyser; Richard A. Lancia; Roger W. Perry; Christopher L. Reynolds; Ronald E. Thill; Robert Weih; Don White; Petra Bohall Wood

    2006-01-01

    Little is known about factors that structure biodiversity on landscape scales, yet current land management protocols, such as forest certification programs, place an increasing emphasis on managing for sustainable biodiversity at landscape scales. We used a replicated landscape study to evaluate relationships between forest structure and avian diversity at both stand...

  13. Inferring epidemic contact structure from phylogenetic trees.

    Directory of Open Access Journals (Sweden)

    Gabriel E Leventhal

    Full Text Available Contact structure is believed to have a large impact on epidemic spreading and consequently using networks to model such contact structure continues to gain interest in epidemiology. However, detailed knowledge of the exact contact structure underlying real epidemics is limited. Here we address the question whether the structure of the contact network leaves a detectable genetic fingerprint in the pathogen population. To this end we compare phylogenies generated by disease outbreaks in simulated populations with different types of contact networks. We find that the shape of these phylogenies strongly depends on contact structure. In particular, measures of tree imbalance allow us to quantify to what extent the contact structure underlying an epidemic deviates from a null model contact network and illustrate this in the case of random mixing. Using a phylogeny from the Swiss HIV epidemic, we show that this epidemic has a significantly more unbalanced tree than would be expected from random mixing.
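
    The abstract relies on measures of tree imbalance. The sketch below computes one standard such measure, the Colless index, on a toy binary tree encoded as nested tuples; it is offered only as an illustration of how imbalance can be quantified and is not necessarily the statistic used in the study.

```python
def leaf_count(tree):
    """Number of leaves below a node; a leaf is anything that is not a tuple."""
    if not isinstance(tree, tuple):
        return 1
    left, right = tree
    return leaf_count(left) + leaf_count(right)

def colless(tree):
    """Colless imbalance: sum of |leaves(left) - leaves(right)| over internal nodes."""
    if not isinstance(tree, tuple):
        return 0
    left, right = tree
    return abs(leaf_count(left) - leaf_count(right)) + colless(left) + colless(right)

# a perfectly balanced tree and a fully unbalanced (caterpillar) tree on 4 taxa
balanced = (("A", "B"), ("C", "D"))
caterpillar = ((("A", "B"), "C"), "D")
print(colless(balanced), colless(caterpillar))   # prints 0 3
```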

  14. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on structured support vector machines (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits the overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in estimating both target position and scale. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.
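
    For readers unfamiliar with correlation-filter tracking, the sketch below shows a minimal single-channel correlation filter learned and applied in the Fourier domain (a MOSSE-style formulation that estimates translation only). It is a simplified stand-in: the paper's tracker combines a DLSSVM model with a discriminative scale correlation filter, which this toy example does not reproduce.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Learn a single-channel correlation filter in the Fourier domain.

    patch           : 2-D template cropped around the object
    target_response : desired output, e.g. a Gaussian peaked at the template centre
    Returns the conjugate filter H* = G F* / (F F* + lambda).
    """
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H_conj, search_patch):
    """Correlate the filter with a new patch; the response peak gives the shift."""
    F = np.fft.fft2(search_patch)
    response = np.real(np.fft.ifft2(H_conj * F))
    return np.unravel_index(np.argmax(response), response.shape)

# toy usage on a hypothetical 64x64 patch
size = 64
yy, xx = np.mgrid[0:size, 0:size]
gauss = np.exp(-(((yy - size // 2) ** 2 + (xx - size // 2) ** 2) / (2 * 4.0 ** 2)))
patch = np.random.default_rng(1).random((size, size))
H = train_filter(patch, gauss)
print(detect(H, np.roll(patch, (3, 5), axis=(0, 1))))   # peak shifts with the target
```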

  15. Model distinguishability and inference robustness in mechanisms of cholera transmission and loss of immunity

    OpenAIRE

    Lee, Elizabeth C.; Kelly, Michael R.; Ochocki, Brad M.; Akinwumi, Segun M.; Hamre, Karen E. S.; Tien, Joseph H.; Eisenberg, Marisa C.

    2016-01-01

    Mathematical models of cholera and waterborne disease vary widely in their structures, in terms of transmission pathways, loss of immunity, and other features. These differences may yield different predictions and parameter estimates from the same data. Given the increasing use of models to inform public health decision-making, it is important to assess distinguishability (whether models can be distinguished based on fit to data) and inference robustness (whether model inferences are robust t...

  16. Fractal properties and small-scale structure of cosmic string networks

    International Nuclear Information System (INIS)

    Martins, C.J.A.P.; Shellard, E.P.S.

    2006-01-01

    We present results from a detailed numerical study of the small-scale and loop production properties of cosmic string networks, based on the largest and highest resolution string simulations to date. We investigate the nontrivial fractal properties of cosmic strings, in particular, the fractal dimension and renormalized string mass per unit length, and we also study velocity correlations. We demonstrate important differences between string networks in flat (Minkowski) spacetime and the two very similar expanding cases. For high resolution matter era network simulations, we provide strong evidence that small-scale structure has converged to 'scaling' on all dynamical length scales, without the need for other radiative damping mechanisms. We also discuss preliminary evidence that the dominant loop production size is also approaching scaling.

  17. Structured ecosystem-scale approach to marine water quality management

    CSIR Research Space (South Africa)

    Taljaard, Susan

    2006-10-01

    Full Text Available and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in response to recent advances in policies...

  18. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    … of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large-scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large-scale sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage …

  19. Inference of population splits and mixtures from genome-wide allele frequency data.

    Directory of Open Access Journals (Sweden)

    Joseph K Pickrell

    Full Text Available Many aspects of the historical relationships between populations in a species are reflected in genetic data. Inferring these relationships from genetic data, however, remains a challenging task. In this paper, we present a statistical model for inferring the patterns of population splits and mixtures in multiple populations. In our model, the sampled populations in a species are related to their common ancestor through a graph of ancestral populations. Using genome-wide allele frequency data and a Gaussian approximation to genetic drift, we infer the structure of this graph. We applied this method to a set of 55 human populations and a set of 82 dog breeds and wild canids. In both species, we show that a simple bifurcating tree does not fully describe the data; in contrast, we infer many migration events. While some of the migration events that we find have been detected previously, many have not. For example, in the human data, we infer that Cambodians trace approximately 16% of their ancestry to a population ancestral to other extant East Asian populations. In the dog data, we infer that both the boxer and basenji trace a considerable fraction of their ancestry (9% and 25%, respectively) to wolves subsequent to domestication and that East Asian toy breeds (the Shih Tzu and the Pekingese) result from admixture between modern toy breeds and "ancient" Asian breeds. Software implementing the model described here, called TreeMix, is available at http://treemix.googlecode.com.

  20. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

    In this work we introduce the hypersingular integral equations and analyze a realistic model of gravitational waveguides on a cantorian space-time. A waveguiding effect is considered with respect to the large scale structure of the Universe, where the structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that it seems we live in El Naschie's ε(∞) Cantorian space-time, where gravitational lensing and waveguiding effects can explain the appearing Universe. In particular, we consider filamentary and planar large scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. From this vision the Universe appears as a large set of self-similar adaptive mirrors, a picture supported by three numerical simulations. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large scale structure of a finite and not particularly large Universe.

  1. Bayesian inference for identifying interaction rules in moving animal groups.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Full Text Available The emergence of similar collective patterns from different self-propelled particle models of animal groups points to a restricted set of "universal" classes for these patterns. While universality is interesting, it is often the fine details of animal interactions that are of biological importance. Universality thus presents a challenge to inferring such interactions from macroscopic group dynamics since these can be consistent with many underlying interaction models. We present a Bayesian framework for learning animal interaction rules from fine scale recordings of animal movements in swarms. We apply these techniques to the inverse problem of inferring interaction rules from simulation models, showing that parameters can often be inferred from a small number of observations. Our methodology allows us to quantify our confidence in parameter fitting. For example, we show that attraction and alignment terms can be reliably estimated when animals are milling in a torus shape, while interaction radius cannot be reliably measured in such a situation. We assess the importance of rate of data collection and show how to test different models, such as topological and metric neighbourhood models. Taken together our results both inform the design of experiments on animal interactions and suggest how these data should be best analysed.
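
    As a toy analogue of the Bayesian framework described above, the sketch below infers a single attraction-strength parameter of a 1-D "swarm" model with a random-walk Metropolis sampler. The model, the flat prior and all parameter names are illustrative assumptions, not the authors' interaction rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(alpha, n_agents=20, n_steps=200, sigma=0.1):
    """Toy 1-D swarm: each agent drifts toward the group centroid with strength alpha."""
    X = np.zeros((n_steps, n_agents))
    X[0] = rng.normal(0, 1, n_agents)
    for t in range(n_steps - 1):
        drift = alpha * (X[t].mean() - X[t])
        X[t + 1] = X[t] + drift + rng.normal(0, sigma, n_agents)
    return X

def log_likelihood(alpha, X, sigma=0.1):
    """Gaussian log-likelihood (up to a constant) of the observed increments given alpha."""
    drift = alpha * (X[:-1].mean(axis=1, keepdims=True) - X[:-1])
    resid = X[1:] - X[:-1] - drift
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(X, n_iter=5000, step=0.02):
    """Random-walk Metropolis sampler for alpha with a flat prior on [0, 1]."""
    alpha, ll = 0.5, log_likelihood(0.5, X)
    samples = []
    for _ in range(n_iter):
        prop = alpha + rng.normal(0, step)
        if 0.0 <= prop <= 1.0:
            ll_prop = log_likelihood(prop, X)
            if np.log(rng.random()) < ll_prop - ll:
                alpha, ll = prop, ll_prop
        samples.append(alpha)
    return np.array(samples)

X = simulate(alpha=0.3)
post = metropolis(X)
print(post[1000:].mean(), post[1000:].std())   # posterior should concentrate near 0.3
```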

  2. The structure of tubulin-binding cofactor A from Leishmania major infers a mode of association during the early stages of microtubule assembly

    Energy Technology Data Exchange (ETDEWEB)

    Barrack, Keri L.; Fyfe, Paul K.; Hunter, William N., E-mail: w.n.hunter@dundee.ac.uk [University of Dundee, Dow Street, Dundee DD1 5EH, Scotland (United Kingdom)

    2015-04-21

    The structure of a tubulin-binding cofactor from L. major is reported and compared with yeast, plant and human orthologues. Tubulin-binding cofactor A (TBCA) participates in microtubule formation, a key process in eukaryotic biology to create the cytoskeleton. There is little information on how TBCA might interact with β-tubulin en route to microtubule biogenesis. To address this, the protozoan Leishmania major was targeted as a model system. The crystal structure of TBCA and comparisons with three orthologous proteins are presented. The presence of conserved features suggests that electrostatic interactions that are likely to involve the C-terminal tail of β-tubulin are key to association. This study provides a reagent and template to support further work in this area.

  3. Inference in 'poor' languages

    Energy Technology Data Exchange (ETDEWEB)

    Petrov, S.

    1996-10-01

    Languages with a solvable implication problem but without complete and consistent systems of inference rules ('poor' languages) are considered. The problem of the existence of a finite complete and consistent inference rule system for a 'poor' language is stated independently of the language or rule syntax. Several properties of the problem are proved. An application of the results to the language of join dependencies is given.

  4. Feelings about culture scales: development, factor structure, reliability, and validity.

    Science.gov (United States)

    Maffini, Cara S; Wong, Y Joel

    2015-04-01

    Although measures of cultural identity, values, and behavior exist in the multicultural psychological literature, there is currently no measure that explicitly assesses ethnic minority individuals' positive and negative affect toward culture. Therefore, we developed 2 new measures called the Feelings About Culture Scale–Ethnic Culture and the Feelings About Culture Scale–Mainstream American Culture and tested their psychometric properties. In 6 studies, we piloted the measures, conducted factor analyses to clarify their factor structure, and examined reliability and validity. The factor structure revealed 2 dimensions reflecting positive and negative affect for each measure. Results provided evidence for convergent, discriminant, criterion-related, and incremental validity as well as the reliability of the scales. The Feelings About Culture Scales are the first known measures to examine both positive and negative affect toward an individual's ethnic culture and mainstream American culture. The focus on affect captures dimensions of psychological experiences that differ from cognitive and behavioral constructs often used to measure cultural orientation. These measures can serve as a valuable contribution to both research and counseling by providing insight into the nuanced affective experiences ethnic minority individuals have toward culture. (c) 2015 APA, all rights reserved.

  5. Neural mechanisms underlying valence inferences to sound: The role of the right angular gyrus.

    Science.gov (United States)

    Bravo, Fernando; Cross, Ian; Hawkins, Sarah; Gonzalez, Nadia; Docampo, Jorge; Bruno, Claudio; Stamatakis, Emmanuel Andreas

    2017-07-28

    We frequently infer others' intentions based on non-verbal auditory cues. Although the brain underpinnings of social cognition have been extensively studied, no empirical work has yet examined the impact of musical structure manipulation on the neural processing of emotional valence during mental state inferences. We used a novel sound-based theory-of-mind paradigm in which participants categorized stimuli of different sensory dissonance level in terms of positive/negative valence. Whilst consistent with previous studies which propose facilitated encoding of consonances, our results demonstrated that distinct levels of consonance/dissonance elicited differential influences on the right angular gyrus, an area implicated in mental state attribution and attention reorienting processes. Functional and effective connectivity analyses further showed that consonances modulated a specific inhibitory interaction from associative memory to mental state attribution substrates. Following evidence suggesting that individuals with autism may process social affective cues differently, we assessed the relationship between participants' task performance and self-reported autistic traits in clinically typical adults. Higher scores on the social cognition scales of the AQ were associated with deficits in recognising positive valence in consonant sound cues. These findings are discussed with respect to Bayesian perspectives on autistic perception, which highlight a functional failure to optimize precision in relation to prior beliefs. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Origin of the large scale structures of the universe

    International Nuclear Information System (INIS)

    Oaknin, David H.

    2004-01-01

    We revise the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength

  7. EI: A Program for Ecological Inference

    Directory of Open Access Journals (Sweden)

    Gary King

    2004-09-01

    Full Text Available The program EI provides a method of inferring individual behavior from aggregate data. It implements the statistical procedures, diagnostics, and graphics from the book A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data (King 1997). Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Ecological inferences are required in political science research when individual-level surveys are unavailable (e.g., local or comparative electoral politics), unreliable (racial politics), insufficient (political geography), or infeasible (political history). They are also required in numerous areas of major significance in public policy (e.g., for applying the Voting Rights Act) and other academic disciplines ranging from epidemiology and marketing to sociology and quantitative history.

  8. Geophysical mapping of complex glaciogenic large-scale structures

    DEFF Research Database (Denmark)

    Høyer, Anne-Sophie

    2013-01-01

    This thesis presents the main results of a four-year PhD study concerning the use of geophysical data in geological mapping. The study is related to the Geocenter project, “KOMPLEKS”, which focuses on the mapping of complex, large-scale geological structures. The study area is approximately 100 km² … data types and co-interpret them in order to improve our geological understanding. However, in order to perform this successfully, methodological considerations are necessary. For instance, a structure indicated by a reflection in the seismic data is not always apparent in the resistivity data … information) can be collected. The geophysical data are used together with geological analyses from boreholes and pits to interpret the geological history of the hill-island. The geophysical data reveal that the glaciotectonic structures truncate at the surface. The directions of the structures were mapped …

  9. Frequentist and Bayesian inference for Gaussian-log-Gaussian wavelet trees and statistical signal processing applications

    DEFF Research Database (Denmark)

    Jacobsen, Christian Robert Dahl; Møller, Jesper

    2017-01-01

    We introduce new estimation methods for a subclass of the Gaussian scale mixture models for wavelet trees by Wainwright, Simoncelli and Willsky that rely on modern results for composite likelihoods and approximate Bayesian inference. Our methodology is illustrated for denoising and edge detection...

  10. Factor Structure of the Exercise Self-Efficacy Scale

    Science.gov (United States)

    Cornick, Jessica E.

    2015-01-01

    The current study utilized exercise self-efficacy ratings from undergraduate students to assess the factor structure of the Self-Efficacy to Regulate Exercise Scale (Bandura, 1997, 2006). An exploratory factor analysis (n = 759) indicated a two-factor model solution and three separate confirmatory factor analyses (n = 1,798) supported this…

  11. Thermal interaction in crusted melt jets with large-scale structures

    Energy Technology Data Exchange (ETDEWEB)

    Sugiyama, Ken-ichiro; Sotome, Fuminori; Ishikawa, Michio [Hokkaido Univ., Sapporo (Japan). Faculty of Engineering

    1998-01-01

    The objective of the present study is to experimentally observe thermal interactions capable of being triggered by entrainment or entrapment in crusted melt jets with 'large-scale structure'. The present experiment was carried out by dropping 100 grams of molten zinc and molten tin, a mass sufficient to generate large-scale structures in the melt jets. The experimental results show that thermal interaction of the entrapment type occurs in molten-zinc jets with rare probability, and thermal interaction of the entrainment type occurs in molten-tin jets with high probability. The difference in thermal interaction between molten zinc and molten tin may be attributed to differences in their kinematic viscosity and melting point. (author)

  12. The existence of very large-scale structures in the universe

    Energy Technology Data Exchange (ETDEWEB)

    Goicoechea, L J; Martin-Mirones, J M [Universidad de Cantabria Santander, (ES)

    1989-09-01

    Assuming that the dipole moment observed in the cosmic background radiation (microwaves and X-rays) can be interpreted as a consequence of the motion of the observer toward a non-local and very large-scale structure in our universe, we study the perturbation of the m-z relation by this inhomogeneity, the dynamical contribution of sources to the dipole anisotropy in the X-ray background and the imprint that several structures with such characteristics would have had on the microwave background at the decoupling. We conclude that in this model the observed anisotropy in the microwave background on intermediate angular scales (≈10°) may be in conflict with the existence of superstructures.

  13. Initial condition effects on large scale structure in numerical simulations of plane mixing layers

    Science.gov (United States)

    McMullan, W. A.; Garrett, S. J.

    2016-01-01

    In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square-root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.

  14. FINE-SCALE STRUCTURES OF FLUX ROPES TRACKED BY ERUPTING MATERIAL

    Energy Technology Data Exchange (ETDEWEB)

    Li Ting; Zhang Jun, E-mail: liting@nao.cas.cn, E-mail: zjun@nao.cas.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2013-06-20

    We present Solar Dynamics Observatory observations of two flux ropes tracked out by material from a surge and a failed filament eruption on 2012 July 29 and August 4, respectively. For the first event, the interaction between the erupting surge and a loop-shaped filament in the east seems to 'peel off' the filament and add bright mass into the flux rope body. The second event is associated with a C-class flare that occurs several minutes before the filament activation. The two flux ropes are, respectively, composed of 85 ± 12 and 102 ± 15 fine-scale structures, with an average width of about 1.″6. Our observations show that two extreme ends of the flux rope are rooted in opposite polarity fields and each end is composed of multiple footpoints (FPs) of fine-scale structures. The FPs of the fine-scale structures are located at network magnetic fields, with magnetic fluxes from 5.6 × 10¹⁸ Mx to 8.6 × 10¹⁹ Mx. Moreover, almost half of the FPs show converging motion of smaller magnetic structures over 10 hr before the appearance of the flux rope. By calculating the magnetic fields of the FPs, we deduce that the two flux ropes occupy at least 4.3 × 10²⁰ Mx and 7.6 × 10²⁰ Mx magnetic fluxes, respectively.

  15. PhySIC_IST: cleaning source trees to infer more informative supertrees.

    Science.gov (United States)

    Scornavacca, Celine; Berry, Vincent; Lefort, Vincent; Douzery, Emmanuel J P; Ranwez, Vincent

    2008-10-04

    Supertree methods combine phylogenies with overlapping sets of taxa into a larger one. Topological conflicts frequently arise among source trees for methodological or biological reasons, such as long branch attraction, lateral gene transfers, gene duplication/loss or deep gene coalescence. When topological conflicts occur among source trees, liberal methods infer supertrees containing the most frequent alternative, while veto methods infer supertrees not contradicting any source tree, i.e. discard all conflicting resolutions. When the source trees host a significant number of topological conflicts or have a small taxon overlap, supertree methods of both kinds can propose poorly resolved, hence uninformative, supertrees. To overcome this problem, we propose to infer non-plenary supertrees, i.e. supertrees that do not necessarily contain all the taxa present in the source trees, discarding those whose position greatly differs among source trees or for which insufficient information is provided. We detail a variant of the PhySIC veto method called PhySIC_IST that can infer non-plenary supertrees. PhySIC_IST aims at inferring supertrees that satisfy the same appealing theoretical properties as with PhySIC, while being as informative as possible under this constraint. The informativeness of a supertree is estimated using a variation of the CIC (Cladistic Information Content) criterion, that takes into account both the presence of multifurcations and the absence of some taxa. Additionally, we propose a statistical preprocessing step called STC (Source Trees Correction) to correct the source trees prior to the supertree inference. STC is a liberal step that removes the parts of each source tree that significantly conflict with other source trees. Combining STC with a veto method allows an explicit trade-off between veto and liberal approaches, tuned by a single parameter. Performing large-scale simulations, we observe that STC+PhySIC_IST infers much more informative

  16. Inference of gene regulatory networks from time series by Tsallis entropy

    Directory of Open Access Journals (Sweden)

    de Oliveira Evaldo A

    2011-05-01

    Full Text Available Abstract Background The inference of gene regulatory networks (GRNs from large-scale expression profiles is one of the most challenging problems of Systems Biology nowadays. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRNs inference methods based on entropy (mutual information, a new criterion function is here proposed. Results In this paper we introduce the use of generalized entropy proposed by Tsallis, for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach and the conditional entropy is applied as criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks and its gene transference function is obtained from random drawing on the set of possible Boolean functions, thus creating its dynamics. On the other hand, DREAM time series data presents variation of network size and its topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions A remarkable improvement of accuracy was observed in the experimental results by reducing the number of false connections in the inferred topology by the non-Shannon entropy. The obtained best free parameter of the Tsallis entropy was on average in the range 2.5
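
    To make the criterion concrete, the sketch below uses the Tsallis entropy of a target gene conditioned on candidate regulator states as a feature-selection score on binary time series, in the spirit of the approach described above. The toy network, the value q = 2.5 and all variable names are assumptions for illustration, not the paper's code.

```python
import numpy as np
from itertools import product

def tsallis(p, q=2.5):
    """Tsallis entropy H_q = (1 - sum p^q) / (q - 1); recovers Shannon as q -> 1."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def conditional_tsallis(target, predictors, q=2.5):
    """Mean Tsallis entropy of `target` conditioned on joint states of `predictors`.

    target     : (T,) binary time series of the regulated gene (at time t+1)
    predictors : (T, k) binary time series of candidate regulators (at time t)
    """
    T = len(target)
    h = 0.0
    for state in product([0, 1], repeat=predictors.shape[1]):
        mask = np.all(predictors == state, axis=1)
        if mask.sum() == 0:
            continue
        counts = np.bincount(target[mask], minlength=2)
        h += (mask.sum() / T) * tsallis(counts / mask.sum(), q)
    return h

# toy usage: gene 0 at time t+1 is (noisily) the AND of genes 1 and 2 at time t
rng = np.random.default_rng(1)
G = rng.integers(0, 2, size=(200, 3))
G[1:, 0] = (G[:-1, 1] & G[:-1, 2]) ^ (rng.random(199) < 0.05)
for cand in ([1], [2], [1, 2]):
    print(cand, round(conditional_tsallis(G[1:, 0], G[:-1, cand], q=2.5), 3))
# the true regulator set [1, 2] should yield the lowest conditional entropy
```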

  17. Fusion And Inference From Multiple And Massive Disparate Distributed Dynamic Data Sets

    Science.gov (United States)

    2017-07-01

    … computational execution together form a comprehensive, widely-applicable paradigm for statistical graph inference. … always involve challenging empirical modeling and implementation issues. Our project has propelled the mathematical development, statistical design … D. J., and Sussman, D. L., “A limit theorem for scaled eigenvectors of random dot product graphs,” Sankhya A: Mathematical Statistics and …

  18. Development of porous structure simulator for multi-scale simulation of irregular porous catalysts

    International Nuclear Information System (INIS)

    Koyama, Michihisa; Suzuki, Ai; Sahnoun, Riadh; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A.; Miyamoto, Akira

    2008-01-01

    Efficient development of highly functional porous materials, used as catalysts in the automobile industry, demands a meticulous knowledge of the nano-scale interface at the electronic and atomistic scale. However, it is often difficult to correlate the microscopic interfacial interactions with macroscopic characteristics of the materials; for instance, to relate the interaction between a precious metal and its support oxide to the long-term sintering properties of the catalyst. Multi-scale computational chemistry approaches can contribute to bridging the gap between micro- and macroscopic characteristics of these materials; however, this type of multi-scale simulation has been difficult to apply, especially to porous materials. To overcome this problem, we have developed a novel mesoscopic approach based on a porous structure simulator. This simulator can automatically construct irregular porous structures on a computer, enabling simulations with complex meso-scale structures. Moreover, in this work we have developed a new method to simulate the long-term sintering properties of metal particles on porous catalysts. Finally, we have applied the method to the simulation of sintering properties of Pt on an alumina support. This newly developed method has enabled us to propose a multi-scale simulation approach for porous catalysts.

  19. Adaptive nonparametric Bayesian inference using location-scale mixture priors

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2010-01-01

    We study location-scale mixture priors for nonparametric statistical problems, including multivariate regression, density estimation and classification. We show that a rate-adaptive procedure can be obtained if the prior is properly constructed. In particular, we show that adaptation is achieved if

  20. Phase space properties of local observables and structure of scaling limits

    International Nuclear Information System (INIS)

    Buchholz, D.

    1995-05-01

    For any given algebra of local observables in relativistic quantum field theory there exists an associated scaling algebra which permits one to introduce renormalization group transformations and to construct the scaling (short distance) limit of the theory. On the basis of this result it is discussed how the phase space properties of a theory determine the structure of its scaling limit. Bounds on the number of local degrees of freedom appearing in the scaling limit are given which allow one to distinguish between theories with classical and quantum scaling limits. The results can also be used to establish physically significant algebraic properties of the scaling limit theories, such as the split property. (orig.)

  1. The Student Perception of University Support and Structure Scale: Development and Validation

    Science.gov (United States)

    Wintre, Maxine G.; Gates, Shawn K. E.; Pancer, W. Mark; Pratt, Michael S.; Polivy, Janet; Birnie-Lefcovitch, S.; Adams, Gerald

    2009-01-01

    A new scale, the Student Perception of University Support and Structure Scale (SPUSS), was developed for research on the transition to university. The scale was based on concepts derived from Baumrind's (1971) theory of parenting styles. Data were obtained from two separate cohorts of freshmen (n=759 and 397) attending six Canadian universities of…

  2. Inferring regulatory networks from experimental morphological phenotypes: a computational method reverse-engineers planarian regeneration.

    Directory of Open Access Journals (Sweden)

    Daniel Lobo

    2015-06-01

    Full Text Available Transformative applications in biomedicine require the discovery of complex regulatory networks that explain the development and regeneration of anatomical structures, and reveal what external signals will trigger desired changes of large-scale pattern. Despite recent advances in bioinformatics, extracting mechanistic pathway models from experimental morphological data is a key open challenge that has resisted automation. The fundamental difficulty of manually predicting emergent behavior of even simple networks has limited the models invented by human scientists to pathway diagrams that show necessary subunit interactions but do not reveal the dynamics that are sufficient for complex, self-regulating pattern to emerge. To finally bridge the gap between high-resolution genetic data and the ability to understand and control patterning, it is critical to develop computational tools to efficiently extract regulatory pathways from the resultant experimental shape phenotypes. For example, planarian regeneration has been studied for over a century, but despite increasing insight into the pathways that control its stem cells, no constructive, mechanistic model has yet been found by human scientists that explains more than one or two key features of its remarkable ability to regenerate its correct anatomical pattern after drastic perturbations. We present a method to infer the molecular products, topology, and spatial and temporal non-linear dynamics of regulatory networks recapitulating in silico the rich dataset of morphological phenotypes resulting from genetic, surgical, and pharmacological experiments. We demonstrated our approach by inferring complete regulatory networks explaining the outcomes of the main functional regeneration experiments in the planarian literature. By analyzing all the datasets together, our system inferred the first comprehensive systems-biology dynamical model explaining patterning in planarian regeneration. This method

  3. An Inference Language for Imaging

    DEFF Research Database (Denmark)

    Pedemonte, Stefano; Catana, Ciprian; Van Leemput, Koen

    2014-01-01

    We introduce iLang, a language and software framework for probabilistic inference. The iLang framework enables the definition of directed and undirected probabilistic graphical models and the automated synthesis of high performance inference algorithms for imaging applications. The iLang framework…

  4. Skin and scales of teleost fish: Simple structure but high performance and multiple functions

    Science.gov (United States)

    Vernerey, Franck J.; Barthelat, Francois

    2014-08-01

    Natural and man-made structural materials perform similar functions such as structural support or protection. Therefore they rely on the same types of properties: strength, robustness, lightweight. Nature can therefore provide a significant source of inspiration for new and alternative engineering designs. We report here some results regarding a very common, yet largely unknown, type of biological material: fish skin. Within a thin, flexible and lightweight layer, fish skins display a variety of strain stiffening and stabilizing mechanisms which promote multiple functions such as protection, robustness and swimming efficiency. We particularly discuss four important features pertaining to scaled skins: (a) a strongly elastic tensile behavior that is independent from the presence of rigid scales, (b) a compressive response that prevents buckling and wrinkling instabilities, which are usually predominant for thin membranes, (c) a bending response that displays nonlinear stiffening mechanisms arising from geometric constraints between neighboring scales and (d) a robust structure that preserves the above characteristics upon the loss or damage of structural elements. These important properties make fish skin an attractive model for the development of very thin and flexible armors and protective layers, especially when combined with the high penetration resistance of individual scales. Scaled structures inspired by fish skin could find applications in ultra-light and flexible armor systems, flexible electronics or the design of smart and adaptive morphing structures for aerospace vehicles.

  5. Structural Analysis of Treatment Cycles Representing Transitions between Nursing Organizational Units Inferred from Diabetes

    Science.gov (United States)

    Dehmer, Matthias; Kurt, Zeyneb; Emmert-Streib, Frank; Them, Christa; Schulc, Eva; Hofer, Sabine

    2015-01-01

    In this paper, we investigate treatment cycles inferred from diabetes data by means of graph theory. We define the term treatment cycles graph-theoretically and perform a descriptive as well as quantitative analysis thereof. Also, we interpret our findings in terms of nursing and clinical management. PMID:26030296

  6. Inferring animal social networks and leadership: applications for passive monitoring arrays.

    Science.gov (United States)

    Jacoby, David M P; Papastamatiou, Yannis P; Freeman, Robin

    2016-11-01

    Analyses of animal social networks have frequently benefited from techniques derived from other disciplines. Recently, machine learning algorithms have been adopted to infer social associations from time-series data gathered using remote, telemetry systems situated at provisioning sites. We adapt and modify existing inference methods to reveal the underlying social structure of wide-ranging marine predators moving through spatial arrays of passive acoustic receivers. From six months of tracking data for grey reef sharks (Carcharhinus amblyrhynchos) at Palmyra atoll in the Pacific Ocean, we demonstrate that some individuals emerge as leaders within the population and that this behavioural coordination is predicted by both sex and the duration of co-occurrences between conspecifics. In doing so, we provide the first evidence of long-term, spatially extensive social processes in wild sharks. To achieve these results, we interrogate simulated and real tracking data with the explicit purpose of drawing attention to the key considerations in the use and interpretation of inference methods and their impact on resultant social structure. We provide a modified translation of the GMMEvents method for R, including new analyses quantifying the directionality and duration of social events with the aim of encouraging the careful use of these methods more widely in less tractable social animal systems but where passive telemetry is already widespread. © 2016 The Authors.
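
    GMMEvents itself clusters detection times with Gaussian mixtures; as a much simpler illustration of how associations can be inferred from passive receiver data, the sketch below counts co-detections at the same receiver within a fixed time window and normalises them into an association index. The window length, data format and index definition are assumptions, not the adapted method described in the paper.

```python
import numpy as np
from collections import defaultdict

def association_matrix(detections, window=300.0):
    """Simple association index from receiver detections.

    detections : list of (animal_id, receiver_id, timestamp_in_seconds)
    window     : detections at the same receiver closer than this count as a co-occurrence
    Returns the sorted animal ids and a symmetric matrix of association indices.
    """
    ids = sorted({d[0] for d in detections})
    index = {a: i for i, a in enumerate(ids)}
    n = len(ids)
    joint = np.zeros((n, n))     # co-detections of each pair
    total = np.zeros(n)          # detections of each animal

    by_receiver = defaultdict(list)
    for animal, receiver, t in detections:
        by_receiver[receiver].append((t, animal))

    for events in by_receiver.values():
        events.sort()
        for i, (t_i, a_i) in enumerate(events):
            total[index[a_i]] += 1
            for t_j, a_j in events[i + 1:]:
                if t_j - t_i > window:
                    break
                if a_j != a_i:
                    joint[index[a_i], index[a_j]] += 1
                    joint[index[a_j], index[a_i]] += 1

    # normalise co-detections by the mean number of detections of the pair
    denom = 0.5 * (total[:, None] + total[None, :])
    return ids, np.divide(joint, denom, out=np.zeros((n, n)), where=denom > 0)

# toy usage with two hypothetical sharks detected at two receivers
dets = [("s1", "r1", 0.0), ("s2", "r1", 120.0), ("s1", "r2", 4000.0), ("s2", "r2", 4100.0)]
print(association_matrix(dets))
```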

  7. Impact of noise on molecular network inference.

    Directory of Open Access Journals (Sweden)

    Radhakrishnan Nagarajan

    Full Text Available Molecular entities work in concert as a system and mediate phenotypic outcomes and disease states. There has been recent interest in modelling the associations between molecular entities from their observed expression profiles as networks using a battery of algorithms. These networks have proven to be useful abstractions of the underlying pathways and signalling mechanisms. Noise is ubiquitous in molecular data and can have a pronounced effect on the inferred network. Noise can be an outcome of several factors including: inherent stochastic mechanisms at the molecular level, variation in the abundance of molecules, heterogeneity, sensitivity of the biological assay or measurement artefacts prevalent especially in high-throughput settings. The present study investigates the impact of discrepancies in noise variance on pair-wise dependencies, conditional dependencies and constraint-based Bayesian network structure learning algorithms that incorporate conditional independence tests as a part of the learning process. Popular network motifs and fundamental connections, namely (a) common-effect, (b) three-chain, and (c) coherent type-I feed-forward loop (FFL), are investigated. The choice of these elementary networks can be attributed to their prevalence across more complex networks. Analytical expressions elucidating the impact of discrepancies in noise variance on pairwise dependencies and conditional dependencies for special cases of these motifs are presented. Subsequently, the impact of noise on two popular constraint-based Bayesian network structure learning algorithms such as Grow-Shrink (GS) and Incremental Association Markov Blanket (IAMB) that implicitly incorporate tests for conditional independence is investigated. Finally, the impact of noise on networks inferred from publicly available single cell molecular expression profiles is investigated. While discrepancies in noise variance are overlooked in routine molecular network inference, the
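
    The following sketch illustrates the study's central point on the common-effect motif X → Z ← Y: the marginal dependence between X and Y stays near zero, while the conditional ("explaining away") dependence induced by conditioning on Z weakens as the noise variance on Z grows (analytically, the partial correlation in this toy setup is -1/(1+σ²)). The simulation settings are illustrative assumptions, not the paper's analytical cases.

```python
import numpy as np

rng = np.random.default_rng(2)

def partial_corr(x, y, z):
    """First-order partial correlation: correlate x and y after regressing out z."""
    def resid(a):
        beta = np.polyfit(z, a, 1)
        return a - np.polyval(beta, z)
    return np.corrcoef(resid(x), resid(y))[0, 1]

n = 5000
for noise_sd in (0.1, 1.0, 3.0):
    x = rng.normal(0, 1, n)
    y = rng.normal(0, 1, n)
    z = x + y + rng.normal(0, noise_sd, n)          # common-effect motif X -> Z <- Y
    print(f"noise sd {noise_sd}: corr(X,Y) = {np.corrcoef(x, y)[0, 1]:+.3f}, "
          f"partial corr(X,Y|Z) = {partial_corr(x, y, z):+.3f}")
# the induced conditional dependence weakens as the noise variance on Z grows
```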

  8. Genetic differentiation across multiple spatial scales of the Red Sea of the corals Stylophora pistillata and Pocillopora verrucosa

    KAUST Repository

    Monroe, Alison

    2015-12-01

    Observing populations at different spatial scales gives greater insight into the specific processes driving genetic differentiation and population structure. Here we determined population connectivity across multiple spatial scales in the Red Sea to determine the population structures of two reef-building corals, Stylophora pistillata and Pocillopora verrucosa. The Red Sea is a 2,250 km long body of water with extremely variable latitudinal environmental gradients. Mitochondrial and microsatellite markers were used to determine distinct lineages and to look for genetic differentiation among sampling sites. No distinctive population structure across the latitudinal gradient was discovered within this study, suggesting a phenotypic plasticity of both these species to various environments. Stylophora pistillata displayed a heterogeneous distribution of three distinct genetic populations on both a fine and large scale. Fst, Gst, and Dest were all significant (p-value < 0.05) and showed moderate genetic differentiation between all sampling sites. However, this seems to be a byproduct of the heterogeneous distribution, as no distinct genetic population breaks were found. Stylophora pistillata showed greater population structure on a fine scale, suggesting genetic selection based on fine-scale environmental variations. However, further environmental and oceanographic data are needed to make more inferences on this structure at small spatial scales. This study highlights the deficits of knowledge of both the Red Sea and coral plasticity in regards to local environmental conditions.
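
    As a worked illustration of the kind of differentiation statistic reported above, the sketch below computes Wright's F_ST for a single biallelic locus from per-site allele frequencies; the example frequencies are hypothetical and the estimator is the simple heterozygosity-based form, not the exact Fst, Gst and Dest estimators used in the study.

```python
import numpy as np

def fst(freqs, weights=None):
    """Wright's F_ST for one biallelic locus from per-site allele frequencies.

    F_ST = (H_T - H_S) / H_T, with H_S the mean within-site heterozygosity and
    H_T the heterozygosity of the pooled (total) population.
    """
    p = np.asarray(freqs, dtype=float)
    w = np.full(len(p), 1.0 / len(p)) if weights is None else np.asarray(weights) / np.sum(weights)
    h_s = np.sum(w * 2 * p * (1 - p))
    p_bar = np.sum(w * p)
    h_t = 2 * p_bar * (1 - p_bar)
    return (h_t - h_s) / h_t if h_t > 0 else 0.0

# two hypothetical reef sites: weak vs. strong differentiation at one locus
print(fst([0.48, 0.52]))   # close to 0
print(fst([0.10, 0.90]))   # much larger
```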

  9. Co-Cure-Ply Resins for High Performance, Large-Scale Structures

    Data.gov (United States)

    National Aeronautics and Space Administration — Large-scale composite structures are commonly joined by secondary bonding of molded-and-cured thermoset components. This approach may result in unpredictable joint...

  10. Inference

    DEFF Research Database (Denmark)

    Møller, Jesper

    2010-01-01

    Chapter 9: This contribution concerns statistical inference for parametric models used in stochastic geometry and based on quick and simple simulation-free procedures as well as more comprehensive methods based on a maximum likelihood or Bayesian approach combined with Markov chain Monte Carlo (MCMC) techniques. Due to space limitations the focus is on spatial point processes.

  11. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
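
    The method above exploits analytic results for the expected frequency spectrum. The sketch below shows only the simplest special case: under a constant-size coalescent the expected spectrum is E[ξ_i] = θ/i, and the Poisson maximum-likelihood estimate of θ has a closed form (Watterson's estimator). The piecewise-exponential models and automatic differentiation of the paper are not reproduced here.

```python
import numpy as np

def expected_sfs(theta, n):
    """Expected unfolded sample frequency spectrum under a constant-size coalescent:
    E[xi_i] = theta / i for i = 1, ..., n-1."""
    return theta / np.arange(1, n)

def fit_theta(observed_sfs):
    """Poisson maximum-likelihood estimate of theta.

    With E[xi_i] = theta/i the MLE is theta_hat = sum_i xi_i / sum_i (1/i),
    i.e. Watterson's estimator."""
    xi = np.asarray(observed_sfs, dtype=float)
    i = np.arange(1, len(xi) + 1)
    return xi.sum() / np.sum(1.0 / i)

rng = np.random.default_rng(3)
true_theta, n = 40.0, 50
observed = rng.poisson(expected_sfs(true_theta, n))
print(fit_theta(observed))   # should land close to 40
```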

  12. Probing Mantle Heterogeneity Across Spatial Scales

    Science.gov (United States)

    Hariharan, A.; Moulik, P.; Lekic, V.

    2017-12-01

    Inferences of mantle heterogeneity in terms of temperature, composition, grain size, melt and crystal structure may vary across local, regional and global scales. Probing these scale-dependent effects require quantitative comparisons and reconciliation of tomographic models that vary in their regional scope, parameterization, regularization and observational constraints. While a range of techniques like radial correlation functions and spherical harmonic analyses have revealed global features like the dominance of long-wavelength variations in mantle heterogeneity, they have limited applicability for specific regions of interest like subduction zones and continental cratons. Moreover, issues like discrepant 1-D reference Earth models and related baseline corrections have impeded the reconciliation of heterogeneity between various regional and global models. We implement a new wavelet-based approach that allows for structure to be filtered simultaneously in both the spectral and spatial domain, allowing us to characterize heterogeneity on a range of scales and in different geographical regions. Our algorithm extends a recent method that expanded lateral variations into the wavelet domain constructed on a cubed sphere. The isolation of reference velocities in the wavelet scaling function facilitates comparisons between models constructed with arbitrary 1-D reference Earth models. The wavelet transformation allows us to quantify the scale-dependent consistency between tomographic models in a region of interest and investigate the fits to data afforded by heterogeneity at various dominant wavelengths. We find substantial and spatially varying differences in the spectrum of heterogeneity between two representative global Vp models constructed using different data and methodologies. Applying the orthonormality of the wavelet expansion, we isolate detailed variations in velocity from models and evaluate additional fits to data afforded by adding such complexities to long
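
    As a one-dimensional illustration of scale-by-scale model comparison (the study itself works with wavelets on a cubed sphere), the sketch below applies a plain Haar transform to two hypothetical velocity-anomaly profiles and correlates their detail coefficients level by level; all data and names are made up for the example.

```python
import numpy as np

def haar_decompose(signal):
    """Plain 1-D Haar transform; returns detail coefficients per scale (fine to coarse).
    The signal length must be a power of two."""
    s = np.asarray(signal, dtype=float)
    details = []
    while len(s) > 1:
        details.append((s[0::2] - s[1::2]) / np.sqrt(2))
        s = (s[0::2] + s[1::2]) / np.sqrt(2)
    return details

def scale_correlation(model_a, model_b):
    """Correlation of Haar detail coefficients between two profiles, scale by scale."""
    out = []
    for level, (da, db) in enumerate(zip(haar_decompose(model_a), haar_decompose(model_b))):
        if len(da) > 1:
            out.append((level, np.corrcoef(da, db)[0, 1]))
    return out

# two hypothetical velocity-anomaly profiles that agree only at long wavelengths
x = np.linspace(0, 2 * np.pi, 128)
model_a = np.sin(x) + 0.3 * np.sin(16 * x)
model_b = np.sin(x) + 0.3 * np.random.default_rng(4).normal(size=128)
print(scale_correlation(model_a, model_b))   # fine scales decorrelate, coarse scales agree
```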

  13. Galaxies distribution in the universe: large-scale statistics and structures

    International Nuclear Information System (INIS)

    Maurogordato, Sophie

    1988-01-01

    This research thesis addresses the distribution of galaxies in the Universe, and more particularly large-scale statistics and structures. Based on an assessment of the main statistical techniques in use, the author outlines the need to develop additional tools to correlation functions in order to characterise the distribution. She introduces a new indicator: the probability that a volume randomly placed in the distribution is empty. This allows a characterisation of void properties at the working scales (up to 10 h⁻¹ Mpc) in the Harvard Smithsonian Center for Astrophysics Redshift Survey, or CfA catalog. A systematic analysis of statistical properties of different sub-samples has then been performed with respect to size and location, luminosity class, and morphological type. This analysis is then extended to different scenarios of structure formation. A programme of radial velocity measurements based on observations allows the determination of possible relationships between apparent structures. The author also presents results of the search for southern extensions of the Perseus supercluster.
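
    The void-probability indicator introduced in the thesis can be estimated by Monte Carlo: place test spheres at random and record the fraction containing no galaxy. The sketch below does this for a toy periodic catalogue and checks the unclustered (Poisson) expectation P₀(R) = exp(−n V(R)); the catalogue, box size and radii are illustrative assumptions.

```python
import numpy as np

def void_probability(points, radii, box_size=1.0, n_spheres=2000, seed=0):
    """Monte Carlo estimate of P_0(R): the probability that a randomly placed sphere
    of radius R contains no galaxy, for a 3-D catalogue in a periodic cubic box."""
    rng = np.random.default_rng(seed)
    centres = rng.uniform(0, box_size, size=(n_spheres, 3))
    # periodic (minimum-image) distance from every test centre to its nearest galaxy
    d = centres[:, None, :] - points[None, :, :]
    d -= box_size * np.round(d / box_size)
    nearest = np.sqrt((d ** 2).sum(axis=2)).min(axis=1)
    return np.array([np.mean(nearest > R) for R in radii])

# toy catalogue: an unclustered (Poisson) distribution, where P_0(R) = exp(-n V(R))
rng = np.random.default_rng(5)
galaxies = rng.uniform(0, 1, size=(500, 3))
radii = np.array([0.02, 0.05, 0.10])
print(void_probability(galaxies, radii))
print(np.exp(-500 * 4.0 / 3.0 * np.pi * radii ** 3))   # analytic Poisson expectation
```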

  14. Primordial Non-Gaussianity in the Large-Scale Structure of the Universe

    Directory of Open Access Journals (Sweden)

    Vincent Desjacques

    2010-01-01

    generated the cosmological fluctuations observed today. Any detection of significant non-Gaussianity would thus have profound implications for our understanding of cosmic structure formation. The large-scale mass distribution in the Universe is a sensitive probe of the nature of initial conditions. Recent theoretical progress together with rapid developments in observational techniques will enable us to critically confront predictions of inflationary scenarios and set constraints as competitive as those from the Cosmic Microwave Background. In this paper, we review past and current efforts in the search for primordial non-Gaussianity in the large-scale structure of the Universe.

  15. Inference regarding multiple structural changes in linear models with endogenous regressors

    Science.gov (United States)

    Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia

    2012-01-01

    This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
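
    The sketch below illustrates the estimation idea on simulated data: for each candidate break date, the model is estimated by two-stage least squares on the two sub-samples, and the break minimising the pooled 2SLS residual sum of squares is retained. The data-generating process, instrument strength and trimming fraction are assumptions for the example, not the paper's empirical application.

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: project X on the instruments Z, then regress y on the projection."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    return beta, y - X @ beta          # structural residuals use the original X

def estimate_break(y, X, Z, trim=0.15):
    """Grid search for one break date minimising the pooled 2SLS residual sum of squares."""
    T = len(y)
    best_ssr, best_k = np.inf, None
    for k in range(int(trim * T), int((1 - trim) * T)):
        ssr = 0.0
        for sl in (slice(0, k), slice(k, T)):
            _, resid = tsls(y[sl], X[sl], Z[sl])
            ssr += np.sum(resid ** 2)
        if ssr < best_ssr:
            best_ssr, best_k = ssr, k
    return best_k

# simulated data: endogenous regressor, coefficient shifts from 1.0 to 2.0 at t = 120
rng = np.random.default_rng(6)
T = 200
z = rng.normal(size=(T, 1))                       # instrument
u = rng.normal(size=T)                            # structural error
x = 0.8 * z[:, 0] + 0.5 * u + rng.normal(size=T)  # regressor correlated with u
beta_t = np.where(np.arange(T) < 120, 1.0, 2.0)
y = beta_t * x + u
print(estimate_break(y, x[:, None], z))           # estimated break should be near 120
```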

  16. Large-scale structure in the universe: Theory vs observations

    International Nuclear Information System (INIS)

    Kashlinsky, A.; Jones, B.J.T.

    1990-01-01

    A variety of observations constrain models of the origin of large scale cosmic structures. We review here the elements of current theories and comment in detail on which of the current observational data provide the principal constraints. We point out that enough observational data have accumulated to constrain (and perhaps determine) the power spectrum of primordial density fluctuations over a very large range of scales. We discuss the theories in the light of observational data and focus on the potential of future observations in providing even (and ever) tighter constraints. (orig.)

  17. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  18. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

    Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  19. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  20. Processor farming method for multi-scale analysis of masonry structures

    Science.gov (United States)

    Krejčí, Tomáš; Koudelka, Tomáš

    2017-07-01

    This paper describes a processor farming method for coupled heat and moisture transport in masonry using a two-level approach. The motivation for the two-level description comes from difficulties connected with masonry structures, where the size of stone blocks is much larger than the size of mortar layers and a very fine finite element mesh has to be used. The two-level approach is suitable for parallel computing because nearly all computations can be performed independently with little synchronization. This approach is called processor farming. The master processor deals with the macro-scale level (the structure), while the slave processors deal with a homogenization procedure on the meso-scale level, which is represented by an appropriate representative volume element.
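
    A minimal sketch of the processor-farming pattern described above, written in Python's multiprocessing module for illustration: a master process owns the macro-scale problem and farms meso-scale homogenization tasks out to slave processes. The homogenize_rve function and all material values are invented stand-ins for a real RVE finite element solve; this is not the authors' code.

    from multiprocessing import Pool
    import numpy as np

    def homogenize_rve(args):
        """Slave task: meso-scale homogenization for one macro integration point.
        A toy volume average of stone/mortar conductivities stands in for a full
        RVE finite element solve."""
        point_id, stone_fraction = args
        k_stone, k_mortar = 2.3, 0.8                    # illustrative values [W/m/K]
        return point_id, stone_fraction * k_stone + (1.0 - stone_fraction) * k_mortar

    def master(n_points=16, n_slaves=4):
        """Master process: owns the macro-scale problem and farms out the RVE jobs."""
        tasks = [(i, 0.6 + 0.3 * np.random.rand()) for i in range(n_points)]
        with Pool(processes=n_slaves) as pool:          # the slave processors
            results = dict(pool.map(homogenize_rve, tasks))
        # the effective properties would now feed the macro-scale FE assembly
        return np.array([results[i] for i in range(n_points)])

    if __name__ == "__main__":
        print(master())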

  1. Accelerated probabilistic inference of RNA structure evolution

    Directory of Open Access Journals (Sweden)

    Holmes Ian

    2005-03-01

    Full Text Available Abstract Background Pairwise stochastic context-free grammars (Pair SCFGs) are powerful tools for evolutionary analysis of RNA, including simultaneous RNA sequence alignment and secondary structure prediction, but the associated algorithms are intensive in both CPU and memory usage. The same problem is faced by other RNA alignment-and-folding algorithms based on Sankoff's 1985 algorithm. It is therefore desirable to constrain such algorithms, by pre-processing the sequences and using this first pass to limit the range of structures and/or alignments that can be considered. Results We demonstrate how flexible classes of constraint can be imposed, greatly reducing the computational costs while maintaining a high quality of structural homology prediction. Any score-attributed context-free grammar (e.g. energy-based scoring schemes, or conditionally normalized Pair SCFGs) is amenable to this treatment. It is now possible to combine independent structural and alignment constraints of unprecedented general flexibility in Pair SCFG alignment algorithms. We outline several applications to the bioinformatics of RNA sequence and structure, including Waterman-Eggert N-best alignments and progressive multiple alignment. We evaluate the performance of the algorithm on test examples from the RFAM database. Conclusion A program, Stemloc, that implements these algorithms for efficient RNA sequence alignment and structure prediction is available under the GNU General Public License.

  2. Subsurface mapping of Rustenburg Layered Suite (RLS), Bushveld Complex, South Africa: Inferred structural features using borehole data and spatial analysis

    Science.gov (United States)

    Bamisaiye, O. A.; Eriksson, P. G.; Van Rooy, J. L.; Brynard, H. M.; Foya, S.; Billay, A. Y.; Nxumalo, V.

    2017-08-01

    Faults and other structural features within the mafic-ultramafic layers of the Bushveld Complex have long been a major issue for exploration and mine planning. This study employed a new approach to detecting faults with both regional and meter-scale offsets, which is not possible with the commonly applied structure contour mapping. Interpretations of faults from structural and isopach maps were previously based on geological experience, while meter-scale faults were virtually impossible to detect from such maps. Spatial analysis was performed primarily using borehole data. This resulted in the identification of previously known structures and other hitherto unsuspected structural features. Consequently, the location, trends, and geometry of faults and some regional features within the Rustenburg Layered Suite (RLS) that might not be easy to detect through field mapping are adequately described in this study.

  3. Fractals and the Large-Scale Structure in the Universe

    Indian Academy of Sciences (India)

    Fractals and the Large-Scale Structure in the Universe - Is the Cosmological Principle Valid? A K Mittal, T R Seshadri. General Article, Resonance – Journal of Science Education, Volume 7, Issue 4, April 2002, pp. 39-47.

  4. A structured ecosystem-scale approach to marine water quality ...

    African Journals Online (AJOL)

    These, in turn, created the need for holistic and integrated frameworks within which to design and implement environmental management programmes. A structured ecosystem-scale approach for the design and implementation of marine water quality management programmes developed by the CSIR (South Africa) in ...

  5. A new scaling approach for the mesoscale simulation of magnetic domain structures using Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.

    2017-06-15

    Highlights: • Developed a new scaling technique for the dipole–dipole interaction energy. • Developed a new scaling technique for the exchange interaction energy. • Used the scaling laws to extend atomistic simulations to the micrometer length scale. • Demonstrated the transition from a mono-domain to a vortex magnetic structure. • The simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.

  6. Inflation and large scale structure formation after COBE

    International Nuclear Information System (INIS)

    Schaefer, R.K.; Shafi, Q.

    1992-06-01

    The simplest realizations of the new inflationary scenario typically give rise to primordial density fluctuations which deviate logarithmically from the scale free Harrison-Zeldovich spectrum. We consider a number of such examples and, in each case we normalize the amplitude of the fluctuations with the recent COBE measurement of the microwave background anisotropy. The predictions for the bulk velocities as well as anisotropies on smaller (1-2 degrees) angular scales are compared with the Harrison-Zeldovich case. Deviations from the latter range from a few to about 15 percent. We also estimate the redshift beyond which the quasars would not be expected to be seen. The inflationary quasar cutoff redshifts can vary by as much as 25% from the Harrison-Zeldovich case. We find that the inflationary scenario provides a good starting point for a theory of large scale structure in the universe provided the dark matter is a combination of cold plus (10-30%) hot components. (author). 27 refs, 1 fig., 1 tab

  7. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi-service networks. This is for example seen in Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  8. Forward and backward inference in spatial cognition.

    Directory of Open Access Journals (Sweden)

    Will D Penny

    Full Text Available This paper shows that the various computations underlying spatial cognition can be implemented using statistical inference in a single probabilistic model. Inference is implemented using a common set of 'lower-level' computations involving forward and backward inference over time. For example, to estimate where you are in a known environment, forward inference is used to optimally combine location estimates from path integration with those from sensory input. To decide which way to turn to reach a goal, forward inference is used to compute the likelihood of reaching that goal under each option. To work out which environment you are in, forward inference is used to compute the likelihood of sensory observations under the different hypotheses. For reaching sensory goals that require a chaining together of decisions, forward inference can be used to compute a state trajectory that will lead to that goal, and backward inference to refine the route and estimate control signals that produce the required trajectory. We propose that these computations are reflected in recent findings of pattern replay in the mammalian brain. Specifically, that theta sequences reflect decision making, theta flickering reflects model selection, and remote replay reflects route and motor planning. We also propose a mapping of the above computational processes onto lateral and medial entorhinal cortex and hippocampus.
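
    The forward and backward computations described above can be illustrated with a generic discrete state-space (hidden Markov) filter and smoother. The sketch below uses invented transition and observation matrices and is not the paper's spatial-cognition model.

    import numpy as np

    T_mat = np.array([[0.9, 0.1],     # state transition probabilities
                      [0.2, 0.8]])
    O_mat = np.array([[0.7, 0.3],     # observation likelihoods p(obs | state)
                      [0.1, 0.9]])
    obs = [0, 1, 1, 0]                # observed symbols over time
    prior = np.array([0.5, 0.5])

    # Forward inference: p(state_t | obs_1..t), e.g. combining path integration
    # (the transition model) with sensory input (the observation model).
    alpha = prior * O_mat[:, obs[0]]
    alphas = [alpha / alpha.sum()]
    for o in obs[1:]:
        alpha = (T_mat.T @ alphas[-1]) * O_mat[:, o]
        alphas.append(alpha / alpha.sum())

    # Backward inference: messages that refine the earlier state estimates.
    betas = [np.ones(2)]
    for o in reversed(obs[1:]):
        beta = T_mat @ (O_mat[:, o] * betas[0])
        betas.insert(0, beta / beta.sum())

    smoothed = [a * b / np.sum(a * b) for a, b in zip(alphas, betas)]
    print(np.round(smoothed, 3))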

  9. Sign Inference for Dynamic Signed Networks via Dictionary Learning

    Directory of Open Access Journals (Sweden)

    Yi Cen

    2013-01-01

    Full Text Available Mobile online social network (mOSN) is a burgeoning research area. However, most existing works on mOSNs deal with static network structures and simply encode whether relationships among entities exist or not. In contrast, relationships in signed mOSNs can be positive or negative and may change with time and location. Applying certain global characteristics of social balance, in this paper we aim to infer the unknown relationships in dynamic signed mOSNs and formulate this sign inference problem as a low-rank matrix estimation problem. Specifically, motivated by the Singular Value Thresholding (SVT) algorithm, a compact dictionary is selected from the observed dataset. Based on this compact dictionary, the relationships in the dynamic signed mOSNs are estimated via solving the formulated problem. Furthermore, the estimation accuracy is improved by employing a dictionary self-updating mechanism.
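
    As background for the low-rank formulation mentioned above, the sketch below runs a generic singular-value-thresholding-style iteration on a partially observed sign matrix. The threshold, data, and iteration scheme are illustrative only and are not the paper's dictionary-learning algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    true = np.sign(rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20)))
    mask = rng.random(true.shape) < 0.4               # which entries are observed
    observed = np.where(mask, true, 0.0)

    X = observed.copy()
    tau = 5.0                                         # singular value threshold
    for _ in range(100):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt    # shrink singular values
        X[mask] = true[mask]                          # keep observed signs fixed
    pred = np.sign(X)
    print("held-out sign accuracy:", round((pred[~mask] == true[~mask]).mean(), 2))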

  10. Scaling behavior in urban development process of Tokyo City and hierarchical dynamical structure

    International Nuclear Information System (INIS)

    Matsuba, Ikuo; Namatame, Masanori

    2003-01-01

    We study a geometric structure of the urban development process, paying particular attention to scaling properties in the settlement area and inhabitant population through changes in the scaling exponents. Both the degree to which the space is fulfilled and the rate at which it is filled are obtained for the residential development in Tokyo. For distances larger than the city boundary, there is a sharp cross-over to a suburban region with a quite intriguing variation with distance from the center of the city. The population densities in this region are found to collapse onto a single scaling function with the scaling exponent 0.678 in the early 1990s, in which the growth of the population attenuates. We propose a cellular automata model using the simulated annealing method that succeeds in reproducing qualitatively similar structural complexity to the actual city by taking into account the transportation system, especially the railroad network. Finally, a possible theoretical consideration is given in analogy with fluid dynamics. Scaling of the population density is obtained assuming that there is a dynamical hierarchical structure in the scaling region where stationarity is fulfilled. The theoretically obtained exponent 2/3 agrees well with the observed one.

  11. Spectrally tuned structural and pigmentary coloration of birdwing butterfly wing scales.

    Science.gov (United States)

    Wilts, Bodo D; Matsushita, Atsuko; Arikawa, Kentaro; Stavenga, Doekele G

    2015-10-06

    The colourful wing patterns of butterflies play an important role for enhancing fitness; for instance, by providing camouflage, for interspecific mate recognition, or for aposematic display. Closely related butterfly species can have dramatically different wing patterns. The phenomenon is assumed to be caused by ecological processes with changing conditions, e.g. in the environment, and also by sexual selection. Here, we investigate the birdwing butterflies, Ornithoptera, the largest butterflies of the world, together forming a small genus in the butterfly family Papilionidae. The wings of these butterflies are marked by strongly coloured patches. The colours are caused by specially structured wing scales, which act as a chirped multilayer reflector, but the scales also contain papiliochrome pigments, which act as a spectral filter. The combined structural and pigmentary effects tune the coloration of the wing scales. The tuned colours are presumably important for mate recognition and signalling. By applying electron microscopy, (micro-)spectrophotometry and scatterometry we found that the various mechanisms of scale coloration of the different birdwing species strongly correlate with the taxonomical distribution of Ornithoptera species. © 2015 The Author(s).

  12. First order augmentation to tensor voting for boundary inference and multiscale analysis in 3D.

    Science.gov (United States)

    Tong, Wai-Shun; Tang, Chi-Keung; Mordohai, Philippos; Medioni, Gérard

    2004-05-01

    Most computer vision applications require the reliable detection of boundaries. In the presence of outliers, missing data, orientation discontinuities, and occlusion, this problem is particularly challenging. We propose to address it by complementing the tensor voting framework, which was limited to second order properties, with first order representation and voting. First order voting fields and a mechanism to vote for 3D surface and volume boundaries and curve endpoints in 3D are defined. Boundary inference is also useful for a second difficult problem in grouping, namely, automatic scale selection. We propose an algorithm that automatically infers the smallest scale that can preserve the finest details. Our algorithm then proceeds with progressively larger scales to ensure continuity where it has not been achieved. Therefore, the proposed approach does not oversmooth features or delay the handling of boundaries and discontinuities until model misfit occurs. The interaction of smooth features, boundaries, and outliers is accommodated by the unified representation, making possible the perceptual organization of data in curves, surfaces, volumes, and their boundaries simultaneously. We present results on a variety of data sets to show the efficacy of the improved formalism.

  13. Some limitations of public sequence data for phylogenetic inference (in plants).

    Science.gov (United States)

    Hinchliff, Cody E; Smith, Stephen Andrew

    2014-01-01

    The GenBank database contains essentially all of the nucleotide sequence data generated for published molecular systematic studies, but for the majority of taxa these data remain sparse. GenBank has value for phylogenetic methods that leverage data-mining and rapidly improving computational methods, but the limits imposed by the sparse structure of the data are not well understood. Here we present a tree representing 13,093 land plant genera--an estimated 80% of extant plant diversity--to illustrate the potential of public sequence data for broad phylogenetic inference in plants, and we explore the limits to inference imposed by the structure of these data using theoretical foundations from phylogenetic data decisiveness. We find that despite very high levels of missing data (over 96%), the present data retain the potential to inform over 86.3% of all possible phylogenetic relationships. Most of these relationships, however, are informed by small amounts of data--approximately half are informed by fewer than four loci, and more than 99% are informed by fewer than fifteen. We also apply an information theoretic measure of branch support to assess the strength of phylogenetic signal in the data, revealing many poorly supported branches concentrated near the tips of the tree, where data are sparse and the limiting effects of this sparseness are stronger. We argue that limits to phylogenetic inference and signal imposed by low data coverage may pose significant challenges for comprehensive phylogenetic inference at the species level. Computational requirements provide additional limits for large reconstructions, but these may be overcome by methodological advances, whereas insufficient data coverage can only be remedied by additional sampling effort. We conclude that public databases have exceptional value for modern systematics and evolutionary biology, and that a continued emphasis on expanding taxonomic and genomic coverage will play a critical role in developing

  14. Lack of sex-biased dispersal promotes fine-scale genetic structure in alpine ungulates

    Science.gov (United States)

    Roffler, Gretchen H.; Talbot, Sandra L.; Luikart, Gordon; Sage, George K.; Pilgrim, Kristy L.; Adams, Layne G.; Schwartz, Michael K.

    2014-01-01

    Identifying patterns of fine-scale genetic structure in natural populations can advance understanding of critical ecological processes such as dispersal and gene flow across heterogeneous landscapes. Alpine ungulates generally exhibit high levels of genetic structure due to female philopatry and patchy configuration of mountain habitats. We assessed the spatial scale of genetic structure and the amount of gene flow in 301 Dall's sheep (Ovis dalli dalli) at the landscape level using 15 nuclear microsatellites and 473 base pairs of the mitochondrial (mtDNA) control region. Dall's sheep exhibited significant genetic structure within contiguous mountain ranges, but mtDNA structure occurred at a broader geographic scale than nuclear DNA within the study area, and than mtDNA structure reported for other North American mountain sheep populations. No evidence of male-mediated gene flow or greater philopatry of females was observed; there was little difference between markers with different modes of inheritance (pairwise nuclear DNA F_ST = 0.004–0.325; mtDNA F_ST = 0.009–0.544), and males were no more likely than females to be recent immigrants. Historical patterns based on mtDNA indicate separate northern and southern lineages and a pattern of expansion following regional glacial retreat. Boundaries of genetic clusters aligned geographically with prominent mountain ranges, icefields, and major river valleys based on Bayesian and hierarchical modeling of microsatellite and mtDNA data. Our results suggest that fine-scale genetic structure in Dall's sheep is influenced by limited dispersal, and structure may be weaker in populations occurring near ancestral levels of density and distribution in continuous habitats compared to other alpine ungulates that have experienced declines and marked habitat fragmentation.

  15. The topology of large-scale structure. III. Analysis of observations

    International Nuclear Information System (INIS)

    Gott, J.R. III; Weinberg, D.H.; Miller, J.; Thuan, T.X.; Schneider, S.E.

    1989-01-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a meatball topology. 66 refs
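
    For context, the "standard model" comparison in such analyses is the analytic genus curve of a Gaussian random field, g(nu) proportional to (1 - nu^2) exp(-nu^2/2); a sponge-like topology is symmetric about nu = 0, while a "meatball" shift moves the zero crossings. The snippet below simply evaluates that reference curve (with an arbitrary amplitude) and is background, not material drawn from the paper itself.

    import numpy as np

    def gaussian_genus(nu, amplitude=1.0):
        """Genus curve of a Gaussian random field at density threshold nu."""
        return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

    nu = np.linspace(-3, 3, 13)
    print(np.round(gaussian_genus(nu), 3))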

  16. The topology of large-scale structure. III - Analysis of observations

    Science.gov (United States)

    Gott, J. Richard, III; Miller, John; Thuan, Trinh X.; Schneider, Stephen E.; Weinberg, David H.; Gammie, Charles; Polk, Kevin; Vogeley, Michael; Jeffrey, Scott; Bhavsar, Suketu P.; Melott, Adrian L.; Giovanelli, Riccardo; Haynes, Martha P.; Tully, R. Brent; Hamilton, Andrew J. S.

    1989-05-01

    A recently developed algorithm for quantitatively measuring the topology of large-scale structures in the universe was applied to a number of important observational data sets. The data sets included an Abell (1958) cluster sample out to Vmax = 22,600 km/sec, the Giovanelli and Haynes (1985) sample out to Vmax = 11,800 km/sec, the CfA sample out to Vmax = 5000 km/sec, the Thuan and Schneider (1988) dwarf sample out to Vmax = 3000 km/sec, and the Tully (1987) sample out to Vmax = 3000 km/sec. It was found that, when the topology is studied on smoothing scales significantly larger than the correlation length (i.e., smoothing length, lambda, not below 1200 km/sec), the topology is spongelike and is consistent with the standard model in which the structure seen today has grown from small fluctuations caused by random noise in the early universe. When the topology is studied on the scale of lambda of about 600 km/sec, a small shift is observed in the genus curve in the direction of a 'meatball' topology.

  17. Gauging Variational Inference

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ahn, Sungsoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of); Shin, Jinwoo [Korea Advanced Inst. Science and Technology (KAIST), Daejeon (Korea, Republic of)

    2017-05-25

    Computing the partition function is the most important statistical inference task arising in applications of Graphical Models (GM). Since it is computationally intractable, approximate methods have been used to resolve the issue in practice, where mean-field (MF) and belief propagation (BP) are arguably the most popular and successful approaches of a variational type. In this paper, we propose two new variational schemes, coined Gauged-MF (G-MF) and Gauged-BP (G-BP), improving MF and BP, respectively. Both provide lower bounds for the partition function by utilizing the so-called gauge transformation, which modifies factors of the GM while keeping the partition function invariant. Moreover, we prove that both G-MF and G-BP are exact for GMs with a single loop of a special structure, even though the bare MF and BP perform badly in this case. Our extensive experiments, on complete GMs of relatively small size and on large GMs (up to 300 variables), confirm that the newly proposed algorithms outperform and generalize MF and BP.
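
    To make the variational baseline concrete, the sketch below computes a naive mean-field lower bound on log Z for a tiny Ising-type model and compares it with the exact value obtained by enumeration. The couplings are invented and the code shows plain MF, not the gauged variants proposed in the paper.

    import itertools
    import numpy as np

    J = np.array([[0.0, 0.5, 0.2],
                  [0.5, 0.0, 0.3],
                  [0.2, 0.3, 0.0]])   # symmetric pairwise couplings
    h = np.array([0.1, -0.2, 0.05])   # local fields

    def energy(s):
        return 0.5 * s @ J @ s + h @ s

    # Exact log partition function by brute force over spins in {-1,+1}^3.
    states = list(itertools.product([-1, 1], repeat=3))
    logZ = np.log(sum(np.exp(energy(np.array(s))) for s in states))

    # Naive mean-field: q(s) factorizes, parameterized by means m_i in (-1,1).
    m = np.zeros(3)
    for _ in range(200):                          # coordinate-ascent fixed point
        m = np.tanh(J @ m + h)
    entropy = sum(-(1 + mi) / 2 * np.log((1 + mi) / 2)
                  - (1 - mi) / 2 * np.log((1 - mi) / 2) for mi in m)
    mf_bound = 0.5 * m @ J @ m + h @ m + entropy  # E_q[energy] + H(q) <= log Z
    print(f"exact log Z = {logZ:.4f}, mean-field bound = {mf_bound:.4f}")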

  18. A formal model of interpersonal inference

    Directory of Open Access Journals (Sweden)

    Michael eMoutoussis

    2014-03-01

    Full Text Available Introduction: We propose that active Bayesian inference – a general framework for decision-making – can equally be applied to interpersonal exchanges. Social cognition, however, entails special challenges. We address these challenges through a novel formulation of a formal model and demonstrate its psychological significance. Method: We review relevant literature, especially with regard to interpersonal representations, formulate a mathematical model and present a simulation study. The model accommodates normative models from utility theory and places them within the broader setting of Bayesian inference. Crucially, we endow people's prior beliefs, into which utilities are absorbed, with preferences of self and others. The simulation illustrates the model's dynamics and furnishes elementary predictions of the theory. Results: 1. Because beliefs about self and others inform both the desirability and plausibility of outcomes, in this framework interpersonal representations become beliefs that have to be actively inferred. This inference, akin to 'mentalising' in the psychological literature, is based upon the outcomes of interpersonal exchanges. 2. We show how some well-known social-psychological phenomena (e.g. self-serving biases) can be explained in terms of active interpersonal inference. 3. Mentalising naturally entails Bayesian updating of how people value social outcomes. Crucially this includes inference about one's own qualities and preferences. Conclusion: We inaugurate a Bayes optimal framework for modelling intersubject variability in mentalising during interpersonal exchanges. Here, interpersonal representations are endowed with explicit functional and affective properties. We suggest the active inference framework lends itself to the study of psychiatric conditions where mentalising is distorted.

  19. A novel mutual information-based Boolean network inference method from time-series gene expression data.

    Directory of Open Access Journals (Sweden)

    Shohag Barman

    Full Text Available Inferring a gene regulatory network from time-series gene expression data in systems biology is a challenging problem. Many methods have been suggested, most of which have a scalability limitation due to the combinatorial cost of searching a regulatory set of genes. In addition, they have focused on the accurate inference of a network structure only. Therefore, there is a pressing need to develop a network inference method to search regulatory genes efficiently and to predict the network dynamics accurately. In this study, we employed a Boolean network model with a restricted update rule scheme to capture coarse-grained dynamics, and propose a novel mutual information-based Boolean network inference (MIBNI) method. Given time-series gene expression data as an input, the method first identifies a set of initial regulatory genes using mutual information-based feature selection, and then improves the dynamics prediction accuracy by iteratively swapping a pair of genes between sets of the selected regulatory genes and the other genes. Through extensive simulations with artificial datasets, MIBNI showed consistently better performance than six well-known existing methods, REVEAL, Best-Fit, RelNet, CST, CLR, and BIBN, in terms of both structural and dynamics prediction accuracy. We further tested the proposed method with two real gene expression datasets for an Escherichia coli gene regulatory network and a fission yeast cell cycle network, and also observed better results using MIBNI compared to the six other methods. Taken together, MIBNI is a promising tool for predicting both the structure and the dynamics of a gene regulatory network.
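
    The mutual-information scoring step that such a method relies on can be sketched as follows: candidate regulators of a target gene are scored by the mutual information between their state at time t and the target's state at time t+1 in binarized time series. The data below are simulated and this is not the MIBNI implementation.

    import numpy as np

    def mutual_info(x, y):
        """Mutual information (in bits) between two binary vectors."""
        mi = 0.0
        for a in (0, 1):
            for b in (0, 1):
                pxy = np.mean((x == a) & (y == b))
                px, py = np.mean(x == a), np.mean(y == b)
                if pxy > 0:
                    mi += pxy * np.log2(pxy / (px * py))
        return mi

    rng = np.random.default_rng(1)
    expr = rng.integers(0, 2, size=(50, 5))      # 50 time points, 5 binarized genes
    expr[1:, 3] = expr[:-1, 0]                   # gene 3 follows gene 0 with a delay

    target = 3
    scores = [mutual_info(expr[:-1, g], expr[1:, target]) for g in range(5)]
    # gene 0 should score highest as the true regulator of gene 3
    print("MI with next state of gene 3:", np.round(scores, 3))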

  20. Finite element modeling of multilayered structures of fish scales.

    Science.gov (United States)

    Chandler, Mei Qiang; Allison, Paul G; Rodriguez, Rogie I; Moser, Robert D; Kennedy, Alan J

    2014-12-01

    The interlinked fish scales of Atractosteus spatula (alligator gar) and Polypterus senegalus (gray and albino bichir) are effective multilayered armor systems for protecting fish from threats such as aggressive conspecific interactions or predation. Both types of fish scales have multi-layered structures with a harder and stiffer outer layer, and softer and more compliant inner layers. However, there are differences in relative layer thickness, property mismatch between layers, the property gradations and nanostructures in each layer. The fracture paths and patterns of both scales under microindentation loads were different. In this work, finite element models of fish scales of A. spatula and P. senegalus were built to investigate the mechanics of their multi-layered structures under penetration loads. The models simulate a rigid microindenter penetrating the fish scales quasi-statically to understand the observed experimental results. Study results indicate that the different fracture patterns and crack paths observed in the experiments were related to the different stress fields caused by the differences in layer thickness, the spatial distribution of the elastic and plastic properties in the layers, and the differences in interface properties. The parametric studies and experimental results suggest that smaller fish such as P. senegalus may have adopted a thinner outer layer for light-weighting and improved mobility, while adopting higher strength and modulus at the outer layer and stronger interface properties to prevent ring cracking and interface cracking. Larger fish such as A. spatula and Arapaima gigas have lower strength and modulus at the outer layers and weaker interface properties, but have adopted thicker outer layers to provide adequate protection against ring cracking and interface cracking, possibly because weight is less of a concern relative to smaller fish such as P. senegalus. Published by Elsevier Ltd.

  1. Scaling of structural failure

    Energy Technology Data Exchange (ETDEWEB)

    Bazant, Z.P. [Northwestern Univ., Evanston, IL (United States); Chen, Er-Ping [Sandia National Lab., Albuquerque, NM (United States)

    1997-01-01

    This article attempts to review the progress achieved in the understanding of scaling and size effect in the failure of structures. Particular emphasis is placed on quasibrittle materials for which the size effect is complicated. Attention is focused on three main types of size effects, namely the statistical size effect due to randomness of strength, the energy release size effect, and the possible size effect due to fractality of fracture or microcracks. Definitive conclusions on the applicability of these theories are drawn. Subsequently, the article discusses the application of the known size effect law for the measurement of material fracture properties, and the modeling of the size effect by the cohesive crack model, nonlocal finite element models and discrete element models. Extensions to compression failure and to the rate-dependent material behavior are also outlined. The damage constitutive law needed for describing a microcracked material in the fracture process zone is discussed. Various applications to quasibrittle materials, including concrete, sea ice, fiber composites, rocks and ceramics are presented.
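
    As background for the energetic size effect discussed above, the snippet below evaluates the classical size effect law sigma_N = B*ft/sqrt(1 + D/D0), which interpolates between strength theory at small sizes and LEFM scaling (slope -1/2 in a log-log plot) at large sizes. The parameter values are invented; this illustrates the law, not the article's analysis.

    import numpy as np

    def nominal_strength(D, B=1.5, ft=3.0, D0=100.0):
        """Nominal strength [MPa] of a structure of characteristic size D [mm]."""
        return B * ft / np.sqrt(1.0 + D / D0)

    sizes = np.array([10.0, 100.0, 1000.0, 10000.0])
    for D, s in zip(sizes, nominal_strength(sizes)):
        print(f"D = {D:7.0f} mm  ->  sigma_N = {s:.2f} MPa")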

  2. Distributional Inference

    NARCIS (Netherlands)

    Kroese, A.H.; van der Meulen, E.A.; Poortema, Klaas; Schaafsma, W.

    1995-01-01

    The making of statistical inferences in distributional form is conceptually complicated because the epistemic 'probabilities' assigned are mixtures of fact and fiction. In this respect they are essentially different from 'physical' or 'frequency-theoretic' probabilities. The distributional form is

  3. The factor structure of the self-directed learning readiness scale | de ...

    African Journals Online (AJOL)

    The factor structure of the Self-Directed Learning Readiness Scale (SDLRS) was investigated for Afrikaans and English-speaking first-year university students. Five factors were extracted and rotated to oblique simple structure for both groups. Four of the five factors were satisfactorily replicated. The fifth factor appeared to ...

  4. Application of multi-scale (cross-) sample entropy for structural health monitoring

    Science.gov (United States)

    Lin, Tzu-Kang; Liang, Jui-Chang

    2015-08-01

    This study proposes an information-theoretic structural health monitoring (SHM) system based on multi-scale entropy (MSE) and multi-scale cross-sample entropy (MSCE). By measuring the ambient vibration signal from a structure, the damage condition can be rapidly evaluated via MSE analysis. The damage location can then be detected by analyzing the signals of different floors under the same damage condition via MSCE analysis. Moreover, a damage index is proposed to efficiently quantify the SHM process. Unlike some existing SHM methods, no experimental database or numerical model is required. Instead, a reference measurement of the current stage can initiate and launch the SHM system. A numerical simulation of a four-story steel structure is used to verify that the damage location and condition can be detected by the proposed SHM algorithm, and that the location can be efficiently quantified by the damage index. A seven-story scaled-down benchmark structure is then employed for experimental verification. Based on the results, the damage condition can be correctly assessed, and average accuracy rates of 63.4% and 86.6% for the damage location can be achieved using the MSCE and damage index methods, respectively. As only the ambient vibration signal is required, together with a set of initial reference measurements, the proposed SHM system can be implemented practically at low cost.
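
    The two ingredients named above can be sketched with generic code: coarse-graining a signal at several scales and computing sample entropy at each scale. The signal and parameters below are invented, and this is not the authors' SHM pipeline.

    import numpy as np

    def coarse_grain(x, scale):
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def sample_entropy(x, m=2, r_frac=0.2):
        """Approximate sample entropy: -log of the ratio of (m+1)- to m-length
        template matches within tolerance r."""
        r = r_frac * np.std(x)
        def count(mm):
            templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            return (np.sum(d <= r) - len(templates)) / 2.0   # exclude self-matches
        B, A = count(m), count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.inf

    rng = np.random.default_rng(0)
    signal = np.sin(0.1 * np.arange(1000)) + 0.5 * rng.standard_normal(1000)
    for scale in (1, 2, 4, 8):
        print(scale, round(sample_entropy(coarse_grain(signal, scale)), 3))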

  5. Time clustered sampling can inflate the inferred substitution rate in foot-and-mouth disease virus analyses

    DEFF Research Database (Denmark)

    Pedersen, Casper-Emil Tingskov; Frandsen, Peter; Wekesa, Sabenzia N.

    2015-01-01

    With the abundance of sequence data sampled under widely different schemes, an effort to keep results consistent and comparable is needed. This study emphasizes commonly disregarded problems in the inference of evolutionary rates in viral sequence data when sampling is unevenly distributed on a temporal scale, through a study of the foot-and-mouth disease (FMD) virus serotypes SAT 1 and SAT 2. Our study shows that clustered temporal sampling in phylogenetic analyses of FMD viruses will strongly bias the inferences of substitution rates and tMRCA, because the inferred rates in such data sets reflect a rate closer to the mutation rate rather than the substitution rate. Estimating evolutionary parameters from viral sequences should be performed with due consideration of the differences in short-term and longer-term evolutionary processes occurring within sets of temporally sampled viruses, and studies should carefully ...

  6. Continuous Integrated Invariant Inference, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...

  7. Halo Models of Large Scale Structure and Reliability of Cosmological N-Body Simulations

    Directory of Open Access Journals (Sweden)

    José Gaite

    2013-05-01

    Full Text Available Halo models of the large scale structure of the Universe are critically examined, focusing on the definition of halos as smooth distributions of cold dark matter. This definition is essentially based on the results of cosmological N-body simulations. By a careful analysis of the standard assumptions of halo models and N-body simulations and by taking into account previous studies of self-similarity of the cosmic web structure, we conclude that N-body cosmological simulations are not fully reliable in the range of scales where halos appear. Therefore, to have a consistent definition of halos it is necessary either to define them as entities of arbitrary size with a grainy rather than smooth structure or to define their size in terms of small-scale baryonic physics.

  8. Road Traffic Anomaly Detection via Collaborative Path Inference from GPS Snippets

    Directory of Open Access Journals (Sweden)

    Hongtao Wang

    2017-03-01

    Full Text Available Road traffic anomaly denotes a road segment that is anomalous in terms of traffic flow of vehicles. Detecting road traffic anomalies from GPS (Global Positioning System) snippet data is becoming critical in urban computing since they often suggest underlying events. However, the noisy and sparse nature of GPS snippet data introduces multiple problems that make the detection of road traffic anomalies very challenging. To address these issues, we propose a two-stage solution which consists of two components: a Collaborative Path Inference (CPI) model and a Road Anomaly Test (RAT) model. The CPI model performs path inference incorporating both static and dynamic features into a Conditional Random Field (CRF). Dynamic context features are learned collaboratively from large GPS snippets via a tensor decomposition technique. The RAT model then calculates the anomalous degree for each road segment from the inferred fine-grained trajectories in given time intervals. We evaluated our method using a large scale real-world dataset, which includes one month of GPS location data from more than eight thousand taxi cabs in Beijing. The evaluation results show the advantages of our method beyond other baseline techniques.

  9. Personality Assessment Inventory scale characteristics and factor structure in the assessment of alcohol dependency.

    Science.gov (United States)

    Schinka, J A

    1995-02-01

    Individual scale characteristics and the inventory structure of the Personality Assessment Inventory (PAI; Morey, 1991) were examined by conducting internal consistency and factor analyses of item and scale score data from a large group (N = 301) of alcohol-dependent patients. Alpha coefficients, mean inter-item correlations, and corrected item-total scale correlations for the sample paralleled values reported by Morey for a large clinical sample. Minor differences in the scale factor structure of the inventory from Morey's clinical sample were found. Overall, the findings support the use of the PAI in the assessment of personality and psychopathology of alcohol-dependent patients.
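
    The scale statistics reported above (alpha coefficients and corrected item-total correlations) can be illustrated with generic psychometrics code on simulated item responses; this is illustrative only and is not the study's analysis.

    import numpy as np

    rng = np.random.default_rng(2)
    latent = rng.standard_normal(301)                        # simulated trait scores
    items = latent[:, None] + rng.standard_normal((301, 8))  # 8 noisy items

    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_var / total_var)         # Cronbach's alpha

    corrected = []
    for j in range(k):                                       # item vs. sum of the rest
        rest = items[:, [i for i in range(k) if i != j]].sum(axis=1)
        corrected.append(np.corrcoef(items[:, j], rest)[0, 1])

    print(f"alpha = {alpha:.2f}")
    print("corrected item-total correlations:", np.round(corrected, 2))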

  10. Quantifying uncertainty in soot volume fraction estimates using Bayesian inference of auto-correlated laser-induced incandescence measurements

    Science.gov (United States)

    Hadwin, Paul J.; Sipkens, T. A.; Thomson, K. A.; Liu, F.; Daun, K. J.

    2016-01-01

    Auto-correlated laser-induced incandescence (AC-LII) infers the soot volume fraction (SVF) of soot particles by comparing the spectral incandescence from laser-energized particles to the pyrometrically inferred peak soot temperature. This calculation requires detailed knowledge of model parameters such as the absorption function of soot, which may vary with combustion chemistry, soot age, and the internal structure of the soot. This work presents a Bayesian methodology to quantify such uncertainties. This technique treats the additional "nuisance" model parameters, including the soot absorption function, as stochastic variables and incorporates the current state of knowledge of these parameters into the inference process through maximum entropy priors. While standard AC-LII analysis provides a point estimate of the SVF, Bayesian techniques infer the posterior probability density, which will allow scientists and engineers to better assess the reliability of AC-LII inferred SVFs in the context of environmental regulations and competing diagnostics.

  11. Inferences about nested subsets structure when not all species are detected

    Science.gov (United States)

    Cam, E.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    Comparisons of species composition among ecological communities of different size have often provided evidence that the species in communities with lower species richness form nested subsets of the species in larger communities. In the vast majority of studies, the question of nested subsets has been addressed using information on presence-absence, where a '0' is interpreted as the absence of a given species from a given location. Most of the methodological discussion in earlier studies investigating nestedness concerns the approach to generation of model-based matrices. However, it is most likely that in many situations investigators cannot detect all the species present in the location sampled. The possibility that zeros in incidence matrices reflect nondetection rather than absence of species has not been considered in studies addressing nested subsets, even though the position of zeros in these matrices forms the basis of earlier inference methods. These sampling artifacts are likely to lead to erroneous conclusions about both variation over space in species richness and the degree of similarity of the various locations. Here we propose an approach to investigation of nestedness, based on statistical inference methods explicitly incorporating species detection probability, that take into account the probabilistic nature of the sampling process. We use presence-absence data collected under Pollock's robust capture-recapture design, and resort to an estimator of species richness originally developed for closed populations to assess the proportion of species shared by different locations. We develop testable predictions corresponding to the null hypothesis of a nonnested pattern, and an alternative hypothesis of perfect nestedness. We also present an index for assessing the degree of nestedness of a system of ecological communities. We illustrate our approach using avian data from the North American Breeding Bird Survey collected in the Florida Keys.

  12. Estimating uncertainty of inference for validation

    Energy Technology Data Exchange (ETDEWEB)

    Booker, Jane M [Los Alamos National Laboratory; Langenbrunner, James R [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2010-09-30

    We present a validation process based upon the concept that validation is an inference-making activity. This has always been true, but the association has not been as important before as it is now. Previously, theory had been confirmed by more data, and predictions were possible based on data. The process today is to infer from theory to code and from code to prediction, making the role of prediction somewhat automatic, and a machine function. Validation is defined as determining the degree to which a model and code is an accurate representation of experimental test data. Imbedded in validation is the intention to use the computer code to predict. To predict is to accept the conclusion that an observable final state will manifest; therefore, prediction is an inference whose goodness relies on the validity of the code. Quantifying the uncertainty of a prediction amounts to quantifying the uncertainty of validation, and this involves the characterization of uncertainties inherent in theory/models/codes and the corresponding data. An introduction to inference making and its associated uncertainty is provided as a foundation for the validation problem. A mathematical construction for estimating the uncertainty in the validation inference is then presented, including a possibility distribution constructed to represent the inference uncertainty for validation under uncertainty. The estimation of inference uncertainty for validation is illustrated using data and calculations from Inertial Confinement Fusion (ICF). The ICF measurements of neutron yield and ion temperature were obtained for direct-drive inertial fusion capsules at the Omega laser facility. The glass capsules, containing the fusion gas, were systematically selected with the intent of establishing a reproducible baseline of high-yield 10^13-10^14 neutron output. The deuterium-tritium ratio in these experiments was varied to study its influence upon yield. This paper on validation inference is the

  13. Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation

    International Nuclear Information System (INIS)

    Hoshi, T; Fujiwara, T

    2009-01-01

    An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.

  14. A canonical correlation analysis-based dynamic bayesian network prior to infer gene regulatory networks from multiple types of biological data.

    Science.gov (United States)

    Baur, Brittany; Bozdag, Serdar

    2015-04-01

    One of the challenging and important computational problems in systems biology is to infer gene regulatory networks (GRNs) of biological systems. Several methods that exploit gene expression data have been developed to tackle this problem. In this study, we propose the use of copy number and DNA methylation data to infer GRNs. We developed an algorithm that scores regulatory interactions between genes based on canonical correlation analysis. In this algorithm, copy number or DNA methylation variables are treated as potential regulator variables, and expression variables are treated as potential target variables. We first validated that the canonical correlation analysis method is able to infer true interactions with high accuracy. We showed that the use of DNA methylation or copy number datasets leads to improved inference over steady-state expression. Our results also showed that epigenetic and structural information could be used to infer the directionality of regulatory interactions. Additional improvements in GRN inference can be gleaned from incorporating the result in an informative prior in a dynamic Bayesian algorithm. This is the first study that incorporates copy number and DNA methylation into an informative prior in a dynamic Bayesian framework. By closely examining top-scoring interactions with different sources of epigenetic or structural information, we also identified potential novel regulatory interactions.
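
    The scoring idea, treating copy number or methylation variables as a regulator block and expression as a target block, can be sketched with a plain first-canonical-correlation score. The simulated data and numpy implementation below are illustrative and omit the paper's Bayesian-prior construction.

    import numpy as np

    def first_canonical_corr(X, Y, eps=1e-8):
        """Largest canonical correlation between column blocks X and Y."""
        X = X - X.mean(axis=0)
        Y = Y - Y.mean(axis=0)
        Cxx = X.T @ X / len(X) + eps * np.eye(X.shape[1])
        Cyy = Y.T @ Y / len(Y) + eps * np.eye(Y.shape[1])
        Cxy = X.T @ Y / len(X)
        # whiten both blocks; the singular values of the whitened
        # cross-covariance are the canonical correlations
        Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
        Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
        s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
        return s[0]

    rng = np.random.default_rng(3)
    copy_number = rng.standard_normal((100, 2))            # potential regulator block
    expr_true = copy_number @ np.array([[1.0], [0.5]]) + 0.3 * rng.standard_normal((100, 1))
    expr_null = rng.standard_normal((100, 1))              # unrelated target gene

    print("score (true edge):", round(first_canonical_corr(copy_number, expr_true), 2))
    print("score (no edge):  ", round(first_canonical_corr(copy_number, expr_null), 2))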

  15. A large-scale soil-structure interaction experiment: Part I design and construction

    International Nuclear Information System (INIS)

    Tang, H.T.; Tang, Y.K.; Wall, I.B.; Lin, E.

    1987-01-01

    In the simulated earthquake experiments (SIMQUAKE) sponsored by EPRI, the detonation of vertical arrays of explosives propagated wave motions through the ground to the model structures. Although such a simulation can provide information about dynamic soil-structure interaction (SSI) characteristics in a strong motion environment, it lacks the seismic wave scattering characteristics needed for studying seismic input to the soil-structure system and the effect of different kinds of wave composition on the soil-structure response. To supplement the inadequacy of the simulated earthquake SSI experiment, the Electric Power Research Institute (EPRI) and the Taiwan Power Company (Taipower) jointly sponsored a large scale SSI experiment in the field. The objectives of the experiment are: (1) to obtain an actual strong-motion-earthquake-induced database in a soft-soil environment which will substantiate predictive and design SSI models; and (2) to assess the dynamic response and margins of nuclear power plant reactor containment internal components under actual earthquake-induced excitation. These objectives are accomplished by recording and analyzing data from two instrumented, scaled-down (1/4- and 1/12-scale) reinforced concrete containments sited in a high seismic region in Taiwan where a strong-motion seismic array network is located.

  16. Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.

    Science.gov (United States)

    Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna

    2016-09-01

    Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference features much different parameters: people often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals (estimated using two reinforcement learning models) tracked activity in the ventral striatum and ventromedial pFC, structures associated with reinforcement learning, and in regions associated with updating social impressions, including the TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
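
    A toy delta-rule (Rescorla-Wagner style) learner of the kind fitted in such studies is sketched below: a perceiver updates how much to trust visual versus verbal cues for each target from correct/incorrect feedback. The learning rate, targets, and cue reliabilities are invented; this is not the authors' fitted model.

    import numpy as np

    rng = np.random.default_rng(4)
    reliab = {"visual-target": (0.9, 0.5), "verbal-target": (0.5, 0.9)}
    alpha = 0.1                                    # learning rate

    for target, (p_visual, p_verbal) in reliab.items():
        w = np.array([0.5, 0.5])                   # trust in [visual, verbal] cues
        for _ in range(200):
            # feedback: did each cue match the target's true emotion this trial?
            correct = np.array([rng.random() < p_visual, rng.random() < p_verbal])
            w += alpha * (correct.astype(float) - w)   # prediction-error update
        print(target, "learned cue weights:", np.round(w, 2))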

  17. Structure function scaling in a Reλ = 250 turbulent mixing layer

    KAUST Repository

    Attili, Antonio

    2011-12-22

    A highly resolved Direct Numerical Simulation of a spatially developing turbulent mixing layer is presented. In the fully developed region, the flow achieves a turbulent Reynolds number Reλ = 250, high enough for a clear separation between the large and dissipative scales, and hence for the presence of an inertial range. Structure functions have been calculated in the self-similar region using velocity time series and Taylor's frozen turbulence hypothesis. The Extended Self-Similarity (ESS) concept has been employed to evaluate relative scaling exponents. A wide range of scales with scaling exponents and intermittency levels equal to those of homogeneous isotropic turbulence has been identified. Moreover, an additional scaling range exists for larger scales; it is characterized by smaller exponents, similar to the values reported in the literature for flows with strong shear.
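
    The diagnostics named above can be sketched on a synthetic series: structure functions S_p computed from time increments (Taylor's hypothesis maps time lags to separations) and ESS relative exponents obtained by fitting log S_p against log S_3. The signal below is synthetic noise, not the simulation data, so the exponents are illustrative only.

    import numpy as np

    rng = np.random.default_rng(5)
    u = np.cumsum(rng.standard_normal(100_000)) * 1e-2      # stand-in velocity series

    lags = np.unique(np.logspace(0, 3, 20).astype(int))
    orders = [2, 3, 4, 6]
    S = {p: np.array([np.mean(np.abs(u[l:] - u[:-l]) ** p) for l in lags])
         for p in orders}

    # Extended Self-Similarity: slope of log S_p vs log S_3 gives zeta_p / zeta_3
    for p in orders:
        rel = np.polyfit(np.log(S[3]), np.log(S[p]), 1)[0]
        print(f"p = {p}: relative exponent zeta_p/zeta_3 ~ {rel:.2f}")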

  18. Structure function scaling in a Reλ = 250 turbulent mixing layer

    KAUST Repository

    Attili, Antonio; Bisetti, Fabrizio

    2011-01-01

    A highly resolved Direct Numerical Simulation of a spatially developing turbulent mixing layer is presented. In the fully developed region, the flow achieves a turbulent Reynolds number Reλ = 250, high enough for a clear separation between the large and dissipative scales, and hence for the presence of an inertial range. Structure functions have been calculated in the self-similar region using velocity time series and Taylor's frozen turbulence hypothesis. The Extended Self-Similarity (ESS) concept has been employed to evaluate relative scaling exponents. A wide range of scales with scaling exponents and intermittency levels equal to those of homogeneous isotropic turbulence has been identified. Moreover, an additional scaling range exists for larger scales; it is characterized by smaller exponents, similar to the values reported in the literature for flows with strong shear.

  19. A Genome-Scale Investigation of How Sequence, Function, and Tree-Based Gene Properties Influence Phylogenetic Inference.

    Science.gov (United States)

    Shen, Xing-Xing; Salichos, Leonidas; Rokas, Antonis

    2016-09-02

    Molecular phylogenetic inference is inherently dependent on choices in both methodology and data. Many insightful studies have shown how choices in methodology, such as the model of sequence evolution or optimality criterion used, can strongly influence inference. In contrast, much less is known about the impact of choices in the properties of the data, typically genes, on phylogenetic inference. We investigated the relationships between 52 gene properties (24 sequence-based, 19 function-based, and 9 tree-based) with each other and with three measures of phylogenetic signal in two assembled data sets of 2,832 yeast and 2,002 mammalian genes. We found that most gene properties, such as evolutionary rate (measured through the average percent pairwise identity across taxa) and total tree length, were highly correlated with each other. Similarly, several gene properties, such as gene alignment length, Guanine-Cytosine content, and the proportion of tree distance on internal branches divided by relative composition variability (treeness/RCV), were strongly correlated with phylogenetic signal. Analysis of partial correlations between gene properties and phylogenetic signal, in which gene evolutionary rate and alignment length were simultaneously controlled, showed similar patterns of correlations, albeit weaker in strength. Examination of the relative importance of each gene property on phylogenetic signal identified gene alignment length, along with the number of parsimony-informative sites and variable sites, as the most important predictors. Interestingly, the subsets of gene properties that optimally predicted phylogenetic signal differed considerably across our three phylogenetic measures and two data sets; however, gene alignment length and RCV were consistently included as predictors of all three phylogenetic measures in both yeasts and mammals. These results suggest that a handful of sequence-based gene properties are reliable predictors of phylogenetic signal

  20. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    Science.gov (United States)

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each such quantity, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMMs can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.
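
    A minimal sketch of the latent-to-observed-scale conversion described above, for a Poisson GLMM with a log link: the observed-scale mean is obtained by integrating the inverse link over the distribution of latent values. This is an independent illustration of the idea, not the QGglmm implementation, and the parameter values are made up:

```python
import numpy as np
from scipy import integrate, stats

def observed_scale_mean(mu_latent, var_latent, inv_link=np.exp):
    """Population mean on the observed scale: E[g^{-1}(l)] with l ~ N(mu, var).
    For a log link this has the closed form exp(mu + var/2), used below as a check."""
    sd = np.sqrt(var_latent)
    f = lambda l: inv_link(l) * stats.norm.pdf(l, mu_latent, sd)
    val, _ = integrate.quad(f, mu_latent - 10 * sd, mu_latent + 10 * sd)
    return val

mu, var = 1.0, 0.5                         # latent intercept and total latent variance (illustrative)
print(observed_scale_mean(mu, var))        # numerical integral
print(np.exp(mu + var / 2))                # analytical check for the log link
```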

  1. Scale characters analysis for gully structure in the watersheds of loess landforms based on digital elevation models

    Science.gov (United States)

    Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying

    2018-04-01

    Scale is a basic attribute for expressing and describing spatial entities and phenomena. It is of theoretical significance for the study of gully structure information, the variation of watershed morphology, and development and evolution at different scales. This research selected five different areas of China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change with the data scale; this was shown by selecting the gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of the gully structure with different analysis scales was analyzed by setting a sequence of analysis scales for gully extraction. The gully structure of the watershed changed with the analysis scale, and the change was most evident when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure in different areas was analyzed. The gully fractal dimension showed a significant numerical difference between areas, whereas the variation of the gully bifurcation ratio was small. This indicated that the development degree of the gully varied markedly between regions, although the morphological structure was basically similar.
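
    The two characteristic parameters named above, the gully bifurcation ratio and the fractal dimension, can be sketched as below. The per-order gully counts and the binary gully raster are hypothetical inputs, and the fractal dimension is estimated by simple box counting rather than by the authors' exact procedure:

```python
import numpy as np

def bifurcation_ratio(counts_by_order):
    """Mean bifurcation ratio R_b = N_i / N_{i+1}, averaged over gully orders."""
    c = np.asarray(counts_by_order, dtype=float)
    return np.mean(c[:-1] / c[1:])

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting fractal dimension of a binary gully raster `mask`."""
    counts = []
    for s in sizes:
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1 / np.array(sizes, dtype=float)), np.log(counts), 1)
    return slope

print(bifurcation_ratio([120, 34, 9, 2]))                    # hypothetical counts per order
print(box_counting_dimension(np.random.rand(256, 256) > 0.7))  # placeholder raster
```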

  2. Scale characters analysis for gully structure in the watersheds of loess landforms based on digital elevation models

    Science.gov (United States)

    Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying

    2018-06-01

    Scale is a basic attribute for expressing and describing spatial entities and phenomena. It is of theoretical significance for the study of gully structure information, the variation of watershed morphology, and development and evolution at different scales. This research selected five different areas of China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change with the data scale; this was shown by selecting the gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of the gully structure with different analysis scales was analyzed by setting a sequence of analysis scales for gully extraction. The gully structure of the watershed changed with the analysis scale, and the change was most evident when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure in different areas was analyzed. The gully fractal dimension showed a significant numerical difference between areas, whereas the variation of the gully bifurcation ratio was small. This indicated that the development degree of the gully varied markedly between regions, although the morphological structure was basically similar.

  3. Methodology for the inference of gene function from phenotype data.

    Science.gov (United States)

    Ascensao, Joao A; Dolan, Mary E; Hill, David P; Blake, Judith A

    2014-12-12

    Biomedical ontologies are increasingly instrumental in the advancement of biological research primarily through their use to efficiently consolidate large amounts of data into structured, accessible sets. However, ontology development and usage can be hampered by the segregation of knowledge by domain that occurs due to independent development and use of the ontologies. The ability to infer data associated with one ontology to data associated with another ontology would prove useful in expanding information content and scope. We here focus on relating two ontologies: the Gene Ontology (GO), which encodes canonical gene function, and the Mammalian Phenotype Ontology (MP), which describes non-canonical phenotypes, using statistical methods to suggest GO functional annotations from existing MP phenotype annotations. This work is in contrast to previous studies that have focused on inferring gene function from phenotype primarily through lexical or semantic similarity measures. We have designed and tested a set of algorithms that represents a novel methodology to define rules for predicting gene function by examining the emergent structure and relationships between the gene functions and phenotypes rather than inspecting the terms semantically. The algorithms inspect relationships among multiple phenotype terms to deduce if there are cases where they all arise from a single gene function. We apply this methodology to data about genes in the laboratory mouse that are formally represented in the Mouse Genome Informatics (MGI) resource. From the data, 7444 rule instances were generated from five generalized rules, resulting in 4818 unique GO functional predictions for 1796 genes. We show that our method is capable of inferring high-quality functional annotations from curated phenotype data. As well as creating inferred annotations, our method has the potential to allow for the elucidation of unforeseen, biologically significant associations between gene function and

  4. Inferences on the evidence for radioactive 53Mn in the early Solar System

    International Nuclear Information System (INIS)

    Typhoon Lee

    1986-01-01

    Time-scales for various processes during the formation of the early Solar System have been inferred from data on several now-extinct radionuclides. The author examines recently reported data on an extinct nuclide, 53Mn, and shows that the data are inconsistent with the predictions of a single-stage evolution model. Alternative interpretations of the inconsistency are discussed. (U.K.)

  5. fastBMA: scalable network inference and transitive reduction.

    Science.gov (United States)

    Hung, Ling-Hong; Shi, Kaiyuan; Wu, Migao; Young, William Chad; Raftery, Adrian E; Yeung, Ka Yee

    2017-10-01

    Inferring genetic networks from genome-wide expression data is extremely demanding computationally. We have developed fastBMA, a distributed, parallel, and scalable implementation of Bayesian model averaging (BMA) for this purpose. fastBMA also includes a computationally efficient module for eliminating redundant indirect edges in the network by mapping the transitive reduction to an easily solved shortest-path problem. We evaluated the performance of fastBMA on synthetic data and experimental genome-wide time series yeast and human datasets. When using a single CPU core, fastBMA is up to 100 times faster than the next fastest method, LASSO, with increased accuracy. It is a memory-efficient, parallel, and distributed application that scales to human genome-wide expression data. A 10 000-gene regulation network can be obtained in a matter of hours using a 32-core cloud cluster (2 nodes of 16 cores). fastBMA is a significant improvement over its predecessor ScanBMA. It is more accurate and orders of magnitude faster than other fast network inference methods such as the one based on LASSO. The improved scalability allows it to calculate networks from genome-scale data in a reasonable time frame. The transitive reduction method can improve accuracy in denser networks. fastBMA is available as code (M.I.T. license) from GitHub (https://github.com/lhhunghimself/fastBMA), as part of the updated networkBMA Bioconductor package (https://www.bioconductor.org/packages/release/bioc/html/networkBMA.html) and as ready-to-deploy Docker images (https://hub.docker.com/r/biodepot/fastbma/). © The Authors 2017. Published by Oxford University Press.
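
    The transitive-reduction idea described above (dropping a direct edge when an indirect path explains it at least as well, by mapping edge confidences to additive path costs) can be sketched as follows. This is a simplified illustration using networkx, not fastBMA's implementation, and the tolerance parameter is an assumption:

```python
import math
import networkx as nx

def transitive_reduce(edges, tol=1.0):
    """edges: dict (u, v) -> confidence in (0, 1]. Confidences are mapped to
    costs -log(c); a direct edge is dropped when an indirect path (not using
    that edge) is at least as cheap, i.e. at least as confident."""
    G = nx.DiGraph()
    for (u, v), c in edges.items():
        G.add_edge(u, v, weight=-math.log(c))
    pruned = dict(edges)
    for (u, v), c in edges.items():
        w = G[u][v]["weight"]
        G.remove_edge(u, v)                              # exclude the edge under test
        try:
            alt = nx.shortest_path_length(G, u, v, weight="weight")
        except nx.NetworkXNoPath:
            alt = math.inf
        G.add_edge(u, v, weight=w)
        if alt <= tol * w:                               # indirect route explains the edge
            del pruned[(u, v)]
    return pruned

edges = {("A", "B"): 0.9, ("B", "C"): 0.9, ("A", "C"): 0.7}   # toy network
print(transitive_reduce(edges))                                # A->C is pruned
```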

  6. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link prediction than competing models. Our model can be used to detect whether a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.

  7. Novel material and structural design for large-scale marine protective devices

    International Nuclear Information System (INIS)

    Qiu, Ang; Lin, Wei; Ma, Yong; Zhao, Chengbi; Tang, Youhong

    2015-01-01

    Highlights: • Large-scale protective devices with different structural designs have been optimized. • Large-scale protective devices with novel material designs have been optimized. • Protective devices constructed of sandwich panels have the best anti-collision performance. • Protective devices with a novel material design can reduce weight and construction cost. - Abstract: Large-scale protective devices must endure the impact of severe forces, large structural deformation, increased stress and strain-rate effects, and multiple coupling effects. In evaluating the safety of the conceptual design through simulation, the key parameters considered in this research are the maximum impact force, the energy dissipated by the impactor (e.g., a ship), the energy absorbed by the device, and the impactor stroke. During impact, the main function of the ring beam structure is to resist and buffer the impact force between ship and bridge pile caps, which guarantees that the magnitude of the impact force meets the corresponding requirements. Anti-collision performance can be improved by increasing the strength of the beam section or by replacing the steel with novel fiber-reinforced polymer laminates. The main function of the buoyancy tank is to absorb and transfer the ship's kinetic energy through large plastic deformation, damage, or friction occurring within itself. The energy absorption effect can be improved by structural optimization or by the use of new sandwich panels. Structural and material optimization schemes are proposed on the basis of the conceptual design in this research, and protective devices constructed of sandwich panels prove to have the best anti-collision performance

  8. Quantum-Like Representation of Non-Bayesian Inference

    Science.gov (United States)

    Asano, M.; Basieva, I.; Khrennikov, A.; Ohya, M.; Tanaka, Y.

    2013-01-01

    This research is related to the problem of "irrational decision making or inference" that has been discussed in cognitive psychology. There are experimental studies whose statistical data cannot be described by classical probability theory, and the process of decision making generating these data cannot be reduced to classical Bayesian inference. For this problem, a number of quantum-like cognitive models of decision making have been proposed. Our previous work represented classical Bayesian inference in a natural way within the framework of quantum mechanics. Using this representation, in this paper we discuss non-Bayesian (irrational) inference that is biased by effects like quantum interference. Further, we describe the "psychological factor" disturbing "rationality" as an "environment" correlating with the "main system" of the usual Bayesian inference.
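
    For reference, the deviation from classical Bayesian updating discussed above is commonly expressed in the quantum-like literature through an interference term added to the law of total probability; a standard form for a binary condition A (written here as a generic illustration, not the paper's specific model) is

```latex
P(B) \;=\; P(A)\,P(B\mid A) \;+\; P(\bar A)\,P(B\mid \bar A)
      \;+\; 2\cos\theta\,\sqrt{P(A)\,P(B\mid A)\,P(\bar A)\,P(B\mid \bar A)},
```

    where the phase θ quantifies the interference and cos θ = 0 recovers the classical formula of total probability.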

  9. Bayesian Inference Methods for Sparse Channel Estimation

    DEFF Research Database (Denmark)

    Pedersen, Niels Lovmand

    2013-01-01

    This thesis deals with sparse Bayesian learning (SBL) with application to radio channel estimation. As opposed to the classical approach for sparse signal representation, we focus on the problem of inferring complex signals. Our investigations within SBL constitute the basis for the development of Bayesian inference algorithms for sparse channel estimation. Sparse inference methods aim at finding the sparse representation of a signal given in some overcomplete dictionary of basis vectors. Within this context, one of our main contributions to the field of SBL is a hierarchical representation ... analysis of the complex prior representation, where we show that the ability to induce sparse estimates of a given prior heavily depends on the inference method used and, interestingly, whether real or complex variables are inferred. We also show that the Bayesian estimators derived from the proposed ...

  10. Phosphorylation variation during the cell cycle scales with structural propensities of proteins.

    Directory of Open Access Journals (Sweden)

    Stefka Tyanova

    Full Text Available Phosphorylation at specific residues can activate a protein, lead to its localization to particular compartments, be a trigger for protein degradation and fulfill many other biological functions. Protein phosphorylation is increasingly being studied at a large scale and in a quantitative manner that includes a temporal dimension. By contrast, structural properties of identified phosphorylation sites have so far been investigated in a static, non-quantitative way. Here we combine for the first time dynamic properties of the phosphoproteome with protein structural features. At six time points of the cell division cycle we investigate how the variation of the amount of phosphorylation correlates with the protein structure in the vicinity of the modified site. We find two distinct phosphorylation site groups: intrinsically disordered regions tend to contain sites with dynamically varying levels, whereas regions with predominantly regular secondary structures retain more constant phosphorylation levels. The two groups show preferences for different amino acids in their kinase recognition motifs - proline and other disorder-associated residues are enriched in the former group and charged residues in the latter. Furthermore, these preferences scale with the degree of disorderedness, from regular to irregular and to disordered structures. Our results suggest that the structural organization of the region in which a phosphorylation site resides may serve as an additional control mechanism. They also imply that phosphorylation sites are associated with different time scales that serve different functional needs.

  11. Statistical inference: an integrated Bayesian/likelihood approach

    CERN Document Server

    Aitkin, Murray

    2010-01-01

    Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing.After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It pre

  12. Estimating Population Parameters using the Structured Serial Coalescent with Bayesian MCMC Inference when some Demes are Hidden

    Directory of Open Access Journals (Sweden)

    Allen Rodrigo

    2006-01-01

    Full Text Available Using the structured serial coalescent with Bayesian MCMC and serial samples, we estimate population size when some demes are not sampled or are hidden, i.e., ghost demes. It is found that, even in the presence of a ghost deme, accurate inference was possible if the parameters were estimated under the true model. However, with an incorrect model, estimates were biased and could be positively misleading. We extend these results to the case where there are sequences from the ghost deme at the last time sample. This case can arise in HIV patients, when some tissue samples and viral sequences only become available after death. When some sequences from the ghost deme are available at the last sampling time, estimation bias is reduced and accurate estimation of parameters associated with the ghost deme is possible despite sampling bias. Migration rates for this case are also shown to be estimated well when migration values are low.

  13. Particle-scale structure in frozen colloidal suspensions from small-angle x-ray scattering

    KAUST Repository

    Spannuth, Melissa

    2011-02-01

    During directional solidification of the solvent in a colloidal suspension, the colloidal particles segregate from the growing solid, forming high-particle-density regions with structure on a hierarchy of length scales ranging from that of the particle-scale packing to the large-scale spacing between these regions. Previous work has concentrated mostly on the medium- to large-length scale structure, as it is the most accessible and thought to be more technologically relevant. However, the packing of the colloids at the particle scale is an important component not only in theoretical descriptions of the segregation process, but also to the utility of freeze-cast materials for new applications. Here we present the results of experiments in which we investigated this structure across a wide range of length scales using a combination of small-angle x-ray scattering and direct optical imaging. As expected, during freezing the particles were concentrated into regions between ice dendrites forming a microscopic pattern of high- and low-particle-density regions. X-ray scattering indicates that the particles in the high-density regions were so closely packed as to be touching. However, the arrangement of the particles does not conform to that predicted by standard interparticle pair potentials, suggesting that the particle packing induced by freezing differs from that formed during equilibrium densification processes. © 2011 American Physical Society.

  14. Particle-scale structure in frozen colloidal suspensions from small-angle x-ray scattering

    KAUST Repository

    Spannuth, Melissa; Mochrie, S. G. J.; Peppin, S. S. L.; Wettlaufer, J. S.

    2011-01-01

    During directional solidification of the solvent in a colloidal suspension, the colloidal particles segregate from the growing solid, forming high-particle-density regions with structure on a hierarchy of length scales ranging from that of the particle-scale packing to the large-scale spacing between these regions. Previous work has concentrated mostly on the medium- to large-length scale structure, as it is the most accessible and thought to be more technologically relevant. However, the packing of the colloids at the particle scale is an important component not only in theoretical descriptions of the segregation process, but also to the utility of freeze-cast materials for new applications. Here we present the results of experiments in which we investigated this structure across a wide range of length scales using a combination of small-angle x-ray scattering and direct optical imaging. As expected, during freezing the particles were concentrated into regions between ice dendrites forming a microscopic pattern of high- and low-particle-density regions. X-ray scattering indicates that the particles in the high-density regions were so closely packed as to be touching. However, the arrangement of the particles does not conform to that predicted by standard interparticle pair potentials, suggesting that the particle packing induced by freezing differs from that formed during equilibrium densification processes. © 2011 American Physical Society.

  15. Using the Karolinska Scales of Personality on male juvenile delinquents: relationships between scales and factor structure.

    Science.gov (United States)

    Dåderman, Anna M; Hellström, Ake; Wennberg, Peter; Törestad, Bertil

    2005-01-01

    The aim of the present study was to investigate relationships between scales from the Karolinska Scales of Personality (KSP) and the factor structure of the KSP in a sample of male juvenile delinquents. The KSP was administered to a group of male juvenile delinquents (n=55, mean age 17 years; standard deviation=1.2) from four Swedish national correctional institutions for serious offenders. As expected, the KSP showed appropriate correlations between the scales. Factor analysis (maximum likelihood) arrived at a four-factor solution in this sample, which is in line with previous research performed in a non-clinical sample of Swedish males. More research is needed in a somewhat larger sample of juvenile delinquents in order to confirm the present results regarding the factor solution.
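
    A maximum-likelihood factor analysis of the kind reported above can be sketched with scikit-learn; the data matrix here is a random stand-in for KSP scale scores (55 respondents by 15 scales), so the output only illustrates the workflow of requesting a four-factor solution on standardized scores:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Stand-in data: 55 respondents x 15 scales (values are random placeholders)
rng = np.random.default_rng(3)
X = rng.normal(size=(55, 15))

fa = FactorAnalysis(n_components=4, random_state=0)          # four-factor solution
loadings = fa.fit(StandardScaler().fit_transform(X)).components_.T
print(loadings.shape)   # (15 scales, 4 factors): inspect which scales load on which factor
```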

  16. Influence of structured sidewalls on the wetting states and superhydrophobic stability of surfaces with dual-scale roughness

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Huaping, E-mail: wuhuaping@gmail.com [Key Laboratory of E&M (Zhejiang University of Technology), Ministry of Education & Zhejiang Province, Hangzhou 310014 (China); State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024 (China); Zhu, Kai; Wu, Bingbing [Key Laboratory of E&M (Zhejiang University of Technology), Ministry of Education & Zhejiang Province, Hangzhou 310014 (China); Lou, Jia [Piezoelectric Device Laboratory, Department of Mechanics and Engineering Science, Ningbo University, Ningbo, Zhejiang 315211 (China); Zhang, Zheng [Key Laboratory of E&M (Zhejiang University of Technology), Ministry of Education & Zhejiang Province, Hangzhou 310014 (China); Chai, Guozhong, E-mail: chaigz@zjut.edu.cn [Key Laboratory of E&M (Zhejiang University of Technology), Ministry of Education & Zhejiang Province, Hangzhou 310014 (China)

    2016-09-30

    Highlights: • An apparent contact angle equation for all wetting states on dual-scale rough surfaces is derived. • Structured sidewalls improve superhydrophobicity more than smooth sidewalls. • Structured sidewalls enlarge the ACA more than smooth sidewalls. • Structured sidewalls present an advantage over smooth sidewalls in terms of enhancing superhydrophobic stability. - Abstract: The superhydrophobicity of biological surfaces with dual-scale roughness has recently received considerable attention because of the unique wettability of such surfaces. On this basis, artificial micro/nano hierarchical structures with structured sidewalls and with smooth sidewalls were designed, and the influence of the sidewall configuration (i.e., structured or smooth) on the wetting state of micro/nano hierarchical structures was systematically investigated based on thermodynamics and the principle of minimum free energy. Wetting transition and superhydrophobic stability were then analyzed for a droplet on dual-scale rough surfaces with structured and smooth sidewalls. Theoretical analysis shows that dual-scale rough surfaces with structured sidewalls have a larger "stable superhydrophobic region" than those with smooth sidewalls. Dual-scale rough surfaces with smooth sidewalls can enlarge the apparent contact angle (ACA) without improving superhydrophobic stability. By contrast, dual-scale rough surfaces with structured sidewalls present an advantage over those with smooth sidewalls in terms of enlarging the ACA and enhancing superhydrophobic stability. The proposed thermodynamic model is valid when compared with previous experimental data and numerical analysis results, which is helpful for designing and understanding the wetting states and superhydrophobic stability of surfaces with dual-scale roughness.
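
    For context, the classical single-scale building blocks of the apparent contact angle (the Wenzel and Cassie-Baxter relations) can be evaluated as below. This is not the dual-scale equation derived in the paper, and the roughness factor and solid fraction values are illustrative:

```python
import numpy as np

def wenzel(theta_y, r):
    """Wenzel apparent contact angle: cos(theta*) = r * cos(theta_Y),
    with r >= 1 the roughness factor of the fully wetted surface."""
    return np.degrees(np.arccos(np.clip(r * np.cos(np.radians(theta_y)), -1, 1)))

def cassie_baxter(theta_y, f_solid, r_f=1.0):
    """Cassie-Baxter apparent contact angle with wetted solid fraction f_solid and
    roughness r_f of the wetted area: cos(theta*) = r_f*f*cos(theta_Y) - (1 - f)."""
    c = r_f * f_solid * np.cos(np.radians(theta_y)) - (1 - f_solid)
    return np.degrees(np.arccos(np.clip(c, -1, 1)))

# Illustrative values: intrinsic angle 110 deg, roughness 1.8, solid fraction 0.2
print(wenzel(110, r=1.8), cassie_baxter(110, f_solid=0.2))
```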

  17. Large-scale processes in the upper layers of the Indian Ocean inferred from temperature climatology

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A.S.; PrasannaKumar, S.; Navelkar, G.S.

    Journal of Marine Research, 55... in the eastern region. Qualitative evidence obtained from the distribution of the depth of the 20°C isotherm and from computed Ekman pumping velocities is consistent with the above inferences. From the time-longitude plot of the depth of the 20°C isotherm, the phase...
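
    The Ekman pumping velocity referred to above is the curl of the wind stress divided by density and the Coriolis parameter; a sketch of the computation on a regular grid is given below, with a toy wind-stress field and assumed grid spacings (not the study's data):

```python
import numpy as np

def ekman_pumping(tau_x, tau_y, lat, dx, dy, rho=1025.0):
    """Ekman pumping velocity w_E = curl(tau / (rho * f)) on a regular grid.
    tau_x, tau_y: wind-stress components [N m^-2]; lat in degrees (away from the
    equator so f != 0); dx, dy: grid spacing in metres. Returns w_E in m s^-1."""
    omega = 7.2921e-5
    f = 2 * omega * np.sin(np.deg2rad(lat))            # Coriolis parameter
    Mx, My = tau_x / (rho * f), tau_y / (rho * f)
    dMy_dx = np.gradient(My, dx, axis=1)
    dMx_dy = np.gradient(Mx, dy, axis=0)
    return dMy_dx - dMx_dy

# Toy field: zonal stress varying with latitude over a small tropical box
lat = np.linspace(5, 25, 40)[:, None] * np.ones((1, 60))
tau_x = 0.1 * np.cos(np.deg2rad(lat) * 6)
tau_y = np.zeros_like(tau_x)
print(ekman_pumping(tau_x, tau_y, lat, dx=50e3, dy=50e3).mean())
```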

  18. Scaling for deuteron structure functions in a relativistic light-front model

    International Nuclear Information System (INIS)

    Polyzou, W.N.; Gloeckle, W.

    1996-01-01

    Scaling limits of the structure functions W1 and W2 [B.D. Keister, Phys. Rev. C 37, 1765 (1988)] are studied in a relativistic model of the two-nucleon system. The relativistic model is defined by a unitary representation, U(Λ,a), of the Poincaré group which acts on the Hilbert space of two spinless nucleons. The representation is in Dirac's [P.A.M. Dirac, Rev. Mod. Phys. 21, 392 (1949)] light-front formulation of relativistic quantum mechanics and is designed to give the experimental deuteron mass and n-p scattering length. A model hadronic current operator that is conserved and covariant with respect to this representation is used to define the structure tensor. This work is the first step in a relativistic extension of the results of Hueber, Gloeckle, and Boemelburg. The nonrelativistic limit of the model is shown to be consistent with the nonrelativistic model of Hueber, Gloeckle, and Boemelburg [D. Hueber et al., Phys. Rev. C 42, 2342 (1990)]. The relativistic and nonrelativistic scaling limits, for both Bjorken and y scaling, are compared. The interpretation of y scaling in the relativistic model is studied critically. The standard interpretation of y scaling requires a soft wave function, which is not realized in this model. The scaling limits in both the relativistic and nonrelativistic cases are related to probability distributions associated with the target deuteron. © 1996 The American Physical Society

  19. Uniform functional structure across spatial scales in an intertidal benthic assemblage.

    Science.gov (United States)

    Barnes, R S K; Hamylton, Sarah

    2015-05-01

    To investigate the causes of the remarkable similarity of emergent assemblage properties that has been demonstrated across disparate intertidal seagrass sites and assemblages, this study examined whether their emergent functional-group metrics are scale related, by testing the null hypothesis that functional diversity and the suite of dominant functional groups in seagrass-associated macrofauna are robust structural features of such assemblages and do not vary spatially across nested scales within a 0.4 ha area. This was carried out via a lattice of 64 spatially referenced stations. Although densities of individual components were patchily dispersed across the locality, the rank orders of importance of the 14 functional groups present, their overall functional diversity and evenness, and the proportions of the total individuals contained within each showed, in contrast, statistically significant spatial uniformity, even at small areal scales. Analysis of the functional groups in their geospatial context also revealed weaker-than-expected levels of spatial autocorrelation, and then only at the smaller scales and amongst the most dominant groups, and only a small number of negative correlations occurred between the proportional importances of the individual groups. In effect, such patterning was a surface veneer overlying a remarkable stability of assemblage functional composition across all spatial scales. Although assemblage species composition is known to be homogeneous in some soft-sediment marine systems over equivalent scales, this combination of patchy individual components yet basically constant functional-group structure seems as yet unreported. Copyright © 2015 Elsevier Ltd. All rights reserved.
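
    The functional diversity and evenness metrics referred to above can be computed per station as in the sketch below (Shannon diversity and Pielou evenness over functional-group abundances); the counts shown are hypothetical, not the study's data:

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon diversity H' of functional-group abundances at one station."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

def pielou_evenness(counts):
    """Pielou evenness J' = H' / ln(S), with S the number of groups present."""
    s = np.count_nonzero(counts)
    return shannon_diversity(counts) / np.log(s) if s > 1 else 0.0

# Hypothetical counts of 14 functional groups at one of the 64 stations
station = [52, 30, 11, 8, 5, 4, 3, 2, 2, 1, 1, 1, 0, 0]
print(shannon_diversity(station), pielou_evenness(station))
```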

  20. Using Artificial Intelligence to Retrieve the Optimal Parameters and Structures of Adaptive Network-Based Fuzzy Inference System for Typhoon Precipitation Forecast Modeling

    Directory of Open Access Journals (Sweden)

    Chien-Lin Huang

    2015-01-01

    Full Text Available This study aims to construct a typhoon precipitation forecast model providing forecasts one to six hours in advance, using optimal model parameters and structures retrieved from a combination of the adaptive network-based fuzzy inference system (ANFIS) and artificial intelligence. To enhance the accuracy of the precipitation forecast, two structures were used to establish the precipitation forecast model for a specific lead time: a single-model structure and a dual-model hybrid structure in which the forecast models for higher and lower precipitation were integrated. In order to rapidly, automatically, and accurately retrieve the optimal parameters and structures of the ANFIS-based precipitation forecast model, a tabu search was applied to identify the adjacent radius in subtractive clustering when constructing the ANFIS structure. A coupled structure was also employed to establish a precipitation forecast model across short and long lead times in order to improve the accuracy of long-term precipitation forecasts. The study area is the Shimen Reservoir, and the analyzed period is from 2001 to 2009. Results showed that the optimal initial ANFIS parameters selected by the tabu search, combined with the dual-model hybrid method and the coupled structure, provided favorable computational efficiency and highly reliable predictions in typhoon precipitation forecasting across short to long lead-time horizons.
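
    The subtractive-clustering step whose adjacent (neighborhood) radius the tabu search tunes can be sketched as follows; this is a simplified version of Chiu's algorithm with an illustrative stopping rule and synthetic data, not the authors' exact configuration:

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, reject_ratio=0.15):
    """Simplified Chiu subtractive clustering: points with the highest density
    potential become cluster centres; `ra` is the neighborhood radius that, in
    the paper, is tuned by a tabu search. X is assumed scaled to [0, 1]."""
    rb = 1.5 * ra                                      # suppression radius
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-4.0 * d2 / ra**2).sum(axis=1)          # initial potential of each point
    first = P.max()
    centres = []
    while True:
        k = int(np.argmax(P))
        if P[k] < reject_ratio * first:                # remaining potential too low: stop
            break
        centres.append(X[k])
        P = P - P[k] * np.exp(-4.0 * d2[k] / rb**2)    # damp potential near the new centre
    return np.array(centres)

# Two well-separated synthetic clusters; expect roughly two centres
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.25, 0.05, (50, 2)), rng.normal(0.75, 0.05, (50, 2))])
print(subtractive_clustering(X, ra=0.4))
```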