WorldWideScience

Sample records for constrained phylogenetic models

  1. Fast optimization of statistical potentials for structurally constrained phylogenetic models

    Directory of Open Access Journals (Sweden)

    Rodrigue Nicolas

    2009-09-01

    Background: Statistical approaches for protein design are relevant in the field of molecular evolutionary studies. In recent years, new, so-called structurally constrained (SC) models of protein-coding sequence evolution have been proposed, which use statistical potentials to assess sequence-structure compatibility. In a previous work, we defined a statistical framework for optimizing knowledge-based potentials especially suited to SC models. Our method used the maximum likelihood principle and provided what we call the joint potentials. However, the method required numerical estimations by the use of computationally heavy Markov Chain Monte Carlo sampling algorithms. Results: Here, we develop an alternative optimization procedure, based on a leave-one-out argument coupled to fast gradient descent algorithms. We assess that the leave-one-out potential yields very similar results to the joint approach developed previously, both in terms of the resulting potential parameters and by Bayes factor evaluation in a phylogenetic context. On the other hand, the leave-one-out approach results in a considerable computational benefit (up to a 1,000-fold decrease in computational time for the optimization procedure). Conclusion: Due to its computational speed, the optimization method we propose offers an attractive alternative for the design and empirical evaluation of alternative forms of potentials, using large data sets and high-dimensional parameterizations.
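
    The leave-one-out idea can be illustrated with a toy gradient-ascent optimizer. A minimal sketch, assuming a linear contact-class energy and synthetic native/decoy count data; none of the names or the energy form are taken from the paper:

    ```python
    # Hedged sketch: gradient ascent on a softmax likelihood that the native
    # structure scores best among decoys; a stand-in for optimizing a
    # statistical potential, not the paper's actual formulation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_classes = 10                       # coarse residue-contact classes (toy)
    theta = np.zeros(n_classes)          # potential parameters to optimize

    # synthetic data: contact-class counts for natives and their decoy sets
    native = rng.poisson(3.0, size=(50, n_classes)).astype(float)
    decoys = rng.poisson(3.5, size=(50, 20, n_classes)).astype(float)

    lr = 0.05
    for step in range(200):
        grad = np.zeros(n_classes)
        for nat, dec in zip(native, decoys):
            energies = np.concatenate([[nat @ theta], dec @ theta])
            w = np.exp(-(energies - energies.min()))
            w /= w.sum()                 # softmax over native + decoys
            counts = np.vstack([nat[None, :], dec])
            grad += -nat + w @ counts    # d/dtheta of log softmax(native)
        theta += lr * grad / len(native)
    print("optimized potentials:", np.round(theta, 3))
    ```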

  2. Phylogenetic constraints on mycorrhizal specificity in eight Dendrobium (Orchidaceae) species.

    Science.gov (United States)

    Xing, Xiaoke; Ma, Xueting; Men, Jinxin; Chen, Yanhong; Guo, Shunxing

    2017-05-01

    Plant phylogeny constrains orchid mycorrhizal (OrM) fungal community composition in some orchids. Here, we investigated the structures of the OrM fungal communities of eight Dendrobium species in one niche to determine whether similarities in the OrM fungal communities correlated with the phylogeny of the host plants and whether the Dendrobium-OrM fungal interactions are phylogenetically conserved. A phylogeny based on DNA data was constructed for the eight coexisting Dendrobium species, and the OrM fungal communities in their roots were characterized. There were 31 different fungal lineages associated with the eight Dendrobium species. In total, 82.98% of the identified associations belonged to Tulasnellaceae, and a smaller proportion involved members of unknown Basidiomycota (9.67%). Community analyses revealed that phylogenetically related Dendrobium tended to interact with a similar set of Tulasnellaceae fungi. The interactions between Dendrobium and Tulasnellaceae fungi were significantly influenced by the phylogenetic relationships among the Dendrobium species. Our results provide evidence that the mycorrhizal specificity in the eight coexisting Dendrobium species was phylogenetically conserved.
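
    A common way to test such phylogenetic conservatism is a Mantel-type permutation test between the host phylogenetic distance matrix and the mycorrhizal community dissimilarity matrix. A minimal sketch with random placeholder matrices; the paper's actual statistics may differ:

    ```python
    # Hedged sketch: permutation Mantel test relating host phylogenetic
    # distances to OrM community dissimilarities (placeholder data).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 8                                    # eight Dendrobium species

    def random_distance_matrix(n):
        a = rng.random((n, n))
        d = (a + a.T) / 2
        np.fill_diagonal(d, 0.0)
        return d

    phylo = random_distance_matrix(n)        # host phylogenetic distances
    commu = random_distance_matrix(n)        # community dissimilarities

    def mantel(x, y, n_perm=9999):
        iu = np.triu_indices_from(x, k=1)
        r_obs = np.corrcoef(x[iu], y[iu])[0, 1]
        count = 0
        for _ in range(n_perm):
            p = rng.permutation(len(y))
            count += np.corrcoef(x[iu], y[p][:, p][iu])[0, 1] >= r_obs
        return r_obs, (count + 1) / (n_perm + 1)

    r, pval = mantel(phylo, commu)
    print(f"Mantel r = {r:.3f}, one-sided p = {pval:.4f}")
    ```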

  3. Constrained CPn models

    International Nuclear Information System (INIS)

    Latorre, J.I.; Luetken, C.A.

    1988-11-01

    We construct a large new class of two-dimensional sigma models with Kaehler target spaces which are algebraic manifolds realized as complete intersections in weighted CP^n spaces. They are N=2 superconformally symmetric and particular choices of constraints give Calabi-Yau target spaces which are nontrivial string vacua. (orig.)

  4. Improved Maximum Parsimony Models for Phylogenetic Networks.

    Science.gov (United States)

    Van Iersel, Leo; Jones, Mark; Scornavacca, Celine

    2018-05-01

    Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be modeled by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.
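
    For context, the "softwired" parsimony mentioned above scores a character on a network as the minimum parsimony score over all displayed trees (one incoming edge kept per reticulation). A brute-force sketch on a toy one-reticulation network; the paper's new definition generalizes beyond this:

    ```python
    # Hedged sketch: softwired parsimony of one binary character on a tiny
    # network, by exhaustive search; illustrative data structures only.
    from itertools import product

    # network: child -> list of parents; 'r' is the reticulation node
    parents = {'a': ['u'], 'b': ['u'], 'c': ['v'], 'r': ['u', 'v'],
               'u': ['root'], 'v': ['root']}
    leaf_state = {'a': 0, 'b': 1, 'c': 0, 'r': 1}   # observed character
    internal = ['root', 'u', 'v']

    def tree_cost(parent_of):
        """Exact small parsimony by brute force over internal-node states."""
        best = float('inf')
        for states in product([0, 1], repeat=len(internal)):
            assign = {**dict(zip(internal, states)), **leaf_state}
            cost = sum(assign[child] != assign[par]
                       for child, par in parent_of.items())
            best = min(best, cost)
        return best

    tree_edges = {c: ps[0] for c, ps in parents.items() if len(ps) == 1}
    softwired = min(tree_cost({**tree_edges, 'r': choice})
                    for choice in parents['r'])
    print("softwired parsimony score:", softwired)
    ```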

  5. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  6. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response ... to the stress field as well as the FE calculation of the stress field from the microstructural evolution is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  7. New substitution models for rooting phylogenetic trees.

    Science.gov (United States)

    Williams, Tom A; Heaps, Sarah E; Cherlin, Svetlana; Nye, Tom M W; Boys, Richard J; Embley, T Martin

    2015-09-26

    The root of a phylogenetic tree is fundamental to its biological interpretation, but standard substitution models do not provide any information on its position. Here, we describe two recently developed models that relax the usual assumptions of stationarity and reversibility, thereby facilitating root inference without the need for an outgroup. We compare the performance of these models on a classic test case for phylogenetic methods, before considering two highly topical questions in evolutionary biology: the deep structure of the tree of life and the root of the archaeal radiation. We show that all three alignments contain meaningful rooting information that can be harnessed by these new models, thus complementing and extending previous work based on outgroup rooting. In particular, our analyses exclude the root of the tree of life from the eukaryotes or Archaea, placing it on the bacterial stem or within the Bacteria. They also exclude the root of the archaeal radiation from several major clades, consistent with analyses using other rooting methods. Overall, our results demonstrate the utility of non-reversible and non-stationary models for rooting phylogenetic trees, and identify areas where further progress can be made. © 2015 The Authors.
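
    The rooting signal that such models exploit can be seen in a toy calculation: under a non-reversible model the likelihood of the same data changes as the root slides along a branch, so no outgroup is needed. A sketch with an arbitrary 4-state generator and root distribution (illustrative values only):

    ```python
    # Hedged sketch: scan candidate root positions on a two-taxon branch and
    # score each by the likelihood of one observed site.
    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-1.0, 0.6, 0.3, 0.1],
                  [ 0.2, -0.9, 0.5, 0.2],
                  [ 0.1, 0.3, -0.8, 0.4],
                  [ 0.4, 0.1, 0.2, -0.7]])   # non-reversible rate matrix
    pi_root = np.array([0.4, 0.3, 0.2, 0.1])  # non-stationary root frequencies

    T = 1.0                   # total path length between leaves A and B
    site_A, site_B = 0, 2     # observed states at one alignment column

    for t in np.linspace(0.1, 0.9, 9):        # candidate root positions
        P_a = expm(Q * t)                     # transition probs root -> A
        P_b = expm(Q * (T - t))               # transition probs root -> B
        lik = np.sum(pi_root * P_a[:, site_A] * P_b[:, site_B])
        print(f"root at t = {t:.1f}: site likelihood = {lik:.5f}")
    ```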

  8. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  9. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum
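
    Both records describe a shadow-test approach in which a full constrained test is re-assembled at every step by 0-1 programming. The sketch below substitutes a much simpler greedy rule, picking the most informative feasible item under the 2PL information function and toy content constraints; it illustrates the constrained-selection idea, not the authors' algorithm:

    ```python
    # Hedged sketch: one constrained CAT administration with greedy
    # maximum-information item selection (toy pool and constraints).
    import numpy as np

    rng = np.random.default_rng(2)
    n_items = 100
    a = rng.uniform(0.5, 2.0, n_items)       # discrimination parameters
    b = rng.normal(0.0, 1.0, n_items)        # difficulty parameters
    content = rng.integers(0, 3, n_items)    # content area of each item
    max_per_area = {0: 4, 1: 4, 2: 4}        # content constraints

    def info_2pl(theta, a, b):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)          # Fisher information at theta

    theta_hat, administered = 0.0, []
    for _ in range(10):                      # a 10-item adaptive test
        counts = {k: sum(content[i] == k for i in administered)
                  for k in max_per_area}
        feasible = [i for i in range(n_items)
                    if i not in administered
                    and counts[content[i]] < max_per_area[content[i]]]
        best = max(feasible, key=lambda i: info_2pl(theta_hat, a[i], b[i]))
        administered.append(best)
        # (a real CAT would re-estimate theta_hat from the response here)
    print("selected items:", administered)
    ```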

  10. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    Mangalam, A.; Krishan, V.

    J. Astrophys. Astr. (2000) 21, 299-302. Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. E-mail: mangalam@iiap.ernet.in, vinod@iiap.ernet.in. Abstract: We study the relaxation of a compressible plasma to ...

  11. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of an accretion disk around constant Ricci curvature Kerr-f(R_0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R_0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.
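
    For reference, the flat-space value that the curvature, mass, and spin terms correct is the standard Sagnac delay for a circular path of radius R and angular velocity Ω; the correction structure indicated here is only schematic, not the paper's exact expansion:

    ```latex
    \Delta t_{\mathrm{flat}} = \frac{4\pi R^{2}\Omega}{c^{2}}, \qquad
    \Delta t = \Delta t_{\mathrm{flat}}
      \left[ 1 + \mathcal{O}(M/R) + \mathcal{O}(a\Omega)
               + \mathcal{O}(R_{0}R^{2}) \right]
    ```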

  12. Epitope discovery with phylogenetic hidden Markov models.

    LENUS (Irish Health Repository)

    Lacerda, Miguel

    2010-05-01

    Existing methods for the prediction of immunologically active T-cell epitopes are based on the amino acid sequence or structure of pathogen proteins. Additional information regarding the locations of epitopes may be acquired by considering the evolution of viruses in hosts with different immune backgrounds. In particular, immune-dependent evolutionary patterns at sites within or near T-cell epitopes can be used to enhance epitope identification. We have developed a mutation-selection model of T-cell epitope evolution that allows the human leukocyte antigen (HLA) genotype of the host to influence the evolutionary process. This is one of the first examples of the incorporation of environmental parameters into a phylogenetic model and has many other potential applications where the selection pressures exerted on an organism can be related directly to environmental factors. We combine this novel evolutionary model with a hidden Markov model to identify contiguous amino acid positions that appear to evolve under immune pressure in the presence of specific host immune alleles and that therefore represent potential epitopes. This phylogenetic hidden Markov model provides a rigorous probabilistic framework that can be combined with sequence or structural information to improve epitope prediction. As a demonstration, we apply the model to a data set of HIV-1 protein-coding sequences and host HLA genotypes.
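
    At its core, the hidden Markov layer reduces to the standard forward recursion over two hidden states (epitope / non-epitope). A sketch with placeholder numbers standing in for the per-site likelihoods that the phylogenetic mutation-selection model would supply:

    ```python
    # Hedged sketch: forward algorithm for a two-state HMM over sites.
    import numpy as np

    trans = np.array([[0.95, 0.05],    # non-epitope -> {non-epitope, epitope}
                      [0.10, 0.90]])   # epitope     -> {non-epitope, epitope}
    start = np.array([0.9, 0.1])

    # emission likelihoods per site: P(column | state), here placeholders
    # for phylogenetic likelihoods with / without HLA-driven selection
    emit = np.array([[0.30, 0.10],
                     [0.25, 0.15],
                     [0.05, 0.40],
                     [0.04, 0.45],
                     [0.28, 0.12]])

    alpha = start * emit[0]            # forward recursion
    for e in emit[1:]:
        alpha = (alpha @ trans) * e
    print("likelihood of the site sequence:", alpha.sum())
    ```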

  13. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and an SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  14. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  15. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Online Constrained Model-based Reinforcement Learning. Benjamin van Niekerk (School of Computer Science, University of the Witwatersrand, South Africa), Andreas Damianou (Amazon.com, Cambridge, UK), Benjamin Rosman (Council for Scientific and Industrial Research, and School ...). From the section on multiple shooting: using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured nonlinear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...
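
    The multiple-shooting fragment above can be made concrete on a toy optimal-control problem: the state at each shooting node becomes a decision variable and continuity "defect" constraints stitch the subintervals together. A sketch for a double integrator; this illustrates the transcription, not the paper's RL setting:

    ```python
    # Hedged sketch: direct multiple shooting for a double integrator,
    # x' = (velocity, control), driving the state from (0,0) to (1,0).
    import numpy as np
    from scipy.optimize import minimize

    N, T = 10, 1.0
    h = T / N

    def step(x, u):
        """One Euler step of the dynamics over a subinterval."""
        return x + h * np.array([x[1], u])

    def unpack(z):
        xs = z[:2 * (N + 1)].reshape(N + 1, 2)   # states at shooting nodes
        us = z[2 * (N + 1):]                     # piecewise-constant controls
        return xs, us

    def objective(z):
        _, us = unpack(z)
        return h * np.sum(us**2)                 # minimize control effort

    def defects(z):
        xs, us = unpack(z)
        cons = [xs[0] - np.array([0.0, 0.0])]    # initial condition
        cons += [step(xs[k], us[k]) - xs[k + 1] for k in range(N)]
        cons += [xs[N] - np.array([1.0, 0.0])]   # terminal condition
        return np.concatenate(cons)

    z0 = np.zeros(2 * (N + 1) + N)
    sol = minimize(objective, z0, constraints={'type': 'eq', 'fun': defects})
    print("converged:", sol.success, "cost:", round(sol.fun, 4))
    ```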

  16. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g̃ → qq̄χ and g̃ → gχ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e⁺e⁻ colliders in the search for supersymmetry can be directly compared. (orig.)

  17. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Background: Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible

  18. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Science.gov (United States)

    2012-01-01

    Background: Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods: We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results: We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions: Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for
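
    The computational core of both records, averaging an analysis over a posterior sample of trees rather than relying on a single consensus tree, can be sketched as follows. Here each sampled tree is reduced to the trait covariance matrix it implies, and a per-tree GLS fit is pooled across the sample; simulated stand-ins replace the BUGS/JAGS machinery:

    ```python
    # Hedged sketch: propagate phylogenetic uncertainty by pooling a GLS
    # regression over many tree-implied covariance matrices (simulated).
    import numpy as np

    rng = np.random.default_rng(3)
    n_species, n_trees = 12, 200

    def random_phylo_cov(n):
        """Stand-in for the covariance implied by one posterior tree."""
        b = rng.random((n, n)) * 0.3
        return b @ b.T + np.eye(n)

    x = rng.normal(size=n_species)
    y = 2.0 * x + rng.normal(size=n_species)      # simulated traits
    X = np.column_stack([np.ones(n_species), x])

    slopes = []
    for _ in range(n_trees):                      # one fit per sampled tree
        C = np.linalg.inv(random_phylo_cov(n_species))
        beta = np.linalg.solve(X.T @ C @ X, X.T @ C @ y)   # GLS estimate
        slopes.append(beta[1])
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    print(f"slope: mean {np.mean(slopes):.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
    ```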

  19. A program for verification of phylogenetic network models.

    Science.gov (United States)

    Gunawan, Andreas D M; Lu, Bingxin; Zhang, Louxin

    2016-09-01

    Genetic material is transferred in a non-reproductive manner across species more frequently than commonly thought, particularly in the bacterial kingdom. On one hand, extant genomes are thus more properly considered as a fusion product of both reproductive and non-reproductive genetic transfers. This has motivated researchers to adopt phylogenetic networks to study genome evolution. On the other hand, a gene's evolution is usually tree-like and has been studied for over half a century. Accordingly, the relationships between phylogenetic trees and networks are the basis for the reconstruction and verification of phylogenetic networks. One important problem in verifying a network model is determining whether or not certain existing phylogenetic trees are displayed in a phylogenetic network. This problem is formally called the tree containment problem. It is NP-complete even for binary phylogenetic networks. We design an exponential-time but efficient method for determining whether or not a phylogenetic tree is displayed in an arbitrary phylogenetic network. It is developed on the basis of the so-called reticulation-visible property of phylogenetic networks. A C program is available for download at http://www.math.nus.edu.sg/∼matzlx/tcp_package. Contact: matzlx@nus.edu.sg. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
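
    The tree containment problem itself is easy to state in code: try every way of keeping one parent per reticulation and test whether some resulting tree matches the query. The brute-force sketch below is exponential in the number of reticulations, unlike the paper's reticulation-visibility-based method, and compares trees by their non-trivial leaf clusters:

    ```python
    # Hedged sketch: brute-force tree containment on a toy network.
    from itertools import product

    # network: child -> list of parents; 'h' is the single reticulation
    net_parents = {'x': ['p'], 'y': ['h'], 'z': ['q'],
                   'h': ['p', 'q'], 'p': ['root'], 'q': ['root']}
    leaves = ['x', 'y', 'z']

    def displayed_clusters(parent_of):
        """Non-trivial leaf clusters of the tree selected by parent_of."""
        anc = {}
        for leaf in leaves:                    # ancestors of each leaf
            node, anc[leaf] = leaf, set()
            while node != 'root':
                node = parent_of[node]
                anc[leaf].add(node)
        internals = set().union(*anc.values())
        return {frozenset(l for l in leaves if v in anc[l]) for v in internals}

    # non-trivial clusters of the query tree ((x,y),z)
    query = {frozenset({'x', 'y'}), frozenset({'x', 'y', 'z'})}

    retics = [c for c, ps in net_parents.items() if len(ps) > 1]
    tree_edges = {c: ps[0] for c, ps in net_parents.items() if len(ps) == 1}
    displayed = any(
        query <= displayed_clusters({**tree_edges, **dict(zip(retics, pick))})
        for pick in product(*[net_parents[r] for r in retics]))
    print("query tree displayed in network:", displayed)
    ```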

  20. Reflected stochastic differential equation models for constrained animal movement

    Science.gov (United States)

    Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.

    2017-01-01

    Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
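
    A reflected stochastic differential equation can be simulated with an Euler-Maruyama step plus reflection at the barriers. A one-dimensional sketch standing in for the augmented-path machinery described above:

    ```python
    # Hedged sketch: Euler-Maruyama with reflection keeps the path in [0, L].
    import numpy as np

    rng = np.random.default_rng(4)
    L, dt, n_steps = 10.0, 0.01, 5000
    mu, sigma = 0.2, 1.5                     # drift and diffusion

    x = np.empty(n_steps)
    x[0] = 5.0
    for t in range(1, n_steps):
        prop = x[t - 1] + mu * dt + sigma * np.sqrt(dt) * rng.normal()
        while prop < 0.0 or prop > L:        # reflect at both boundaries
            prop = -prop if prop < 0.0 else 2 * L - prop
        x[t] = prop
    print("path stays in [0, L]:", 0.0 <= x.min() and x.max() <= L)
    ```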

  1. Constrained optimization via simulation models for new product innovation

    Science.gov (United States)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization where the decision makers aim to optimize the primary performance measure while constraining the secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based. This review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models as there are usually constraints on secondary performance measures as trade-offs in new product development. The review starts by laying out different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  2. Maximum parsimony, substitution model, and probability phylogenetic trees.

    Science.gov (United States)

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM), and Maximum Likelihood (ML), of which the MP method is the best studied and most popular. In the MP method the optimization criterion is the number of substitutions of the nucleotides computed by the differences in the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time, omitting all the unobservable substitutions that actually occurred in the evolutionary history. In order to take into account the unobservable substitutions, some substitution models have been established; they are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees, and the reconstructed trees in this model are called probability phylogenetic trees. One of the advantages of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.

  3. Slow logarithmic relaxation in models with hierarchically constrained dynamics

    OpenAIRE

    Brey, J. J.; Prados, A.

    2000-01-01

    A general kind of models with hierarchically constrained dynamics is shown to exhibit logarithmic anomalous relaxation, similarly to a variety of complex strongly interacting materials. The logarithmic behavior describes most of the decay of the response function.

  4. Different relationships between temporal phylogenetic turnover and phylogenetic similarity in two forests were detected by a new null model.

    Science.gov (United States)

    Huang, Jian-Xiong; Zhang, Jian; Shen, Yong; Lian, Ju-yu; Cao, Hong-lin; Ye, Wan-hui; Wu, Lin-fang; Bin, Yue

    2014-01-01

    Ecologists have been monitoring community dynamics with the purpose of understanding the rates and causes of community change. However, there is a lack of monitoring of community dynamics from the perspective of phylogeny. We attempted to understand temporal phylogenetic turnover in a 50 ha tropical forest (Barro Colorado Island, BCI) and a 20 ha subtropical forest (Dinghushan in southern China, DHS). To obtain temporal phylogenetic turnover under random conditions, two null models were used. The first shuffled species names, as is widely done in community phylogenetic analyses. The second simulated demographic processes with careful consideration of the variation in dispersal ability among species and the variations in mortality both among species and among size classes. With the two models, we tested the relationships between temporal phylogenetic turnover and phylogenetic similarity at different spatial scales in the two forests. Results from the second null model were more consistent with previous findings, suggesting that the second null model is more appropriate for our purposes. With the second null model, a significantly positive relationship was detected between phylogenetic turnover and phylogenetic similarity in BCI at a 10 m × 10 m scale, potentially indicating phylogenetic density dependence. This relationship in DHS was significantly negative at three of five spatial scales. This could indicate abiotic filtering processes for community assembly. Using variation partitioning, we found phylogenetic similarity contributed to variation in temporal phylogenetic turnover in the DHS plot but not in the BCI plot. The mechanisms for community assembly in BCI and DHS differ from a phylogenetic perspective. Only the second null model detected this difference, indicating the importance of choosing a proper null model.
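
    The first null model (shuffling species names across the tips of the phylogeny) is straightforward to sketch; the demographic second model that the authors prefer is considerably more involved. Here turnover is summarized by an assumed toy metric, the mean phylogenetic distance between species lost and gained between censuses:

    ```python
    # Hedged sketch: taxa-shuffle null model for temporal phylogenetic
    # turnover; distances and census changes are placeholders.
    import numpy as np

    rng = np.random.default_rng(5)
    n_sp = 30
    d = rng.random((n_sp, n_sp)); d = (d + d.T) / 2; np.fill_diagonal(d, 0)
    lost = np.zeros(n_sp, bool);   lost[:10] = True     # lost since census 1
    gained = np.zeros(n_sp, bool); gained[10:20] = True  # newly recorded

    def turnover(dist):
        """Toy metric: mean phylogenetic distance lost -> gained."""
        return dist[np.ix_(lost, gained)].mean()

    obs = turnover(d)
    null = [turnover(d[np.ix_(p, p)])                # shuffle tip labels
            for p in (rng.permutation(n_sp) for _ in range(999))]
    p_val = (np.sum(np.array(null) <= obs) + 1) / (len(null) + 1)
    print(f"observed turnover {obs:.3f}, one-sided p = {p_val:.3f}")
    ```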

  5. Dimensional Reduction for the General Markov Model on Phylogenetic Trees.

    Science.gov (United States)

    Sumner, Jeremy G

    2017-03-01

    We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.

  6. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_{K+1,M}) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics.

  7. Phylogenetic mixtures and linear invariants for equal input models.

    Science.gov (United States)

    Casanellas, Marta; Steel, Mike

    2017-04-01

    The reconstruction of phylogenetic trees from molecular sequence data relies on modelling site substitutions by a Markov process, or a mixture of such processes. In general, allowing mixed processes can result in different tree topologies becoming indistinguishable from the data, even for infinitely long sequences. However, when the underlying Markov process supports linear phylogenetic invariants, then provided these are sufficiently informative, the identifiability of the tree topology can be restored. In this paper, we investigate a class of processes that support linear invariants once the stationary distribution is fixed, the 'equal input model'. This model generalizes the 'Felsenstein 1981' model (and thereby the Jukes-Cantor model) from four states to an arbitrary number of states (finite or infinite), and it can also be described by a 'random cluster' process. We describe the structure and dimension of the vector spaces of phylogenetic mixtures and of linear invariants for any fixed phylogenetic tree (and for all trees-the so called 'model invariants'), on any number n of leaves. We also provide a precise description of the space of mixtures and linear invariants for the special case of [Formula: see text] leaves. By combining techniques from discrete random processes and (multi-) linear algebra, our results build on a classic result that was first established by James Lake (Mol Biol Evol 4:167-191, 1987).
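
    The equal input model itself is simple to write down: the rate of substituting into state j depends on j only through the fixed stationary distribution. A sketch constructing the k-state generator (the k-state generalization of Felsenstein 1981):

    ```python
    # Hedged sketch: build and sanity-check an equal input rate matrix.
    import numpy as np

    pi = np.array([0.4, 0.3, 0.2, 0.1])   # fixed stationary distribution
    k = len(pi)

    Q = np.tile(pi, (k, 1))               # off-diagonal rate i -> j is pi_j
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))   # rows sum to zero

    print(Q)
    print("pi is stationary:", np.allclose(pi @ Q, 0.0))
    ```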

  8. Constrained bayesian inference of project performance models

    OpenAIRE

    Sunmola, Funlade

    2013-01-01

    Project performance models play an important role in the management of project success. When used for monitoring projects, they can offer predictive ability such as indications of possible delivery problems. Approaches for monitoring project performance relies on available project information including restrictions imposed on the project, particularly the constraints of cost, quality, scope and time. We study in this paper a Bayesian inference methodology for project performance modelling in ...

  9. Path integral formulation and Feynman rules for phylogenetic branching models

    Energy Technology Data Exchange (ETDEWEB)

    Jarvis, P D; Bashford, J D; Sumner, J G [School of Mathematics and Physics, University of Tasmania, GPO Box 252C, 7001 Hobart, TAS (Australia)

    2005-11-04

    A dynamical picture of phylogenetic evolution is given in terms of Markov models on a state space, comprising joint probability distributions for character types of taxonomic classes. Phylogenetic branching is a process which augments the number of taxa under consideration, and hence the rank of the underlying joint probability state tensor. We point out the combinatorial necessity for a second-quantized, or Fock space setting, incorporating discrete counting labels for taxa and character types, to allow for a description in the number basis. Rate operators describing both time evolution without branching, and also phylogenetic branching events, are identified. A detailed development of these ideas is given, using standard transcriptions from the microscopic formulation of non-equilibrium reaction-diffusion or birth-death processes. These give the relations between stochastic rate matrices, the matrix elements of the corresponding evolution operators representing them, and the integral kernels needed to implement these as path integrals. The 'free' theory (without branching) is solved, and the correct trilinear 'interaction' terms (representing branching events) are presented. The full model is developed in perturbation theory via the derivation of explicit Feynman rules which establish that the probabilities (pattern frequencies of leaf colourations) arising as matrix elements of the time evolution operator are identical with those computed via the standard analysis. Simple examples (phylogenetic trees with two or three leaves) are discussed in detail. Further implications for the work are briefly considered, including the role of time reparametrization covariance.

  10. Path integral formulation and Feynman rules for phylogenetic branching models

    International Nuclear Information System (INIS)

    Jarvis, P D; Bashford, J D; Sumner, J G

    2005-01-01

    A dynamical picture of phylogenetic evolution is given in terms of Markov models on a state space, comprising joint probability distributions for character types of taxonomic classes. Phylogenetic branching is a process which augments the number of taxa under consideration, and hence the rank of the underlying joint probability state tensor. We point out the combinatorial necessity for a second-quantized, or Fock space setting, incorporating discrete counting labels for taxa and character types, to allow for a description in the number basis. Rate operators describing both time evolution without branching, and also phylogenetic branching events, are identified. A detailed development of these ideas is given, using standard transcriptions from the microscopic formulation of non-equilibrium reaction-diffusion or birth-death processes. These give the relations between stochastic rate matrices, the matrix elements of the corresponding evolution operators representing them, and the integral kernels needed to implement these as path integrals. The 'free' theory (without branching) is solved, and the correct trilinear 'interaction' terms (representing branching events) are presented. The full model is developed in perturbation theory via the derivation of explicit Feynman rules which establish that the probabilities (pattern frequencies of leaf colourations) arising as matrix elements of the time evolution operator are identical with those computed via the standard analysis. Simple examples (phylogenetic trees with two or three leaves) are discussed in detail. Further implications for the work are briefly considered, including the role of time reparametrization covariance.

  11. The simplified models approach to constraining supersymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Perez, Genessis [Institut fuer Theoretische Physik, Karlsruher Institut fuer Technologie (KIT), Wolfgang-Gaede-Str. 1, 76131 Karlsruhe (Germany); Kulkarni, Suchita [Laboratoire de Physique Subatomique et de Cosmologie, Universite Grenoble Alpes, CNRS IN2P3, 53 Avenue des Martyrs, 38026 Grenoble (France)

    2015-07-01

    The interpretation of the experimental results at the LHC is model dependent, which implies that the searches provide limited constraints on scenarios such as supersymmetry (SUSY). The Simplified Model Spectra (SMS) framework used by the ATLAS and CMS collaborations is useful to overcome this limitation. The SMS framework involves a small number of parameters (all the properties are reduced to the mass spectrum, the production cross section and the branching ratio) and hence is more generic than presenting results in terms of soft parameters. In our work, the SMS framework was used to test the Natural SUSY (NSUSY) scenario. To accomplish this task, two automated tools (SModelS and Fastlim) were used to decompose the NSUSY parameter space in terms of simplified models and confront the theoretical predictions against the experimental results. The achievements of both tools, as well as their strengths and limitations, are presented here for the NSUSY scenario.

  12. Constraining composite Higgs models using LHC data

    Science.gov (United States)

    Banerjee, Avik; Bhattacharyya, Gautam; Kumar, Nilanjana; Ray, Tirtha Sankar

    2018-03-01

    We systematically study the modifications in the couplings of the Higgs boson, when identified as a pseudo Nambu-Goldstone boson of a strong sector, in the light of LHC Run 1 and Run 2 data. For the minimal coset SO(5)/SO(4) of the strong sector, we focus on scenarios where the standard model left- and right-handed fermions (specifically, the top and bottom quarks) are either in 5 or in the symmetric 14 representation of SO(5). Going beyond the minimal 5_L-5_R representation, to what we call here the `extended' models, we observe that it is possible to construct more than one invariant in the Yukawa sector. In such models, the Yukawa couplings of the 125 GeV Higgs boson undergo nontrivial modifications. The pattern of such modifications can be encoded in a generic phenomenological Lagrangian which applies to a wide class of such models. We show that the presence of more than one Yukawa invariant allows the gauge and Yukawa coupling modifiers to be decorrelated in the `extended' models, and this decorrelation leads to a relaxation of the bound on the compositeness scale (f ≥ 640 GeV at 95% CL, as compared to f ≥ 1 TeV for the minimal 5_L-5_R representation model). We also study the Yukawa coupling modifications in the context of the next-to-minimal strong sector coset SO(6)/SO(5) for fermion-embedding up to representations of dimension 20. While quantifying our observations, we have performed a detailed χ² fit using the ATLAS and CMS combined Run 1 and available Run 2 data.
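
    For orientation, the minimal 5_L-5_R case has the well-known single-invariant coupling modifiers below, with ξ = v²/f²; the decorrelation of κ_V and κ_F arises only in the "extended" models with several Yukawa invariants. These are the standard MCHM5 expressions, quoted here rather than taken from the paper:

    ```latex
    \kappa_V = \sqrt{1-\xi}\,, \qquad
    \kappa_F = \frac{1-2\xi}{\sqrt{1-\xi}}\,, \qquad
    \xi \equiv \frac{v^{2}}{f^{2}}
    ```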

  13. Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.

    Science.gov (United States)

    Giedt, Joel; Thomas, Anthony W; Young, Ross D

    2009-11-13

    Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.

  14. Physics constrained nonlinear regression models for time series

    International Nuclear Information System (INIS)

    Majda, Andrew J; Harlim, John

    2013-01-01

    A central issue in contemporary science is the development of data driven statistical nonlinear dynamical models for time series of partial observations of nature or a complex physical model. It has been established recently that ad hoc quadratic multi-level regression (MLR) models can have finite-time blow up of statistical solutions and/or pathological behaviour of their invariant measure. Here a new class of physics constrained multi-level quadratic regression models are introduced, analysed and applied to build reduced stochastic models from data of nonlinear systems. These models have the advantages of incorporating memory effects in time as well as the nonlinear noise from energy conserving nonlinear interactions. The mathematical guidelines for the performance and behaviour of these physics constrained MLR models as well as filtering algorithms for their implementation are developed here. Data driven applications of these new multi-level nonlinear regression models are developed for test models involving a nonlinear oscillator with memory effects and the difficult test case of the truncated Burgers–Hopf model. These new physics constrained quadratic MLR models are proposed here as process models for Bayesian estimation through Markov chain Monte Carlo algorithms of low frequency behaviour in complex physical data. (paper)

  15. Bayesian nonparametric clustering in phylogenetics: modeling antigenic evolution in influenza.

    Science.gov (United States)

    Cybis, Gabriela B; Sinsheimer, Janet S; Bedford, Trevor; Rambaut, Andrew; Lemey, Philippe; Suchard, Marc A

    2018-01-30

    Influenza is responsible for up to 500,000 deaths every year, and antigenic variability represents much of its epidemiological burden. To visualize antigenic differences across many viral strains, antigenic cartography methods use multidimensional scaling on binding assay data to map influenza antigenicity onto a low-dimensional space. Analysis of such assay data ideally leads to natural clustering of influenza strains of similar antigenicity that correlate with sequence evolution. To understand the dynamics of these antigenic groups, we present a framework that jointly models genetic and antigenic evolution by combining multidimensional scaling of binding assay data, Bayesian phylogenetic machinery and nonparametric clustering methods. We propose a phylogenetic Chinese restaurant process that extends the current process to incorporate the phylogenetic dependency structure between strains in the modeling of antigenic clusters. With this method, we are able to use the genetic information to better understand the evolution of antigenicity throughout epidemics, as shown in applications of this model to H1N1 influenza. Copyright © 2017 John Wiley & Sons, Ltd.
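
    The nonparametric clustering ingredient is a Chinese restaurant process; the paper's phylogenetic CRP additionally biases the seating probabilities by the phylogenetic dependency between strains. A sketch of the plain baseline process:

    ```python
    # Hedged sketch: sequential seating under a standard CRP.
    import numpy as np

    rng = np.random.default_rng(6)
    alpha, n = 1.5, 20                   # concentration, number of strains
    tables = []                          # sizes of antigenic clusters

    assignments = []
    for _ in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()             # existing table ~ size, new ~ alpha
        choice = rng.choice(len(probs), p=probs)
        if choice == len(tables):
            tables.append(1)             # open a new cluster
        else:
            tables[choice] += 1
        assignments.append(choice)
    print("cluster sizes:", tables)
    ```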

  16. Frequency Constrained ShiftCP Modeling of Neuroimaging Data

    DEFF Research Database (Denmark)

    Mørup, Morten; Hansen, Lars Kai; Madsen, Kristoffer H.

    2011-01-01

    The shift invariant multi-linear model based on the CandeComp/PARAFAC (CP) model denoted ShiftCP has proven useful for the modeling of latency changes in trial based neuroimaging data [17]. In order to facilitate component interpretation we presently extend the shiftCP model such that the extracted components can be constrained to pertain to predefined frequency ranges such as alpha, beta and gamma activity. To infer the number of components in the model we propose to apply automatic relevance determination by imposing priors that define the range of variation of each component of the shiftCP model...

  17. Modeling constrained sintering of bi-layered tubular structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Kothanda Ramachandran, Dhavanesan; Ni, De Wei

    2015-01-01

    Constrained sintering of tubular bi-layered structures is being used in the development of various technologies. Densification mismatch between the layers making the tubular bi-layer can generate stresses, which may create processing defects. An analytical model is presented to describe the densification ... and thermo-mechanical analysis. Results from the analytical model are found to agree well with finite element simulations as well as measurements from sintering experiments.

  18. Constraining new physics models with isotope shift spectroscopy

    Science.gov (United States)

    Frugiuele, Claudia; Fuchs, Elina; Perez, Gilad; Schlaffer, Matthias

    2017-07-01

    Isotope shifts of transition frequencies in atoms constrain generic long- and intermediate-range interactions. We focus on new physics scenarios that can be most strongly constrained by King linearity violation, such as models with B−L vector bosons, the Higgs portal, and chameleon models. With the anticipated precision, King linearity violation has the potential to set the strongest laboratory bounds on these models in some regions of parameter space. Furthermore, we show that this method can probe the couplings relevant for the protophobic interpretation of the recently reported Be anomaly. We extend the formalism to include an arbitrary number of transitions and isotope pairs and fit the new physics coupling to the currently available isotope shift measurements.

  19. Dark matter scenarios in a constrained model with Dirac gauginos

    CERN Document Server

    Goodsell, Mark D.; Müller, Tobias; Porod, Werner; Staub, Florian

    2015-01-01

    We perform the first analysis of Dark Matter scenarios in a constrained model with Dirac Gauginos. The model under investigation is the Constrained Minimal Dirac Gaugino Supersymmetric Standard Model (CMDGSSM), where the Majorana mass terms of gauginos vanish. However, $R$-symmetry is broken in the Higgs sector by an explicit and/or effective $B_\mu$-term. This causes a mass splitting between Dirac states in the fermion sector, and the neutralinos, which provide the dark matter candidate, become pseudo-Dirac states. We discuss two scenarios: the universal case with all scalar masses unified at the GUT scale, and the case with non-universal Higgs soft-terms. We identify different regions in the parameter space which fulfil all constraints from the dark matter abundance, the limits from SUSY and direct dark matter searches and the Higgs mass. Most of these points can be tested with the next generation of direct dark matter detection experiments.

  20. Constrained convex minimization via model-based excessive gap

    OpenAIRE

    Tran Dinh, Quoc; Cevher, Volkan

    2014-01-01

    We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct new primal-dual methods with optimal convergence rates on the objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-function selection strategy, our framework subsumes the augmented Lagrangian and alternating methods as special cases, where our rates apply.

  1. Toward Cognitively Constrained Models of Language Processing: A Review

    Directory of Open Access Journals (Sweden)

    Margreet Vogelzang

    2017-09-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained computational models, which simulate the cognitive processes involved in language processing. The theoretical claims implemented in cognitive models interact with general architectural constraints such as memory limitations. In this way, a model generates new predictions that can be tested in experiments, thus generating new data that can give rise to new theoretical insights. This theory-model-experiment cycle is a promising method for investigating aspects of language processing that are difficult to investigate with more traditional experimental techniques. This review specifically examines the language processing models of Lewis and Vasishth (2005), Reitter et al. (2011), and Van Rij et al. (2010), all implemented in the cognitive architecture Adaptive Control of Thought-Rational (Anderson et al., 2004). These models are all limited by the assumptions about cognitive capacities provided by the cognitive architecture, but use different linguistic approaches. Because of this, their comparison provides insight into the extent to which assumptions about general cognitive resources influence concretely implemented models of linguistic competence. For example, the sheer speed and accuracy of human language processing is a current challenge in the field of cognitive modeling, as it does not seem to adhere to the same memory and processing capacities that have been found in other cognitive processes. Architecture-based cognitive models of language processing may be able to make explicit which language-specific resources are needed to acquire and process natural language. The review sheds light on cognitively constrained models of language processing from two angles: we

  2. A Few Expanding Integrable Models, Hamiltonian Structures and Constrained Flows

    International Nuclear Information System (INIS)

    Zhang Yufeng

    2011-01-01

    Two kinds of higher-dimensional Lie algebras and their loop algebras are introduced, for which a few expanding integrable models including the coupling integrable couplings of the Broer-Kaup (BK) hierarchy and the dispersive long wave (DLW) hierarchy as well as the TB hierarchy are obtained. From the reductions of the coupling integrable couplings, the corresponding coupled integrable couplings of the BK equation, the DLW equation, and the TB equation are obtained, respectively. Especially, the coupling integrable coupling of the TB equation reduces to a few integrable couplings of the well-known mKdV equation. The Hamiltonian structures of the coupling integrable couplings of the three kinds of soliton hierarchies are worked out, respectively, by employing the variational identity. Finally, we decompose the BK hierarchy of evolution equations into x-constrained flows and t_n-constrained flows, whose adjoint representations and Lax pairs are given. (general)

  3. Constraining viscous dark energy models with the latest cosmological data

    Science.gov (United States)

    Wang, Deng; Yan, Yang-Jie; Meng, Xin-He

    2017-10-01

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they just deviate very slightly from the standard cosmological model and can alleviate effectively the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models as the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios.

  4. Constraining viscous dark energy models with the latest cosmological data

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Deng [Nankai University, Theoretical Physics Division, Chern Institute of Mathematics, Tianjin (China); Yan, Yang-Jie; Meng, Xin-He [Nankai University, Department of Physics, Tianjin (China)

    2017-10-15

    Based on the assumption that the dark energy possessing bulk viscosity is homogeneously and isotropically permeated in the universe, we propose three new viscous dark energy (VDE) models to characterize the accelerating universe. By constraining these three models with the latest cosmological observations, we find that they just deviate very slightly from the standard cosmological model and can alleviate effectively the current H_0 tension between the local observation by the Hubble Space Telescope and the global measurement by the Planck Satellite. Interestingly, we conclude that a spatially flat universe in our VDE model with cosmic curvature is still supported by current data, and the scale invariant primordial power spectrum is strongly excluded at least at the 5.5σ confidence level in the three VDE models as the Planck result. We also give the 95% upper limits of the typical bulk viscosity parameter η in the three VDE scenarios. (orig.)

  5. An inexact fuzzy-chance-constrained air quality management model.

    Science.gov (United States)

    Xu, Ye; Huang, Guohe; Qin, Xiaosheng

    2010-07-01

    Regional air pollution is a major concern for almost every country because it not only directly relates to economic development, but also poses significant threats to environment and public health. In this study, an inexact fuzzy-chance-constrained air quality management model (IFAMM) was developed for regional air quality management under uncertainty. IFAMM was formulated through integrating interval linear programming (ILP) within a fuzzy-chance-constrained programming (FCCP) framework and could deal with uncertainties expressed as not only possibilistic distributions but also discrete intervals in air quality management systems. Moreover, the constraints with fuzzy variables could be satisfied at different confidence levels such that various solutions with different risk and cost considerations could be obtained. The developed model was applied to a hypothetical case of regional air quality management. Six abatement technologies and sulfur dioxide (SO2) emission trading under uncertainty were taken into consideration. The results demonstrated that IFAMM could help decision-makers generate cost-effective air quality management patterns, gain in-depth insights into effects of the uncertainties, and analyze tradeoffs between system economy and reliability. The results also implied that the trading scheme could achieve lower total abatement cost than a nontrading one.

  6. Phylogenetic trees

    OpenAIRE

    Baños, Hector; Bushek, Nathaniel; Davidson, Ruth; Gross, Elizabeth; Harris, Pamela E.; Krone, Robert; Long, Colby; Stewart, Allen; Walker, Robert

    2016-01-01

    We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.

  7. Maximizing entropy of image models for 2-D constrained coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Danieli, Matteo; Burini, Nino

    2010-01-01

    This paper considers estimating and maximizing the entropy of two-dimensional (2-D) fields with application to 2-D constrained coding. We consider Markov random fields (MRF), which have a non-causal description, and the special case of Pickard random fields (PRF). The PRF are 2-D causal finite context models, which define stationary probability distributions on finite rectangles and thus allow for calculation of the entropy. We consider two binary constraints and revisit the hard square constraint given by forbidding neighboring 1s, and provide novel results for the constraint that no uniform 2 × 2 square contains all 0s or all 1s. The maximum values of the entropy for the constraints are estimated and binary PRF satisfying the constraint are characterized and optimized w.r.t. the entropy. The maximum binary PRF entropy is 0.839 bits/symbol for the no uniform squares constraint. The entropy...
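
    Entropy values like the ones quoted above are typically estimated with transfer matrices. A sketch for the hard square ("no adjacent 1s") constraint: the capacity is approached by log2 of the dominant eigenvalue per symbol on strips of growing width (the limit is known to be about 0.5879 bits/symbol; this sketch is not the paper's PRF construction):

    ```python
    # Hedged sketch: strip transfer-matrix estimate of hard-square entropy.
    import numpy as np

    def strip_entropy(m):
        rows = [r for r in range(1 << m) if r & (r >> 1) == 0]  # no 11 pairs
        T = np.array([[a & b == 0 for b in rows] for a in rows], dtype=float)
        lam = np.linalg.eigvals(T).real.max()   # Perron eigenvalue
        return np.log2(lam) / m                 # bits per symbol on the strip

    for m in (4, 8, 12):
        print(f"width {m}: {strip_entropy(m):.5f} bits/symbol")
    ```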

  8. Gluon field strength correlation functions within a constrained instanton model

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Esaibegyan, S.V.; Maximov, A.E.; Mikhailov, S.V.

    2000-01-01

    We suggest a constrained instanton (CI) solution in the physical QCD vacuum which is described by large-scale vacuum field fluctuations. This solution decays exponentially at large distances. It is stable only if the interaction of the instanton with the background vacuum field is small and additional constraints are introduced. The CI solution is explicitly constructed in the ansatz form, and the two-point vacuum correlator of the gluon field strengths is calculated in the framework of the effective instanton vacuum model. At small distances the results are qualitatively similar to the single instanton case; in particular, the D_1 invariant structure is small, which is in agreement with the lattice calculations. (orig.)

  9. A Model of Desired Performance in Phylogenetic Tree Construction for Teaching Evolution.

    Science.gov (United States)

    Brewer, Steven D.

    This research paper examines phylogenetic tree construction - a form of problem solving in biology - by studying the strategies and heuristics used by experts. One result of the research is the development of a model of desired performance for phylogenetic tree construction. A detailed description of the model and the sample problems which illustrate…

  10. Bilevel Fuzzy Chance Constrained Hospital Outpatient Appointment Scheduling Model

    Directory of Open Access Journals (Sweden)

    Xiaoyang Zhou

    2016-01-01

    Hospital outpatient departments operate by selling fixed-period appointments for different treatments. The challenge being faced is to improve profit by optimally determining the mix of full-time and part-time doctors and allocating appointments (which involves scheduling a combination of doctors, patients, and treatments to a time period) in a department. In this paper, a bilevel fuzzy chance constrained model is developed to solve the hospital outpatient appointment scheduling problem based on revenue management. In the model, the hospital, the leader in the hierarchy, decides the mix of hired full-time and part-time doctors to maximize the total profit; each department, the follower in the hierarchy, makes its appointment scheduling decisions to maximize its own profit while simultaneously minimizing surplus capacity. Doctor wages and demand are considered as fuzzy variables to better describe the real-life situation. A chance operator is then used to handle the model with fuzzy parameters and equivalently transform the appointment scheduling model into a crisp model. Moreover, an interactive algorithm based on satisfaction is employed to convert the bilevel programming into a single-level programming, in order to make it solvable. Finally, numerical experiments were executed to demonstrate the efficiency and effectiveness of the proposed approaches.

  11. Sampling from stochastic reservoir models constrained by production data

    Energy Technology Data Exchange (ETDEWEB)

    Hegstad, Bjoern Kaare

    1997-12-31

    When a petroleum reservoir is evaluated, it is important to forecast future production of oil and gas and to assess forecast uncertainty. This is done by defining a stochastic model for the reservoir characteristics, generating realizations from this model and applying a fluid flow simulator to the realizations. The reservoir characteristics define the geometry of the reservoir, initial saturation, petrophysical properties etc. This thesis discusses how to generate realizations constrained by production data, that is to say, the realizations should reproduce the observed production history of the petroleum reservoir within the uncertainty of these data. The topics discussed are: (1) Theoretical framework, (2) History matching, forecasting and forecasting uncertainty, (3) A three-dimensional test case, (4) Modelling transmissibility multipliers by Markov random fields, (5) Upscaling, (6) The link between model parameters, well observations and production history in a simple test case, (7) Sampling the posterior using optimization in a hierarchical model, (8) A comparison of Rejection Sampling and Metropolis-Hastings algorithm, (9) Stochastic simulation and conditioning by annealing in reservoir description, and (10) Uncertainty assessment in history matching and forecasting. 139 refs., 85 figs., 1 tab.
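
    A toy sketch of the sampler comparison in topic (8), with an invented scalar forward model rather than the thesis code: both methods target the posterior of a single reservoir parameter k conditioned on one noisy "production" observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: observation d = g(k) + noise with g(k) = sqrt(k),
# noise sd sigma, and a uniform prior on (0, 10].
obs, sigma = 2.0, 0.3

def log_like(k):
    return -0.5 * ((np.sqrt(k) - obs) / sigma) ** 2

# Rejection sampling: propose from the prior, accept with prob. exp(log_like) <= 1
k_prop = rng.uniform(1e-9, 10.0, 50_000)
kept = k_prop[rng.uniform(size=k_prop.size) < np.exp(log_like(k_prop))]

# Random-walk Metropolis-Hastings; proposals outside the prior support are rejected
chain, k = [], 5.0
for _ in range(50_000):
    k_new = k + rng.normal(0.0, 0.5)
    if 0.0 < k_new <= 10.0 and np.log(rng.uniform()) < log_like(k_new) - log_like(k):
        k = k_new
    chain.append(k)

print(f"rejection sampling:  mean={kept.mean():.3f} (accepted {kept.size})")
print(f"Metropolis-Hastings: mean={np.mean(chain[5_000:]):.3f}")
```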

  12. Elastic Model Transitions Using Quadratic Inequality Constrained Least Squares

    Science.gov (United States)

    Orr, Jeb S.

    2012-01-01

    A technique is presented for initializing multiple discrete finite element model (FEM) mode sets for certain types of flight dynamics formulations that rely on superposition of orthogonal modes for modeling the elastic response. Such approaches are commonly used for modeling launch vehicle dynamics, and challenges arise due to the rapidly time-varying nature of the rigid-body and elastic characteristics. By way of an energy argument, a quadratic inequality constrained least squares (LSQI) algorithm is employed to effect a smooth transition from one set of FEM eigenvectors to another with no requirement that the models be of similar dimension or that the eigenvectors be correlated in any particular way. The physically unrealistic and controversial method of eigenvector interpolation is completely avoided, and the discrete solution approximates that of the continuously varying system. The real-time computational burden is shown to be negligible due to convenient features of the solution method. Simulation results are presented, and applications to staging and other discontinuous mass changes are discussed.
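
    A generic way to pose such a quadratic inequality constrained least squares problem is sketched below with random data and an off-the-shelf NLP solver; this is an illustration of the problem class, not the paper's specialized flight-dynamics algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# LSQI sketch:  min ||A x - b||  subject to  ||x|| <= alpha
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
alpha = 0.8

res = minimize(
    lambda x: np.sum((A @ x - b) ** 2),        # least-squares objective
    x0=np.zeros(5),
    constraints=[{"type": "ineq", "fun": lambda x: alpha**2 - x @ x}],  # ||x||^2 <= alpha^2
    method="SLSQP",
)
print(res.x, np.linalg.norm(res.x))            # the norm constraint is respected
```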

  13. Constraining statistical-model parameters using fusion and spallation reactions

    Directory of Open Access Journals (Sweden)

    Charity Robert J.

    2011-10-01

    The de-excitation of compound nuclei has been successfully described for several decades by means of statistical models. However, such models involve a large number of free parameters and ingredients that are often underconstrained by experimental data. We show how the degeneracy of the model ingredients can be partially lifted by studying different entrance channels for de-excitation, which populate different regions of the parameter space of the compound nucleus. Fusion reactions, in particular, play an important role in this strategy because they fix three out of four of the compound-nucleus parameters (mass, charge and total excitation energy). The present work focuses on fission and intermediate-mass-fragment emission cross sections. We show how equivalent parameter sets for fusion-fission reactions can be resolved using another entrance channel, namely spallation reactions. Intermediate-mass-fragment emission can be constrained in a similar way. An interpretation of the best-fit IMF barriers in terms of the Wigner energies of the nascent fragments is discussed.

  14. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
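
    One common way to realize such a generalization - a toy sketch of the idea, not the paper's constraint-solving framework - is to fold a side-constraint (here, "visit state 1 exactly C times") into the Viterbi dynamic program by augmenting each state with a counter.

```python
import numpy as np

pi = np.array([0.6, 0.4])                # initial distribution (invented HMM)
Tr = np.array([[0.7, 0.3], [0.4, 0.6]])  # transition matrix
Em = np.array([[0.9, 0.1], [0.2, 0.8]])  # emission matrix (2 symbols)
obs, C = [0, 1, 1, 0, 1], 2              # observations; required visits to state 1

n, S = len(obs), 2
# delta[s, c] = best log-prob of a path ending in state s with c visits to state 1
delta = np.full((S, C + 1), -np.inf)
back = np.zeros((n, S, C + 1, 2), dtype=int)
for s in range(S):
    delta[s, 1 if s == 1 else 0] = np.log(pi[s] * Em[s, obs[0]])
for t in range(1, n):
    new = np.full_like(delta, -np.inf)
    for s in range(S):
        for c in range(C + 1):
            cp = c - 1 if s == 1 else c  # counter value before entering s
            if cp < 0:
                continue
            prev = delta[:, cp] + np.log(Tr[:, s])
            p = int(prev.argmax())
            if np.isfinite(prev[p]):
                new[s, c] = prev[p] + np.log(Em[s, obs[t]])
                back[t, s, c] = (p, cp)
    delta = new

s, c = int(np.argmax(delta[:, C])), C    # best final state with the constraint met
path = [s]
for t in range(n - 1, 0, -1):
    s, c = back[t, s, c]
    path.append(s)
print(path[::-1])                        # most likely path with exactly C visits to state 1
```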

  15. Analyzing Phylogenetic Trees with Timed and Probabilistic Model Checking: The Lactose Persistence Case Study.

    Science.gov (United States)

    Requeno, José Ignacio; Colom, José Manuel

    2014-12-01

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.

  16. Dark matter in a constrained E6 inspired SUSY model

    International Nuclear Information System (INIS)

    Athron, P.; Harries, D.; Nevzorov, R.; Williams, A.G.

    2016-01-01

    We investigate dark matter in a constrained E6 inspired supersymmetric model with an exact custodial symmetry and compare with the CMSSM. The breakdown of E6 leads to an additional U(1)_N symmetry and a discrete matter parity. The custodial and matter symmetries imply there are two stable dark matter candidates, though one may be extremely light and contribute negligibly to the relic density. We demonstrate that a predominantly Higgsino, or mixed bino-Higgsino, neutralino can account for all of the relic abundance of dark matter, while fitting a 125 GeV SM-like Higgs and evading LHC limits on new states. However we show that the recent LUX 2016 limit on direct detection places severe constraints on the mixed bino-Higgsino scenarios that explain all of the dark matter. Nonetheless we still reveal interesting scenarios where the gluino, neutralino and chargino are light and discoverable at the LHC, but the full relic abundance is not accounted for. At the same time we also show that there is a huge volume of parameter space, with a predominantly Higgsino dark matter candidate that explains all the relic abundance, that will be discoverable with XENON1T. Finally we demonstrate that for the E6 inspired model the exotic leptoquarks could still be light and within range of future LHC searches.

  17. Constrained variability of modeled T:ET ratio across biomes

    Science.gov (United States)

    Fatichi, Simone; Pappas, Christoforos

    2017-07-01

    A large variability (35-90%) in the ratio of transpiration to total evapotranspiration (referred to here as T:ET) across biomes or even at the global scale has been documented by a number of studies carried out with different methodologies. Previous empirical results also suggest that T:ET does not covary with mean precipitation and has a positive dependence on leaf area index (LAI). Here we use a mechanistic ecohydrological model, with a refined process-based description of evaporation from the soil surface, to investigate the variability of T:ET across biomes. Numerical results reveal a more constrained range and higher mean of T:ET (70 ± 9%, mean ± standard deviation) when compared to observation-based estimates. T:ET is confirmed to be independent of mean precipitation, while it is found to be correlated with LAI seasonally but uncorrelated across multiple sites. Larger LAI increases evaporation from interception but diminishes ground evaporation, with the two effects largely compensating each other. These results offer mechanistic, model-based evidence to the ongoing research about the patterns of T:ET and the factors influencing its magnitude across biomes.

  18. Investigating multiple solutions in the constrained minimal supersymmetric standard model

    Energy Technology Data Exchange (ETDEWEB)

    Allanach, B.C. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); George, Damien P. [DAMTP, CMS, University of Cambridge,Wilberforce Road, Cambridge, CB3 0HA (United Kingdom); Cavendish Laboratory, University of Cambridge,JJ Thomson Avenue, Cambridge, CB3 0HE (United Kingdom); Nachman, Benjamin [SLAC, Stanford University,2575 Sand Hill Rd, Menlo Park, CA 94025 (United States)

    2014-02-07

    Recent work has shown that the Constrained Minimal Supersymmetric Standard Model (CMSSM) can possess several distinct solutions for certain values of its parameters. The extra solutions were not previously found by public supersymmetric spectrum generators because fixed point iteration (the algorithm used by the generators) is unstable in the neighbourhood of these solutions. The existence of the additional solutions calls into question the robustness of exclusion limits derived from collider experiments and cosmological observations upon the CMSSM, because limits were only placed on one of the solutions. Here, we map the CMSSM by exploring its multi-dimensional parameter space using the shooting method, which is not subject to the stability issues which can plague fixed point iteration. We are able to find multiple solutions where in all previous literature only one was found. The multiple solutions are of two distinct classes. One class, close to the border of bad electroweak symmetry breaking, is disfavoured by LEP2 searches for neutralinos and charginos. The other class has sparticles that are heavy enough to evade the LEP2 bounds. Chargino masses may differ by up to around 10% between the different solutions, whereas other sparticle masses differ at the sub-percent level. The prediction for the dark matter relic density can vary by a hundred percent or more between the different solutions, so analyses employing the dark matter constraint are incomplete without their inclusion.

  19. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy that exists between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but require the planet to be fully differentiated and that layers denser than iron are forbidden. With both bounds in hand, a marginal CRF can also be inferred by sampling in-between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints hit around a per cent, providing a clear target for observers.

  20. Fuzzy chance constrained linear programming model for scrap charge optimization in steel production

    DEFF Research Database (Denmark)

    Rong, Aiying; Lahdelma, Risto

    2008-01-01

    …the uncertainty based on fuzzy set theory and constrain the failure risk based on a possibility measure. Consequently, the scrap charge optimization problem is modeled as a fuzzy chance constrained linear programming problem. Since the constraints of the model mainly address the specification of the product…

  1. Toward cognitively constrained models of language processing : A review

    NARCIS (Netherlands)

    Vogelzang, Margreet; Mills, Anne C.; Reitter, David; van Rij, Jacolien; Hendriks, Petra; van Rijn, Hedderik

    2017-01-01

    Language processing is not an isolated capacity, but is embedded in other aspects of our cognition. However, it is still largely unexplored to what extent and how language processing interacts with general cognitive resources. This question can be investigated with cognitively constrained…

  2. Identifiability of tree-child phylogenetic networks under a probabilistic recombination-mutation model of evolution.

    Science.gov (United States)

    Francis, Andrew; Moulton, Vincent

    2018-06-07

    Phylogenetic networks are an extension of phylogenetic trees which are used to represent evolutionary histories in which reticulation events (such as recombination and hybridization) have occurred. A central question for such networks is that of identifiability, which essentially asks under what circumstances we can reliably identify the phylogenetic network that gave rise to the observed data. Recently, identifiability results have appeared for networks relative to a model of sequence evolution that generalizes the standard Markov models used for phylogenetic trees. However, these results are quite limited in terms of the complexity of the networks that are considered. In this paper, by introducing an alternative probabilistic model for evolution along a network that is based on some ground-breaking work by Thatte for pedigrees, we are able to obtain an identifiability result for a much larger class of phylogenetic networks (essentially the class of so-called tree-child networks). To prove our main theorem, we derive some new results for identifying tree-child networks combinatorially, and then adapt some techniques developed by Thatte for pedigrees to show that our combinatorial results imply identifiability in the probabilistic setting. We hope that the introduction of our new model for networks could lead to new approaches to reliably construct phylogenetic networks.

  3. Constraining the interacting dark energy models from weak gravity conjecture and recent observations

    International Nuclear Information System (INIS)

    Chen Ximing; Wang Bin; Pan Nana; Gong Yungui

    2011-01-01

    We examine the effectiveness of the weak gravity conjecture in constraining dark energy by comparing with observations. For general dark energy models with plausible phenomenological interactions between dark sectors, we find that although the weak gravity conjecture can constrain dark energy, the constraint is looser than that from the observations.

  4. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
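
    A minimal sketch of the conjunctive DINA item response function under discussion (illustrative only; the paper's compensatory equivalency construction is not reproduced here):

```python
import numpy as np

# DINA: respondents answer correctly with prob. (1 - slip) if they master
# all skills required by the item's Q-matrix row, and with prob. guess otherwise.
def dina_prob(alpha, q, slip, guess):
    eta = np.all(alpha >= q, axis=-1)  # conjunctive "ideal response"
    return np.where(eta, 1.0 - slip, guess)

alpha = np.array([[1, 1, 0],    # respondent 1 masters skills 1 and 2
                  [0, 1, 1]])   # respondent 2 lacks required skill 1
q = np.array([1, 1, 0])         # item requires skills 1 and 2
print(dina_prob(alpha, q, slip=0.1, guess=0.2))  # -> [0.9 0.2]
```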

  5. CP properties of symmetry-constrained two-Higgs-doublet models

    CERN Document Server

    Ferreira, P M; Nachtmann, O; Silva, Joao P

    2010-01-01

    The two-Higgs-doublet model can be constrained by imposing Higgs-family symmetries and/or generalized CP symmetries. It is known that there are only six independent classes of such symmetry-constrained models. We study the CP properties of all cases in the bilinear formalism. An exact symmetry implies CP conservation. We show that soft breaking of the symmetry can lead to spontaneous CP violation (CPV) in three of the classes.

  6. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian [Univ. of Macau, Macau (China)

    2012-10-15

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  7. Modeling and analysis of rotating plates by using self sensing active constrained layer damping

    International Nuclear Information System (INIS)

    Xie, Zheng Chao; Wong, Pak Kin; Chong, Ian Ian

    2012-01-01

    This paper proposes a new finite element model for an active constrained layer damped (CLD) rotating plate with a self-sensing technique. Constrained layer damping can effectively reduce vibration in rotating structures. Unfortunately, most existing research models rotating structures as beams, which is often not the case. It is meaningful to model the rotating part as a plate because of improvements in both accuracy and versatility. At the same time, existing research shows that active constrained layer damping provides a more effective vibration control approach than passive constrained layer damping. Thus, in this work, a single-layer finite element is adopted to model a three-layer active constrained layer damped rotating plate. Unlike previous models, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Also, the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and the actuator. Then, a proportional control strategy is implemented to effectively control the displacement of the tip end of the rotating plate. Additionally, a parametric study is conducted to explore the impact of some design parameters on the structure's modal characteristics.

  8. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying the information content of different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation–soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by only NEE or biometric data was probably attributable to either the lack of long-term C dynamic data or errors from measurements. Overall, our results suggest that flux- and biometric-based data, containing information on different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also…

  9. Top ten models constrained by b → sγ

    Energy Technology Data Exchange (ETDEWEB)

    Hewett, J.L. [Stanford Univ., CA (United States)

    1994-12-01

    The radiative decay b → sγ is examined in the Standard Model and in nine classes of models which contain physics beyond the Standard Model. The constraints which may be placed on these models from the recent results of the CLEO Collaboration on both inclusive and exclusive radiative B decays are summarized. Reasonable bounds are found for the parameters in some cases.

  10. A new dynamic null model for phylogenetic community structure

    NARCIS (Netherlands)

    Pigot, Alex L; Etienne, Rampal S

    Phylogenies are increasingly applied to identify the mechanisms structuring ecological communities but progress has been hindered by a reliance on statistical null models that ignore the historical process of community assembly. Here, we address this, and develop a dynamic null model of assembly by…

  11. A Local Search Modeling for Constrained Optimum Paths Problems (Extended Abstract

    Directory of Open Access Journals (Sweden)

    Quang Dung Pham

    2009-10-01

    Constrained Optimum Path (COP) problems appear in many real-life applications, especially in communication networks. Some of these problems have been considered and solved by specific techniques, which are usually difficult to extend. In this paper, we introduce a novel local search modeling for solving some COPs by local search. The modeling features compositionality, modularity and reuse, and strengthens the benefits of Constraint-Based Local Search. We also apply the modeling to the edge-disjoint paths problem (EDP). We show that side constraints can easily be added in the model. Computational results show the significance of the approach.

  12. A constrained rasch model of trace redintegration in serial recall.

    Science.gov (United States)

    Roodenrys, Steven; Miller, Leonie M

    2008-04-01

    The notion that verbal short-term memory tasks, such as serial recall, make use of information in long-term as well as in short-term memory is instantiated in many models of these tasks. Such models incorporate a process in which degraded traces retrieved from a short-term store are reconstructed, or redintegrated (Schweickert, 1993), through the use of information in long-term memory. This article presents a conceptual and mathematical model of this process based on a class of item-response theory models. It is demonstrated that this model provides a better fit to three sets of data than does the multinomial processing tree model of redintegration (Schweickert, 1993) and that a number of conceptual accounts of serial recall can be related to the parameters of the model.

  13. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Directory of Open Access Journals (Sweden)

    Jan Hasenauer

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.

  14. ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.

    Science.gov (United States)

    Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J

    2014-07-01

    Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstructed static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity.
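
    A minimal sketch of the ODE constrained mixture idea, assuming an invented two-subpopulation decay model rather than the paper's Erk1/2 pathway: the mixture likelihood couples a mechanistic ODE solution to subpopulation weights.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy model: two subpopulations share dy/dt = -k*y but differ in the rate k;
# snapshot data are a weighted mixture of the ODE solutions plus Gaussian noise.
t = np.linspace(0.0, 5.0, 20)

def traj(k):
    return odeint(lambda y, _t: -k * y, 1.0, t).ravel()

# Simulate 200 cells: 70% with k=0.5, 30% with k=2.0, noise sd 0.05
data = np.where(rng.uniform(size=(200, 1)) < 0.7, traj(0.5), traj(2.0))
data = data + rng.normal(0.0, 0.05, size=data.shape)

def neg_log_lik(w, k1, k2, sd):
    l1 = norm.pdf(data, traj(k1), sd).prod(axis=1)   # per-cell likelihoods
    l2 = norm.pdf(data, traj(k2), sd).prod(axis=1)
    return -np.sum(np.log(w * l1 + (1.0 - w) * l2))  # mixture log-likelihood

print(neg_log_lik(0.7, 0.5, 2.0, 0.05))  # value at the data-generating parameters
```

    Minimizing this objective over (w, k1, k2, sd) recovers both the subpopulation weights and the kinetic rates, which is the essence of the approach described above.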

  15. Linear programming model to construct phylogenetic network for 16S rRNA sequences of photosynthetic organisms and influenza viruses.

    Science.gov (United States)

    Mathur, Rinku; Adlakha, Neeru

    2014-06-01

    Phylogenetic trees give the information about the vertical relationships of ancestors and descendants but phylogenetic networks are used to visualize the horizontal relationships among the different organisms. In order to predict reticulate events there is a need to construct phylogenetic networks. Here, a Linear Programming (LP) model has been developed for the construction of phylogenetic network. The model is validated by using data sets of chloroplast of 16S rRNA sequences of photosynthetic organisms and Influenza A/H5N1 viruses. Results obtained are in agreement with those obtained by earlier researchers.

  16. Model checking software for phylogenetic trees using distribution and database methods

    Directory of Open Access Journals (Sweden)

    Requeno José Ignacio

    2013-12-01

    Model checking, a generic and formal paradigm stemming from computer science and based on temporal logics, has been proposed for the study of biological properties that emerge from the labeling of the states defined over the phylogenetic tree. This strategy allows us to use generic software tools already present in the industry. However, the performance of traditional model checking is penalized when scaling the system for large phylogenies. To this end, two strategies are presented here. The first one consists of partitioning the phylogenetic tree into a set of subgraphs, each one representing a subproblem to be verified, so as to speed up the computation time and distribute the memory consumption. The second strategy is based on uncoupling the information associated with each state of the phylogenetic tree (mainly, the DNA sequence) and exporting it to an external tool for the management of large information systems. The integration of all these approaches outperforms the results of monolithic model checking and helps us to execute the verification of properties in a real phylogenetic tree.

  17. Constraining new physics with collider measurements of Standard Model signatures

    Energy Technology Data Exchange (ETDEWEB)

    Butterworth, Jonathan M. [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom); Grellscheid, David [IPPP, Department of Physics, Durham University,Durham, DH1 3LE (United Kingdom); Krämer, Michael; Sarrazin, Björn [Institute for Theoretical Particle Physics and Cosmology, RWTH Aachen University,Sommerfeldstr. 16, 52056 Aachen (Germany); Yallup, David [Department of Physics and Astronomy, University College London,Gower St., London, WC1E 6BT (United Kingdom)

    2017-03-14

    A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented. The method, ‘Constraints On New Theories Using Rivet’, CONTUR, exploits the fact that particle-level differential measurements made in fiducial regions of phase-space have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. The CONTUR approach should be seen as complementary to the discovery potential of direct searches, being designed to eliminate inconsistent BSM proposals in a context where many (but perhaps not all) measurements are consistent with the Standard Model. We demonstrate, using a competitive simplified dark matter model, the power of this approach. The CONTUR method is highly scalable to other models and future measurements.

  18. Inference with constrained hidden Markov models in PRISM

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2010-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we show how HMMs can be extended with side-constraints and present constraint solving techniques for efficient inference. De…_different are integrated. We experimentally validate our approach on the biologically motivated problem of global pairwise alignment.

  19. Constraining Stochastic Parametrisation Schemes Using High-Resolution Model Simulations

    Science.gov (United States)

    Christensen, H. M.; Dawson, A.; Palmer, T.

    2017-12-01

    Stochastic parametrisations are used in weather and climate models as a physically motivated way to represent model error due to unresolved processes. Designing new stochastic schemes has been the target of much innovative research over the last decade. While a focus has been on developing physically motivated approaches, many successful stochastic parametrisation schemes are very simple, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) multiplicative scheme `Stochastically Perturbed Parametrisation Tendencies' (SPPT). The SPPT scheme improves the skill of probabilistic weather and seasonal forecasts, and so is widely used. However, little work has focused on assessing the physical basis of the SPPT scheme. We address this matter by using high-resolution model simulations to explicitly measure the `error' in the parametrised tendency that SPPT seeks to represent. The high resolution simulations are first coarse-grained to the desired forecast model resolution before they are used to produce initial conditions and forcing data needed to drive the ECMWF Single Column Model (SCM). By comparing SCM forecast tendencies with the evolution of the high resolution model, we can measure the `error' in the forecast tendencies. In this way, we provide justification for the multiplicative nature of SPPT, and for the temporal and spatial scales of the stochastic perturbations. However, we also identify issues with the SPPT scheme. It is therefore hoped these measurements will improve both holistic and process based approaches to stochastic parametrisation. Figure caption: Instantaneous snapshot of the optimal SPPT stochastic perturbation, derived by comparing high-resolution simulations with a low resolution forecast model.

  20. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Rust, John; Schjerning, Bertel

    2015-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). They used an inefficient version of the nested fixed point algorithm that relies on successive approximations…

  1. Constrained Optimization Approaches to Estimation of Structural Models

    DEFF Research Database (Denmark)

    Iskhakov, Fedor; Jinhyuk, Lee; Rust, John

    2016-01-01

    We revisit the comparison of mathematical programming with equilibrium constraints (MPEC) and nested fixed point (NFXP) algorithms for estimating structural dynamic models by Su and Judd (SJ, 2012). Their implementation of the nested fixed point algorithm used successive approximations to solve t...

  2. Modeling Power-Constrained Optimal Backlight Dimming for Color Displays

    DEFF Research Database (Denmark)

    Burini, Nino; Nadernejad, Ehsan; Korhonen, Jari

    2013-01-01

    In this paper, we present a framework for modeling color liquid crystal displays (LCDs) having local light-emitting diode (LED) backlight with dimming capability. The proposed framework includes critical aspects like leakage, clipping, light diffusion and human perception of luminance and allows...

  3. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  4. A marked correlation function for constraining modified gravity models

    Energy Technology Data Exchange (ETDEWEB)

    White, Martin, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
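
    The statistic is indeed easy to compute; the brute-force sketch below uses mock positions and overdensities, and a simple density-dependent mark of the general kind discussed (not necessarily the paper's exact choice). For marks uncorrelated with position, M(r) should scatter around 1; screening would imprint a scale dependence.

```python
import numpy as np

rng = np.random.default_rng(3)

pos = rng.uniform(0.0, 100.0, size=(500, 3))   # mock tracer positions
delta = rng.normal(0.0, 0.5, size=500)         # mock local overdensity at each tracer

# Density-dependent mark: down-weights objects in dense (screened) regions
marks = (1.0 + 0.5) / (1.0 + 0.5 + np.clip(delta, -0.45, None))

i, j = np.triu_indices(len(pos), k=1)
d = np.linalg.norm(pos[i] - pos[j], axis=1)    # pair separations
w = marks[i] * marks[j]                        # pair mark products

bins = np.linspace(2.0, 20.0, 10)
idx = np.digitize(d, bins)
mbar2 = marks.mean() ** 2
for b in range(1, len(bins)):
    sel = idx == b
    if sel.any():                              # marked correlation M(r) = <w>_pairs / mbar^2
        print(f"r ~ {0.5 * (bins[b - 1] + bins[b]):5.1f}: M = {w[sel].mean() / mbar2:.3f}")
```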

  5. Uncovering the Best Skill Multimap by Constraining the Error Probabilities of the Gain-Loss Model

    Science.gov (United States)

    Anselmi, Pasquale; Robusto, Egidio; Stefanutti, Luca

    2012-01-01

    The Gain-Loss model is a probabilistic skill multimap model for assessing learning processes. In practical applications, more than one skill multimap could be plausible, while none corresponds to the true one. The article investigates whether constraining the error probabilities is a way of uncovering the best skill assignment among a number of…

  6. PALM: a paralleled and integrated framework for phylogenetic inference with automatic likelihood model selectors.

    Directory of Open Access Journals (Sweden)

    Shu-Hwa Chen

    BACKGROUND: Selecting an appropriate substitution model and deriving a tree topology for a given sequence set are essential in phylogenetic analysis. However, such time-consuming, computationally intensive tasks rely on knowledge of substitution model theories and related expertise to run through all possible combinations of several separate programs. To ensure a thorough and efficient analysis and avert tedious manipulations of various programs, this work presents an intuitive framework, the phylogenetic reconstruction with automatic likelihood model selectors (PALM), with convincing, updated algorithms and a best-fit model selection mechanism for seamless phylogenetic analysis. METHODOLOGY: As an integrated framework of ClustalW, PhyML, MODELTEST, ProtTest, and several in-house programs, PALM evaluates the fitness of 56 substitution models for nucleotide sequences and 112 substitution models for protein sequences with scores in various criteria. The input for PALM can be either sequences in FASTA format or a sequence alignment file in PHYLIP format. To accelerate the computing of maximum likelihood and bootstrapping, this work integrates MPICH2/PhyML, PalmMonitor and the Palm job controller across several machines with multiple processors and adopts a task parallelism approach. Moreover, an intuitive and interactive web component, PalmTree, is developed for displaying and operating the output tree, with options for tree rooting, branch swapping, viewing branch length values and bootstrapping scores, as well as removing nodes to restart the analysis iteratively. SIGNIFICANCE: The workflow of PALM is straightforward and coherent. Via a succinct, user-friendly interface, researchers unfamiliar with phylogenetic analysis can easily use this server to submit sequences, retrieve the output, and re-submit a job based on a previous result if some sequences are to be deleted or added for phylogenetic reconstruction. PALM results in an inference of…

  7. Risk reserve constrained economic dispatch model with wind power penetration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, W.; Sun, H.; Peng, Y. [Department of Electrical and Electronics Engineering, Dalian University of Technology, Dalian, 116024 (China)

    2010-12-15

    This paper develops a modified economic dispatch (ED) optimization model with wind power penetration. Due to the uncertain nature of wind speed, both overestimation and underestimation of the available wind power are compensated using the up and down spinning reserves. In order to determine both of these two reserve demands, the risk-based up and down spinning reserve constraints are presented considering not only the uncertainty of available wind power, but also the load forecast error and generator outage rates. The predictor-corrector primal-dual interior point method is utilized to solve the proposed ED model. Simulation results of a system with ten conventional generators and one wind farm demonstrate the effectiveness of the proposed method. (authors)

  8. Constraining quantum collapse inflationary models with CMB data

    Energy Technology Data Exchange (ETDEWEB)

    Benetti, Micol; Alcaniz, Jailson S. [Departamento de Astronomia, Observatório Nacional, 20921-400, Rio de Janeiro, RJ (Brazil); Landau, Susana J., E-mail: micolbenetti@on.br, E-mail: slandau@df.uba.ar, E-mail: alcaniz@on.br [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires and IFIBA, CONICET, Ciudad Universitaria, PabI, Buenos Aires 1428 (Argentina)

    2016-12-01

    The hypothesis of the self-induced collapse of the inflaton wave function was proposed as responsible for the emergence of inhomogeneity and anisotropy at all scales. This proposal was studied within an almost de Sitter space-time approximation for the background, which led to a perfect scale-invariant power spectrum, and also for a quasi-de Sitter background, which allows to distinguish departures from the standard approach due to the inclusion of the collapse hypothesis. In this work we perform a Bayesian model comparison for two different choices of the self-induced collapse in a full quasi-de Sitter expansion scenario. In particular, we analyze the possibility of detecting the imprint of these collapse schemes at low multipoles of the anisotropy temperature power spectrum of the Cosmic Microwave Background (CMB) using the most recent data provided by the Planck Collaboration. Our results show that one of the two collapse schemes analyzed provides the same Bayesian evidence of the minimal standard cosmological model ΛCDM, while the other scenario is weakly disfavoured with respect to the standard cosmology.

  9. A Constrained and Versioned Data Model for TEAM Data

    Science.gov (United States)

    Andelman, S.; Baru, C.; Chandra, S.; Fegraus, E.; Lin, K.

    2009-04-01

    The objective of the Tropical Ecology Assessment and Monitoring Network (www.teamnetwork.org) is "To generate real time data for monitoring long-term trends in tropical biodiversity through a global network of TEAM sites (i.e. field stations in tropical forests), providing an early warning system on the status of biodiversity to effectively guide conservation action". To achieve this, the TEAM Network operates by collecting data via standardized protocols at TEAM Sites. The standardized TEAM protocols include the Climate, Vegetation and Terrestrial Vertebrate Protocols. Some sites also implement additional protocols. There are currently 7 TEAM Sites with plans to grow the network to 15 by June 30, 2009 and 50 TEAM Sites by the end of 2010. At each TEAM Site, data is gathered as defined by the protocols and according to a predefined sampling schedule. The TEAM data is organized and stored in a database based on the TEAM spatio-temporal data model. This data model is at the core of the TEAM Information System - it consumes and executes spatio-temporal queries, and analytical functions that are performed on TEAM data, and defines the object data types, relationships and operations that maintain database integrity. The TEAM data model contains object types including types for observation objects (e.g. bird, butterfly and trees), sampling unit, person, role, protocol, site and the relationship of these object types. Each observation data record is a set of attribute values of an observation object and is always associated with a sampling unit, an observation timestamp or time interval, a versioned protocol and data collectors. The operations on the TEAM data model can be classified as read operations, insert operations and update operations. Following are some typical operations: The operation get(site, protocol, [sampling unit block, sampling unit,] start time, end time) returns all data records using the specified protocol and collected at the specified site, block…
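
    A hypothetical Python rendering of the get operation described above (names and fields are illustrative assumptions, not the actual TEAM schema):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ObservationRecord:
    site: str
    protocol: str
    protocol_version: int       # the data model versions its protocols
    sampling_unit: str
    observed_at: datetime
    attributes: dict            # attribute values of the observation object

def get(records: List[ObservationRecord], site: str, protocol: str,
        start: datetime, end: datetime,
        sampling_unit: Optional[str] = None) -> List[ObservationRecord]:
    """Return all records for a site/protocol within [start, end],
    optionally restricted to one sampling unit."""
    return [r for r in records
            if r.site == site and r.protocol == protocol
            and start <= r.observed_at <= end
            and (sampling_unit is None or r.sampling_unit == sampling_unit)]
```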

  10. An Experimental Comparison of Similarity Assessment Measures for 3D Models on Constrained Surface Deformation

    Science.gov (United States)

    Quan, Lulin; Yang, Zhixin

    2010-05-01

    To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation, and reports an experimental performance comparison of three prevalent and effective similarity assessment algorithms on the constrained surface deformation domain. Constrained surface deformation has become a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design, via measuring the difference level between the deformed new design and the initial sample model and indicating whether the difference level is within the limitation. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U system moment based method. We analyze their basic functions and implementation methodologies in detail, and run a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method gains the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. The limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.

  11. A Constrained Standard Model: Effects of Fayet-Iliopoulos Terms

    International Nuclear Information System (INIS)

    Barbieri, Riccardo; Hall, Lawrence J.; Nomura, Yasunori

    2001-01-01

    In (1) the one-Higgs-doublet standard model was obtained by an orbifold projection of a 5D supersymmetric theory in an essentially unique way, resulting in a prediction for the Higgs mass m_H = 127 ± 8 GeV and for the compactification scale 1/R = 370 ± 70 GeV. The dominant one-loop contribution to the Higgs potential was found to be finite, while the above uncertainties arose from quadratically divergent brane Z factors and from other higher-loop contributions. In (3), a quadratically divergent Fayet-Iliopoulos term was found at one loop in this theory. We show that the resulting uncertainties in the predictions for the Higgs boson mass and the compactification scale are small, about 25 percent of the uncertainties quoted above, and hence do not affect the original predictions. However, a tree-level brane Fayet-Iliopoulos term could, if large enough, modify these predictions, especially for 1/R.

  12. Phylogenetic turnover during subtropical forest succession across environmental and phylogenetic scales.

    Science.gov (United States)

    Purschke, Oliver; Michalski, Stefan G; Bruelheide, Helge; Durka, Walter

    2017-12-01

    Although spatial and temporal patterns of phylogenetic community structure during succession are inherently interlinked and assembly processes vary with environmental and phylogenetic scales, successional studies of community assembly have yet to integrate spatial and temporal components of community structure, while accounting for scaling issues. To gain insight into the processes that generate biodiversity after disturbance, we combine analyses of spatial and temporal phylogenetic turnover across phylogenetic scales, accounting for covariation with environmental differences. We compared phylogenetic turnover, at the species- and individual-level, within and between five successional stages, representing woody plant communities in a subtropical forest chronosequence. We decomposed turnover at different phylogenetic depths and assessed its covariation with between-plot abiotic differences. Phylogenetic turnover between stages was low relative to species turnover and was not explained by abiotic differences. However, within the late-successional stages, there was high presence-/absence-based turnover (clustering) that occurred deep in the phylogeny and covaried with environmental differentiation. Our results support a deterministic model of community assembly where (i) phylogenetic composition is constrained through successional time, but (ii) toward late succession, species sorting into preferred habitats according to niche traits that are conserved deep in phylogeny, becomes increasingly important.

  13. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Science.gov (United States)

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one of the proposed principles and is subject of this study. It states that a steady state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  14. Improved Modeling Approaches for Constrained Sintering of Bi-Layered Porous Structures

    DEFF Research Database (Denmark)

    Tadesse Molla, Tesfaye; Frandsen, Henrik Lund; Esposito, Vincenzo

    2012-01-01

    Shape instabilities during constrained sintering experiments of bi-layer porous and dense cerium gadolinium oxide (CGO) structures have been analyzed. An analytical and a numerical model based on the continuum theory of sintering have been implemented to describe the evolution of bow and densification…

  15. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We pursue the scope of analyzing our model under the worst-case scenario, i.e. loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial perspective. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95% irrespective of changes in the future market value of assets.

  16. Rooting phylogenetic trees under the coalescent model using site pattern probabilities.

    Science.gov (United States)

    Tian, Yuan; Kubatko, Laura

    2017-12-19

    Phylogenetic tree inference is a fundamental tool for estimating ancestor-descendant relationships among different species. In phylogenetic studies, identification of the root - the most recent common ancestor of all sampled organisms - is essential for a complete understanding of evolutionary relationships. Rooted trees benefit most downstream applications of phylogenies, such as species classification or the study of adaptation. Often, trees can be rooted by using outgroups, which are species known to be more distantly related to the sampled organisms than any other species in the phylogeny. However, outgroups are not always available in evolutionary research. In this study, we develop a new method for rooting species trees under the coalescent model by constructing a series of hypothesis tests for rooting quartet phylogenies using site pattern probabilities. The power of this method is examined by simulation studies and by application to an empirical North American rattlesnake data set. The method shows high accuracy across the simulation conditions considered and performs well for the rattlesnake data. Thus, it provides a computationally efficient way to accurately root species-level phylogenies while incorporating the coalescent process. The method is robust to variation in the substitution model but is sensitive to the assumption of a molecular clock. Our study establishes a computationally practical method for rooting species trees that is more efficient than traditional methods. The method will benefit numerous evolutionary studies that require rooting a phylogenetic tree without having to specify outgroups.
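    A minimal sketch of the raw ingredient such tests are built on - site-pattern frequencies for a quartet alignment - with invented sequences; the paper's actual hypothesis tests operate on these counts under the coalescent model:

```python
from collections import Counter

# Hypothetical aligned sequences for a quartet of taxa
alignment = {
    "taxon1": "ACGTACGTAA",
    "taxon2": "ACGTACGTAA",
    "taxon3": "ACGAACGTTA",
    "taxon4": "ACGAATGTTA",
}

# A site pattern is the column of states across taxa, e.g. ('A','A','A','A')
taxa = sorted(alignment)
columns = zip(*(alignment[t] for t in taxa))
pattern_counts = Counter(columns)

n_sites = len(next(iter(alignment.values())))
for pattern, count in pattern_counts.most_common():
    print("".join(pattern), count / n_sites)  # observed pattern frequencies
```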

  17. The algebra of the general Markov model on phylogenetic trees and networks.

    Science.gov (United States)

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges, and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two-state case and then show that our results extend to more states with little complication. Intriguingly, upon restriction of the two-state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced, the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages, a property that is of significant interest for biological applications.
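    For concreteness, a sketch of the tree case that the paper generalizes: a two-state continuous-time Markov chain on a rooted three-taxon tree ((1,2),3), where the split duplicates the state and pattern probabilities are obtained by marginalizing over internal nodes (rates and branch lengths are assumed values):

```python
import numpy as np
from scipy.linalg import expm

# Two-state rate matrix (rows sum to zero); alpha, beta are assumed rates
alpha, beta = 0.4, 0.6
Q = np.array([[-alpha, alpha],
              [beta, -beta]])

pi = np.array([0.5, 0.5])               # assumed root distribution
M = {e: expm(Q * t) for e, t in         # edge transition matrices P(t) = e^{Qt}
     {"int": 0.3, "1": 0.2, "2": 0.5, "3": 0.7}.items()}

def pattern_prob(x1, x2, x3):
    """P(x1,x2,x3) on ((1,2),3): sum over root state a and internal state b.
    The splitting operator duplicates the state, so both child edges of a
    split start from the same state."""
    return sum(pi[a] * M["3"][a, x3] * M["int"][a, b] * M["1"][b, x1] * M["2"][b, x2]
               for a in range(2) for b in range(2))

total = sum(pattern_prob(i, j, k) for i in range(2) for j in range(2) for k in range(2))
print(f"total probability = {total:.6f}")  # sanity check: sums to 1
```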

  18. Criticisms and defences of the balance-of-payments constrained growth model: some old, some new

    Directory of Open Access Journals (Sweden)

    John S.L. McCombie

    2011-12-01

    Full Text Available This paper assesses various critiques that have been levelled over the years against Thirlwall's Law and the balance-of-payments constrained growth model. It starts by assessing the criticisms that the law largely captures an identity; that the law of one price renders the model incoherent; and that statistical testing using cross-country data rejects the hypothesis that the actual and the balance-of-payments equilibrium growth rates are the same. It goes on to consider the argument that calculations of the “constant-market-shares” income elasticities of demand for exports demonstrate that the UK (and by implication other advanced countries) could not have been balance-of-payments constrained in the early postwar period. Next, Krugman's interpretation of the law (or what he terms the “45-degree rule”), which is at variance with the usual demand-oriented explanation, is examined. The paper then assesses attempts to reconcile the demand and supply sides of the model and examines whether or not the balance-of-payments constrained growth model is subject to the fallacy of composition. It concludes that none of these criticisms invalidate the model, which remains a powerful explanation of why growth rates differ.
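    As a worked illustration with invented numbers, Thirlwall's Law gives the balance-of-payments constrained growth rate as y_B = ε·z/π, i.e., export growth (world income growth z times the income elasticity of exports ε) divided by the income elasticity of imports π:

```python
def thirlwall_growth(eps_exports, z_world_growth, pi_imports):
    """Balance-of-payments constrained growth rate: y_B = (eps * z) / pi."""
    return eps_exports * z_world_growth / pi_imports

# Hypothetical economy: export elasticity 1.2, world growth 3%/yr, import elasticity 1.8
print(f"y_B = {thirlwall_growth(1.2, 0.03, 1.8):.3%} per year")  # 2.000% per year
```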

  19. Inexact nonlinear improved fuzzy chance-constrained programming model for irrigation water management under uncertainty

    Science.gov (United States)

    Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping

    2018-01-01

    An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), the mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties presented as discrete intervals in the objective function, variables and left-hand-side constraints, and with fuzziness in the right-hand-side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of the possibility measure and the necessity measure with varying preference parameters. To demonstrate its applicability, the model is applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions over the whole growth period under uncertainty. As a result, more flexible solutions can be generated for optimal irrigation water allocation. The variation of the results can be examined by specifying different confidence levels and preference parameters, and the model can reflect the interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. A comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical applications. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.
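    A hedged sketch of the fuzzy chance-constraint mechanism described above, taking the mλ-measure as the stated linear combination λ·Pos + (1−λ)·Nec for a triangular fuzzy right-hand side and finding the deterministic equivalent by bisection (parameter values are invented; the paper's full model adds interval and quadratic terms):

```python
def pos_geq(b1, b2, b3, r):
    """Possibility that triangular fuzzy b = (b1,b2,b3) is >= r."""
    if r <= b2:
        return 1.0
    if r >= b3:
        return 0.0
    return (b3 - r) / (b3 - b2)

def nec_geq(b1, b2, b3, r):
    """Necessity that triangular fuzzy b = (b1,b2,b3) is >= r."""
    if r <= b1:
        return 1.0
    if r >= b2:
        return 0.0
    return (b2 - r) / (b2 - b1)

def rhs_equivalent(b, lam, alpha, iters=60):
    """Largest r with lam*Pos + (1-lam)*Nec >= alpha, found by bisection."""
    b1, b2, b3 = b
    m = lambda r: lam * pos_geq(b1, b2, b3, r) + (1 - lam) * nec_geq(b1, b2, b3, r)
    lo, hi = b1, b3                      # m is nonincreasing in r, m(b1) = 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if m(mid) >= alpha else (lo, mid)
    return lo

# Hypothetical water-availability RHS (million m^3), preference lam, confidence alpha
print(rhs_equivalent(b=(80, 100, 130), lam=0.6, alpha=0.9))  # ~85.0
```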

  20. Modelling and Vibration Control of Beams with Partially Debonded Active Constrained Layer Damping Patch

    Science.gov (United States)

    SUN, D.; TONG, L.

    2002-05-01

    A detailed model for beams with partially debonded active constrained layer damping (ACLD) treatment is presented. In this model, the transverse displacement of the constraining layer is considered to be non-identical to that of the host structure. In the perfectly bonded region, the viscoelastic core is modelled to carry both peel and shear stresses, while in the debonded area it is assumed that no peel or shear stresses are transferred between the host beam and the constraining layer. The adhesive layer between the piezoelectric sensor and the host beam is also considered in this model. In active control, positive position feedback control is employed to control the first mode of the beam. Based on this model, the incompatibility of the transverse displacements of the active constraining layer and the host beam is investigated. The passive and active damping behaviours of the ACLD patch with different thicknesses, locations and lengths are examined. Moreover, the effects of debonding of the damping layer on both passive and active control are examined via a simulation example. The results show that the incompatibility of the transverse displacements is remarkable in the regions near the ends of the ACLD patch, especially for the higher-order vibration modes. It is found that a thinner damping layer may lead to a larger shear strain and consequently results in larger passive and active damping. In addition to the thickness of the damping layer, its length and location are also key factors in the hybrid control. The numerical results reveal that edge debonding can lead to a reduction of both passive and active damping, and that the hybrid damping may be more sensitive to debonding of the damping layer than the passive damping.

  1. Characterizing and modeling the free recovery and constrained recovery behavior of a polyurethane shape memory polymer

    International Nuclear Information System (INIS)

    Volk, Brent L; Lagoudas, Dimitris C; Maitland, Duncan J

    2011-01-01

    In this work, tensile tests and one-dimensional constitutive modeling were performed on a high recovery force polyurethane shape memory polymer that is being considered for biomedical applications. The tensile tests investigated the free recovery (zero load) response as well as the constrained displacement recovery (stress recovery) response at extension values up to 25%, and two consecutive cycles were performed during each test. The material was observed to recover 100% of the applied deformation when heated at zero load in the second thermomechanical cycle, and a stress recovery of 1.5–4.2 MPa was observed for the constrained displacement recovery experiments. After the experiments were performed, the Chen and Lagoudas model was used to simulate and predict the experimental results. The material properties used in the constitutive model—namely the coefficients of thermal expansion, shear moduli, and frozen volume fraction—were calibrated from a single 10% extension free recovery experiment. The model was then used to predict the material response for the remaining free recovery and constrained displacement recovery experiments. The model predictions match well with the experimental data

  2. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p',p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p'/p$ and the level $l$.
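    For orientation, the Miwa parametrization underlying this transform is commonly written as follows (a standard reference form; the paper's exact normalization of the Miwa parameter may differ):

$$ t_r = \frac{1}{r} \sum_i n_i z_i^{-r}, \qquad r \ge 1, $$

    so that constraints on the tau function in the times $t_r$ become differential relations in the points $z_i$, which is how decoupling equations for Virasoro null vectors can appear.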

  3. Can climate variability information constrain a hydrological model for an ungauged Costa Rican catchment?

    Science.gov (United States)

    Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven

    2017-04-01

    Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information - beyond locally observed discharge - can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, was shown to be useful for constraining model uncertainty in ungauged catchments.

  4. Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models

    Science.gov (United States)

    Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.

    2017-06-01

    The recent low value of the Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we extend our model from an instantaneous treatment to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Markov chain Monte Carlo approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for the Low Frequency Array, the Hydrogen Epoch of Reionization Array and the Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and the ionizing amplitude, but combining the mock 21 cm data with other current observations enables us to separately constrain all of these parameters. Our framework illustrates how future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
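    A self-contained toy sketch of the MCMC calibration step: a random-walk Metropolis sampler constraining one parameter of an invented stand-in model against a single mock observation (the real analysis constrains several parameters of a semi-numerical code against SFR functions, optical depth and emissivity data):

```python
import numpy as np

rng = np.random.default_rng(1)

def model_tau(f_esc):
    """Toy stand-in for the simulator: optical depth as a function of escape fraction."""
    return 0.3 * f_esc + 0.04

obs, sigma = 0.055, 0.009          # hypothetical observed tau and its uncertainty

def log_like(f_esc):
    if not 0.0 < f_esc < 1.0:      # flat prior on (0, 1)
        return -np.inf
    return -0.5 * ((model_tau(f_esc) - obs) / sigma) ** 2

# Random-walk Metropolis sampler
chain, current = [], 0.5
ll = log_like(current)
for _ in range(20000):
    prop = current + 0.05 * rng.standard_normal()
    ll_prop = log_like(prop)
    if np.log(rng.random()) < ll_prop - ll:   # accept with prob min(1, ratio)
        current, ll = prop, ll_prop
    chain.append(current)

post = np.array(chain[5000:])      # discard burn-in
print(f"f_esc = {post.mean():.3f} +/- {post.std():.3f}")
```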

  5. Network-constrained Cournot models of liberalized electricity markets: the devil is in the details

    International Nuclear Information System (INIS)

    Neuhoff, Karsten; Barquin, Julian; Vazquez, Miguel; Boots, Maroeska; Rijkers, Fieke A.M.; Ehrenmann, Andreas; Hobbs, Benjamin F.

    2005-01-01

    Numerical models of transmission-constrained electricity markets are used to inform regulatory decisions. How robust are their results? Three research groups used the same data set for the northwest Europe power market as input for their models. Under competitive conditions, the results coincide, but in the Cournot case, the predicted prices differed significantly. The Cournot equilibria are highly sensitive to assumptions about market design (whether timing of generation and transmission decisions is sequential or integrated) and expectations of generators regarding how their decisions affect transmission prices and fringe generation. These sensitivities are qualitatively similar to those predicted by a simple two-node model. (Author)

  8. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    Science.gov (United States)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is more and more practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and suffers from frequent updates and privacy problems. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information. They are cheaper, require fewer updates and intrude less on privacy. They detect the id of an object and the time when the moving object passes a node of the network. They do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied in this paper. The processing includes four steps: spatial filtering, spatial refinement, temporal filtering and probability calculation. Finally, some experiments are conducted on simulated data. In the experiments the performance of the index is studied. The precision and recall of the result set are defined, and how the query arguments affect the precision and recall of the result set is also discussed.
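    A hedged sketch of the per-edge uncertainty idea: between two readers the object is unobserved, so under an assumed uniform speed drawn from a known range, a time-slice query reduces to a simple probability calculation (all names and numbers are invented):

```python
def prob_in_range(t0, t, v_min, v_max, q1, q2):
    """P(object lies in [q1, q2] metres past the reader at time t), assuming it
    entered the edge at t0 with an unknown speed uniform on [v_min, v_max]."""
    dt = t - t0
    if dt <= 0 or v_max <= v_min:
        return 0.0
    lo = max(v_min, q1 / dt)        # slowest speed that still reaches q1
    hi = min(v_max, q2 / dt)        # fastest speed not yet past q2
    return max(0.0, hi - lo) / (v_max - v_min)

# Object passed a reader 60 s ago; speed between 8 and 14 m/s (hypothetical)
print(prob_in_range(t0=0, t=60, v_min=8, v_max=14, q1=600, q2=720))  # 0.333
```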

  9. Constrained-path quantum Monte Carlo approach for non-yrast states within the shell model

    Energy Technology Data Exchange (ETDEWEB)

    Bonnard, J. [INFN, Sezione di Padova, Padova (Italy); LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France); Juillet, O. [LPC Caen, ENSICAEN, Universite de Caen, CNRS/IN2P3, Caen (France)

    2016-04-15

    The present paper presents an extension of the constrained-path quantum Monte Carlo approach that allows non-yrast states to be reconstructed, in order to reach the complete spectroscopy of nuclei within the interacting shell model. As in the yrast case studied in a previous work, the formalism involves a variational symmetry-restored wave function assuming two central roles. First, it guides the underlying Brownian motion to improve the efficiency of the sampling. Second, it constrains the stochastic paths according to the phaseless approximation to control the sign or phase problems that usually plague fermionic QMC simulations. Proof-of-principle results in the sd valence space are reported. They prove the ability of the scheme to offer remarkably accurate binding energies for both even- and odd-mass nuclei irrespective of the considered interaction. (orig.)

  10. Modeling Dzyaloshinskii-Moriya Interaction at Transition Metal Interfaces: Constrained Moment versus Generalized Bloch Theorem

    KAUST Repository

    Dong, Yao-Jun; Belabbes, Abderrezak; Manchon, Aurelien

    2017-01-01

    Dzyaloshinskii-Moriya interaction (DMI) at Pt/Co interfaces is investigated theoretically using two different first principles methods. The first one uses the constrained moment method to build a spin spiral in real space, while the second method uses the generalized Bloch theorem approach to construct a spin spiral in reciprocal space. We show that although the two methods produce an overall similar total DMI energy, the dependence of DMI as a function of the spin spiral wavelength is dramatically different. We suggest that long-range magnetic interactions, that determine itinerant magnetism in transition metals, are responsible for this discrepancy. We conclude that the generalized Bloch theorem approach is more adapted to model DMI in transition metal systems, where magnetism is delocalized, while the constrained moment approach is mostly applicable to weak or insulating magnets, where magnetism is localized.

  12. Model Predictive Control Based on Kalman Filter for Constrained Hammerstein-Wiener Systems

    Directory of Open Access Journals (Sweden)

    Man Hong

    2013-01-01

    Full Text Available To precisely track the reactor temperature over the entire working range, a constrained Hammerstein-Wiener model describing nonlinear chemical processes, such as the continuous stirred tank reactor (CSTR), is proposed. A predictive control algorithm based on the Kalman filter for constrained Hammerstein-Wiener systems is designed. An output feedback control law for the linear subsystem is derived by state observation. The amount of reaction heat produced and its influence on the output are evaluated by the Kalman filter. The observation and evaluation results are calculated by a multistep predictive approach. Actual control variables are computed by solving the constrained finite-horizon optimal control problem in a receding-horizon fashion. A simulation example of the CSTR tester shows the effectiveness and feasibility of the proposed algorithm.

  13. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
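    A short sketch of the equal-probability discrete gamma construction discussed above, computing the mean rate within each of k quantile bins of a mean-one gamma distribution (the standard mean-per-category discretization; the shape value is illustrative):

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammainc

def discrete_gamma_rates(shape, k):
    """Equal-probability discrete gamma: mean rate in each of k quantile bins
    of a Gamma(shape, scale=1/shape) distribution (overall mean fixed at 1)."""
    # bin boundaries at quantiles 0, 1/k, ..., 1 (last boundary is infinity)
    bounds = gamma.ppf(np.linspace(0, 1, k + 1), shape, scale=1.0 / shape)
    # E[X | bin] via the regularized incomplete gamma with shape + 1
    cum = gammainc(shape + 1, shape * bounds)   # gammainc maps inf -> 1
    return k * np.diff(cum)

rates = discrete_gamma_rates(shape=0.5, k=4)
print(rates, rates.mean())  # category rates; the mean is 1 by construction
```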

  14. Measuring fit of sequence data to phylogenetic model: gain of power using marginal tests.

    Science.gov (United States)

    Waddell, Peter J; Ota, Rissa; Penny, David

    2009-10-01

    Testing the fit of data to model is fundamentally important to any science, but publications in the field of phylogenetics rarely do this. Such analyses discard fundamental aspects of science as prescribed by Karl Popper. Indeed, not without cause, Popper (Unended quest: an intellectual autobiography. Fontana, London, 1976) once argued that evolutionary biology was unscientific as its hypotheses were untestable. Here we trace developments in assessing fit from Penny et al. (Nature 297:197-200, 1982) to the present. We compare the general log-likelihood ratio statistic (the G or G² statistic) between the evolutionary tree model and the multinomial model with that of marginalized tests applied to an alignment (using placental mammal coding sequence data). It is seen that the most general test does not reject the fit of data to model (P ≈ 0.5), but the marginalized tests do. Tests on pairwise frequency (F) matrices strongly (P < 0.001) reject the most general phylogenetic (GTR) models commonly in use. It is also clear (P < 0.01) that the sequences are not stationary in their nucleotide composition. Deviations from stationarity and homogeneity seem to be unevenly distributed amongst taxa, and not necessarily those expected from examining other regions of the genome. By marginalizing the 4^t patterns of the i.i.d. model to observed and expected parsimony counts - that is, from constant sites, to singletons, to parsimony-informative characters of a minimum possible length - the likelihood ratio test regains power, and it too rejects the evolutionary model with P < 0.001. Given such behaviour over relatively recent evolutionary time, readers in general should maintain a healthy skepticism of results, as the scale of the systematic errors in published trees may really be far larger than the analytical methods (e.g., bootstrap) report.

  15. Modeling Dynamic Contrast-Enhanced MRI Data with a Constrained Local AIF

    DEFF Research Database (Denmark)

    Duan, Chong; Kallehauge, Jesper F.; Pérez-Torres, Carlos J

    2018-01-01

    PURPOSE: This study aims to develop a constrained local arterial input function (cL-AIF) to improve quantitative analysis of dynamic contrast-enhanced (DCE)-magnetic resonance imaging (MRI) data by accounting for the contrast-agent bolus amplitude error in the voxel-specific AIF. PROCEDURES....... RESULTS: When the data model included the cL-AIF, tracer kinetic parameters were correctly estimated from in silico data under contrast-to-noise conditions typical of clinical DCE-MRI experiments. Considering the clinical cervical cancer data, Bayesian model selection was performed for all tumor voxels...

  16. Kovacs effect and fluctuation-dissipation relations in 1D kinetically constrained models

    International Nuclear Information System (INIS)

    Buhot, Arnaud

    2003-01-01

    Strong and fragile glass relaxation behaviours are obtained simply by changing the constraints of the kinetically constrained Ising chain from symmetric to purely asymmetric. We study the out-of-equilibrium dynamics of these two models, focusing on the Kovacs effect and the fluctuation-dissipation (FD) relations. The Kovacs or memory effect, commonly observed in structural glasses, is present for both constraints but is enhanced with the asymmetric ones. Most surprisingly, the related FD relations satisfy the FD theorem in both cases. This result strongly differs from the simple quenching procedure, where the asymmetric model presents strong deviations from the FD theorem.
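    A minimal Monte Carlo sketch of the purely asymmetric (East-like) kinetically constrained chain: a spin may be updated only when its left neighbour is up, with heat-bath rates (temperature and sizes are invented; the paper's protocols for Kovacs and FD measurements are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, steps = 200, 0.5, 200_000
c = 1.0 / (1.0 + np.exp(1.0 / T))      # equilibrium up-spin concentration
spins = (rng.random(N) < c).astype(int)

for _ in range(steps):
    i = rng.integers(N)
    if spins[(i - 1) % N] == 1:        # asymmetric constraint: left neighbour up
        # heat-bath update: set up with probability c, down with 1 - c
        spins[i] = 1 if rng.random() < c else 0

print("final up-spin density:", spins.mean(), "target:", round(c, 3))
```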

  17. A cost-constrained model of strategic service quality emphasis in nursing homes.

    Science.gov (United States)

    Davis, M A; Provan, K G

    1996-02-01

    This study employed structural equation modeling to test the relationship between three aspects of the environmental context of nursing homes - Medicaid dependence, ownership status, and market demand - and two basic strategic orientations: low cost and differentiation based on service quality emphasis. Hypotheses were proposed and tested against data collected from a sample of nursing homes operating in a single state. Because of the overwhelming importance of cost control in the nursing home industry, a cost-constrained strategy perspective was supported. Specifically, while the three contextual variables had no direct effect on service quality emphasis, the entire model was supported when cost control orientation was introduced as a mediating variable.

  18. A Chance-Constrained Economic Dispatch Model in Wind-Thermal-Energy Storage System

    Directory of Open Access Journals (Sweden)

    Yanzhe Hu

    2017-03-01

    Full Text Available As a type of renewable energy, wind energy is integrated into the power system at increasing penetration levels. It is challenging for power system operators (PSOs) to cope with the uncertainty and variation of wind power and its forecasts. A chance-constrained economic dispatch (ED) model for the wind-thermal-energy storage system (WTESS) is developed in this paper. An optimization model with wind power and an energy storage system (ESS) is first established, considering both the economic benefits of the system and the reduction of wind curtailment. The original wind power generation is processed by the ESS to obtain the final wind power output generation (FWPG). A Gaussian mixture model (GMM) distribution is adopted to characterize the probability and cumulative distribution functions with an analytical expression. Then, a chance-constrained ED model integrating the wind-energy storage system (W-ESS) is developed by considering both the overestimation and underestimation costs of the system, and is solved by the sequential linear programming method. Numerical simulations using wind power data from four wind farms are performed with the developed ED model on the IEEE 30-bus system. It is verified that the developed ED model effectively integrates the uncertain and variable wind power. The GMM distribution accurately fits the actual distribution of the final wind power output, and the ESS helps to effectively decrease the operation costs.
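    A brief sketch of the GMM step: fitting a Gaussian mixture to synthetic stand-in samples of the final wind power output and evaluating the analytical CDF that the chance constraints require (component count and data are invented):

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Synthetic stand-in for final wind power output (FWPG) samples, in MW
samples = np.concatenate([rng.normal(30, 5, 600), rng.normal(70, 10, 400)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples.reshape(-1, 1))

def gmm_cdf(x):
    """Analytical CDF of the fitted mixture: sum of weighted normal CDFs."""
    w = gmm.weights_
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_).ravel()
    return float(np.sum(w * norm.cdf(x, mu, sd)))

print("P(FWPG <= 50 MW) =", round(gmm_cdf(50.0), 3))
```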

  19. Unrealistic phylogenetic trees may improve phylogenetic footprinting.

    Science.gov (United States)

    Nettling, Martin; Treutler, Hendrik; Cerquides, Jesus; Grosse, Ivo

    2017-06-01

    The computational investigation of DNA binding motifs from binding sites is one of the classic tasks in bioinformatics and a prerequisite for understanding gene regulation as a whole. Due to the development of sequencing technologies and the increasing number of available genomes, approaches based on phylogenetic footprinting become increasingly attractive. Phylogenetic footprinting requires phylogenetic trees with attached substitution probabilities for quantifying the evolution of binding sites, but these trees and substitution probabilities are typically not known and cannot be estimated easily. Here, we investigate the influence of phylogenetic trees with different substitution probabilities on the classification performance of phylogenetic footprinting using synthetic and real data. For synthetic data we find that the classification performance is highest when the substitution probability used for phylogenetic footprinting is similar to that used for data generation. For real data, however, we typically find that the classification performance of phylogenetic footprinting surprisingly increases with increasing substitution probabilities and is often highest for unrealistically high substitution probabilities close to one. This finding suggests that choosing realistic model assumptions might not always yield optimal predictions in general and that choosing unrealistically high substitution probabilities close to one might actually improve the classification performance of phylogenetic footprinting. The proposed PF is implemented in JAVA and can be downloaded from https://github.com/mgledi/PhyFoo.

  20. Constraining the models' response of tropical low clouds to SST forcings using CALIPSO observations

    Science.gov (United States)

    Cesana, G.; Del Genio, A. D.; Ackerman, A. S.; Brient, F.; Fridlind, A. M.; Kelley, M.; Elsaesser, G.

    2017-12-01

    Low-cloud response to a warmer climate is still pointed out as being the largest source of uncertainty in the last generation of climate models. To date there is no consensus among the models on whether the tropical low cloudiness would increase or decrease in a warmer climate. In addition, it has been shown that - depending on their climate sensitivity - the models either predict deeper or shallower low clouds. Recently, several relationships between inter-model characteristics of the present-day climate and future climate changes have been highlighted. These so-called emergent constraints aim to target relevant model improvements and to constrain models' projections based on current climate observations. Here we propose to use - for the first time - 10 years of CALIPSO cloud statistics to assess the ability of the models to represent the vertical structure of tropical low clouds for abnormally warm SST. We use a simulator approach to compare observations and simulations and focus on the low-layered clouds (i.e. z fraction. Vertically, the clouds deepen namely by decreasing the cloud fraction in the lowest levels and increasing it around the top of the boundary-layer. This feature is coincident with an increase of the high-level cloud fraction (z > 6.5km). Although the models' spread is large, the multi-model mean captures the observed variations but with a smaller amplitude. We then employ the GISS model to investigate how changes in cloud parameterizations affect the response of low clouds to warmer SSTs on the one hand; and how they affect the variations of the model's cloud profiles with respect to environmental parameters on the other hand. Finally, we use CALIPSO observations to constrain the model by determining i) what set of parameters allows reproducing the observed relationships and ii) what are the consequences on the cloud feedbacks. These results point toward process-oriented constraints of low-cloud responses to surface warming and environmental

  1. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework that specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph-theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs and the observed number of driver neurons.
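    A short sketch of the kind of graph-theoretic probe used in such studies: sweeping the rewiring probability of a ring network and tracking clustering and characteristic path length, the classic small-world signature (sizes are arbitrary; the paper's model additionally constrains rewiring by inter-neuronal distance):

```python
import networkx as nx

# Ring of 280 nodes (roughly C. elegans-sized), each wired to 10 nearest neighbours
for p in [0.0, 0.01, 0.1, 1.0]:        # rewiring probability
    G = nx.connected_watts_strogatz_graph(n=280, k=10, p=p, tries=100, seed=1)
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    print(f"p={p:<5} clustering={C:.3f} path length={L:.2f}")
```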

  2. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10^6 m^3/yr. The sources consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano-tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward-trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.

  3. Dynamical insurance models with investment: Constrained singular problems for integrodifferential equations

    Science.gov (United States)

    Belkina, T. A.; Konyukhova, N. B.; Kurochkin, S. V.

    2016-01-01

    Previous and new results are used to compare two mathematical insurance models with identical insurance company strategies in a financial market, namely, when the entire current surplus or its constant fraction is invested in risky assets (stocks), while the rest of the surplus is invested in a risk-free asset (bank account). Model I is the classical Cramér-Lundberg risk model with an exponential claim size distribution. Model II is a modification of the classical risk model (risk process with stochastic premiums) with exponential distributions of claim and premium sizes. For the survival probability of an insurance company over infinite time (as a function of its initial surplus), there arise singular problems for second-order linear integrodifferential equations (IDEs) defined on a semi-infinite interval and having nonintegrable singularities at zero: model I leads to a singular constrained initial value problem for an IDE with a Volterra integral operator, while model II leads to a more complicated nonlocal constrained problem for an IDE with a non-Volterra integral operator. A brief overview of previous results for these two problems depending on several positive parameters is given, and new results are presented. Additional results are concerned with the formulation, analysis, and numerical study of "degenerate" problems for both models, i.e., problems in which some of the IDE parameters vanish; moreover, passages to the limit with respect to the parameters through which we proceed from the original problems to the degenerate ones are singular for small and/or large argument values. Such problems are of mathematical and practical interest in themselves. Along with insurance models without investment, they describe the case of surplus completely invested in risk-free assets, as well as some noninsurance models of surplus dynamics, for example, charity-type models.

  4. Bayesian selection of misspecified models is overconfident and may cause spurious posterior probabilities for phylogenetic trees.

    Science.gov (United States)

    Yang, Ziheng; Zhu, Tianqi

    2018-02-20

    The Bayesian method is noted to produce spuriously high posterior probabilities for phylogenetic trees in analysis of large datasets, but the precise reasons for this overconfidence are unknown. In general, the performance of Bayesian selection of misspecified models is poorly understood, even though this is of great scientific interest since models are never true in real data analysis. Here we characterize the asymptotic behavior of Bayesian model selection and show that when the competing models are equally wrong, Bayesian model selection exhibits surprising and polarized behaviors in large datasets, supporting one model with full force while rejecting the others. If one model is slightly less wrong than the other, the less wrong model will eventually win when the amount of data increases, but the method may become overconfident before it becomes reliable. We suggest that this extreme behavior may be a major factor for the spuriously high posterior probabilities for evolutionary trees. The philosophical implications of our results to the application of Bayesian model selection to evaluate opposing scientific hypotheses are yet to be explored, as are the behaviors of non-Bayesian methods in similar situations.

  5. Robust model predictive control for constrained continuous-time nonlinear systems

    Science.gov (United States)

    Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong

    2018-02-01

    In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory is contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear-system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.

  6. Inexact Multistage Stochastic Chance Constrained Programming Model for Water Resources Management under Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong Zhang

    2017-01-01

    Full Text Available In order to formulate water allocation schemes under uncertainties in the water resources management systems, an inexact multistage stochastic chance constrained programming (IMSCCP model is proposed. The model integrates stochastic chance constrained programming, multistage stochastic programming, and inexact stochastic programming within a general optimization framework to handle the uncertainties occurring in both constraints and objective. These uncertainties are expressed as probability distributions, interval with multiply distributed stochastic boundaries, dynamic features of the long-term water allocation plans, and so on. Compared with the existing inexact multistage stochastic programming, the IMSCCP can be used to assess more system risks and handle more complicated uncertainties in water resources management systems. The IMSCCP model is applied to a hypothetical case study of water resources management. In order to construct an approximate solution for the model, a hybrid algorithm, which incorporates stochastic simulation, back propagation neural network, and genetic algorithm, is proposed. The results show that the optimal value represents the maximal net system benefit achieved with a given confidence level under chance constraints, and the solutions provide optimal water allocation schemes to multiple users over a multiperiod planning horizon.

  7. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action-oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  8. Event-triggered decentralized robust model predictive control for constrained large-scale interconnected systems

    Directory of Open Access Journals (Sweden)

    Ling Lu

    2016-12-01

    Full Text Available This paper considers the problem of event-triggered decentralized model predictive control (MPC) for constrained large-scale linear systems subject to additive bounded disturbances. The constraint tightening method is utilized to formulate the MPC optimization problem. The local predictive control law for each subsystem is determined aperiodically by a relevant triggering rule, which allows a considerable reduction of the computational load. Then, robust feasibility and closed-loop stability are proved, and it is shown that every subsystem state will be driven into a robust invariant set. Finally, the effectiveness of the proposed approach is illustrated via numerical simulations.

  9. Modeling Oil Exploration and Production: Resource-Constrained and Agent-Based Approaches

    International Nuclear Information System (INIS)

    Jakobsson, Kristofer

    2010-05-01

    Energy is essential to the functioning of society, and oil is the single largest commercial energy source. Some analysts have concluded that the peak in oil production will soon occur on the global scale, while others disagree. Such incompatible views can persist because the issue of 'peak oil' cuts through the established scientific disciplines. The question is: what characterizes the modeling approaches that are available today, and how can they be further developed to improve a trans-disciplinary understanding of oil depletion? The objective of this thesis is to present long-term scenarios of oil production (Paper I) using a resource-constrained model, and an agent-based model of the oil exploration process (Paper II). It is also an objective to assess the strengths, limitations, and future development potential of resource-constrained modeling, analytical economic modeling, and agent-based modeling. Resource-constrained models are only suitable when the time frame is measured in decades, but they can give a rough indication of which production scenarios are reasonable given the size of the resource. However, the models are comprehensible, transparent and the only feasible long-term forecasting tools at present. It is certainly possible to distinguish between reasonable scenarios, based on historically observed parameter values, and unreasonable scenarios with parameter values obtained through flawed analogy. The economic subfield of optimal depletion theory is founded on the notion of rational economic agents, and there is a causal relation between decisions made at the micro level and the macro result. In terms of future improvements, however, the analytical form considerably restricts the versatility of the approach. Agent-based modeling makes it feasible to combine economically motivated agents with a physical environment. An example relating to oil exploration is given in Paper II, where it is shown that the exploratory activities of individual

  10. Feasibility Assessment of a Fine-Grained Access Control Model on Resource Constrained Sensors.

    Science.gov (United States)

    Uriarte Itzazelaia, Mikel; Astorga, Jasone; Jacob, Eduardo; Huarte, Maider; Romaña, Pedro

    2018-02-13

    Upcoming smart scenarios enabled by the Internet of Things (IoT) envision smart objects that provide services that can adapt to user behavior or be managed to achieve greater productivity. In such environments, smart things are inexpensive and, therefore, constrained devices. However, they are also critical components because of the importance of the information that they provide. Given this, strong security is a requirement, but not all security mechanisms in general and access control models in particular are feasible. In this paper, we present the feasibility assessment of an access control model that utilizes a hybrid architecture and a policy language that provides dynamic fine-grained policy enforcement in the sensors, which requires an efficient message exchange protocol called Hidra. This experimental performance assessment includes a prototype implementation, a performance evaluation model, the measurements and related discussions, which demonstrate the feasibility and adequacy of the analyzed access control model.

  11. Constraining spatial variations of the fine-structure constant in symmetron models

    Directory of Open Access Journals (Sweden)

    A.M.M. Pinho

    2017-06-01

    Full Text Available We introduce a methodology to test models with spatial variations of the fine-structure constant α, based on the calculation of the angular power spectrum of these measurements. This methodology enables comparisons of observations and theoretical models through their predictions on the statistics of the α variation. Here we apply it to the case of symmetron models. We find no indications of deviations from the standard behavior, with current data providing an upper limit to the strength of the symmetron coupling to gravity (log β² < −0.9) when this is the only free parameter, and not able to constrain the model when the symmetry-breaking scale factor a_SSB is also free to vary.

  12. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Science.gov (United States)

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  13. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    Science.gov (United States)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

    Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global-scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH, and modelled gross primary production (GPP) is compared to a compilation of independent observations at stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. The photosynthetic capacity of tropical trees derived in this study is substantially lower than the standard estimates currently used in terrestrial biosphere models. This decreases modelled GPP while significantly increasing modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.

  14. An Equilibrium Chance-Constrained Multiobjective Programming Model with Birandom Parameters and Its Application to Inventory Problem

    Directory of Open Access Journals (Sweden)

    Zhimiao Tao

    2013-01-01

    Full Text Available An equilibrium chance-constrained multiobjective programming model with birandom parameters is proposed. A type of linear model is converted into its crisp equivalent model. Then a birandom simulation technique is developed to tackle the general birandom objective functions and birandom constraints. By embedding the birandom simulation technique, a modified genetic algorithm is designed to solve the equilibrium chance-constrained multiobjective programming model. We apply the proposed model and algorithm to a real-world inventory problem and show the effectiveness of the model and the solution method.
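
    A birandom parameter is a random variable whose own distribution parameters are random, so an equilibrium chance constraint can be checked by two nested sampling loops. Below is a minimal sketch of such a feasibility check, of the kind a genetic algorithm could call once per candidate solution; the Normal-on-Normal demand and all thresholds are illustrative assumptions, not the paper's model:

      import numpy as np

      rng = np.random.default_rng(0)

      def birandom_chance(x, alpha, n_outer=2000, n_inner=2000):
          """Estimate the equilibrium chance Pr{ Pr{ g(x, xi) <= 0 } >= alpha }
          for a birandom demand xi ~ Normal(mu, 1) with mu ~ Normal(100, 5)."""
          hits = 0
          for _ in range(n_outer):
              mu = rng.normal(100.0, 5.0)          # outer draw: random parameter
              xi = rng.normal(mu, 1.0, n_inner)    # inner draws: demand realisations
              inner_prob = np.mean(x - xi >= 0.0)  # g(x, xi) = xi - x, stock covers demand
              hits += inner_prob >= alpha
          return hits / n_outer

      # Feasibility check usable inside a genetic algorithm: keep x only if
      # the equilibrium chance meets the confidence level beta.
      x, alpha, beta = 110.0, 0.9, 0.95
      print(birandom_chance(x, alpha) >= beta)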

  15. A supply function model for representing the strategic bidding of the producers in constrained electricity markets

    International Nuclear Information System (INIS)

    Bompard, Ettore; Napoli, Roberto; Lu, Wene; Jiang, Xiuchen

    2010-01-01

    The modelling of the bidding behaviour of producers is a key point in the modelling and simulation of competitive electricity markets. In our paper, the linear supply function model is applied so as to find the Supply Function Equilibrium (SFE) analytically. A new and efficient approach is also proposed to find SFEs for network-constrained electricity markets by finding the best slope of the supply function with the help of changing the intercept; the method can be applied to large systems. The proposed approach is applied to the IEEE-118 bus test system, and a comparison between bidding the slope and bidding the intercept is presented with reference to the test system. (author)

  16. Chance-constrained programming models for capital budgeting with NPV as fuzzy parameters

    Science.gov (United States)

    Huang, Xiaoxia

    2007-01-01

    In an uncertain economic environment, experts' knowledge about the outlays and cash inflows of available projects involves much vagueness rather than randomness. Investment outlays and annual net cash flows of a project are usually predicted by using experts' knowledge, and fuzzy variables can overcome the difficulties in predicting these parameters. In this paper, the capital budgeting problem with fuzzy investment outlays and fuzzy annual net cash flows is studied based on the credibility measure. The net present value (NPV) method is employed, and two fuzzy chance-constrained programming models for the capital budgeting problem are provided. A fuzzy simulation-based genetic algorithm is provided for solving the proposed models. Two numerical examples are also presented to illustrate the modelling idea and the effectiveness of the proposed algorithm.
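
    The credibility-based chance constraint can be estimated by fuzzy simulation: sample points from the support of the fuzzy variable together with their membership grades, then combine the best grades inside and outside the event. A small sketch in that spirit (a Liu-style fuzzy simulation); the triangular NPV and the confidence level are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)

      def credibility_geq(f, r, sample_fuzzy, n=5000):
          """Fuzzy simulation of Cr{ f(xi) >= r }: draw points u_k from the
          support with membership mu_k, then
          Cr ~= 0.5 * (max mu_k over {f>=r} + 1 - max mu_k over {f<r})."""
          best_in, best_out = 0.0, 0.0
          for _ in range(n):
              u, mu = sample_fuzzy()
              if f(u) >= r:
                  best_in = max(best_in, mu)
              else:
                  best_out = max(best_out, mu)
          return 0.5 * (best_in + 1.0 - best_out)

      def sample_triangular(a=8.0, b=10.0, c=14.0):
          """Sample from the support of a triangular fuzzy NPV (a, b, c)."""
          u = rng.uniform(a, c)
          mu = (u - a) / (b - a) if u <= b else (c - u) / (c - b)
          return u, mu

      # Chance constraint of the form Cr{ NPV >= 9 } >= 0.8, checked by simulation.
      print(credibility_geq(lambda u: u, 9.0, sample_triangular) >= 0.8)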

  17. A Hybrid Method for the Modelling and Optimisation of Constrained Search Problems

    Directory of Open Access Journals (Sweden)

    Sitek Pawel

    2014-08-01

    Full Text Available The paper presents a concept and the outline of the implementation of a hybrid approach to modelling and solving constrained problems. Two environments, mathematical programming (in particular, integer programming) and declarative programming (in particular, constraint logic programming), were integrated. The strengths of integer programming and constraint logic programming, in which constraints are treated in different ways and different methods are implemented, were combined to exploit the strengths of both. The hybrid method is not worse than either of its components used independently. The proposed approach is particularly important for decision models with an objective function and many discrete decision variables added up in multiple constraints. To validate the proposed approach, two illustrative examples are presented and solved. The first example is the authors' original model of cost optimisation in the supply chain with multimodal transportation. The second one is the two-echelon variant of the well-known capacitated vehicle routing problem.

  18. The Balance-of-Payments-Constrained Growth Model and the Limits to Export-Led Growth

    Directory of Open Access Journals (Sweden)

    Robert A. Blecker

    2000-12-01

    Full Text Available This paper discusses how A. P. Thirlwall's model of balance-of-payments-constrained growth can be adapted to analyze the idea of a "fallacy of composition" in the export-led growth strategy of many developing countries. The Deaton-Muellbauer model of the Almost Ideal Demand System (AIDS) is used to represent the adding-up constraints on individual countries' exports when they are all trying to export competing products to the same foreign markets (i.e., newly industrializing countries exporting similar types of manufactured goods to the OECD countries). The relevance of the model to the recent financial crises in developing countries and policy alternatives for redirecting development strategies are also discussed.
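
    For reference, Thirlwall's balance-of-payments-constrained growth rate takes the standard textbook form below, with x the export growth rate, z foreign income growth, and ε and π the income elasticities of exports and imports; this is the canonical result, not an equation quoted from the paper:

      % Thirlwall's law: long-run growth consistent with balance-of-payments
      % equilibrium.
      \[
        y_{BP} \;=\; \frac{x}{\pi} \;=\; \frac{\varepsilon\, z}{\pi}
      \]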

  19. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, and the imperfectness of the forward model. These parameters result in ambiguities in the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether non-negativity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
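
    The effect of the non-negativity constraint is easy to demonstrate on a toy linear inversion: an unconstrained least-squares solution can go negative where a constrained solver cannot. A minimal sketch using SciPy's nnls (an active-set non-negative least-squares routine, standing in for, not reproducing, the conjugate-gradient-based algorithm introduced in the paper):

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(2)

      # Toy linear forward model A x = b with a non-negative true image x.
      A = rng.normal(size=(80, 40))
      x_true = np.clip(rng.normal(0.5, 0.5, size=40), 0.0, None)
      b = A @ x_true + 0.01 * rng.normal(size=80)

      x_free, *_ = np.linalg.lstsq(A, b, rcond=None)  # unconstrained: may go negative
      x_nn, _ = nnls(A, b)                            # non-negative least squares

      print("min of unconstrained solution:", x_free.min())
      print("min of non-negative solution: ", x_nn.min())  # >= 0 by construction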

  20. Structural model of the Northern Latium volcanic area constrained by MT, gravity and aeromagnetic data

    Directory of Open Access Journals (Sweden)

    P. Gasparini

    1997-06-01

    Full Text Available The results of about 120 magnetotelluric soundings carried out in the Vulsini, Vico and Sabatini volcanic areas were modeled along with Bouguer and aeromagnetic anomalies to reconstruct a model of the structure of the shallow crust (less than 5 km depth). The interpretations were constrained by the information gathered from the deep boreholes drilled for geothermal exploration. MT and aeromagnetic anomalies allow the depth to the top of the sedimentary basement and the thickness of the volcanic layer to be inferred. Gravity anomalies are strongly affected by the variations in morphology of the top of the sedimentary basement, consisting of a Tertiary flysch, and of the interface with the underlying Mesozoic carbonates. Gravity data have also been used to extrapolate the thickness of the Neogene unit indicated by some boreholes. There is no evidence for other important density and susceptibility heterogeneities or deeper sources of magnetic and/or gravity anomalies in the surveyed area.

  1. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    Science.gov (United States)

    Dossett, Jason; Hu, Bin; Parkinson, David

    2014-03-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales, which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0 by an order of magnitude, giving log10(B0) < −4.07 at the 95% confidence limit.

  2. Stock management in hospital pharmacy using chance-constrained model predictive control.

    Science.gov (United States)

    Jurado, I; Maestre, J M; Velarde, P; Ocampo-Martinez, C; Fernández, I; Tejera, B Isla; Prado, J R Del

    2016-05-01

    One of the most important problems in the pharmacy department of a hospital is stock management. The clinical need for drugs must be satisfied with a limited workforce while minimizing the use of economic resources. The complexity of the problem resides in the random nature of the drug demand and the multiple constraints that must be taken into account in every decision. In this article, chance-constrained model predictive control is proposed to deal with this problem. The flexibility of model predictive control allows the different objectives and constraints involved in the problem to be taken into account explicitly, while the use of chance constraints provides a trade-off between conservativeness and efficiency. The proposed solution is assessed through a study of its implementation in two Spanish hospitals.
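
    To make the chance-constraint idea concrete: with Gaussian demand, Pr[stock >= s_min] >= 1 - eps admits a deterministic equivalent in which the bound is backed off by a quantile-scaled standard deviation. A minimal sketch of one such MPC optimization in cvxpy; the horizon, costs and demand statistics are illustrative assumptions, not the hospital model from the paper:

      import cvxpy as cp
      import numpy as np
      from scipy.stats import norm

      T, s0, s_min = 7, 120.0, 40.0     # horizon (days), initial and minimum stock
      mu_d, sigma_d = 30.0, 6.0         # daily demand statistics (assumed Gaussian)
      eps = 0.05
      z = norm.ppf(1 - eps)             # quantile for the chance constraint

      order = cp.Variable(T, nonneg=True)
      stock = cp.Variable(T + 1)

      constraints = [stock[0] == s0]
      for t in range(T):
          # Nominal dynamics with expected demand.
          constraints += [stock[t + 1] == stock[t] + order[t] - mu_d]
          # Deterministic equivalent of Pr[stock >= s_min] >= 1 - eps;
          # demand uncertainty accumulates over the horizon, hence sqrt(t+1).
          constraints += [stock[t + 1] >= s_min + z * sigma_d * np.sqrt(t + 1)]

      cost = cp.sum(order) + 0.01 * cp.sum_squares(stock - s_min)
      cp.Problem(cp.Minimize(cost), constraints).solve()
      print("first order to place:", round(float(order.value[0]), 1))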

  3. Adaptively Constrained Stochastic Model Predictive Control for the Optimal Dispatch of Microgrid

    Directory of Open Access Journals (Sweden)

    Xiaogang Guo

    2018-01-01

    Full Text Available In this paper, an adaptively constrained stochastic model predictive control (MPC) is proposed to achieve less-conservative coordination between energy storage units and uncertain renewable energy sources (RESs) in a microgrid (MG). Besides the economic objective of MG operation, the limits on the state-of-charge (SOC) and discharging/charging power of the energy storage unit are formulated as chance constraints when accommodating uncertainties of RESs, considering that mild violations of these constraints are allowed during long-term operation. A closed-loop online update strategy is performed to adaptively tighten or relax the constraints according to the actual deviation of the violation level from the desired one, as well as the current change rate of this deviation. Numerical studies show that the proposed adaptively constrained stochastic MPC for MG optimal operation is much less conservative than scenario-optimization-based robust MPC, and also presents better convergence to the desired constraint violation level than other online update strategies.
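
    The closed-loop update can be sketched as a small feedback law on the constraint back-off: tighten when the observed violation rate exceeds the target, relax when it falls below, with a damping term for the current trend. A hypothetical illustration, not the paper's exact update rule:

      def update_tightening(margin, observed_rate, target_rate, rate_change,
                            k_p=0.5, k_d=0.2, margin_min=0.0):
          """Illustrative closed-loop tightening update: grow the back-off when
          violations exceed the target, relax when below, damp with the trend."""
          deviation = observed_rate - target_rate
          margin += k_p * deviation + k_d * rate_change
          return max(margin, margin_min)

      # Example: 8% observed violations against a 5% target tightens the SOC bound.
      print(update_tightening(margin=0.10, observed_rate=0.08,
                              target_rate=0.05, rate_change=0.01))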

  4. Constraining the dark energy models with H (z ) data: An approach independent of H0

    Science.gov (United States)

    Anagnostopoulos, Fotios K.; Basilakos, Spyros

    2018-03-01

    We study the performance of the latest H(z) data in constraining the cosmological parameters of different cosmological models, including the Chevallier-Polarski-Linder w0-w1 parametrization. First, we introduce a statistical procedure in which the chi-square estimator is not affected by the value of the Hubble constant. As a result, we find that the H(z) data do not rule out the possibility of either nonflat models or dynamical dark energy cosmological models. However, we verify that the time-varying equation-of-state parameter w(z) is not constrained by the current expansion data. Combining the H(z) and the Type Ia supernova (SNIa) data, we find that the H(z)/SNIa overall statistical analysis provides a substantial improvement of the cosmological constraints with respect to those of the H(z) analysis alone. Moreover, the w0-w1 parameter space provided by the H(z)/SNIa joint analysis is in very good agreement with that of Planck 2015, which confirms that the present analysis with the H(z) and SNIa probes correctly reveals the expansion of the Universe as found by the Planck team. Finally, we generate sets of Monte Carlo realizations in order to quantify the ability of the H(z) data to provide strong constraints on the dark energy model parameters. The Monte Carlo approach shows significant improvement of the constraints when increasing the sample to 100 H(z) measurements. Such a goal can be achieved in the future, especially in the light of the next generation of surveys.
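
    For reference, the Chevallier-Polarski-Linder parametrization and the flat-universe expansion rate it implies take the standard forms below; these are quoted from the general literature, not from the paper:

      % CPL equation of state and the corresponding Hubble function (flat universe):
      \[
        w(z) = w_0 + w_1 \frac{z}{1+z},
      \]
      \[
        \frac{H^2(z)}{H_0^2} = \Omega_m (1+z)^3
          + (1-\Omega_m)\,(1+z)^{3(1+w_0+w_1)}
            \exp\!\left(-\frac{3 w_1 z}{1+z}\right).
      \]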

  5. Reversible polymorphism-aware phylogenetic models and their application to tree inference.

    Science.gov (United States)

    Schrempf, Dominik; Minh, Bui Quang; De Maio, Nicola; von Haeseler, Arndt; Kosiol, Carolin

    2016-10-21

    We present a reversible Polymorphism-Aware Phylogenetic Model (revPoMo) for species tree estimation from genome-wide data. revPoMo enables the reconstruction of large-scale species trees for many within-species samples. It expands the alphabet of DNA substitution models to include polymorphic states, thereby naturally accounting for incomplete lineage sorting. We implemented revPoMo in the maximum likelihood software IQ-TREE. A simulation study and an application to great ape data show that the runtimes of our approach and standard substitution models are comparable, but that revPoMo has much better accuracy in estimating trees, divergence times and mutation rates. The advantage of revPoMo is that an increase of sample size per species improves estimations but does not increase runtime. Therefore, revPoMo is a valuable tool with several applications, from speciation dating to species tree reconstruction.

  6. Evaluating the microbial diversity of an in vitro model of the human large intestine by phylogenetic microarray analysis

    NARCIS (Netherlands)

    Rajilic-Stojanovic, M.; Maathuis, A.; Heilig, G.H.J.; Venema, K.; Vos, de W.M.; Smidt, H.

    2010-01-01

    A high-density phylogenetic microarray targeting small subunit rRNA (SSU rRNA) sequences of over 1000 microbial phylotypes of the human gastrointestinal tract, the HITChip, was used to assess the impact of faecal inoculum preparation and operation conditions on an in vitro model of the human large intestine.

  7. Constraining model parameters on remotely sensed evaporation: justification for distribution in ungauged basins?

    Directory of Open Access Journals (Sweden)

    H. C. Winsemius

    2008-12-01

    Full Text Available In this study, land surface related parameter distributions of a conceptual semi-distributed hydrological model are constrained by employing time series of satellite-based evaporation estimates during the dry season as explanatory information. The approach has been applied to the ungauged Luangwa river basin (150,000 km2) in Zambia. The information contained in these evaporation estimates imposes compliance of the model with the largest outgoing water balance term, evaporation, and a spatially and temporally realistic depletion of soil moisture within the dry season. The model results in turn provide a better understanding of the information density of remotely sensed evaporation. Model parameters to which evaporation is sensitive have been spatially distributed on the basis of dominant land cover characteristics. Consequently, their values were conditioned by means of Monte Carlo sampling and evaluation against the satellite evaporation estimates. The results show that behavioural parameter sets for model units with similar land cover are indeed clustered. The clustering reveals hydrologically meaningful signatures in the parameter response surface: wetland-dominated areas (also called dambos) show optimal parameter ranges that reflect vegetation with a relatively small unsaturated zone (due to the shallow rooting depth of the vegetation) which is easily moisture stressed. The forested areas and highlands show parameter ranges that indicate a much deeper root zone which is more drought-resistant. Clustering was consequently used to formulate fuzzy membership functions that can be used to constrain parameter realizations in further calibration. Unrealistic parameter ranges, found for instance in the high unsaturated soil zone values in the highlands, may indicate either overestimation of satellite-based evaporation or model structural deficiencies. We believe that in these areas, groundwater uptake into the root zone and lateral movement of
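
    The conditioning step is essentially GLUE-style rejection sampling: draw parameter sets, run the model through the dry season, and keep the sets whose fit to the satellite ET product passes a threshold. A self-contained toy sketch; the exponential soil-moisture store and all numbers are illustrative, not the conceptual model used in the study:

      import numpy as np

      rng = np.random.default_rng(3)

      def toy_dry_season_et(params, days=120):
          """Tiny stand-in for the land-surface component: exponential
          depletion of a soil-moisture store, returning daily evaporation."""
          storage, rate = params
          t = np.arange(days)
          return storage * rate * np.exp(-rate * t)

      # Pretend satellite-based dry-season ET for one model unit (mm/day).
      et_sat = toy_dry_season_et((300.0, 0.02)) + rng.normal(0, 0.1, 120)

      # Monte Carlo sampling of parameters; keep 'behavioural' sets whose RMSE
      # against the satellite product passes a threshold.
      samples = np.column_stack([rng.uniform(100, 500, 10000),
                                 rng.uniform(0.005, 0.05, 10000)])
      rmse = np.array([np.sqrt(np.mean((toy_dry_season_et(p) - et_sat) ** 2))
                       for p in samples])
      behavioural = samples[rmse < 0.2]
      print(len(behavioural), "behavioural parameter sets retained")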

  8. A Nonparametric Shape Prior Constrained Active Contour Model for Segmentation of Coronaries in CTA Images

    Directory of Open Access Journals (Sweden)

    Yin Wang

    2014-01-01

    Full Text Available We present a nonparametric shape-constrained algorithm for segmentation of coronary arteries in computed tomography images within the framework of active contours. An adaptive scale selection scheme, based on the global histogram information of the image data, is employed to determine the appropriate window size for each point on the active contour, which improves the performance of the active contour model in low-contrast local image regions. Possible leakage, which cannot be identified by using intensity features alone, is reduced through the application of the proposed shape constraint, where the shape of the circularly sampled intensity profile is used to evaluate the likelihood that the current segmentation belongs to a vascular structure. Experiments on both synthetic and clinical datasets have demonstrated the efficiency and robustness of the proposed method. The results on clinical datasets show that the proposed approach is capable of extracting more detailed coronary vessels with subvoxel accuracy.

  9. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los; Schö nlieb, Carola-Bibiane

    2013-01-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  10. A Modified FCM Classifier Constrained by Conditional Random Field Model for Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    WANG Shaoyu

    2016-12-01

    Full Text Available Remote sensing imagery contains abundant spatial correlation information, but traditional pixel-based clustering algorithms do not take this spatial information into account, so their results are often poor. To address this issue, a modified FCM classifier constrained by a conditional random field model is proposed. The prior classification information of adjacent pixels constrains the classification of the center pixel, thereby exploiting spatial correlation information. Spectral information and spatial correlation information are considered at the same time when clustering, based on a second-order conditional random field. Moreover, the globally optimal inference of a pixel's posterior classification probability can be obtained using loopy belief propagation. The experiments show that the proposed algorithm effectively maintains the shape features of objects, and its classification accuracy is higher than that of traditional algorithms.
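
    The core idea can be sketched with a plain fuzzy c-means loop whose membership update is blended with a neighbourhood prior; the blending term below is an illustrative stand-in for the conditional random field constraint, not the paper's formulation:

      import numpy as np

      def fcm_spatial(x, neighbors_prior, c=2, m=2.0, beta=0.5, iters=50, seed=4):
          """Minimal fuzzy c-means on pixel values x (n,) with a soft spatial
          term: standard FCM memberships blended with a neighbourhood prior."""
          rng = np.random.default_rng(seed)
          u = rng.dirichlet(np.ones(c), size=len(x))            # memberships (n, c)
          for _ in range(iters):
              w = u ** m
              centers = (w.T @ x) / w.sum(axis=0)               # weighted means
              d = np.abs(x[:, None] - centers[None, :]) + 1e-9  # distances (n, c)
              u = 1.0 / (d ** (2 / (m - 1)))                    # classic FCM update
              u /= u.sum(axis=1, keepdims=True)
              u = (1 - beta) * u + beta * neighbors_prior       # spatial blending
              u /= u.sum(axis=1, keepdims=True)
          return u, centers

      # 1-D toy "image": two intensity populations, uniform neighbourhood prior.
      rng_data = np.random.default_rng(40)
      x = np.concatenate([rng_data.normal(0.2, 0.05, 50),
                          rng_data.normal(0.8, 0.05, 50)])
      prior = np.full((100, 2), 0.5)
      u, centers = fcm_spatial(x, prior)
      print("cluster centers:", np.round(centers, 2))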

  11. Image denoising: Learning the noise model via nonsmooth PDE-constrained optimization

    KAUST Repository

    Reyes, Juan Carlos De los

    2013-11-01

    We propose a nonsmooth PDE-constrained optimization approach for the determination of the correct noise model in total variation (TV) image denoising. An optimization problem for the determination of the weights corresponding to different types of noise distributions is stated and existence of an optimal solution is proved. A tailored regularization approach for the approximation of the optimal parameter values is proposed thereafter and its consistency studied. Additionally, the differentiability of the solution operator is proved and an optimality system characterizing the optimal solutions of each regularized problem is derived. The optimal parameter values are numerically computed by using a quasi-Newton method, together with semismooth Newton type algorithms for the solution of the TV-subproblems. © 2013 American Institute of Mathematical Sciences.

  12. Input-constrained model predictive control via the alternating direction method of multipliers

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Andersen, Martin S.

    2014-01-01

    This paper presents an algorithm, based on the alternating direction method of multipliers, for the convex optimal control problem arising in input-constrained model predictive control. We develop an efficient implementation of the algorithm for the extended linear quadratic control problem (LQCP) with input and input-rate limits. The algorithm alternates between solving an extended LQCP and a highly structured quadratic program. These quadratic programs are solved using a Riccati iteration procedure and a structure-exploiting interior-point method, respectively. The computational cost per iteration is quadratic in the dimensions of the controlled system, and linear in the length of the prediction horizon. Simulations show that the approach proposed in this paper is more than an order of magnitude faster than several state-of-the-art quadratic programming algorithms, and that the difference in computation
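
    The alternation can be illustrated on a generic box-constrained quadratic program: an equality-constrained quadratic step, a projection (clipping) step, and a dual update. A minimal numpy sketch of that ADMM splitting; the paper's Riccati-based structure exploitation is not reproduced here:

      import numpy as np

      def admm_box_qp(H, f, lb, ub, rho=1.0, iters=200):
          """ADMM for min 0.5 u'Hu + f'u s.t. lb <= u <= ub, the kind of
          splitting used for input-constrained MPC (illustrative only)."""
          n = len(f)
          x = np.zeros(n); z = np.zeros(n); y = np.zeros(n)
          L = np.linalg.cholesky(H + rho * np.eye(n))
          for _ in range(iters):
              rhs = -f + rho * (z - y)
              x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # quadratic step
              z = np.clip(x + y, lb, ub)                         # projection step
              y += x - z                                         # dual update
          return z

      H = np.array([[2.0, 0.5], [0.5, 1.0]])
      f = np.array([-1.0, -1.0])
      print(admm_box_qp(H, f, lb=np.zeros(2), ub=0.4 * np.ones(2)))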

  13. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    International Nuclear Information System (INIS)

    Harlim, John; Mahdi, Adam; Majda, Andrew J.

    2014-01-01

    A central issue in contemporary science is the development of nonlinear data-driven statistical-dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here, a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east-west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet, with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
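
    A stochastic ensemble Kalman filter step with perturbed observations is the building block of such an algorithm; augmenting the state vector with the unknown regression coefficients lets the same update estimate parameters. A minimal sketch under those standard assumptions (not the paper's full algorithm, which also estimates noise covariances):

      import numpy as np

      rng = np.random.default_rng(5)

      def enkf_update(ensemble, obs, obs_operator, obs_var):
          """One stochastic EnKF analysis step with perturbed observations.
          ensemble: (n_members, n_state); augmenting the state with model
          coefficients lets the same update estimate parameters."""
          n, _ = ensemble.shape
          hx = ensemble @ obs_operator.T                 # forecast observations
          x_mean, hx_mean = ensemble.mean(0), hx.mean(0)
          X = ensemble - x_mean
          Y = hx - hx_mean
          cov_xy = X.T @ Y / (n - 1)
          cov_yy = Y.T @ Y / (n - 1) + obs_var * np.eye(len(obs))
          gain = cov_xy @ np.linalg.inv(cov_yy)          # Kalman gain
          perturbed = obs + rng.normal(0, np.sqrt(obs_var), size=(n, len(obs)))
          return ensemble + (perturbed - hx) @ gain.T

      # Toy example: estimate a 2-d state from a noisy observation of its
      # first entry only (a partial observation, as in the paper).
      H = np.array([[1.0, 0.0]])
      ens = rng.normal(0, 1, size=(100, 2))
      print(enkf_update(ens, obs=np.array([0.8]), obs_operator=H, obs_var=0.1).mean(0))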

  14. DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, M. [Lockheed Martin Solar and Astrophysics Lab, Palo Alto, CA 94304 (United States); Manchester, W. B.; Van der Holst, B.; Sokolov, I.; Tóth, G.; Gombosi, T. I. [Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI 48109 (United States); Mullinix, R. E.; Taktakishvili, A.; Chulaki, A., E-mail: jinmeng@lmsal.com, E-mail: chipm@umich.edu, E-mail: richard.e.mullinix@nasa.gov, E-mail: Aleksandre.Taktakishvili-1@nasa.gov [Community Coordinated Modeling Center, NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2017-01-10

    We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using the Gibson-Low configuration, we present a method to derive Gibson-Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).

  15. Empirical Succession Mapping and Data Assimilation to Constrain Demographic Processes in an Ecosystem Model

    Science.gov (United States)

    Kelly, R.; Andrews, T.; Dietze, M.

    2015-12-01

    Shifts in ecological communities in response to environmental change have implications for biodiversity, ecosystem function, and feedbacks to global climate change. Community composition is fundamentally the product of demography, but demographic processes are simplified or missing altogether in many ecosystem, Earth system, and species distribution models. This limitation arises in part because demographic data are noisy and difficult to synthesize. As a consequence, demographic processes are challenging to formulate in models in the first place, and to verify and constrain with data thereafter. Here, we used a novel analysis of the USFS Forest Inventory and Analysis to improve the representation of demography in an ecosystem model. First, we created an Empirical Succession Mapping (ESM) based on ~1 million individual tree observations from the eastern U.S. to identify broad demographic patterns related to forest succession and disturbance. We used results from this analysis to guide reformulation of the Ecosystem Demography model (ED), an existing forest simulator with explicit tree demography. Results from the ESM reveal a coherent, cyclic pattern of change in temperate forest tree size and density over the eastern U.S. The ESM captures key ecological processes including succession, self-thinning, and gap-filling, and quantifies the typical trajectory of these processes as a function of tree size and stand density. Recruitment is most rapid in early-successional stands with low density and mean diameter, but slows as stand density increases; mean diameter increases until thinning promotes recruitment of small-diameter trees. Strikingly, the upper bound of size-density space that emerges in the ESM conforms closely to the self-thinning power law often observed in ecology. The ED model obeys this same overall size-density boundary, but overestimates plot-level growth, mortality, and fecundity rates, leading to unrealistic emergent demographic patterns. In particular

  16. A constrained multinomial Probit route choice model in the metro network: Formulation, estimation and application

    Science.gov (United States)

    Zhang, Yongsheng; Wei, Heng; Zheng, Kangning

    2017-01-01

    Considering that metro network expansion brings more alternative routes, it is attractive to integrate the impacts of the route set and of the interdependency among alternative routes on route choice probability into route choice modeling. Therefore, the formulation, estimation and application of a constrained multinomial probit (CMNP) route choice model in the metro network are carried out in this paper. The utility function is formulated with three components: the compensatory component is a function of influencing factors; the non-compensatory component measures the impacts of the route set on utility; and, following a multivariate normal distribution, the covariance of the error component is structured into three parts, representing the correlation among routes, the transfer variance of a route, and the unobserved variance, respectively. Given the multidimensional integrals of the multivariate normal probability density function, the CMNP model is rewritten in Hierarchical Bayes form, and an M-H sampling based Markov chain Monte Carlo approach is constructed to estimate all parameters. Based on Guangzhou Metro data, reliable estimation results are obtained. Furthermore, the proposed CMNP model shows good forecasting performance for the calculation of route choice probabilities and good application performance for transfer flow volume prediction.

  17. Constraining climate sensitivity and continental versus seafloor weathering using an inverse geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Catling, David C

    2017-05-22

    The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is  K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.

  18. Modeling and Simulation of the Gonghe geothermal field (Qinghai, China) Constrained by Geophysical

    Science.gov (United States)

    Zeng, Z.; Wang, K.; Zhao, X.; Huai, N.; He, R.

    2017-12-01

    The Gonghe geothermal field in Qinghai is important because of its variety of geothermal resource types, and it has become a demonstration area for geothermal development and utilization in China. It has been the topic of numerous geophysical investigations conducted to determine the depth to, and the nature of, the heat source, and to image the channel of heat flow. This work focuses on the causes of the geothermal field, using a numerical simulation method constrained by geophysical data. First, by analyzing and inverting a magnetotelluric (MT) profile across the area, we obtain the deep resistivity distribution. Using gravity anomaly inversion constrained by the resistivity profile, the densities of the basins and the underlying rocks can be calculated. Combined with measured rock thermal conductivities, a 2D geothermal conceptual model of the Gonghe area is constructed. Then, an unstructured finite element method is used to solve the heat conduction equation and simulate the geothermal field. Results of this model were calibrated with temperature data from an observation well, and a good match was achieved between the measured values and the model's predictions. Finally, the geothermal gradient and heat flow distribution of the model are calculated (Fig. 1). According to the geophysical results, there is a low-resistivity, low-density region (d5) below the geothermal field. We interpret this anomaly as generated by tectonic motion, with the tectonic movement creating an upstream channel for mantle-derived heat, so that basement heat flow values there are higher than in other regions. The model values simulated using that boundary condition match the measured values well. The simulated heat flow values show that mantle-derived heat flow migrates through the boundary of the low-resistivity, low-density anomaly to the Gonghe geothermal field, with only a small fraction

  19. Sequential optimization of a terrestrial biosphere model constrained by multiple satellite based products

    Science.gov (United States)

    Ichii, K.; Kondo, M.; Wang, W.; Hashimoto, H.; Nemani, R. R.

    2012-12-01

    Various satellite-based spatial products, such as evapotranspiration (ET) and gross primary productivity (GPP), are now produced by the integration of ground and satellite observations. Effective use of these multiple satellite-based products in terrestrial biosphere models is an important step toward a better understanding of terrestrial carbon and water cycles. However, due to the complexity of terrestrial biosphere models, with their large number of model parameters, the application of these spatial data sets in terrestrial biosphere models is difficult. In this study, we established an effective but simple framework to refine a terrestrial biosphere model, Biome-BGC, using multiple satellite-based products as constraints. We tested the framework in the monsoon Asia region covered by AsiaFlux observations. The framework is based on hierarchical analysis (Wang et al. 2009) with model parameter optimization constrained by satellite-based spatial data. The Biome-BGC model is separated into several tiers to minimize the freedom of model parameter selection and maximize independency from the whole model. For example, the snow sub-model is first optimized using the MODIS snow cover product, followed by the soil water sub-model optimized by satellite-based ET (estimated by an empirical upscaling method, the Support Vector Regression (SVR) method; Yang et al. 2007), the photosynthesis model optimized by satellite-based GPP (based on the SVR method), and the respiration and residual carbon cycle models optimized by biomass data. As a result of an initial assessment, we found that most of the default sub-models (e.g. snow, water cycle and carbon cycle) showed large deviations from remote sensing observations. However, these biases were removed by applying the proposed framework. For example, gross primary productivities were initially underestimated in boreal and temperate forests and overestimated in tropical forests. However, the parameter optimization scheme successfully reduced these biases. Our analysis

  20. Greenland ice sheet model parameters constrained using simulations of the Eemian Interglacial

    Directory of Open Access Journals (Sweden)

    A. Robinson

    2011-04-01

    Full Text Available Using a new approach to force an ice sheet model, we performed an ensemble of simulations of the Greenland Ice Sheet evolution during the last two glacial cycles, with emphasis on the Eemian Interglacial. This ensemble was generated by perturbing four key parameters in the coupled regional climate-ice sheet model and by introducing additional uncertainty in the prescribed "background" climate change. The sensitivity of the surface melt model to climate change was determined to be the dominant driver of ice sheet instability, as reflected by simulated ice sheet loss during the Eemian Interglacial period. To eliminate unrealistic parameter combinations, constraints from present-day and paleo information were applied. The constraints include (i) the diagnosed present-day surface mass balance partition between surface melting and ice discharge at the margin, (ii) the modeled present-day elevation at GRIP, and (iii) the modeled elevation reduction at GRIP during the Eemian. Using these three constraints, a total of 360 simulations with 90 different model realizations were filtered down to 46 simulations and 20 model realizations considered valid. The paleo constraint eliminated the more sensitive melt parameter values, in agreement with the surface mass balance partition assumption. The constrained simulations resulted in a range of Eemian ice loss of 0.4–4.4 m sea level equivalent, with a more likely range of about 3.7–4.4 m sea level equivalent if the GRIP δ18O isotope record can be considered an accurate proxy for the precipitation-weighted annual mean temperatures.

  1. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

    Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package Fatiando a Terra to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as the uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised to determine a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
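
    The uncertainty propagation can be reproduced in miniature with a point-mass (sphere) anomaly, for which the vertical gravity gz and its vertical gradient Gzz have closed forms. A sketch of the Monte Carlo over density and radius; the geometry and distributions are illustrative assumptions, and the full prism modelling of Fatiando a Terra is not reproduced:

      import numpy as np

      rng = np.random.default_rng(6)
      G = 6.674e-11  # gravitational constant, SI units

      def gz_and_gzz(dx, dz, mass):
          """Vertical gravity (m/s^2) and vertical gradient Gzz (1/s^2) for a
          point-mass anomaly at horizontal offset dx and depth offset dz."""
          r = np.sqrt(dx**2 + dz**2)
          gz = G * mass * dz / r**3
          gzz = G * mass * (r**2 - 3 * dz**2) / r**5
          return gz, gzz

      # Monte Carlo over an uncertain density contrast and radius of a buried
      # sphere, observed from a borehole 2500 m away horizontally.
      n = 100_000
      rho = rng.normal(300.0, 50.0, n)         # density contrast, kg/m^3
      radius = rng.normal(400.0, 40.0, n)      # sphere radius, m
      mass = rho * 4.0 / 3.0 * np.pi * radius**3
      gz, gzz = gz_and_gzz(dx=2500.0, dz=500.0, mass=mass)

      # 1 Eotvos = 1e-9 1/s^2; report the mean response and its spread.
      print("Gzz mean, std (Eotvos):", gzz.mean() / 1e-9, gzz.std() / 1e-9)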

  2. Internet gaming disorder: Inadequate diagnostic criteria wrapped in a constraining conceptual model.

    Science.gov (United States)

    Starcevic, Vladan

    2017-06-01

    Background and aims The paper "Chaos and confusion in DSM-5 diagnosis of Internet Gaming Disorder: Issues, concerns, and recommendations for clarity in the field" by Kuss, Griffiths, and Pontes (in press) critically examines the DSM-5 diagnostic criteria for Internet gaming disorder (IGD) and addresses the issue of whether IGD should be reconceptualized as gaming disorder, regardless of whether video games are played online or offline. This commentary provides additional critical perspectives on the concept of IGD. Methods The focus of this commentary is on the addiction model on which the concept of IGD is based, the nature of the DSM-5 criteria for IGD, and the inclusion of withdrawal symptoms and tolerance as the diagnostic criteria for IGD. Results The addiction framework on which the DSM-5 concept of IGD is based is not without problems and represents only one of multiple theoretical approaches to problematic gaming. The polythetic, non-hierarchical DSM-5 diagnostic criteria for IGD make the concept of IGD unacceptably heterogeneous. There is no support for maintaining withdrawal symptoms and tolerance as the diagnostic criteria for IGD without their substantial revision. Conclusions The addiction model of IGD is constraining and does not contribute to a better understanding of the various patterns of problematic gaming. The corresponding diagnostic criteria need a thorough overhaul, which should be based on a model of problematic gaming that can accommodate its disparate aspects.

  3. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    Science.gov (United States)

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved.
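
    At its core, path integration is the running vector sum of velocity, from which a home bearing and distance can be read off. A toy sketch of that computation only; the paper's anatomically constrained circuit, including its handling of holonomic motion, is far richer:

      import numpy as np

      def path_integrate(headings, speeds, dt=1.0):
          """Accumulate a home vector from heading (rad) and speed samples;
          steering home means heading opposite the net displacement."""
          x = np.sum(speeds * np.cos(headings)) * dt
          y = np.sum(speeds * np.sin(headings)) * dt
          home_direction = np.arctan2(-y, -x)
          home_distance = np.hypot(x, y)
          return home_direction, home_distance

      rng = np.random.default_rng(7)
      headings = np.cumsum(rng.normal(0.0, 0.1, 500))   # smoothly wandering heading
      speeds = np.full(500, 1.0)                        # constant ground speed
      direction, distance = path_integrate(headings, speeds)
      print("home bearing (rad):", round(direction, 2), "distance:", round(distance, 1))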

  4. Chance-constrained/stochastic linear programming model for acid rain abatement. I. Complete colinearity and noncolinearity

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, J H; McBean, E A; Farquhar, G J

    1985-01-01

    A Linear Programming model is presented for the development of acid rain abatement strategies in eastern North America. For a system comprised of 235 large controllable point sources and 83 uncontrolled area sources, it determines the least-cost method of reducing SO2 emissions to satisfy maximum wet sulfur deposition limits at 20 sensitive receptor locations. In this paper, the purely deterministic model is extended to a probabilistic form by incorporating the effects of meteorologic variability on the long-range pollutant transport processes. These processes are represented by source-receptor-specific transfer coefficients. Experiments for quantifying the spatial variability of transfer coefficients showed their distributions to be approximately lognormal with logarithmic standard deviations consistently about unity. Three methods of incorporating second-moment random variable uncertainty into the deterministic LP framework are described: Two-Stage Programming Under Uncertainty, Chance-Constrained Programming and Stochastic Linear Programming. A composite CCP-SLP model is developed which embodies the two-dimensional characteristics of transfer coefficient uncertainty. Two probabilistic formulations are described, involving complete colinearity and complete noncolinearity of the transfer coefficient covariance-correlation structure. The completely colinear and noncolinear formulations are considered extreme bounds in a meteorologic sense and yield abatement strategies of largely didactic value. Such strategies can be characterized as having excessive costs and undesirable deposition results in the completely colinear case, and the absence of a clearly defined system risk level (other than expected value) in the noncolinear formulation.

  5. Constraining Distributed Catchment Models by Incorporating Perceptual Understanding of Spatial Hydrologic Behaviour

    Science.gov (United States)

    Hutton, Christopher; Wagener, Thorsten; Freer, Jim; Han, Dawei

    2016-04-01

    and valley slopes within the catchment are used to identify behavioural models. The process of converting qualitative information into quantitative constraints forces us to evaluate the assumptions behind our perceptual understanding in order to derive robust constraints, and therefore fairly reject models and avoid type II errors. Likewise, consideration needs to be given to the commensurability problem when mapping perceptual understanding to constrain model states.

  6. Phylogenetic Trees From Sequences

    Science.gov (United States)

    Ryvkin, Paul; Wang, Li-San

    In this chapter, we review important concepts and approaches for phylogeny reconstruction from sequence data. We first cover some basic definitions and properties of phylogenetics, and briefly explain how scientists model sequence evolution and measure sequence divergence. We then discuss three major approaches for phylogenetic reconstruction: distance-based phylogenetic reconstruction, maximum parsimony, and maximum likelihood. In the third part of the chapter, we review how multiple phylogenies are compared by consensus methods and how to assess confidence using bootstrapping. At the end of the chapter are two sections that list popular software packages and additional reading.
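
    A compact illustration of the distance-based approach mentioned above: compute pairwise Jukes-Cantor distances, then cluster them with average linkage (UPGMA). The sequences are made up, and real analyses would typically use neighbour joining or likelihood methods:

      import numpy as np
      from scipy.cluster.hierarchy import average
      from scipy.spatial.distance import squareform

      def jukes_cantor(seq_a, seq_b):
          """Jukes-Cantor distance d = -3/4 ln(1 - 4p/3), with p the fraction
          of differing sites between two aligned sequences."""
          p = np.mean([a != b for a, b in zip(seq_a, seq_b)])
          return -0.75 * np.log(1 - 4.0 * p / 3.0)

      seqs = {"taxon1": "ACGTACGTACGTACGT",
              "taxon2": "ACGTACGAACGTACGT",
              "taxon3": "ACGAACTAACGGACGT",
              "taxon4": "GCTAACTAACGGACGA"}

      names = list(seqs)
      n = len(names)
      D = np.zeros((n, n))
      for i in range(n):
          for j in range(i + 1, n):
              D[i, j] = D[j, i] = jukes_cantor(seqs[names[i]], seqs[names[j]])

      # UPGMA is average-linkage hierarchical clustering on the distance matrix.
      print(average(squareform(D)))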

  7. Commitment Versus Persuasion in the Three-Party Constrained Voter Model

    Science.gov (United States)

    Mobilia, Mauro

    2013-04-01

    In the framework of the three-party constrained voter model, where voters of two radical parties ( A and B) interact with "centrists" ( C and C ζ ), we study the competition between a persuasive majority and a committed minority. In this model, A's and B's are incompatible voters that can convince centrists or be swayed by them. Here, radical voters are more persuasive than centrists, whose sub-population comprises susceptible agents C and a fraction ζ of centrist zealots C ζ . Whereas C's may adopt the opinions A and B with respective rates 1+ δ A and 1+ δ B (with δ A ≥ δ B >0), C ζ 's are committed individuals that always remain centrists. Furthermore, A and B voters can become (susceptible) centrists C with a rate 1. The resulting competition between commitment and persuasion is studied in the mean field limit and for a finite population on a complete graph. At mean field level, there is a continuous transition from a coexistence phase when ζpersuasion, here consensus is reached much slower ( ζpersuasive voters and centrists coexist when δ A > δ B , whereas all species coexist when δ A = δ B . When ζ≥Δ c and the initial density of centrists is low, one finds τ˜ln N (when N≫1). Our analytical findings are corroborated by stochastic simulations.

  8. Constrained structural dynamic model verification using free vehicle suspension testing methods

    Science.gov (United States)

    Blair, Mark A.; Vadlamudi, Nagarjuna

    1988-01-01

    Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.

  9. Constraining models of f(R) gravity with Planck and WiggleZ power spectrum data

    International Nuclear Information System (INIS)

    Dossett, Jason; Parkinson, David; Hu, Bin

    2014-01-01

    In order to explain cosmic acceleration without invoking "dark" physics, we consider f(R) modified gravity models, which replace the standard Einstein-Hilbert action in General Relativity with a higher derivative theory. We use data from the WiggleZ Dark Energy survey to probe the formation of structure on large scales which can place tight constraints on these models. We combine the large-scale structure data with measurements of the cosmic microwave background from the Planck surveyor. After parameterizing the modification of the action using the Compton wavelength parameter B0, we constrain this parameter using ISiTGR, assuming an initial non-informative log prior probability distribution of this cross-over scale. We find that the addition of the WiggleZ power spectrum provides the tightest constraints to date on B0 by an order of magnitude, giving log10(B0) < −4.07 at the 95% confidence limit. Finally, we test whether the effect of adding the lensing amplitude ALens and the sum of the neutrino masses ∑mν is able to reconcile current tensions present in these parameters, but find f(R) gravity an inadequate explanation.

  10. Ice loading model for Glacial Isostatic Adjustment in the Barents Sea constrained by GRACE gravity observations

    Science.gov (United States)

    Root, Bart; Tarasov, Lev; van der Wal, Wouter

    2014-05-01

    The global ice budget is still under discussion because the observed 120-130 m of eustatic sea level equivalent since the Last Glacial Maximum (LGM) cannot be explained by the current knowledge of land-ice melt after the LGM. One possible location for the missing ice is the Barents Sea region, which was completely covered with ice during the LGM. This is deduced from relative sea level observations on Svalbard, Novaya Zemlya and the north coast of Scandinavia. However, there are no observations in the middle of the Barents Sea that capture the post-glacial uplift. With the increased precision and longer time series of monthly gravity observations from the GRACE satellite mission, it is possible to constrain Glacial Isostatic Adjustment in the center of the Barents Sea. This study investigates the extra constraint provided by GRACE data for modeling the past ice geometry in the Barents Sea. We use CSR release 5 data from February 2003 to July 2013. The GRACE data are corrected for the past 10 years of secular decline of glacier ice on Svalbard, Novaya Zemlya and Franz Josef Land. With numerical GIA models for a radially symmetric Earth, we model the expected gravity changes and compare these with the GRACE observations after smoothing with a 250 km Gaussian filter. The comparisons show that, for the viscosity profile VM5a, ICE-5G has too strong a gravity signal compared to GRACE. The regionally calibrated ice sheet model (GLAC) of Tarasov appears to fit the amplitude of the GRACE signal. However, the GRACE data are very sensitive to the ice-melt correction, especially for Novaya Zemlya. Furthermore, the ice mass should be concentrated more toward the middle of the Barents Sea. Alternative viscosity models confirm these conclusions.

  11. Constraining the parameters of the EAP sea ice rheology from satellite observations and discrete element model

    Science.gov (United States)

    Tsamados, Michel; Heorton, Harry; Feltham, Daniel; Muir, Alan; Baker, Steven

    2016-04-01

    The new elastic-plastic anisotropic (EAP) rheology that explicitly accounts for the sub-continuum anisotropy of the sea ice cover has been implemented into the latest version of the Los Alamos sea ice model CICE. The EAP rheology is widely used in the climate modeling scientific community (e.g. the CPOM stand-alone model, the RASM high-resolution regional ice-ocean model, and the Met Office fully coupled model). Early results from sensitivity studies (Tsamados et al., 2013) have shown the potential for an improved representation of the observed main sea ice characteristics, with a substantial change in the spatial distribution of ice thickness and ice drift relative to model runs with the reference visco-plastic (VP) rheology. The model contains one new prognostic variable, the local structure tensor, which quantifies the degree of anisotropy of the sea ice, and two parameters that set the time scale of the evolution of this tensor. Observations from high-resolution satellite SAR imagery as well as numerical simulation results from a discrete element model (DEM; see Wilchinsky, 2010) have shown that individual floes can organize under external wind and thermal forcing to form an emergent isotropic sea ice state (via thermodynamic healing and thermal cracking) or an anisotropic sea ice state (via Coulombic failure lines due to shear rupture). In this work we use, for the first time in the context of sea ice research, a mathematical metric, the tensorial Minkowski functionals (Schroeder-Turk, 2010), to measure quantitatively the degree of anisotropy and alignment of the sea ice at different scales. We apply the methodology to the GlobICE Envisat satellite deformation product (www.globice.info), to a prototype modified version of GlobICE applied to Sentinel-1 Synthetic Aperture Radar (SAR) imagery, and to the DEM ice floe aggregates. By comparing these independent measurements of the sea ice anisotropy as well as its temporal evolution against the EAP model we are able to constrain the

  12. Constraining soil C cycling with strategic, adaptive action for data and model reporting

    Science.gov (United States)

    Harden, J. W.; Swanston, C.; Hugelius, G.

    2015-12-01

    Regional to global carbon assessments include a variety of models, data sets, and conceptual structures, including strategies for representing the role and capacity of soils to sequester, release, and store carbon. Traditionally, many soil carbon data sets emerged from agricultural missions focused on mapping and classifying soils to enhance and protect the production of food and fiber. More recently, soil carbon assessments have allowed for more strategic measurement to address the functional and spatially explicit role that soils play in land-atmosphere carbon exchange. While soil data sets are increasingly inter-comparable and increasingly sampled to accommodate global assessments, soils remain poorly constrained or understood with regard to their role in spatio-temporal variations in carbon exchange. A more deliberate approach to rapid improvement in our understanding involves a community-based activity that embraces both a nimble data repository and a dynamic structure for prioritization. Data input and output can be transparent and retrievable as data-derived products, while also being subjected to rigorous queries for merging and harmonization into a searchable, comprehensive, transparent database. Meanwhile, adaptive action groups can prioritize data and modeling needs that emerge through workshops, meta-data analyses or model testing. Our continual renewal of priorities should address soil processes, mechanisms, and feedbacks that significantly influence global C budgets and/or significantly impact the needs and services of regional soil resources impacted by C management. In order to refine the International Soil Carbon Network, we welcome suggestions for such groups to be led on topics such as, but not limited to, manipulation experiments, extreme climate events, post-disaster C management, past climate-soil interactions, and water-soil-carbon linkages. We also welcome ideas for a business model that can foster and promote idea and data sharing.

  13. The small heat shock proteins from Acidithiobacillus ferrooxidans: gene expression, phylogenetic analysis, and structural modeling

    Directory of Open Access Journals (Sweden)

    Ribeiro Daniela A

    2011-12-01

    Full Text Available Abstract Background Acidithiobacillus ferrooxidans is an acidophilic, chemolithoautotrophic bacterium that has been successfully used in metal bioleaching. In this study, an analysis of the A. ferrooxidans ATCC 23270 genome revealed the presence of three sHSP genes, Afe_1009, Afe_1437 and Afe_2172, that encode proteins from the HSP20 family, a class of intracellular multimers that is especially important in extremophile microorganisms. Results The expression of the sHSP genes was investigated in A. ferrooxidans cells subjected to a heat shock at 40°C for 15, 30 and 60 minutes. After 60 minutes, the gene at locus Afe_1437 was about 20-fold more highly expressed than the gene at locus Afe_2172. Bioinformatic and phylogenetic analyses showed that the sHSPs from A. ferrooxidans are likely non-paralogous proteins, and are regulated by the σ32 factor, a common transcription factor of heat shock proteins. Structural studies using homology molecular modeling indicated that the proteins encoded by Afe_1009 and Afe_1437 have a conserved α-crystallin domain and share similar structural features with the sHSP from Methanococcus jannaschii, suggesting that their biological assembly involves 24 molecules and resembles a hollow spherical shell. Conclusion We conclude that the sHSPs encoded by the Afe_1437 and Afe_1009 genes are more likely to act as molecular chaperones in the A. ferrooxidans heat shock response. In addition, the three sHSPs from A. ferrooxidans are not recent paralogs, and the Afe_1437 and Afe_1009 genes may have been acquired horizontally by A. ferrooxidans.

  14. Fast phylogenetic DNA barcoding

    DEFF Research Database (Denmark)

    Terkelsen, Kasper Munch; Boomsma, Wouter Krogh; Willerslev, Eske

    2008-01-01

    We present a heuristic approach to the DNA assignment problem based on phylogenetic inferences using constrained neighbour joining and non-parametric bootstrapping. We show that this method performs as well as the more computationally intensive full Bayesian approach in an analysis of 500 insect ... DNA sequences obtained from GenBank. We also analyse a previously published dataset of environmental DNA sequences from soil from New Zealand and Siberia, and use these data to illustrate the fact that statistical approaches to the DNA assignment problem allow for more appropriate criteria ... for determining the taxonomic level at which a particular DNA sequence can be assigned ...
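
    A short numpy sketch of the column-resampling step behind the non-parametric bootstrap mentioned above; the constrained neighbour-joining reconstruction itself is left as a hypothetical placeholder, since it depends on the tree-building code at hand:

    import numpy as np

    def bootstrap_columns(alignment, n_replicates=100, seed=1):
        """Yield bootstrap pseudo-alignments by resampling columns.

        `alignment` is an (n_taxa, n_sites) character array; each replicate
        keeps the taxa but draws n_sites columns with replacement -- the
        standard non-parametric bootstrap used to assess assignment support.
        """
        rng = np.random.default_rng(seed)
        n_sites = alignment.shape[1]
        for _ in range(n_replicates):
            yield alignment[:, rng.integers(0, n_sites, size=n_sites)]

    # Hypothetical usage, with `constrained_nj` standing in for the
    # constrained neighbour-joining step described in the abstract:
    # support = sum(query_groups_with_taxon(constrained_nj(rep))
    #               for rep in bootstrap_columns(aln)) / 100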

  15. A Kinematic Model of Slow Slip Constrained by Tremor-Derived Slip Histories in Cascadia

    Science.gov (United States)

    Schmidt, D. A.; Houston, H.

    2016-12-01

    We explore new ways to constrain the kinematic slip distributions for large slow slip events using constraints from tremor. Our goal is to prescribe one or more slip pulses that propagate across the fault and scale appropriately to satisfy the observations. Recent work (Houston, 2015) inferred a crude representative stress time history at an average point using the tidal stress history, the static stress drop, and the timing of the evolution of tidal sensitivity of tremor over several days of slip. To convert a stress time history into a slip time history, we use simulations to explore the stressing history of a small locked patch due to an approaching rupture front. We assume that the locked patch releases strain through a series of tremor bursts whose activity rate is related to the stressing history. To test whether the functional form of a slip pulse is reasonable, we assume a hypothetical slip time history (Ohnaka pulse) timed with the occurrence of tremor to create a rupture front that propagates along the fault. The duration of the rupture front for a fault patch is constrained by the observed tremor catalog for the 2010 ETS event. The slip amplitude is scaled appropriately to match the observed surface displacements from GPS. Through a forward simulation, we evaluate the ability of the tremor-derived slip history to accurately predict the pattern of surface displacements observed by GPS. We find that the temporal progression of surface displacements is well modeled by a 2-4 day slip pulse, suggesting that some of the longer slip durations typically found in time-dependent GPS inversions are biased by the temporal smoothing. However, at some locations on the fault, the tremor lingers beyond the passage of the slip pulse. A small percentage (5-10%) of the tremor appears to be activated ahead of the approaching slip pulse, and tremor asperities experience a driving stress on the order of 10 kPa/day. Tremor amplitude, rather than just tremor counts, is needed

  16. Supporting the search for the CEP location with nonlocal PNJL models constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Contrera, Gustavo A. [IFLP, UNLP, CONICET, Facultad de Ciencias Exactas, La Plata (Argentina); Gravitation, Astrophysics and Cosmology Group, FCAyG, UNLP, La Plata (Argentina); CONICET, Buenos Aires (Argentina); Grunfeld, A.G. [CONICET, Buenos Aires (Argentina); Comision Nacional de Energia Atomica, Departamento de Fisica, Buenos Aires (Argentina); Blaschke, David [University of Wroclaw, Institute of Theoretical Physics, Wroclaw (Poland); Joint Institute for Nuclear Research, Moscow Region (Russian Federation); National Research Nuclear University (MEPhI), Moscow (Russian Federation)

    2016-08-15

    We investigate the possible location of the critical endpoint in the QCD phase diagram based on nonlocal covariant PNJL models including a vector interaction channel. The form factors of the covariant interaction are constrained by lattice QCD data for the quark propagator. The comparison of our results for the pressure including the pion contribution and the scaled pressure shift ΔP/T^4 vs. T/T_c with lattice QCD results shows a better agreement when Lorentzian form factors for the nonlocal interactions and the wave function renormalization are considered. The strength of the vector coupling is used as a free parameter which influences results at finite baryochemical potential. It is used to adjust the slope of the pseudocritical temperature of the chiral phase transition at low baryochemical potential and the scaled pressure shift accessible in lattice QCD simulations. Our study, albeit presently performed at the mean-field level, supports the very existence of a critical point and favors its location within a region that is accessible in experiments at the NICA accelerator complex. (orig.)

  17. CA-Markov Analysis of Constrained Coastal Urban Growth Modeling: Hua Hin Seaside City, Thailand

    Directory of Open Access Journals (Sweden)

    Rajendra Shrestha

    2013-04-01

    Full Text Available Thailand, a developing country in Southeast Asia, is experiencing rapid development, particularly urban growth as a response to the expansion of the tourism industry. Hua Hin city provides an excellent example of an area where urbanization has flourished due to tourism. This study focuses on how the dynamic urban horizontal expansion of the seaside city of Hua Hin is constrained by the coast, thus making sustainability for this popular tourist destination—managing and planning for its local inhabitants, its visitors, and its sites—an issue. The study examines the association of land use type and land use change by integrating Geo-Information technology, a statistical model, and CA-Markov analysis for sustainable land use planning. The study finds that land use types and land use changes between 1999 and 2008 were driven by increased mobility; this trend, in turn, is closely tied to urban horizontal expansion. The changing sequences of land use type have developed from forest area to agriculture, from agriculture to grassland, then to bare land and built-up areas. Coastal urban growth has, for a decade, been expanding horizontally from a downtown center along the beach to the western area around the golf course, the southern area along the beach, the southwest grassland area, and then the northern area near the airport.
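
    A small numpy sketch of the Markov half of a CA-Markov analysis: projecting land-use composition with a transition probability matrix. The categories follow the sequence described in the abstract, but the matrix entries are hypothetical (in practice they are estimated by cross-tabulating the classified 1999 and 2008 maps), and the cellular-automaton step that allocates change spatially is omitted:

    import numpy as np

    labels = ["forest", "agriculture", "grassland", "bare/built-up"]
    p1999 = np.array([0.45, 0.30, 0.15, 0.10])   # hypothetical area fractions

    # Hypothetical transition matrix P[i, j] = Pr(class i -> class j);
    # each row sums to 1.
    P = np.array([
        [0.80, 0.15, 0.03, 0.02],
        [0.00, 0.75, 0.15, 0.10],
        [0.00, 0.00, 0.70, 0.30],
        [0.00, 0.00, 0.00, 1.00],
    ])

    p2008 = p1999 @ P            # one Markov step (1999 -> 2008)
    p2017 = p2008 @ P            # projection one further step ahead
    for name, a, b in zip(labels, p2008, p2017):
        print(f"{name:14s} 2008: {a:.3f}  2017: {b:.3f}")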

  18. Constraining the kinematics of metropolitan Los Angeles faults with a slip-partitioning model.

    Science.gov (United States)

    Daout, S; Barbot, S; Peltzer, G; Doin, M-P; Liu, Z; Jolivet, R

    2016-11-16

    Due to the limited resolution at depth of geodetic and other geophysical data, the geometry and the loading rate of the ramp-décollement faults below metropolitan Los Angeles are poorly understood. Here we complement these data by assuming conservation of motion across the Big Bend of the San Andreas Fault. Using a Bayesian approach, we constrain the geometry of the ramp-décollement system from the Mojave block to Los Angeles and propose a partitioning of the convergence with 25.5 ± 0.5 mm/yr and 3.1 ± 0.6 mm/yr of strike-slip motion along the San Andreas Fault and the Whittier Fault, with 2.7 ± 0.9 mm/yr and 2.5 ± 1.0 mm/yr of updip movement along the Sierra Madre and the Puente Hills thrusts. Incorporating conservation of motion in geodetic models of strain accumulation reduces the number of free parameters and constitutes a useful methodology to estimate the tectonic loading and seismic potential of buried fault networks.

  19. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    Science.gov (United States)

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three problems were solved for validation. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant power RFA with a firm mathematical basis and opens a pathway for achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.
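
    To see why a power boundary condition is awkward, note that once the temperature-dependent conductivity is frozen at a given iteration, the potential problem is linear, so the Joule heating scales with the square of the applied voltage. The sketch below exploits that scaling to rescale the electrode voltage until the delivered power matches the target; it is a simple illustrative alternative, under that assumption, to the paper's rigorous Lagrange-multiplier formulation, and `solve_unit_potential` is a hypothetical placeholder for the FEM solve:

    import numpy as np

    def constant_power_voltage(solve_unit_potential, sigma, p_target,
                               n_iter=20, tol=1e-6):
        """Fixed-point iteration enforcing constant delivered power.

        solve_unit_potential(sigma) solves the linearized Laplace problem
        for a unit electrode voltage and returns the Joule heating P_unit.
        Since P = V**2 * P_unit for the frozen conductivity sigma, the
        constraint P = p_target gives V = sqrt(p_target / P_unit).
        """
        V = 1.0
        for _ in range(n_iter):
            p_unit = solve_unit_potential(sigma)
            V_new = np.sqrt(p_target / p_unit)
            if abs(V_new - V) <= tol * abs(V):
                return V_new
            V = V_new
            # sigma would be updated here from the new temperature field
            # (Pennes equation solve), which is what makes iteration needed.
        return V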

  20. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    Directory of Open Access Journals (Sweden)

    J. P. Werner

    2015-03-01

    Full Text Available Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age-model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
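
    An illustrative numpy sketch of the central reweighting idea: a priori all members of an age-model ensemble are equally probable, and Bayes' rule updates those probabilities according to how well the proxy, placed on each candidate chronology, matches the climate inferred from neighbouring well-dated records. The Gaussian likelihood here is a stand-in for the full hierarchical model:

    import numpy as np

    def age_model_weights(proxy, age_models, climate_at, sigma):
        """Posterior weights for an ensemble of candidate age models.

        proxy      : (n,) proxy values in depth order
        age_models : list of (n,) candidate age vectors for those depths
        climate_at : fn(ages) -> (n,) reconstruction from well-dated records
        sigma      : proxy error standard deviation
        """
        loglik = np.array([
            -0.5 * np.sum((proxy - climate_at(ages)) ** 2) / sigma ** 2
            for ages in age_models
        ])
        w = np.exp(loglik - loglik.max())    # flat prior, stabilized
        return w / w.sum()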

  1. A Constrained 3D Density Model of the Upper Crust from Gravity Data Interpretation for Central Costa Rica

    Directory of Open Access Journals (Sweden)

    Oscar H. Lücke

    2010-01-01

    Full Text Available The map of complete Bouguer anomaly of Costa Rica shows an elongated NW-SE trending gravity low in the central region. This gravity low coincides with the geographical region known as the Cordillera Volcánica Central, which is built of geologic and morpho-tectonic units consisting of Quaternary volcanic edifices. For quantitative interpretation of the sources of the anomaly and the characterization of fluid pathways and reservoirs of arc magmatism, a constrained 3D density model of the upper crust was designed by means of forward modeling. The density model is constrained by simplified surface geology, previously published seismic tomography and P-wave velocity models stemming from wide-angle seismic refraction, as well as results from methods of direct interpretation of the gravity field obtained for this work. The model takes into account the effects and influence of subduction-related Neogene through Quaternary arc magmatism on the upper crust.
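
    Forward gravity modeling of this kind sums the effects of density bodies; the infinite-slab (Bouguer) formula, Delta_g = 2*pi*G*delta_rho*h, is the simplest such building block and gives a feel for the magnitudes involved. A small sketch with hypothetical numbers:

    import math

    G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2

    def bouguer_slab_mgal(delta_rho, thickness_m):
        """Gravity effect (mGal) of an infinite horizontal slab;
        1 mGal = 1e-5 m/s^2."""
        return 2.0 * math.pi * G * delta_rho * thickness_m / 1e-5

    # Hypothetical low-density volcaniclastic fill: -300 kg/m^3 over 2 km
    print(f"{bouguer_slab_mgal(-300.0, 2000.0):.1f} mGal")   # about -25 mGal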

  2. Using phylogenetically-informed annotation (PIA) to search for light-interacting genes in transcriptomes from non-model organisms.

    Science.gov (United States)

    Speiser, Daniel I; Pankey, M Sabrina; Zaharoff, Alexander K; Battelle, Barbara A; Bracken-Grissom, Heather D; Breinholt, Jesse W; Bybee, Seth M; Cronin, Thomas W; Garm, Anders; Lindgren, Annie R; Patel, Nipam H; Porter, Megan L; Protas, Meredith E; Rivera, Ajna S; Serb, Jeanne M; Zigler, Kirk S; Crandall, Keith A; Oakley, Todd H

    2014-11-19

    Tools for high throughput sequencing and de novo assembly make the analysis of transcriptomes (i.e. the suite of genes expressed in a tissue) feasible for almost any organism. Yet a challenge for biologists is that it can be difficult to assign identities to gene sequences, especially from non-model organisms. Phylogenetic analyses are one useful method for assigning identities to these sequences, but such methods tend to be time-consuming because of the need to re-calculate trees for every gene of interest and each time a new data set is analyzed. In response, we employed existing tools for phylogenetic analysis to produce a computationally efficient, tree-based approach for annotating transcriptomes or new genomes that we term Phylogenetically-Informed Annotation (PIA), which places uncharacterized genes into pre-calculated phylogenies of gene families. We generated maximum likelihood trees for 109 genes from a Light Interaction Toolkit (LIT), a collection of genes that underlie the function or development of light-interacting structures in metazoans. To do so, we searched protein sequences predicted from 29 fully-sequenced genomes and built trees using tools for phylogenetic analysis in the Osiris package of Galaxy (an open-source workflow management system). Next, to rapidly annotate transcriptomes from organisms that lack sequenced genomes, we repurposed a maximum likelihood-based Evolutionary Placement Algorithm (implemented in RAxML) to place sequences of potential LIT genes on to our pre-calculated gene trees. Finally, we implemented PIA in Galaxy and used it to search for LIT genes in 28 newly-sequenced transcriptomes from the light-interacting tissues of a range of cephalopod mollusks, arthropods, and cubozoan cnidarians. Our new trees for LIT genes are available on the Bitbucket public repository ( http://bitbucket.org/osiris_phylogenetics/pia/ ) and we demonstrate PIA on a publicly-accessible web server ( http://galaxy-dev.cnsi.ucsb.edu/pia/ ). Our new

  3. Measurement model and calibration experiment of over-constrained parallel six-dimensional force sensor based on stiffness characteristics analysis

    International Nuclear Information System (INIS)

    Niu, Zhi; Zhao, Yanzhi; Zhao, Tieshi; Cao, Yachao; Liu, Menghua

    2017-01-01

    An over-constrained, parallel six-dimensional force sensor has various advantages, including its ability to bear heavy loads and provide redundant force measurement information. These advantages render the sensor valuable for important aerospace applications (space docking tests, etc.). The stiffness of each component in the over-constrained structure has a considerable influence on the internal force distribution of the structure. Thus, the measurement model changes when the measurement branches of the sensor are under tensile or compressive force. This study establishes a general measurement model for an over-constrained parallel six-dimensional force sensor considering the different branch tension and compression stiffness values. Numerical calculations and analyses are performed using practical examples. Based on the parallel mechanism, an over-constrained, orthogonal structure is proposed for a six-dimensional force sensor. Hence, a prototype is designed and developed, and a calibration experiment is conducted. The measurement accuracy of the sensor is improved based on the measurement model under different branch tension and compression stiffness values. Moreover, the largest class I error is reduced from 5.81 to 2.23% full scale (FS), and the largest class II error is reduced from 3.425 to 1.871% FS. (paper)
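
    The calibration step common to such sensors can be written as a linear least-squares problem: stack the applied reference wrenches against the measured branch signals and solve for the calibration matrix. The sketch below shows that baseline; the paper's refinement, distinguishing tensile from compressive branch stiffness, is not reproduced here, and the data are synthetic:

    import numpy as np

    def calibrate(S, F):
        """Least-squares calibration matrix C such that F ~= S @ C.

        S : (n_loads, n_branches) measured branch signals
        F : (n_loads, 6) applied wrenches [Fx, Fy, Fz, Mx, My, Mz]
        """
        C, *_ = np.linalg.lstsq(S, F, rcond=None)
        return C                                  # (n_branches, 6)

    rng = np.random.default_rng(2)
    C_true = rng.standard_normal((8, 6))          # 8 measurement branches
    S = rng.standard_normal((50, 8))              # 50 calibration loads
    F = S @ C_true + 0.01 * rng.standard_normal((50, 6))
    print(np.max(np.abs(calibrate(S, F) - C_true)))   # small residual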

  4. Globally COnstrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

    be obtained. This paper presents a new approach for system modelling under partial (global) information (or the so-called Gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations ... Local function approximations concern fitting low-order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation ... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time-variant dynamic system under a prior linear time-invariant structure, where local regression fails as a result of high dimensionality.

  5. Constrained model predictive control for load-following operation of APR reactors

    International Nuclear Information System (INIS)

    Kim, Jae Hwan; Lee, Sim Won; Kim, Ju Hyun; Na, Man Gyun; Yu, Keuk Jong; Kim, Han Gon

    2012-01-01

    The load-following operation of the APR+ reactor is needed to control the power effectively using the control rods and to restrain the use of boric acid for reactivity control, allowing flexible plant operation. The axial flux distribution usually becomes unbalanced during load-following operation because of xenon-induced oscillation. Xenon has a very high absorption cross-section, and its impact on the reactor is delayed by the iodine precursor. Power maneuvering using automatic load-following operation has advantages in terms of safety and economical operation of the reactor, so the controller has to be designed efficiently. Therefore, an advanced control method that meets conditions such as automatic control, flexibility, safety, and convenience is necessary for load-following operation of the APR+ reactor. In this paper, the constrained model predictive control (MPC) method is applied to design an automatic load-following controller for the APR reactor, for integrated thermal power level and axial shape index (ASI) control. Some controllers use only the current tracking command, but MPC considers future commands in addition to the current tracking command, so it can achieve better tracking performance. Furthermore, MPC is used in many industrial process control systems. The basic concept of MPC is to solve an optimization problem over a finite future time interval at the present time and to implement the first optimal control input as the current control input. The KISPAC-1D code, which models the APR+ nuclear power plants, is interfaced to the proposed controller to verify the tracking performance of the reactor power level and ASI. It is shown that the proposed controller exhibits very fast tracking responses
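
    A toy sketch of the receding-horizon mechanics described above, for a scalar linear plant with a bounded input: optimize the input sequence over a future window, then apply only the first input. The plant, weights, and bounds are hypothetical stand-ins for the power/ASI dynamics provided by the KISPAC-1D code:

    import numpy as np
    from scipy.optimize import minimize

    def mpc_step(x0, r_future, a=0.95, b=0.05, u_max=1.0):
        """One constrained-MPC step for x[k+1] = a*x[k] + b*u[k]."""
        horizon = len(r_future)

        def cost(u):
            x, c = x0, 0.0
            for k in range(horizon):
                x = a * x + b * u[k]
                c += (x - r_future[k]) ** 2 + 1e-3 * u[k] ** 2
            return c

        res = minimize(cost, np.zeros(horizon), method="L-BFGS-B",
                       bounds=[(-u_max, u_max)] * horizon)
        return res.x[0]          # apply only the first optimal input

    # Hypothetical ramp in the demanded (normalized) power level
    print(mpc_step(x0=0.0, r_future=np.linspace(0.1, 1.0, 10)))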

  6. Efficient Constrained Local Model Fitting for Non-Rigid Face Alignment.

    Science.gov (United States)

    Lucey, Simon; Wang, Yang; Cox, Mark; Sridharan, Sridha; Cohn, Jeffery F

    2009-11-01

    Active appearance models (AAMs) have demonstrated great utility when employed for non-rigid face alignment/tracking. The "simultaneous" algorithm for fitting an AAM achieves good non-rigid face registration performance, but has poor real-time performance (2-3 fps). The "project-out" algorithm for fitting an AAM achieves faster than real-time performance (> 200 fps) but suffers from poor generic alignment performance. In this paper we introduce an extension to a discriminative method for non-rigid face registration/tracking referred to as a constrained local model (CLM). Our proposed method is able to achieve superior performance to the "simultaneous" AAM algorithm along with real-time fitting speeds (35 fps). We improve upon the canonical CLM formulation, to gain this performance, in a number of ways by employing: (i) linear SVMs as patch-experts, (ii) a simplified optimization criterion, and (iii) a composite rather than additive warp update step. Most notably, our simplified optimization criterion for fitting the CLM divides the problem of finding a single complex registration/warp displacement into that of finding N simple warp displacements. From these N simple warp displacements, a single complex warp displacement is estimated using a weighted least-squares constraint. Another major advantage of this simplified optimization stems from its ability to be parallelized, a step which we also theoretically explore in this paper. We refer to our approach for fitting the CLM as the "exhaustive local search" (ELS) algorithm. Experiments were conducted on the CMU Multi-PIE database.
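
    A compact sketch of the fusion step described above: each patch-expert proposes a local displacement with a confidence weight, and a single global warp update is the weighted least-squares solution over all landmarks. An affine warp is used here for simplicity, and the data are synthetic:

    import numpy as np

    def fuse_displacements(pts, disp, w):
        """Weighted least-squares affine warp from per-landmark displacements.

        pts  : (N, 2) current landmark positions
        disp : (N, 2) displacements proposed by the local patch-experts
        w    : (N,)  confidence of each patch response
        Returns a 2x3 affine matrix A with [x', y']^T = A @ [x, y, 1]^T.
        """
        X = np.hstack([pts, np.ones((len(pts), 1))])
        T = pts + disp
        Wsqrt = np.sqrt(w)[:, None]
        A, *_ = np.linalg.lstsq(Wsqrt * X, Wsqrt * T, rcond=None)
        return A.T

    rng = np.random.default_rng(3)
    pts = rng.uniform(0, 100, (68, 2))                       # 68 landmarks
    disp = np.array([2.0, -1.0]) + 0.3 * rng.standard_normal((68, 2))
    print(fuse_displacements(pts, disp, np.ones(68)))        # ~[I | (2,-1)]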

  7. Constraining groundwater flow model with geochemistry in the FUA and Cabril sites. Use in the ENRESA 2000 PA exercise

    International Nuclear Information System (INIS)

    Samper, J.; Carrera, J.; Bajos, C.; Astudillo, J.; Santiago, J.L.

    1999-01-01

    Hydrogeochemical activities have been a key factor in verifying and constraining the groundwater flow models developed for the safety assessment of the FUA uranium mill tailings restoration and the Cabril L/ILW disposal facility. The lessons learned at both sites will be applied to the groundwater transport modelling in the current PA exercises (ENRESA 2000). The groundwater flow model of the Cabril site, which represents a low-permeability fractured medium, was built using the TRANSIN code series developed by UPC-ENRESA. The hydrogeochemical data obtained from systematic yearly sampling and analysis campaigns were successfully applied to distinguish between local and regional flow and between young and old groundwater. The salinity content, mainly the chloride anion content, was the most critical hydrogeochemical datum for constraining the groundwater flow model. (author)

  8. Potential pitfalls of modelling ribosomal RNA data in phylogenetic tree reconstruction: evidence from case studies in the Metazoa.

    Science.gov (United States)

    Letsch, Harald O; Kjer, Karl M

    2011-05-27

    Failure to account for covariation patterns in helical regions of ribosomal RNA (rRNA) genes has the potential to misdirect the estimation of the phylogenetic signal of the data. Furthermore, the extremes of length variation among taxa, combined with regional substitution rate variation, can mislead the alignment of rRNA sequences and thus distort subsequent tree reconstructions. However, recent developments in phylogenetic methodology now allow a comprehensive integration of secondary structures in alignment and tree reconstruction analyses based on rRNA sequences, which has been shown to correct some of these problems. Here, we explore the potential of RNA substitution models and the interactions of specific model setups with the inherent pattern of covariation in rRNA stems and substitution rate variation among loop regions. We found an explicit impact of RNA substitution models on tree reconstruction analyses. The application of specific RNA models in tree reconstructions is hampered by interaction between the appropriate modelling of covarying sites in stem regions and excessive homoplasy in some loop regions. RNA models often failed to recover reasonable trees when single-stranded regions are excessively homoplastic, because these regions contribute a greater proportion of the data when covarying sites are essentially downweighted. In this context, the RNA6A model outperformed all other models, including the more parametrized RNA7 and RNA16 models. Our results depict a trade-off between increased accuracy in the estimation of interdependencies in helical regions and the risk of magnifying positions lacking phylogenetic signal. We can therefore conclude that caution is warranted when applying rRNA covariation models, and suggest that loop regions be independently screened for phylogenetic signal and eliminated when they are indistinguishable from random noise. In addition to covariation and homoplasy, other factors, like non-stationarity of substitution rates

  9. Constraining supersymmetric models using Higgs physics, precision observables and direct searches

    International Nuclear Information System (INIS)

    Zeune, Lisa

    2014-08-01

    We present various complementary possibilities to exploit experimental measurements in order to test and constrain supersymmetric (SUSY) models. Direct searches for SUSY particles have not resulted in any signal so far, and limits on the SUSY parameter space have been set. Measurements of the properties of the observed Higgs boson at ≈ 126 GeV as well as of the W boson mass (M_W) can provide valuable indirect constraints, supplementing the ones from direct searches. This thesis is divided into three major parts: In the first part we present the currently most precise prediction for M_W in the Minimal Supersymmetric Standard Model (MSSM) with complex parameters and in the Next-to-Minimal Supersymmetric Standard Model (NMSSM). The evaluation includes the full one-loop result and all relevant available higher-order corrections of Standard Model (SM) and SUSY type. We perform a detailed scan over the MSSM parameter space, taking into account the latest experimental results, including the observation of a Higgs signal. We find that the current measurements for M_W and the top quark mass (m_t) slightly favour a non-zero SUSY contribution. The impact of different SUSY sectors on the prediction of M_W as well as the size of the higher-order SUSY corrections are analysed both in the MSSM and the NMSSM. We investigate the genuine NMSSM contribution from the extended Higgs and neutralino sectors and highlight differences between the M_W predictions in the two SUSY models. In the second part of the thesis we discuss possible interpretations of the observed Higgs signal in SUSY models. The properties of the observed Higgs boson are compatible with the SM so far, but many other interpretations are also possible. Performing scans over the relevant parts of the MSSM and the NMSSM parameter spaces and applying relevant constraints from Higgs searches, flavour physics and electroweak measurements, we find that a Higgs boson at ≈ 126 GeV, which decays into two photons, can in

  10. Time-constrained mother and expanding market: emerging model of under-nutrition in India

    Directory of Open Access Journals (Sweden)

    S. Chaturvedi

    2016-07-01

    Full Text Available Abstract Background Persistent high levels of under-nutrition in India despite economic growth continue to challenge political leadership and policy makers at the highest level. The present inductive enquiry was conducted to map the perceptions of mothers and other key stakeholders, to identify emerging drivers of childhood under-nutrition. Methods We conducted a multi-centric qualitative investigation in six empowered action group states of India. The study sample included 509 in-depth interviews with mothers of undernourished and normally nourished children, policy makers, district level managers, implementers and facilitators. Sixty-six focus group discussions and 72 non-formal interactions were conducted in two rounds with primary caretakers of undernourished children, Anganwadi Workers and Auxiliary Nurse Midwives. Results Based on the perceptions of the mothers and other key stakeholders, a model evolved inductively showing core themes as drivers of under-nutrition. The most forceful emerging themes were: the multitasking, time-constrained mother with dwindling family support; fragile food security or seasonal food paucity; a child-targeted market with wide availability and consumption of ready-to-eat market food items; rising non-food expenditure, in the context of rising food prices; inadequate and inappropriate feeding; delayed recognition of under-nutrition and delayed care seeking; and inadequate responsiveness of the health care system and Integrated Child Development Services (ICDS). The study emphasized that the persistence of child malnutrition in India is also tied closely to the high workload and consequent time constraint of mothers who are increasingly pursuing income-generating activities and enrolled in the paid labour force, without robust institutional support for childcare. Conclusion The emerging framework needs to be further tested through mixed and multiple method research approaches to quantify the contribution of time limitation of

  11. A Maximum Parsimony Model to Reconstruct Phylogenetic Network in Honey Bee Evolution

    OpenAIRE

    Usha Chouhan; K. R. Pardasani

    2007-01-01

    Phylogenies, the evolutionary histories of groups of species, are among the most widely used tools throughout the life sciences, as well as objects of research within systematics and evolutionary biology. Every phylogenetic analysis produces trees. These trees represent the evolutionary histories of many groups of organisms, but they fall short for bacteria, due to horizontal gene transfer, and for plants, due to the process of hybridization. The process of gene transfer in bacteria and hyb...

  12. Robust and Efficient Constrained DFT Molecular Dynamics Approach for Biochemical Modeling

    Czech Academy of Sciences Publication Activity Database

    Řezáč, Jan; Levy, B.; Demachy, I.; de la Lande, A.

    2012-01-01

    Roč. 8, č. 2 (2012), s. 418-427 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords: constrained density functional theory * electron transfer * density fitting Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 5.389, year: 2012

  13. GRACE gravity data help constraining seismic models of the 2004 Sumatran earthquake

    Science.gov (United States)

    Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.

    2011-10-01

    The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models, which include sea level feedback in a new self-consistent way and are designed to compute gravitational perturbations due to volume changes separately, we are able to prove that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient, that is, the ratio of the negative gravity anomaly over the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided

  14. Constraining performance assessment models with tracer test results: a comparison between two conceptual models

    Science.gov (United States)

    McKenna, Sean A.; Selroos, Jan-Olof

    Tracer tests are conducted to ascertain solute transport parameters of a single rock feature over a 5-m transport pathway. Two different conceptualizations of double-porosity solute transport provide estimates of the tracer breakthrough curves. One of the conceptualizations (single-rate) employs a single effective diffusion coefficient in a matrix with infinite penetration depth. However, the tracer retention between different flow paths can vary as the ratio of flow-wetted surface to flow rate differs between the path lines. The other conceptualization (multirate) employs a continuous distribution of multiple diffusion rate coefficients in a matrix with variable, yet finite, capacity. Application of these two models with the parameters estimated from the tracer test breakthrough curves produces transport results that differ by orders of magnitude in peak concentration and time to peak concentration at the performance assessment (PA) time and length scales (100,000 years and 1,000 m). These differences are examined by calculating the time limits for the diffusive capacity to act as an infinite medium. These limits are compared across both conceptual models and also against characteristic times for diffusion at both the tracer test and PA scales. Additionally, the differences between the models are examined by re-estimating parameters for the multirate model from the traditional double-porosity model results at the PA scale. Results indicate that for each model the amount of the diffusive capacity that acts as an infinite medium over the specified time scale explains the differences between the model results, and that tracer tests alone cannot provide reliable estimates of transport parameters for the PA scale. Results of Monte Carlo runs of the transport models with varying travel times and path lengths show consistent results between models and suggest that the variation in flow-wetted surface to flow rate along path lines is insignificant relative to variability in

  15. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

    Ensemble modeling is a promising approach for obtaining robust predictions and coarse-grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints, as well as on the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near-optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets, while simultaneously estimating parameter sets that performed well on each of the individual objective functions. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
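
    JuPOETs itself is written in Julia, but the Pareto-dominance test at the heart of such multiobjective ranking is easy to state in a few lines of Python; the simulated-annealing wrapper and the biochemical objectives are omitted:

    import numpy as np

    def dominates(e1, e2):
        """True if error vector e1 Pareto-dominates e2 (no objective worse,
        at least one strictly better; lower error is better)."""
        e1, e2 = np.asarray(e1), np.asarray(e2)
        return bool(np.all(e1 <= e2) and np.any(e1 < e2))

    def pareto_front(errors):
        """Indices of the non-dominated members of an ensemble."""
        return [i for i, e in enumerate(errors)
                if not any(dominates(f, e)
                           for j, f in enumerate(errors) if j != i)]

    # Hypothetical 2-objective errors for 5 candidate parameter sets
    errs = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (1.5, 3.5)]
    print(pareto_front(errs))      # -> [0, 1, 2]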

  16. Taylor O(h³) Discretization of ZNN Models for Dynamic Equality-Constrained Quadratic Programming With Application to Manipulators.

    Science.gov (United States)

    Liao, Bolin; Zhang, Yunong; Jin, Long

    2016-02-01

    In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed Taylor-type discrete-time ZNNK and Taylor-type discrete-time ZNNU models) are then proposed and discussed to perform online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called Euler-type discrete-time ZNNK and Euler-type discrete-time ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, Euler-type discrete-time ZNN models, and Newton iteration have the patterns of O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including the application examples, are carried out, whose results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with the Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.

  17. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus

    2013-07-01

    In terrestrial biosphere models, key biochemical controls on carbon uptake by vegetation canopies are typically assigned fixed literature-based values for broad categories of vegetation types, although in reality significant spatial and temporal variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite-based retrievals of leaf chlorophyll for estimating leaf photosynthetic capacity and for constraining model simulations of water and carbon fluxes. © 2013 IEEE.

  18. Constraining local 3-D models of the saturated-zone, Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Barr, G.E.; Shannon, S.A.

    1994-01-01

    A qualitative three-dimensional analysis of the saturated zone flow system was performed for an 8 km x 8 km region including the potential Yucca Mountain repository site. Certain recognized geologic features of unknown hydraulic properties were introduced to assess the general response of the flow field to these features. Two of these features, the Solitario Canyon fault and the proposed fault in Drill Hole Wash, appear to constrain flow and allow calibration

  19. An improved model for whole genome phylogenetic analysis by Fourier transform.

    Science.gov (United States)

    Yin, Changchuan; Yau, Stephen S-T

    2015-10-07

    DNA sequence similarity comparison is one of the major steps in computational phylogenetic studies. The sequence comparison of closely related DNA sequences and genomes is usually performed by multiple sequence alignments (MSA). While the MSA method is accurate for some types of sequences, it may produce incorrect results when DNA sequences have undergone rearrangements, as in many bacterial and viral genomes, and it is limited by its computational complexity when comparing large volumes of data. Previously, we proposed an alignment-free method that exploits the full information content of DNA sequences by Discrete Fourier Transform (DFT), but still with some limitations. Here, we present a significantly improved method for the similarity comparison of DNA sequences by DFT. In this method, we map DNA sequences into 2-dimensional (2D) numerical sequences and then apply DFT to transform the 2D numerical sequences into the frequency domain. In the 2D mapping, the nucleotide composition of a DNA sequence is a determinant factor, and the 2D mapping reduces the nucleotide composition bias in the distance measure, thus improving the similarity measure of DNA sequences. To compare the DFT power spectra of DNA sequences with different lengths, we propose an improved even-scaling algorithm that extends shorter DFT power spectra to the longest length of the underlying sequences. After the DFT power spectra are evenly scaled, the spectra are in the same dimensionality of the Fourier frequency space, and the Euclidean distances of the full Fourier power spectra of the DNA sequences are used as the dissimilarity metrics. The improved DFT method, with computational performance increased by the 2D numerical representation, is applicable to DNA sequences of any length range. We assess the accuracy of the improved DFT similarity measure in hierarchical clustering of different DNA sequences including simulated and real datasets. The method yields accurate and reliable phylogenetic trees
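
    A condensed numpy sketch of the pipeline: numeric mapping, FFT power spectrum, even scaling to a common length, then Euclidean distance. The 2D encoding below (purine/pyrimidine on one axis, A,T vs. C,G on the other) is a hypothetical stand-in for the paper's exact mapping, and linear interpolation stands in for its even-scaling algorithm:

    import numpy as np

    CODE = {"A": (1, 1), "G": (1, -1), "C": (-1, -1), "T": (-1, 1)}

    def power_spectrum(seq):
        x = np.array([CODE[b] for b in seq], dtype=float)   # (L, 2)
        return (np.abs(np.fft.fft(x, axis=0)) ** 2).sum(axis=1)

    def scale_to(ps, m):
        """Evenly scale a length-n spectrum to length m."""
        n = len(ps)
        return np.interp(np.linspace(0.0, n - 1.0, m), np.arange(n), ps)

    def dft_distance(s1, s2):
        p1, p2 = power_spectrum(s1), power_spectrum(s2)
        m = max(len(p1), len(p2))
        return np.linalg.norm(scale_to(p1, m) - scale_to(p2, m))

    print(dft_distance("ACGTACGTAAGG", "ACGTACGTAAGGT"))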

  20. The phylogenetic likelihood library.

    Science.gov (United States)

    Flouri, T; Izquierdo-Carrasco, F; Darriba, D; Aberer, A J; Nguyen, L-T; Minh, B Q; Von Haeseler, A; Stamatakis, A

    2015-03-01

    We introduce the Phylogenetic Likelihood Library (PLL), a highly optimized application programming interface for developing likelihood-based phylogenetic inference and postanalysis software. The PLL implements appropriate data structures and functions that allow users to quickly implement common, error-prone, and labor-intensive tasks, such as likelihood calculations, model parameter and branch length optimization, and tree space exploration. The highly optimized and parallelized implementation of the phylogenetic likelihood function and a thorough documentation provide a framework for rapid development of scalable parallel phylogenetic software. Using two likelihood-based phylogenetic codes as examples, we show that the PLL improves the sequential performance of current software by a factor of 2-10 while requiring only 1 month of programming time for integration. We show that, when numerical scaling for preventing floating point underflow is enabled, the double precision likelihood calculations in the PLL are up to 1.9 times faster than those in BEAGLE. On an empirical DNA dataset with 2000 taxa the AVX version of PLL is 4 times faster than BEAGLE (scaling enabled and required). The PLL is available at http://www.libpll.org under the GNU General Public License (GPL). © The Author(s) 2014. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  1. In Silico Phylogenetic Analysis and Molecular Modelling Study of 2-Haloalkanoic Acid Dehalogenase Enzymes from Bacterial and Fungal Origin

    Directory of Open Access Journals (Sweden)

    Raghunath Satpathy

    2016-01-01

    Full Text Available 2-Haloalkanoic acid dehalogenase enzymes have a broad range of applications, from bioremediation to chemical synthesis of useful compounds, and are widely distributed in fungi and bacteria. In the present study, a total of 81 full-length protein sequences of 2-haloalkanoic acid dehalogenase from bacteria and fungi were retrieved from the NCBI database. Sequence analyses such as multiple sequence alignment (MSA), conserved motif identification, computation of amino acid composition, and phylogenetic tree construction were performed on these primary sequences. From the MSA analysis, it was observed that the sequences share conserved lysine (K) and aspartate (D) residues. The phylogenetic tree also indicated a subcluster comprising both fungal and bacterial species. Because no experimental 3D structure of a fungal 2-haloalkanoic acid dehalogenase is available in the PDB, molecular modelling was performed for the enzymes from both fungal and bacterial sources present in the subcluster. Further structural analysis revealed a common evolutionary topology shared between the fungal and bacterial enzymes. Studies of the buried amino acids showed highly conserved Leu and Ser in the core, despite variation in their amino acid percentages. Additionally, a surface-exposed tryptophan was conserved in all of the selected models.

  2. Analysis of the Spatial Variation of Network-Constrained Phenomena Represented by a Link Attribute Using a Hierarchical Bayesian Model

    Directory of Open Access Journals (Sweden)

    Zhensheng Wang

    2017-02-01

    Full Text Available The spatial variation of geographical phenomena is a classical problem in spatial data analysis and can provide insight into underlying processes. Traditional exploratory methods mostly depend on the planar distance assumption, but many spatial phenomena are constrained to a subset of Euclidean space. In this study, we apply a method based on a hierarchical Bayesian model to analyse the spatial variation of network-constrained phenomena represented by a link attribute, in conjunction with two experiments based on a simplified hypothetical network and a complex road network in Shenzhen that includes 4212 urban facility points of interest (POIs) for leisure activities. Then, the methods named local indicators of network-constrained clusters (LINCS) are applied to explore local spatial patterns in the given network space. The proposed method is designed for phenomena that are represented by attribute values of network links and is capable of removing part of the random variability resulting from small-sample estimation. The effects of spatial dependence and the base distribution are also considered in the proposed method, which could be applied in the fields of urban planning and safety research.

  3. Modeling of Passive Constrained Layer Damping as Applied to a Gun Tube

    Directory of Open Access Journals (Sweden)

    Margaret Z. Kiehl

    2001-01-01

    Full Text Available We study the damping effects of a cantilever beam system, consisting of a gun tube wrapped with a constrained viscoelastic polymer, on terrain-induced vibrations. A time domain solution to the forced motion of this system is developed using the GHM (Golla-Hughes-McTavish) method to incorporate the viscoelastic properties of the polymer. An impulse load is applied at the free end and the tip deflection of the cantilevered beam system is determined. The resulting GHM equations are then solved in MATLAB by transformation to the state-space domain.

  4. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    Science.gov (United States)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/-1.6)% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. Chempy could be a powerful tool to confront predictions from stellar

  5. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    Science.gov (United States)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution. Possible deviations in solutions caused by irrational parameter assumptions were thereby avoided. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
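
    For a chance constraint with a log-normal right-hand side, the deterministic equivalent follows directly from the log-normal quantile function: Pr(a.x <= B) >= alpha with B ~ LogNormal(mu, sigma) becomes a.x <= exp(mu + sigma * z), where z is the (1 - alpha) standard-normal quantile. A short sketch with hypothetical parameters:

    import numpy as np
    from scipy.stats import norm

    def lognormal_rhs(mu, sigma, alpha):
        """Deterministic right-hand side equivalent to
        Pr(a @ x <= B) >= alpha, with B ~ LogNormal(mu, sigma)."""
        return np.exp(mu + sigma * norm.ppf(1.0 - alpha))

    # Hypothetical allowable pollutant load (mu=2, sigma=0.5 on log scale):
    for alpha in (0.90, 0.95, 0.99):
        print(f"alpha={alpha:.2f}  rhs={lognormal_rhs(2.0, 0.5, alpha):.2f}")
    # Higher required reliability -> tighter (smaller) deterministic bound.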

  6. U(1) x U(1) x U(1) symmetry of the Kimura 3ST model and phylogenetic branching processes

    International Nuclear Information System (INIS)

    Bashford, J D; Jarvis, P D; Sumner, J G; Steel, M A

    2004-01-01

    An analysis of the Kimura 3ST model of DNA sequence evolution is given on the basis of its continuous Lie symmetries. The rate matrix commutes with a U(1) x U(1) x U(1) phase subgroup of the group GL(4) of 4 x 4 invertible complex matrices acting on a linear space spanned by the four nucleic acid base letters. The diagonal 'branching operator' representing speciation is defined, and shown to intertwine the U(1) x U(1) x U(1) action. Using the intertwining property, a general formula for the probability density on the leaves of a binary tree under the Kimura model is derived, which is shown to be equivalent to established phylogenetic spectral transform methods. (letter to the editor)
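
    For reference, a LaTeX rendering of the Kimura 3ST rate matrix in the base ordering (A, G, C, T), using the common convention that α is the transition rate and β, γ the two transversion classes (the paper's own parameter labels may differ); its Klein four-group pattern is what produces the U(1) x U(1) x U(1) symmetry discussed above:

    \[
    Q =
    \begin{pmatrix}
      -(\alpha+\beta+\gamma) & \alpha & \beta & \gamma \\
      \alpha & -(\alpha+\beta+\gamma) & \gamma & \beta \\
      \beta & \gamma & -(\alpha+\beta+\gamma) & \alpha \\
      \gamma & \beta & \alpha & -(\alpha+\beta+\gamma)
    \end{pmatrix}
    \]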

  7. A Geometrically-Constrained Mathematical Model of Mammary Gland Ductal Elongation Reveals Novel Cellular Dynamics within the Terminal End Bud.

    Directory of Open Access Journals (Sweden)

    Ingrid Paine

    2016-04-01

    Full Text Available Mathematics is often used to model biological systems. In mammary gland development, mathematical modeling has been limited to acinar and branching morphogenesis and breast cancer, without reference to normal duct formation. We present a model of ductal elongation that exploits the geometrically-constrained shape of the terminal end bud (TEB), the growing tip of the duct, and incorporates morphometrics and region-specific proliferation and apoptosis rates. Iterative model refinement and behavior analysis, compared with biological data, indicated that the traditional metrics used to evaluate ductal elongation rate, nipple-to-ductal-front distance or percent fat pad filled, can be misleading, as they disregard branching events that can reduce their magnitude. Further, model-driven investigations of the fates of specific TEB cell types confirmed migration of cap cells into the body cell layer, but showed their subsequent preferential elimination by apoptosis, thus minimizing their contribution to the luminal lineage and the mature duct.

  8. Evolution in totally constrained models: Schrödinger vs. Heisenberg pictures

    Science.gov (United States)

    Olmedo, Javier

    2016-06-01

    We study the relation between two evolution pictures that are currently considered for totally constrained theories. Both descriptions are based on Rovelli’s evolving constants approach, where one identifies a (possibly local) degree of freedom of the system as an internal time. This method is well understood classically in several situations. The purpose of this paper is to further analyze this approach at the quantum level. Concretely, we will compare the (Schrödinger-like) picture where the physical states evolve in time with the (Heisenberg-like) picture in which one defines parametrized observables (or evolving constants of the motion). We will show that in the particular situations considered in this paper (the parametrized relativistic particle and a spatially flat homogeneous and isotropic spacetime coupled to a massless scalar field) both descriptions are equivalent. We will finally comment on possible issues and on the genericness of the equivalence between both pictures.

  9. Exploring the biological consequences of conformational changes in aspartame models containing constrained analogues of phenylalanine.

    Science.gov (United States)

    Mollica, Adriano; Mirzaie, Sako; Costante, Roberto; Carradori, Simone; Macedonio, Giorgia; Stefanucci, Azzurra; Dvoracsko, Szabolcs; Novellino, Ettore

    2016-12-01

    The dipeptide aspartame (Asp-Phe-OMe) is a sweetener widely used by the food industry in place of sucrose. 2',6'-Dimethyltyrosine (DMT) and 2',6'-dimethylphenylalanine (DMP) are two synthetic constrained analogues of phenylalanine, with limited freedom in χ-space due to the methyl groups at positions 2' and 6' of the aromatic ring. These residues have been shown to increase the activity of opioid peptides, such as endomorphins, by improving binding to the opioid receptors. In this work, DMT and DMP have been synthesized following a diketopiperazine-mediated route, and the corresponding aspartame derivatives (Asp-DMT-OMe and Asp-DMP-OMe) have been evaluated in vivo and in silico for their activity as synthetic sweeteners.

  10. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  11. Reduction of false positives in the detection of architectural distortion in mammograms by using a geometrically constrained phase portrait model

    International Nuclear Information System (INIS)

    Ayres, Fabio J.; Rangayyan, Rangaraj M.

    2007-01-01

    Objective One of the commonly missed signs of breast cancer is architectural distortion. We have developed techniques for the detection of architectural distortion in mammograms, based on the analysis of oriented texture through the application of Gabor filters and a linear phase portrait model. In this paper, we propose constraining the shape of the general phase portrait model as a means to reduce the false-positive rate in the detection of architectural distortion. Material and methods The methods were tested with one set of 19 cases of architectural distortion and 41 normal mammograms, and with another set of 37 cases of architectural distortion. Results Sensitivity rates of 84% with 4.5 false positives per image and 81% with 10 false positives per image were obtained for the two sets of images. Conclusion The adoption of a constrained phase portrait model with a symmetric matrix and the incorporation of its condition number in the analysis resulted in a reduction in the false-positive rate in the detection of architectural distortion. The proposed techniques, dedicated to the detection and localization of architectural distortion, should lead to efficient detection of early signs of breast cancer. (orig.)
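
    As an illustration of the constrained model (a sketch on synthetic data, not the authors' implementation), one can fit a linear phase portrait v = A p + b to an orientation field by least squares, symmetrize A as the shape constraint, and use its condition number to reject degenerate fits; the threshold used here is arbitrary.

```python
# Sketch: least-squares linear phase portrait with a symmetric-matrix
# constraint; the condition number screens false positives. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))              # sample coordinates
A_true = np.array([[1.0, 0.2], [0.2, 0.6]])
b_true = np.array([0.1, -0.3])
vel = pts @ A_true.T + b_true + 0.01 * rng.standard_normal((200, 2))

X = np.hstack([pts, np.ones((200, 1))])              # design matrix [p, 1]
coef, *_ = np.linalg.lstsq(X, vel, rcond=None)       # rows: Ax, Ay, b
A_fit = coef[:2].T
A_sym = 0.5 * (A_fit + A_fit.T)                      # constrained model
cond = np.linalg.cond(A_sym)
print(f"condition number = {cond:.2f}",
      "-> keep" if cond < 10 else "-> reject as a false positive")
```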

  12. Molecular phylogenetics and comparative modeling of HEN1, a methyltransferase involved in plant microRNA biogenesis

    Directory of Open Access Journals (Sweden)

    Obarska Agnieszka

    2006-01-01

    Full Text Available Abstract Background Recently, HEN1 protein from Arabidopsis thaliana was discovered as an essential enzyme in plant microRNA (miRNA) biogenesis. HEN1 transfers a methyl group from S-adenosylmethionine to the 2'-OH or 3'-OH group of the last nucleotide of miRNA/miRNA* duplexes produced by the nuclease Dicer. Previously it was found that HEN1 possesses a Rossmann-fold methyltransferase (RFM) domain and a long N-terminal extension including a putative double-stranded RNA-binding motif (DSRM). However, little is known about the details of the structure and mechanism of action of this enzyme, or about its phylogenetic origin. Results Extensive database searches were carried out to identify orthologs and close paralogs of HEN1. Based on the multiple sequence alignment, a phylogenetic tree of the HEN1 family was constructed. The fold-recognition approach was used to identify related methyltransferases with experimentally solved structures and to guide the homology modeling of the HEN1 catalytic domain. Additionally, we identified a La-like predicted RNA-binding domain located C-terminally to the DSRM domain and a domain with a peptidyl-prolyl cis/trans isomerase (PPIase) fold, but without the conserved PPIase active site, located N-terminally to the catalytic domain. Conclusion The bioinformatics analysis revealed that the catalytic domain of HEN1 is not closely related to any known RNA:2'-OH methyltransferases (e.g. to the RrmJ/fibrillarin superfamily), but rather to small-molecule methyltransferases. The structural model was used as a platform to identify the putative active site and substrate-binding residues of HEN1 and to propose its mechanism of action.

  13. phangorn: phylogenetic analysis in R.

    Science.gov (United States)

    Schliep, Klaus Peter

    2011-02-15

    phangorn is a package for phylogenetic reconstruction and analysis in the R language. Previously it was only possible to estimate phylogenetic trees with distance methods in R. phangorn now offers the possibility of reconstructing phylogenies with distance-based methods, maximum parsimony or maximum likelihood (ML), and of performing Hadamard conjugation. Extending the general ML framework, the package provides the possibility of estimating mixture and partition models. Furthermore, phangorn offers several functions for comparing trees, phylogenetic models or splits, simulating character data, and performing congruence analyses. phangorn can be obtained through the CRAN homepage http://cran.r-project.org/web/packages/phangorn/index.html. phangorn is licensed under GPL 2.
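
    phangorn itself is R software; as a language-neutral sketch of the distance-based step it implements, the toy example below computes Jukes-Cantor corrected distances for a hypothetical four-taxon alignment and clusters them with UPGMA (average linkage).

```python
# Toy distance-based phylogenetics: Jukes-Cantor distances plus UPGMA.
# The alignment is invented; this mirrors, in Python, one workflow that
# phangorn offers in R.
import numpy as np
from scipy.cluster.hierarchy import average
from scipy.spatial.distance import squareform

seqs = {"t1": "ACGTACGTAC", "t2": "ACGTACGTTC",
        "t3": "ACGAACGTTA", "t4": "TCGAACCTTA"}
names = list(seqs)

def jc_dist(s1, s2):
    # Jukes-Cantor correction: d = -3/4 ln(1 - 4p/3), p = mismatch fraction
    p = sum(x != y for x, y in zip(s1, s2)) / len(s1)
    return -0.75 * np.log(1.0 - 4.0 * p / 3.0)

n = len(names)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = jc_dist(seqs[names[i]], seqs[names[j]])

linkage = average(squareform(D))   # UPGMA tree on the JC distance matrix
print(linkage)                     # merge order and heights
```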

  14. Estimation of microbial respiration rates in groundwater by geochemical modeling constrained with stable isotopes

    International Nuclear Information System (INIS)

    Murphy, E.M.

    1998-01-01

    Changes in geochemistry and stable isotopes along a well-established groundwater flow path were used to estimate in situ microbial respiration rates in the Middendorf aquifer in the southeastern United States. Respiration rates were determined for individual terminal electron acceptors including O₂, MnO₂, Fe³⁺, and SO₄²⁻. The extent of biotic reactions was constrained by the fractionation of stable isotopes of carbon and sulfur. Sulfur isotopes and the presence of sulfur-oxidizing microorganisms indicated that sulfate is produced through the oxidation of reduced sulfur species in the aquifer and not by the dissolution of gypsum, as previously reported. The respiration rates varied along the flow path as the groundwater transitioned from primarily oxic to anoxic conditions. Iron-reducing microorganisms were the largest contributors to the oxidation of organic matter along the portion of the groundwater flow path investigated in this study. The transition zone between oxic and anoxic groundwater contained a wide range of terminal electron acceptors and showed the greatest diversity and numbers of culturable microorganisms and the highest respiration rates. A comparison of respiration rates measured from core samples and pumped groundwater suggests that variability in respiration rates may often reflect the measurement scales, both in the sample volume and in the time frame over which the respiration measurement is averaged. Chemical heterogeneity may create a wide range of respiration rates when the scale of the observation is below the scale of the heterogeneity.
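
    The isotope constraint can be summarized schematically (a hedged toy calculation, not the paper's model): under closed-system Rayleigh fractionation the isotopic composition of the residual pool tracks the fraction consumed, so measured delta values bound the extent of the microbial reaction. All numbers below are assumed.

```python
# Rayleigh fractionation sketch: d34S of residual sulfate vs. fraction
# remaining, the kind of relation used to constrain biotic reaction extent.
import numpy as np

delta0 = 5.0     # assumed initial d34S of the sulfate pool (permil)
eps = -30.0      # assumed kinetic fractionation factor (permil)

for f in (1.0, 0.8, 0.5, 0.2):            # fraction of sulfate remaining
    delta = delta0 + eps * np.log(f)      # Rayleigh: d ~ d0 + eps * ln(f)
    print(f"f = {f:.1f}  d34S(residual) = {delta:6.1f} permil")
```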

  15. An evolutionary model-based algorithm for accurate phylogenetic breakpoint mapping and subtype prediction in HIV-1.

    Directory of Open Access Journals (Sweden)

    Sergei L Kosakovsky Pond

    2009-11-01

    Full Text Available Genetically diverse pathogens (such as Human Immunodeficiency virus type 1, HIV-1) are frequently stratified into phylogenetically or immunologically defined subtypes for classification purposes. Computational identification of such subtypes is helpful in surveillance, epidemiological analysis and detection of novel variants, e.g., circulating recombinant forms in HIV-1. A number of conceptually and technically different techniques have been proposed for determining the subtype of a query sequence, but there is no universally optimal approach. We present a model-based phylogenetic method for automatically subtyping an HIV-1 (or other viral or bacterial) sequence, mapping the location of breakpoints and assigning parental sequences in recombinant strains, as well as computing confidence levels for the inferred quantities. Our Subtype Classification Using Evolutionary ALgorithms (SCUEAL) procedure is shown to perform very well in a variety of simulation scenarios, runs in parallel when multiple sequences are being screened, and matches or exceeds the performance of existing approaches on typical empirical cases. We applied SCUEAL to all available polymerase (pol) sequences from two large databases, the Stanford Drug Resistance database and the UK HIV Drug Resistance Database. Comparing with subtypes which had previously been assigned revealed that a minor but substantial (approximately 5%) fraction of pure subtype sequences may in fact be within- or inter-subtype recombinants. A free implementation of SCUEAL is provided as a module for the HyPhy package and the Datamonkey web server. Our method is especially useful when an accurate automatic classification of an unknown strain is desired, and is positioned to complement and extend faster but less accurate methods. Given the increasingly frequent use of HIV subtype information in studies focusing on the effect of subtype on treatment, clinical outcome, pathogenicity and vaccine design, the importance

  16. CONSTRAINING THE GRB-MAGNETAR MODEL BY MEANS OF THE GALACTIC PULSAR POPULATION

    Energy Technology Data Exchange (ETDEWEB)

    Rea, N. [Anton Pannekoek Institute for Astronomy, University of Amsterdam, Postbus 94249, NL-1090 GE Amsterdam (Netherlands); Gullón, M.; Pons, J. A.; Miralles, J. A. [Departament de Fisica Aplicada, Universitat d’Alacant, Ap. Correus 99, E-03080 Alacant (Spain); Perna, R. [Department of Physics and Astronomy, Stony Brook University, Stony Brook, NY 11794 (United States); Dainotti, M. G. [Physics Department, Stanford University, Via Pueblo Mall 382, Stanford, CA (United States); Torres, D. F. [Instituto de Ciencias de l’Espacio (ICE, CSIC-IEEC), Campus UAB, Carrer Can Magrans s/n, E-08193 Barcelona (Spain)

    2015-11-10

    A large fraction of Gamma-ray bursts (GRBs) displays an X-ray plateau phase within <10⁵ s from the prompt emission, proposed to be powered by the spin-down energy of a rapidly spinning newly born magnetar. In this work we use the properties of the Galactic neutron star population to constrain the GRB-magnetar scenario. We re-analyze the X-ray plateaus of all Swift GRBs with known redshift, between 2005 January and 2014 August. From the derived initial magnetic field distribution for the possible magnetars left behind by the GRBs, we study the evolution and properties of a simulated GRB-magnetar population using numerical simulations of magnetic field evolution, coupled with Monte Carlo simulations of Pulsar Population Synthesis in our Galaxy. We find that if the GRB X-ray plateaus are powered by the rotational energy of a newly formed magnetar, the current observational properties of the Galactic magnetar population are not compatible with being formed within the GRB scenario (regardless of the GRB type or rate at z = 0). Direct consequences would be that we should allow the existence of magnetars and “super-magnetars” having different progenitors, and that Type Ib/c SNe related to Long GRBs systematically form neutron stars with higher initial magnetic fields. We put an upper limit of ≤16 “super-magnetars” formed by a GRB in our Galaxy in the past Myr (at 99% c.l.). This limit is somewhat smaller than what is roughly expected from Long GRB rates, although the very large uncertainties do not allow us to draw strong conclusions in this respect.
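
    For orientation, the mapping from magnetar birth parameters to plateau properties uses the standard vacuum-dipole formulas; the back-of-the-envelope sketch below is mine (fiducial neutron-star constants assumed), not the paper's population-synthesis machinery.

```python
# Dipole spin-down sketch: plateau luminosity L0 and spin-down time t_sd
# for a newly born magnetar with field B (gauss) and spin period P (ms).
import numpy as np

c = 3e10    # speed of light, cm/s
I = 1e45    # assumed NS moment of inertia, g cm^2
R = 1e6     # assumed NS radius, cm

def plateau(B, P_ms):
    omega = 2 * np.pi / (P_ms * 1e-3)
    L0 = B**2 * R**6 * omega**4 / (6 * c**3)        # erg/s
    t_sd = 3 * c**3 * I / (B**2 * R**6 * omega**2)  # s
    return L0, t_sd

for B, P in [(1e15, 1.0), (1e15, 5.0), (1e16, 1.0)]:
    L0, t = plateau(B, P)
    print(f"B = {B:.0e} G, P = {P} ms -> L0 = {L0:.1e} erg/s, t_sd = {t:.1e} s")
```

    With B ~ 10^15 G and millisecond spin, t_sd falls below ~10^5 s, consistent with the plateau durations quoted above.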

  17. Simulating the Range Expansion of Spartina alterniflora in Ecological Engineering through Constrained Cellular Automata Model and GIS

    Directory of Open Access Journals (Sweden)

    Zongsheng Zheng

    2015-01-01

    Full Text Available Environmental factors play an important role in the range expansion of Spartina alterniflora in estuarine salt marshes. CA models focusing on the neighbor effect often fail to account for the influence of environmental factors. This paper proposes a CCA model that enhances the CA model by integrating constraint factors of tidal elevation, vegetation density, vegetation classification, and tidal channels in the Chongming Dongtan wetland, China. Meanwhile, a positive feedback loop between vegetation and sedimentation was also considered in the CCA model by altering the tidal accretion rate in different vegetation communities. After validation and calibration, the CCA model is more accurate than a CA model that accounts only for the neighbor effect. By overlaying the remote sensing classification and the simulation results, the average accuracy increases to 80.75% compared with the previous CA model. Through scenario simulations, the future expansion of Spartina alterniflora was analyzed. The CCA model provides a new technical approach for research on salt marsh species expansion and control strategies.
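
    A minimal constrained-CA update in this spirit (grids and coefficients entirely hypothetical) multiplies the neighbor effect by per-cell environmental suitability factors:

```python
# Toy constrained cellular automaton: colonization probability combines a
# 3x3 neighbor effect with environmental constraint factors.
import numpy as np

rng = np.random.default_rng(1)
n = 50
occupied = np.zeros((n, n), bool)
occupied[n // 2, n // 2] = True                 # initial patch
elev_suit = rng.uniform(0.3, 1.0, (n, n))       # tidal-elevation factor
chan_suit = rng.uniform(0.5, 1.0, (n, n))       # tidal-channel factor

for _ in range(20):
    padded = np.pad(occupied, 1)
    nbrs = sum(padded[i:i + n, j:j + n]         # occupied 3x3 neighbors
               for i in range(3) for j in range(3)) - occupied
    p = (nbrs / 8.0) * elev_suit * chan_suit    # constrained CA rule
    occupied |= rng.random((n, n)) < p
print("occupied cells after 20 steps:", occupied.sum())
```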

  18. New high-fidelity terrain modeling method constrained by terrain semanteme.

    Directory of Open Access Journals (Sweden)

    Bo Zhou

    Full Text Available Production of higher-fidelity digital elevation models is important, as such models are indispensable components of space data infrastructure. However, loss of terrain features is a persistent problem for grid digital elevation models, limiting their direct use as data sources in terrain modeling. Therefore, in this study, the novel concept of the terrain semanteme is proposed to define local-space terrain features, and a new process for generating grid digital elevation models based on this concept is designed. A prototype system was programmed to test the proposed approach; the results indicate that the terrain semanteme can be applied in the process of grid digital elevation model generation, and that usage of this new concept improves digital elevation model fidelity. Moreover, the terrain semanteme technique can be applied to recover distorted digital elevation model regions containing terrain semantemes, with good recovery efficiency indicated by experiments.

  19. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  20. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  1. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  2. Elastic Model Transitions: a Hybrid Approach Utilizing Quadratic Inequality Constrained Least Squares (LSQI) and Direct Shape Mapping (DSM)

    Science.gov (United States)

    Jurenko, Robert J.; Bush, T. Jason; Ottander, John A.

    2014-01-01

    A method for transitioning linear time-invariant (LTI) models in time-varying simulation is proposed that utilizes both quadratically constrained least squares (LSQI) and Direct Shape Mapping (DSM) algorithms to determine physical displacements. This approach is applicable to the simulation of the elastic behavior of launch vehicles and other structures that utilize multiple LTI finite element model (FEM) derived mode sets that are propagated throughout time. The time-invariant nature of the elastic data for discrete segments of the launch vehicle trajectory presents the problem of how to properly transition between models while preserving motion across the transition. In addition, energy may vary between flex models when using a truncated mode set. The LSQI-DSM algorithm can accommodate significant changes in energy between FEM models and carries elastic motion across FEM model transitions. Compared with previous approaches, the LSQI-DSM algorithm shows improvements ranging from a significant reduction to a complete removal of transients across FEM model transitions, as well as maintaining elastic motion from the prior state.
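
    The LSQI subproblem — minimize ||Ax - b|| subject to ||x|| <= alpha — has a classical solution in which a Tikhonov parameter is tuned until the norm bound is met. A small sketch on synthetic data (not the flight implementation):

```python
# LSQI sketch: norm-bounded least squares via bisection on the Tikhonov
# parameter lambda. A, b, and alpha are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 5))     # stand-in mode-shape matrix
b = rng.standard_normal(30)          # stand-in target displacements
alpha = 0.5                          # norm (energy-like) bound

x = np.linalg.lstsq(A, b, rcond=None)[0]
if np.linalg.norm(x) > alpha:        # otherwise unconstrained x is feasible
    lo, hi = 0.0, 1e6
    for _ in range(100):             # ||x(lambda)|| decreases in lambda
        lam = 0.5 * (lo + hi)
        x = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ b)
        lo, hi = (lam, hi) if np.linalg.norm(x) > alpha else (lo, lam)
print("||x|| =", np.linalg.norm(x), " residual =", np.linalg.norm(A @ x - b))
```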

  3. Modeling Optical and Radiative Properties of Clouds Constrained with CARDEX Observations

    Science.gov (United States)

    Mishra, S. K.; Praveen, P. S.; Ramanathan, V.

    2013-12-01

    Carbonaceous aerosols (CA) have important effects on climate by directly absorbing solar radiation and indirectly changing cloud properties. These particles tend to be a complex mixture of graphitic carbon and organic compounds. The graphitic component, called elemental carbon (EC), is characterized by significant absorption of solar radiation. Recent studies showed that organic carbon (OC) aerosols absorb strongly near the UV region, and this fraction is known as brown carbon (BrC). The indirect effect of CA can occur in two ways: first, by changing the thermal structure of the atmosphere, which further affects the dynamical processes governing the cloud life cycle; second, by acting as cloud condensation nuclei (CCN) that can change cloud radiative properties. In this work, cloud optical properties have been numerically estimated by accounting for CARDEX (Cloud Aerosol Radiative Forcing Dynamics Experiment) observed cloud parameters and the physico-chemical and optical properties of aerosols. The aerosol inclusions in the cloud drop have been considered as a core-shell structure, with the core as EC and the shell comprising ammonium sulfate, ammonium nitrate, sea salt and organic carbon (organic acids, OA, and brown carbon, BrC). The EC/OC ratio of the inclusion particles has been constrained based on observations. Moderate and heavy pollution events have been classified based on the aerosol number and BC concentration. The cloud drop co-albedo at 550 nm was found to be nearly identical for pure EC sphere inclusions and core-shell inclusions with all non-absorbing organics in the shell. However, the co-albedo was found to increase for drops having all BrC in the shell. The co-albedo of a cloud drop was found to be maximal with all aerosol present as interstitial, compared to 50% and 0% of inclusions existing as interstitial aerosols. The co-albedo was found to be ~ 9.87e-4 for the drop with 100% of inclusions existing as interstitial aerosols externally mixed with micron-size mineral dust with 2

  4. Phylogenetic comparative methods on phylogenetic networks with reticulations.

    Science.gov (United States)

    Bastide, Paul; Solís-Lemus, Claudia; Kriebel, Ricardo; Sparks, K William; Ané, Cécile

    2018-04-25

    The goal of Phylogenetic Comparative Methods (PCMs) is to study the distribution of quantitative traits among related species. The observed traits are often seen as the result of a Brownian Motion (BM) along the branches of a phylogenetic tree. Reticulation events such as hybridization, gene flow or horizontal gene transfer can substantially affect a species' traits, but are not modeled by a tree. Phylogenetic networks have been designed to represent reticulate evolution. As they become available for downstream analyses, new models of trait evolution are needed, applicable to networks. One natural extension of the BM is to use a weighted average model for the trait of a hybrid at a reticulation point. We develop here an efficient recursive algorithm to compute the phylogenetic variance matrix of a trait on a network, in only one preorder traversal of the network. We then extend the standard PCM tools to this new framework, including phylogenetic regression with covariates (or phylogenetic ANOVA), ancestral trait reconstruction, and Pagel's λ test of phylogenetic signal. The trait of a hybrid is sometimes outside the range of its two parents, for instance because of hybrid vigor or hybrid depression. These two phenomena are rather commonly observed in present-day hybrids. Transgressive evolution can be modeled as a shift in the trait value following a reticulation point. We develop a general framework to handle such shifts, and take advantage of the phylogenetic regression view of the problem to design statistical tests for ancestral transgressive evolution in the evolutionary history of a group of species. We study the power of these tests in several scenarios, and show that recent events indeed have the strongest impact on the trait distribution of present-day taxa. We apply these methods to a dataset of Xiphophorus fishes, to confirm and complete previous analyses of this group. All the methods developed here are available in the Julia package PhyloNetworks.
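
    The one-pass preorder recursion can be sketched compactly. Below is my own toy version of the weighted-average BM rule on a six-node network with one reticulation (the production implementation is the Julia package PhyloNetworks): a tree node copies its parent's covariances and adds its branch length to its variance, while a hybrid node blends two parents with weights gamma and 1 - gamma.

```python
# Trait covariance matrix of a BM on a small phylogenetic network,
# computed in one preorder traversal. Topology and numbers are toy choices.
import numpy as np

# preorder nodes: (id, parents, branch lengths, inheritance weights)
nodes = [
    (0, [], [], []),                        # root
    (1, [0], [1.0], [1.0]),
    (2, [0], [1.0], [1.0]),
    (3, [1, 2], [0.5, 0.5], [0.7, 0.3]),    # hybrid, gamma = 0.7
    (4, [1], [1.0], [1.0]),                 # leaf
    (5, [3], [1.0], [1.0]),                 # leaf below the hybrid
]

n = len(nodes)
V = np.zeros((n, n))
for i, parents, lengths, gammas in nodes[1:]:
    for p, t, g in zip(parents, lengths, gammas):
        V[i, :i] += g * V[p, :i]            # covariances with earlier nodes
        V[i, i] += g**2 * (V[p, p] + t)
    if len(parents) == 2:                   # cross term at the reticulation
        (p1, p2), (g1, g2) = parents, gammas
        V[i, i] += 2 * g1 * g2 * V[p1, p2]
    V[:i, i] = V[i, :i]                     # keep V symmetric
print(np.round(V, 3))                       # e.g. Var(leaf 5) = 1.87
```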

  5. A Stochastic Multi-Objective Chance-Constrained Programming Model for Water Supply Management in Xiaoqing River Watershed

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2017-05-01

    Full Text Available In this paper, a stochastic multi-objective chance-constrained programming model (SMOCCP) was developed for tackling the water supply management problem. Two objectives were included in the model: minimization of leakage loss amounts and minimization of total system cost. The traditional SCCP model requires the random variables to follow normal distributions, even when their statistical characteristics are better reflected by other forms. The SMOCCP model allows the random variables to be expressed as log-normal distributions, rather than the general normal form, so that possible solution deviations caused by unwarranted parameter assumptions are avoided and the feasibility and accuracy of the generated solutions are ensured. The water supply system in the Xiaoqing River watershed was used as a study case for demonstration. Under various weight combinations and probabilistic levels, many types of solutions were obtained, expressed as a series of transferred amounts from water sources to treatment plants, from treatment plants to reservoirs, and from reservoirs to tributaries. It is concluded that the SMOCCP model can reflect the sketch of the studied region and generate desired water supply schemes under complex uncertainties. The successful application of the proposed model is expected to be a good example for water resource management in other watersheds.

  6. A novel robust chance constrained possibilistic programming model for disaster relief logistics under uncertainty

    Directory of Open Access Journals (Sweden)

    Maryam Rahafrooz

    2016-09-01

    Full Text Available In this paper, a novel multi-objective robust possibilistic programming model is proposed, which simultaneously considers maximizing distributive justice in relief distribution, minimizing the risk of relief distribution, and minimizing the total logistics costs. To effectively cope with the uncertainties of the after-disaster environment, the uncertain parameters of the proposed model are considered in the form of fuzzy trapezoidal numbers. The proposed model not only considers the priority of relief commodities and of demand points in relief distribution, but also considers the difference between the pre-disaster and post-disaster supply abilities of the suppliers. To solve the proposed model, the LP-metric and the improved augmented ε-constraint methods are used. Then, a set of test problems is designed to evaluate the effectiveness of the proposed robust model against its equivalent deterministic form, which reveals the capabilities of the robust model. Finally, to illustrate the performance of the proposed robust model, a seismic region of northwestern Iran (East Azerbaijan) is selected as a case study to model its relief logistics in the face of future earthquakes. This investigation indicates the usefulness of the proposed model in the field of crisis management.

  7. Improving SWAT model prediction using an upgraded denitrification scheme and constrained auto calibration

    Science.gov (United States)

    The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...

  8. How to constrain multi-objective calibrations of the SWAT model using water balance components

    Science.gov (United States)

    Automated procedures are often used to provide adequate fits between hydrologic model estimates and observed data. While the models may provide good fits based upon numeric criteria, they may still not accurately represent the basic hydrologic characteristics of the represented watershed. Here we ...

  9. DOMINO: development of informative molecular markers for phylogenetic and genome-wide population genetic studies in non-model organisms.

    Science.gov (United States)

    Frías-López, Cristina; Sánchez-Herrero, José F; Guirao-Rico, Sara; Mora, Elisa; Arnedo, Miquel A; Sánchez-Gracia, Alejandro; Rozas, Julio

    2016-12-15

    The development of molecular markers is one of the most important challenges in phylogenetic and genome-wide population genetics studies, especially in studies with non-model organisms. A highly promising approach for obtaining suitable markers is the utilization of genomic partitioning strategies for the simultaneous discovery and genotyping of a large number of markers. Unfortunately, not all markers obtained from these strategies provide enough information for solving multiple evolutionary questions at a reasonable taxonomic resolution. We have developed Development Of Molecular markers In Non-model Organisms (DOMINO), a bioinformatics tool for informative marker development from both next generation sequencing (NGS) data and pre-computed sequence alignments. The application implements popular NGS tools with new utilities in a highly versatile pipeline specifically designed to discover or select personalized markers at different levels of taxonomic resolution. These markers can be directly used to study the taxa surveyed for their design, utilized for further downstream PCR amplification in a broader taxonomic scope, or exploited as suitable templates for bait design in target DNA enrichment techniques. We conducted an exhaustive evaluation of the performance of DOMINO via computer simulations and illustrate its utility for finding informative markers in an empirical dataset. DOMINO is freely available from www.ub.edu/softevol/domino. Contact: elsanchez@ub.edu or jrozas@ub.edu. Supplementary data are available at Bioinformatics online.

  10. Constrained quadratic stabilization of discrete-time uncertain nonlinear multi-model systems using piecewise affine state-feedback

    Directory of Open Access Journals (Sweden)

    Olav Slupphaug

    1999-07-01

    Full Text Available In this paper a method for nonlinear robust stabilization based on solving a bilinear matrix inequality (BMI) feasibility problem is developed. Robustness against model uncertainty is handled. In different non-overlapping regions of the state-space, called clusters, the plant is assumed to be an element of a polytope whose vertices (local models) are affine systems. In the clusters containing the origin in their closure, the local models are restricted to be linear systems. The clusters cover the region of interest in the state-space. An affine state feedback is associated with each cluster. By utilizing the affinity of the local models and of the state feedback, a set of linear matrix inequalities (LMIs) combined with a single nonconvex BMI is obtained which, if feasible, guarantees quadratic stability of the origin of the closed-loop system. The feasibility problem is attacked by a branch-and-bound based global approach. If the feasibility check is successful, the Lyapunov matrix and the piecewise affine state feedback are given directly by the feasible solution. Control constraints are shown to be representable by LMIs or BMIs, and an application of the control design method to robustify constrained nonlinear model predictive control is presented. The control design method is also applied to a simple example.
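
    As a simplified illustration (not the paper's branch-and-bound BMI solver): once the feedback gain is fixed, checking a common quadratic Lyapunov function across the polytope vertices reduces to an LMI feasibility problem, sketched here with cvxpy for a hypothetical two-vertex system.

```python
# LMI feasibility sketch: common quadratic Lyapunov matrix P for a
# discrete-time polytopic system under a fixed linear feedback K.
import cvxpy as cp
import numpy as np

A1 = np.array([[0.9, 0.3], [0.0, 0.8]])    # vertex local models (assumed)
A2 = np.array([[0.7, 0.5], [0.1, 0.9]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.2, -0.9]])               # fixed state feedback (assumed)

P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(2)]
for A in (A1, A2):                         # one LMI per polytope vertex
    Acl = A + B @ K
    M = Acl.T @ P @ Acl - P                # decrease condition A'PA - P < 0
    cons.append(0.5 * (M + M.T) << -eps * np.eye(2))  # symmetrized LMI
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print("common quadratic Lyapunov function found:", prob.status == cp.OPTIMAL)
```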

  11. A Metabolite-Sensitive, Thermodynamically Constrained Model of Cardiac Cross-Bridge Cycling: Implications for Force Development during Ischemia

    KAUST Repository

    Tran, Kenneth; Smith, Nicolas P.; Loiselle, Denis S.; Crampin, Edmund J.

    2010-01-01

    We present a metabolically regulated model of cardiac active force generation with which we investigate the effects of ischemia on maximum force production. Our model, based on a model of cross-bridge kinetics that was developed by others, reproduces many of the observed effects of MgATP, MgADP, Pi, and H(+) on force development while retaining the force/length/Ca(2+) properties of the original model. We introduce three new parameters to account for the competitive binding of H(+) to the Ca(2+) binding site on troponin C and the binding of MgADP within the cross-bridge cycle. These parameters, along with the Pi and H(+) regulatory steps within the cross-bridge cycle, were constrained using data from the literature and validated using a range of metabolic and sinusoidal length perturbation protocols. The placement of the MgADP binding step between two strongly-bound and force-generating states leads to the emergence of an unexpected effect on the force-MgADP curve, where the trend of the relationship (positive or negative) depends on the concentrations of the other metabolites and [H(+)]. The model is used to investigate the sensitivity of maximum force production to changes in metabolite concentrations during the development of ischemia.

  12. Dissecting galaxy formation models with sensitivity analysis—a new approach to constrain the Milky Way formation history

    International Nuclear Information System (INIS)

    Gómez, Facundo A.; O'Shea, Brian W.; Coleman-Smith, Christopher E.; Tumlinson, Jason; Wolpert, Robert L.

    2014-01-01

    We present an application of a statistical tool known as sensitivity analysis to characterize the relationship between input parameters and observational predictions of semi-analytic models of galaxy formation coupled to cosmological N-body simulations. We show how a sensitivity analysis can be performed on our chemo-dynamical model, ChemTreeN, to characterize and quantify its relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. It can also be used to simplify models by identifying input parameters that have no effect on the outputs (i.e., observational predictions) of interest. Conversely, sensitivity analysis allows us to identify what model parameters can be most efficiently constrained by the given observational data set. We have applied this technique to real observational data sets associated with the Milky Way, such as the luminosity function of the dwarf satellites. The results from the sensitivity analysis are used to train specific model emulators of ChemTreeN, only involving the most relevant input parameters. This allowed us to efficiently explore the input parameter space. A statistical comparison of model outputs and real observables is used to obtain a 'best-fitting' parameter set. We consider different Milky-Way-like dark matter halos to account for the dependence of the best-fitting parameter selection process on the underlying merger history of the models. For all formation histories considered, running ChemTreeN with best-fitting parameters produced luminosity functions that tightly fit their observed counterpart. However, only one of the resulting stellar halo models was able to reproduce the observed stellar halo mass within 40 kpc of the Galactic center. On the basis of this analysis, it is possible to disregard certain models, and their

  13. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  14. Model Predictive Vibration Control Efficient Constrained MPC Vibration Control for Lightly Damped Mechanical Structures

    CERN Document Server

    Takács, Gergely

    2012-01-01

    Real-time model predictive controller (MPC) implementation in active vibration control (AVC) is often rendered difficult by fast sampling speeds and extensive actuator-deformation asymmetry. If the control of lightly damped mechanical structures is assumed, the region of attraction containing the set of allowable initial conditions requires a large prediction horizon, making the already computationally demanding on-line process even more complex. Model Predictive Vibration Control provides insight into the predictive control of lightly damped vibrating structures by exploring computationally efficient algorithms which are capable of low frequency vibration control with guaranteed stability and constraint feasibility. In addition to a theoretical primer on active vibration damping and model predictive control, Model Predictive Vibration Control provides a guide through the necessary steps in understanding the founding ideas of predictive control applied in AVC, such as the implementation of ...
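
    A compact example of the kind of constrained quadratic program solved at each sampling instant (a hypothetical lightly damped plant, weights, and actuator limit; posed with cvxpy rather than the book's tailored algorithms):

```python
# Constrained MPC sketch for a lightly damped oscillator: quadratic cost,
# input saturation, receding horizon. All model data are invented.
import cvxpy as cp
import numpy as np

dt, wn, zeta = 0.05, 2.0, 0.02                 # lightly damped mode
A = np.eye(2) + dt * np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
B = dt * np.array([[0.0], [1.0]])
N, umax = 30, 0.5                              # horizon, actuator limit

x0 = np.array([1.0, 0.0])
x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost, cons = 0, [x[:, 0] == x0]
for k in range(N):
    cost += cp.sum_squares(x[:, k]) + 0.1 * cp.sum_squares(u[:, k])
    cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
             cp.abs(u[:, k]) <= umax]
cp.Problem(cp.Minimize(cost), cons).solve()
print("first control move:", u.value[0, 0])    # applied, then re-solved
```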

  15. Thermo-magnetic effects in quark matter: Nambu-Jona-Lasinio model constrained by lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Ricardo L.S. [Universidade Federal de Santa Maria, Departamento de Fisica, Santa Maria, RS (Brazil); Kent State University, Physics Department, Kent, OH (United States); Timoteo, Varese S. [Universidade Estadual de Campinas (UNICAMP), Grupo de Optica e Modelagem Numerica (GOMNI), Faculdade de Tecnologia, Limeira, SP (Brazil); Avancini, Sidney S.; Pinto, Marcus B. [Universidade Federal de Santa Catarina, Departamento de Fisica, Florianopolis, Santa Catarina (Brazil); Krein, Gastao [Universidade Estadual Paulista, Instituto de Fisica Teorica, Sao Paulo, SP (Brazil)

    2017-05-15

    The phenomenon of inverse magnetic catalysis of chiral symmetry in QCD predicted by lattice simulations can be reproduced within the Nambu-Jona-Lasinio model if the coupling G of the model decreases with the strength B of the magnetic field and temperature T. The thermo-magnetic dependence of G(B, T) is obtained by fitting recent lattice QCD predictions for the chiral transition order parameter. Different thermodynamic quantities of magnetized quark matter evaluated with G(B, T) are compared with the ones obtained at constant coupling, G. The model with G(B, T) predicts a more dramatic chiral transition as the field intensity increases. In addition, the pressure and magnetization always increase with B for a given temperature. Being parametrized by four magnetic-field-dependent coefficients and having a rather simple exponential thermal dependence, our accurate ansatz for the coupling constant can be easily implemented to improve typical model applications to magnetized quark matter. (orig.)

  16. Joint modeling of constrained path enumeration and path choice behavior: a semi-compensatory approach

    DEFF Research Database (Denmark)

    Kaplan, Sigal; Prato, Carlo Giacomo

    2010-01-01

    A behavioural and a modelling framework are proposed for representing route choice from a path set that satisfies travellers' spatiotemporal constraints. Within the proposed framework, travellers' master sets are constructed by path generation, and consideration sets are delimited according to spatiotemporal constraints

  17. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given
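
    The construction admits a short derivation (a standard maximum-entropy calculation consistent with the description above, not quoted from the report). In the transformed parameter phi where the Jeffreys prior is uniform:

```latex
% Constrained maximum entropy in the transformed (location) model:
\max_{\pi}\; -\!\int \pi(\phi)\,\ln \pi(\phi)\,d\phi
\quad\text{subject to}\quad
\int \pi(\phi)\,d\phi = 1, \qquad
\int \theta(\phi)\,\pi(\phi)\,d\phi = \mu_0 .
% Stationarity of the Lagrangian gives an exponential tilt of the uniform base,
\pi(\phi) \;\propto\; e^{\lambda\,\theta(\phi)},
% with \lambda fixed by the mean constraint; back in the original
% parameterization this multiplies the Jeffreys prior by the same factor:
\pi(\theta) \;\propto\; J(\theta)^{1/2}\, e^{\lambda\,\theta}.
```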

  18. A Three-Dimensional Model of the Marine Nitrogen Cycle during the Last Glacial Maximum Constrained by Sedimentary Isotopes

    Directory of Open Access Journals (Sweden)

    Christopher J. Somes

    2017-05-01

    Full Text Available Nitrogen is a key limiting nutrient that influences marine productivity and carbon sequestration in the ocean via the biological pump. In this study, we present the first estimates of nitrogen cycling in a coupled 3D ocean-biogeochemistry-isotope model forced with realistic boundary conditions from the Last Glacial Maximum (LGM, ~21,000 years before present) and constrained by nitrogen isotopes. The model predicts a large decrease in nitrogen loss rates due to higher oxygen concentrations in the thermocline and the sea level drop and, as a response, reduced nitrogen fixation. Model experiments are performed to evaluate the effects of hypothesized increases of atmospheric iron fluxes and of the oceanic phosphorus inventory relative to present-day conditions. Enhanced atmospheric iron deposition, which is required to reproduce observations, fuels export production in the Southern Ocean, causing increased deep ocean nutrient storage. This reduces the transport of preformed nutrients to the tropics via mode waters, thereby decreasing productivity, oxygen deficient zones, and water column N-loss there. A global phosphorus inventory up to 15% larger cannot be excluded from the currently available nitrogen isotope data. It stimulates additional nitrogen fixation that increases the global oceanic nitrogen inventory, productivity, and water column N-loss. Among our sensitivity simulations, the best agreements with nitrogen isotope data from LGM sediments indicate that water column and sedimentary N-loss were reduced by 17–62% and 35–69%, respectively, relative to preindustrial values. Our model demonstrates that multiple processes alter the nitrogen isotopic signal in most locations, which creates large uncertainties when quantitatively constraining individual nitrogen cycling processes. One key uncertainty is nitrogen fixation, which decreases by 25–65% in the model during the LGM mainly in response to reduced N-loss, due to the lack of observations in the open ocean most

  19. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.; Evans, Jason P.; McCabe, Matthew

    2014-01-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm⁻³ to -0.004 g cm⁻³ (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the
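
    A schematic of the density-informed temperature-index step (the linear scaling of the degree-day factor with density is my assumption for illustration; the study derives densities from climate inputs):

```python
# Density-scaled degree-day melt sketch: classical TIM with the degree-day
# factor adjusted by simulated snow density. Parameter values are assumed.
import numpy as np

def melt_mm(T, rho, ddf0=3.0, rho_ref=0.35, T0=0.0):
    """Daily melt (mm SWE); T in degC, rho in g/cm^3,
    ddf0 in mm/degC/day at the reference density."""
    ddf = ddf0 * (rho / rho_ref)          # assumed linear density scaling
    return ddf * np.maximum(T - T0, 0.0)

temps = np.array([-2.0, 1.5, 4.0, 6.5])   # daily mean air temperatures
rhos = np.array([0.20, 0.28, 0.38, 0.45]) # simulated snow densities
print(melt_mm(temps, rhos))               # mm of melt per day
```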

  20. Constraining the uncertainty in emissions over India with a regional air quality model evaluation

    Science.gov (United States)

    Karambelas, Alexandra; Holloway, Tracey; Kiesewetter, Gregor; Heyes, Chris

    2018-02-01

    To evaluate uncertainty in the spatial distribution of air emissions over India, we compare satellite and surface observations with simulations from the U.S. Environmental Protection Agency (EPA) Community Multi-Scale Air Quality (CMAQ) model. Seasonally representative simulations were completed for January, April, July, and October 2010 at 36 km × 36 km using anthropogenic emissions from the Greenhouse Gas-Air Pollution Interaction and Synergies (GAINS) model following version 5a of the Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants project (ECLIPSE v5a). We use both tropospheric columns from the Ozone Monitoring Instrument (OMI) and surface observations from the Central Pollution Control Board (CPCB) to closely examine modeled nitrogen dioxide (NO2) biases in urban and rural regions across India. Spatial average evaluation with satellite retrievals indicates a low bias in the modeled tropospheric column (-63.3%), reflecting broad low biases across majority non-urban regions of the sub-continent (-70.1% in rural areas) and smaller low biases in semi-urban areas (-44.7%), with the threshold between semi-urban and rural defined as 400 people per km². In contrast, modeled surface NO2 concentrations exhibit a slight high bias of +15.6% when compared to surface CPCB observations predominantly located in urban areas. Conversely, in examining extremely population-dense urban regions with more than 5000 people per km² (dense-urban), we find model overestimates in both the column (+57.8%) and at the surface (+131.2%) compared to observations. Based on these results, we find that existing emission fields for India may overestimate urban emissions in densely populated regions and underestimate rural emissions. However, if we rely on model evaluation with predominantly urban surface observations from the CPCB, comparisons reflect model high biases, contradictory to the knowledge gained using satellite observations. Satellites thus
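
    The percentage biases quoted above follow the usual normalized-mean-bias convention; assuming that metric (an assumption on my part), a minimal helper with toy numbers:

```python
# Normalized mean bias, NMB = 100 * sum(M - O) / sum(O), in percent.
import numpy as np

def nmb(model, obs):
    model, obs = np.asarray(model), np.asarray(obs)
    return 100.0 * (model - obs).sum() / obs.sum()

obs_no2 = np.array([4.1, 3.6, 5.2, 2.9])   # toy observed NO2 values
mod_no2 = np.array([1.5, 1.2, 2.0, 1.1])   # toy modeled NO2 values
print(f"NMB = {nmb(mod_no2, obs_no2):.1f}%")   # strongly negative: low bias
```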

  1. Constraining snowmelt in a temperature-index model using simulated snow densities

    KAUST Repository

    Bormann, Kathryn J.

    2014-09-01

    Current snowmelt parameterisation schemes are largely untested in warmer maritime snowfields, where physical snow properties can differ substantially from the more common colder snow environments. Physical properties such as snow density influence the thermal properties of snow layers and are likely to be important for snowmelt rates. Existing methods for incorporating physical snow properties into temperature-index models (TIMs) require frequent snow density observations. These observations are often unavailable in less monitored snow environments. In this study, previous techniques for end-of-season snow density estimation (Bormann et al., 2013) were enhanced and used as a basis for generating daily snow density data from climate inputs. When evaluated against 2970 observations, the snow density model outperforms a regionalised density-time curve, reducing biases from -0.027 g cm⁻³ to -0.004 g cm⁻³ (7%). The simulated daily densities were used at 13 sites in the warmer maritime snowfields of Australia to parameterise snowmelt estimation. With absolute snow water equivalent (SWE) errors between 100 and 136 mm, the snow model performance was generally lower in the study region than that reported for colder snow environments, which may be attributed to high annual variability. Model performance was strongly dependent on both calibration and the adjustment for precipitation undercatch errors, which influenced model calibration parameters by 150-200%. Comparison of the density-based snowmelt algorithm against a typical temperature-index model revealed only minor differences between the two snowmelt schemes for estimation of SWE. However, when the model was evaluated against snow depths, the new scheme reduced errors by up to 50%, largely due to improved SWE to depth conversions. While this study demonstrates the use of simulated snow density in snowmelt parameterisation, the snow density model may also be of broad interest for snow depth to SWE conversion. Overall, the

  2. Attitudinal travel demand model for non-work trips of homogeneously constrained segments of a population

    Energy Technology Data Exchange (ETDEWEB)

    Recker, W.W.; Stevens, R.F.

    1977-06-01

    Market-segmentation techniques are used to capture the effects of opportunity and availability constraints on urban residents' choice of mode for trips for major grocery shopping and for visiting friends and acquaintances. Attitudinal multinomial logit choice models are estimated for each market segment. The explanatory variables are individuals' beliefs about the attributes of four modal alternatives: bus, car, taxi and walking. Factor analysis is employed to identify latent dimensions of perception of the modal alternatives and to eliminate problems of multicollinearity in model estimation.
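
    The estimated models are standard multinomial logit; as a toy numeric sketch (utilities invented), the choice probabilities are a softmax over the systematic utilities built from the belief variables:

```python
# Multinomial logit choice probabilities for four modes (toy utilities).
import numpy as np

modes = ["bus", "car", "taxi", "walk"]
V = np.array([-0.4, 1.2, -1.0, 0.1])    # systematic utilities (assumed)
P = np.exp(V) / np.exp(V).sum()         # logit (softmax) probabilities
for m, p in zip(modes, P):
    print(f"P({m}) = {p:.3f}")
```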

  3. Constraining dark photon model with dark matter from CMB spectral distortions

    Directory of Open Access Journals (Sweden)

    Ki-Young Choi

    2017-08-01

    Full Text Available Many extensions of the Standard Model (SM) include a dark sector which can interact with the SM sector via a light mediator. We explore the possibilities to probe such a dark sector by studying the distortion of the CMB spectrum from the blackbody shape due to the elastic scatterings between the dark matter and baryons through a hidden light mediator. We focus in particular on the model where the dark sector gauge boson kinetically mixes with the SM, and present the future experimental prospects for a PIXIE-like experiment along with a comparison to the existing bounds from complementary terrestrial experiments.

  4. Assessing water resources in Azerbaijan using a local distributed model forced and constrained with global data

    Science.gov (United States)

    Bouaziz, Laurène; Hegnauer, Mark; Schellekens, Jaap; Sperna Weiland, Frederiek; ten Velden, Corine

    2017-04-01

    In many countries, data is scarce, incomplete and often not easily shared. In these cases, global satellite and reanalysis data provide an alternative to assess water resources. To assess water resources in Azerbaijan, a completely distributed and physically based hydrological wflow-sbm model was set up for the entire Kura basin. We used SRTM elevation data, a locally available river map and one from OpenStreetMap to derive the drainage direction network at the model resolution of approximately 1x1 km. OpenStreetMap data was also used to derive the fraction of paved area per cell to account for the reduced infiltration capacity (cf. Schellekens et al., 2014). We used the results of a global study to derive root zone capacity based on climate data (Wang-Erlandsson et al., 2016). To account for the variation in vegetation cover over the year, monthly averages of Leaf Area Index, based on MODIS data, were used. For the soil-related parameters, we used global estimates as provided by Dai et al. (2013). This enabled the rapid derivation of a first estimate of parameter values for our hydrological model. Digitized local meteorological observations were scarce and available only for a limited time period. Therefore, several sources of global meteorological data were evaluated: (1) EU-WATCH global precipitation, temperature and derived potential evaporation for the period 1958-2001 (Harding et al., 2011), (2) WFDEI precipitation, temperature and derived potential evaporation for the period 1979-2014 (Weedon et al., 2014), (3) MSWEP precipitation (Beck et al., 2016) and (4) local precipitation data from more than 200 stations in the Kura basin, available from the NOAA website for a period up to 1991. The latter, together with data archives from Azerbaijan, were used as a benchmark to evaluate the global precipitation datasets for the overlapping period 1958-1991. By comparing the datasets, we found that the monthly mean precipitation of EU-WATCH and WFDEI coincided well

  5. A frictionally and hydraulically constrained model of the convectively driven mean flow in partially enclosed seas

    Science.gov (United States)

    Maxworthy, T.

    1997-08-01

    A simple three-layer model of the dynamics of partially enclosed seas, driven by a surface buoyancy flux, is presented. It contains two major elements, a hydraulic constraint at the exit contraction and friction in the interior of the main body of the sea; together these determine the vertical structure and magnitudes of the interior flow variables, i.e., velocity and density. Application of the model to the large-scale dynamics of the Red Sea gives results that are not in disagreement with observations once the model is also applied to predict the dense outflow from the Gulf of Suez. The latter appears to be the agent responsible for the formation of dense bottom water in this system. The model is also reasonably successful in predicting the density of the outflow from the Persian Gulf, and can be applied to any number of other examples of convectively driven flow in long, narrow channels, with or without sills and constrictions at their exits.

  6. Electron-capture Isotopes Could Constrain Cosmic-Ray Propagation Models

    Science.gov (United States)

    Benyamin, David; Shaviv, Nir J.; Piran, Tsvi

    2017-12-01

    Electron capture (EC) isotopes are known to provide constraints on the low-energy behavior of cosmic rays (CRs), such as reacceleration. Here, we study the EC isotopes within the framework of the dynamic spiral-arms CR propagation model, in which most of the CR sources reside in the galactic spiral arms. The model was previously used to explain the B/C and sub-Fe/Fe ratios. We show that the known inconsistency between the 49Ti/49V and 51V/51Cr ratios remains also in the spiral-arms model. On the other hand, contrary to the general wisdom that the isotope ratios depend primarily on reacceleration, we find here that the ratios also depend on the halo size (Z_h) and, in spiral-arms models, on the time since the last spiral-arm passage (τ_arm). Namely, EC isotopes can, in principle, provide interesting constraints on the diffusion geometry. However, with the present uncertainties in the lab measurements of both the electron attachment rate and the fragmentation cross sections, no meaningful constraint can be placed.

  7. Using expert knowledge of the hydrological system to constrain multi-objective calibration of SWAT models

    Science.gov (United States)

    The SWAT model is a helpful tool to predict hydrological processes in a study catchment and their impact on the river discharge at the catchment outlet. For reliable discharge predictions, a precise simulation of hydrological processes is required. Therefore, SWAT has to be calibrated accurately to ...

  8. Constraining biogenic silica dissolution in marine sediments: a comparison between diagenetic models and experimental dissolution rates

    NARCIS (Netherlands)

    Khalil, K.; Rabouille, C.; Gallinari, M.; Soetaert, K.E.R.; DeMaster, D.J.; Ragueneau, O.

    2007-01-01

    The processes controlling preservation and recycling of particulate biogenic silica in sediments must be understood in order to calculate oceanic silica mass balances. The new contribution of this work is the coupled use of advanced models including reprecipitation and different phases of biogenic silica

  9. Effects of time-varying β in SNLS3 on constraining interacting dark energy models

    International Nuclear Information System (INIS)

    Wang, Shuang; Wang, Yong-Zhen; Geng, Jia-Jia; Zhang, Xin

    2014-01-01

    It has been found that, for the Supernova Legacy Survey three-year (SNLS3) data, there is strong evidence for redshift evolution of the color-luminosity parameter β. In this paper, adopting the w-cold-dark-matter (wCDM) model and considering its interacting extensions (with three kinds of interaction between the dark sectors), we explore the evolution of β and its effects on parameter estimation. In addition to the SNLS3 data, we also use the latest Planck distance priors data, the galaxy clustering data extracted from the Sloan Digital Sky Survey Data Release 7 and the Baryon Oscillation Spectroscopic Survey, as well as the direct measurement of the Hubble constant H_0 from the Hubble Space Telescope observation. We find that, for all the interacting dark energy (IDE) models, adding a parameter for β can reduce χ² by ~34, indicating that a constant β is ruled out at the 5.8σ confidence level. Furthermore, it is found that varying β can significantly change the fitting results of various cosmological parameters: for all the dark energy models considered in this paper, varying β yields a larger fractional CDM density Ω_c0 and a larger equation of state w; on the other hand, varying β yields a smaller reduced Hubble constant h for the wCDM model, but has no impact on h for the three IDE models. This implies that there is a degeneracy between h and the coupling parameter γ. Our work shows that the evolution of β is insensitive to the interaction between the dark sectors, and thus highlights the importance of considering β's evolution in cosmology fits. (orig.)
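
    For context, β enters through the SALT2-style standardization of each supernova; the toy sketch below (all numbers hypothetical) shows how a linear β(z) shifts the corrected distance modulus relative to the constant-β case.

```python
# SALT2-style standardization with a time-varying color-luminosity term:
# mu = mB - M + alpha * x1 - beta(z) * c, with beta(z) = beta0 + beta1 * z.
def mu(mB, x1, c, z, alpha=0.14, beta0=3.1, beta1=1.0, M=-19.1):
    beta_z = beta0 + beta1 * z          # evolving beta (illustrative values)
    return mB - M + alpha * x1 - beta_z * c

print(mu(mB=23.0, x1=0.5, c=0.05, z=0.6))             # varying beta
print(mu(mB=23.0, x1=0.5, c=0.05, z=0.6, beta1=0.0))  # constant beta
```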

  10. Modeling Studies to Constrain Fluid and Gas Migration Associated with Hydraulic Fracturing Operations

    Science.gov (United States)

    Rajaram, H.; Birdsell, D.; Lackey, G.; Karra, S.; Viswanathan, H. S.; Dempsey, D.

    2015-12-01

    The dramatic increase in the extraction of unconventional oil and gas resources using horizontal wells and hydraulic fracturing (fracking) technologies has raised concerns about potential environmental impacts. Large volumes of hydraulic fracturing fluids are injected during fracking. Incidents of stray gas occurrence in shallow aquifers overlying shale gas reservoirs have been reported; whether these are in any way related to fracking continues to be debated. Computational models serve as useful tools for evaluating potential environmental impacts. We present modeling studies of hydraulic fracturing fluid and gas migration during the various stages of well operation, production, and subsequent plugging. The fluid migration models account for overpressure in the gas reservoir, density contrast between injected fluids and brine, imbibition into partially saturated shale, and well operations. Our results highlight the importance of representing the different stages of well operation consistently. Most importantly, well suction and imbibition both play a significant role in limiting upward migration of injected fluids, even in the presence of permeable connecting pathways. In an overall assessment, our fluid migration simulations suggest very low risk to groundwater aquifers when the vertical separation from a shale gas reservoir is of the order of 1000' or more. Multi-phase models of gas migration were developed to couple flow and transport in compromised wellbores and subsurface formations. These models are useful for evaluating both short-term and long-term scenarios of stray methane release. We present simulation results to evaluate mechanisms controlling stray gas migration, and explore relationships between bradenhead pressures and the likelihood of methane release and transport.

  11. Candidatus Sodalis melophagi sp. nov.: phylogenetically independent comparative model to the tsetse fly symbiont Sodalis glossinidius.

    Directory of Open Access Journals (Sweden)

    Tomáš Chrudimský

    Full Text Available Bacteria of the genus Sodalis live in symbiosis with various groups of insects. The best known member of this group, a secondary symbiont of tsetse flies, Sodalis glossinidius, has become one of the most important models in investigating the establishment and evolution of insect-bacteria symbiosis. It represents a bacterium in the early/intermediate state of the transition towards symbiosis, which allows for exploring such interesting topics as the usage of secretory systems for entering the host cell, the tempo of genome modification, and metabolic interaction with a coexisting primary symbiont. In this study, we describe a new Sodalis species which could provide a useful comparative model to the tsetse symbiont. It lives in association with Melophagus ovinus, an insect related to tsetse flies, and resembles S. glossinidius in several important traits. Similar to S. glossinidius, it cohabits the host with another symbiotic bacterium, the bacteriome-harbored primary symbiont of the genus Arsenophonus. As a typical secondary symbiont, Candidatus Sodalis melophagi infects various host tissues, including the bacteriome. We provide basic morphological and molecular characteristics of the symbiont and show that these traits also correspond to the early/intermediate state of the evolution towards symbiosis. In particular, we demonstrate the ability of the bacterium to live in insect cell culture as well as in cell-free medium. We also provide basic characteristics of the type three secretion system, and using three reference sequences (16S rDNA, groEL and the spaPQR region) we show that the bacterium branched within the genus Sodalis but originated independently of the two previously described symbionts of hippoboscoids. We propose the name Candidatus Sodalis melophagi for this new bacterium.

  12. Candidatus Sodalis melophagi sp. nov.: phylogenetically independent comparative model to the tsetse fly symbiont Sodalis glossinidius.

    Science.gov (United States)

    Chrudimský, Tomáš; Husník, Filip; Nováková, Eva; Hypša, Václav

    2012-01-01

    Bacteria of the genus Sodalis live in symbiosis with various groups of insects. The best known member of this group, a secondary symbiont of tsetse flies, Sodalis glossinidius, has become one of the most important models in investigating the establishment and evolution of insect-bacteria symbiosis. It represents a bacterium in the early/intermediate state of the transition towards symbiosis, which allows for exploring such interesting topics as the usage of secretory systems for entering the host cell, the tempo of genome modification, and metabolic interaction with a coexisting primary symbiont. In this study, we describe a new Sodalis species which could provide a useful comparative model to the tsetse symbiont. It lives in association with Melophagus ovinus, an insect related to tsetse flies, and resembles S. glossinidius in several important traits. Similar to S. glossinidius, it cohabits the host with another symbiotic bacterium, the bacteriome-harbored primary symbiont of the genus Arsenophonus. As a typical secondary symbiont, Candidatus Sodalis melophagi infects various host tissues, including the bacteriome. We provide basic morphological and molecular characteristics of the symbiont and show that these traits also correspond to the early/intermediate state of the evolution towards symbiosis. In particular, we demonstrate the ability of the bacterium to live in insect cell culture as well as in cell-free medium. We also provide basic characteristics of the type three secretion system, and using three reference sequences (16S rDNA, groEL and the spaPQR region) we show that the bacterium branched within the genus Sodalis but originated independently of the two previously described symbionts of hippoboscoids. We propose the name Candidatus Sodalis melophagi for this new bacterium.

  13. Stroke type differentiation using spectrally constrained multifrequency EIT: evaluation of feasibility in a realistic head model

    International Nuclear Information System (INIS)

    Malone, Emma; Jehl, Markus; Arridge, Simon; Betcke, Timo; Holder, David

    2014-01-01

    We investigate the application of multifrequency electrical impedance tomography (MFEIT) to imaging the brain in stroke patients. The use of MFEIT could enable early diagnosis and thrombolysis of ischaemic stroke, and therefore improve the outcome of treatment. Recent advances in the imaging methodology suggest that the use of spectral constraints could allow for the reconstruction of a one-shot image. We performed a simulation study to investigate the feasibility of imaging stroke in a head model with realistic conductivities. We introduced increasing levels of modelling errors to test the robustness of the method to the most common sources of artefact. We considered the case of errors in the electrode placement, spectral constraints, and contact impedance. The results indicate that errors in the position and shape of the electrodes can affect image quality, although our imaging method was successful in identifying tissues with sufficiently distinct spectra. (paper)
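
    To make the idea of spectral constraints concrete, here is a minimal sketch of the per-voxel step: the measured conductivity spectrum is expressed as a nonnegative mixture of known tissue spectra and solved by nonnegative least squares. The tissue spectra and frequencies below are illustrative placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: hypothetical conductivity spectra (S/m) of three candidate
# tissues (healthy brain, blood, ischaemic tissue) at five frequencies.
# The numbers are illustrative placeholders, not literature values.
A = np.array([
    [0.10, 0.70, 0.06],
    [0.12, 0.72, 0.07],
    [0.15, 0.75, 0.08],
    [0.20, 0.80, 0.09],
    [0.30, 0.90, 0.11],
])

sigma_meas = 0.6 * A[:, 0] + 0.4 * A[:, 1]   # synthetic voxel: 60% brain, 40% blood

fractions, residual = nnls(A, sigma_meas)    # nonnegative tissue volume fractions
print(fractions)                             # ~[0.6, 0.4, 0.0]
```

    The nonnegativity constraint is what lets tissues with sufficiently distinct spectra be separated even from a one-shot multifrequency measurement.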

  14. A hydrodynamical model of Kepler's supernova remnant constrained by x-ray spectra

    International Nuclear Information System (INIS)

    Ballet, J.; Arnaud, M.; Rothenflug, R.; Chieze, J.P.; Magne, B.

    1988-01-01

    The remnant of the historical supernova observed by Kepler in 1604 was recently observed in x-rays by the EXOSAT satellite up to 10 keV. A strong Fe K emission line around 6.5 keV is readily apparent in the spectrum. From an analysis of the light curve of the SN, reconstructed from historical descriptions, a previous study proposed to classify it as type I. Standard models of SN I based on carbon deflagration of a white dwarf predict the synthesis of about 0.5 M☉ of iron in the ejecta. Observing the iron line is a crucial check for such models. It has been argued that the light curve of SN II-L is very similar to that of SN I and that the original observations are compatible with either type. In view of this uncertainty the authors have run a hydrodynamics-ionization code for both SN II and SN I remnants

  15. Constraining the thermal conditions of impact environments through integrated low-temperature thermochronometry and numerical modeling

    Science.gov (United States)

    Kelly, N. M.; Marchi, S.; Mojzsis, S. J.; Flowers, R. M.; Metcalf, J. R.; Bottke, W. F., Jr.

    2017-12-01

    Impacts have a significant physical and chemical influence on the surface conditions of a planet. The cratering record is used to understand a wide array of impact processes, such as the evolution of the impact flux through time. However, the relationship between impactor size and a resulting impact crater remains controversial (e.g., Bottke et al., 2016). Likewise, small variations in the impact velocity are known to significantly affect the thermal-mechanical disturbances in the aftermath of a collision. Development of more robust numerical models for impact cratering has implications for how we evaluate the disruptive capabilities of impact events, including the extent and duration of thermal anomalies, the volume of ejected material, and the resulting landscape of impacted environments. To address uncertainties in crater scaling relationships, we present an approach and methodology that integrates numerical modeling of the thermal evolution of terrestrial impact craters with low-temperature, (U-Th)/He thermochronometry. The approach uses time-temperature (t-T) paths of crust within an impact crater, generated from numerical simulations of an impact. These t-T paths are then used in forward models to predict the resetting behavior of (U-Th)/He ages in the mineral chronometers apatite and zircon. Differences between the predicted and measured (U-Th)/He ages from a modeled terrestrial impact crater can then be used to evaluate parameters in the original numerical simulations, and refine the crater scaling relationships. We expect our methodology to additionally inform our interpretation of impact products, such as lunar impact breccias and meteorites, providing robust constraints on their thermal histories. In addition, the method is ideal for sample return mission planning - robust "prediction" of ages we expect from a given impact environment enhances our ability to target sampling sites on the Moon, Mars or other solar system bodies where impacts have strongly
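
    A minimal sketch of the forward-modeling step described above: fractional He loss for a spherical diffusion domain is accumulated along a time-temperature path using an Arrhenius diffusivity and the early-time loss approximation of Fechtig and Kalbitzer (1966). The kinetic parameters are placeholders of apatite-like magnitude, not recommended calibration values.

```python
import numpy as np

def fractional_loss(tT_path, D0_a2=1e7, Ea=138e3, R=8.314):
    """Fractional He loss for a spherical diffusion domain along a
    time-temperature path. D0_a2 = D0/a^2 (1/s) and Ea (J/mol); the
    defaults are placeholders of apatite-like magnitude.
    tT_path: iterable of (duration_in_seconds, temperature_in_K)."""
    tau = sum(dt * D0_a2 * np.exp(-Ea / (R * T)) for dt, T in tT_path)
    f = 6.0 * np.sqrt(tau / np.pi) - 3.0 * tau   # early-time sphere approximation
    return min(max(f, 0.0), 1.0)

# A brief post-impact thermal pulse: 1 kyr at 350 K.
print(fractional_loss([(3.15e10, 350.0)]))   # ~0.09, i.e. partial resetting
```

    Comparing such predicted partial-resetting fractions (hence predicted ages) against measured (U-Th)/He ages is what feeds back into the crater-scaling evaluation.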

  16. Large-scale coastal and fluvial models constrain the late Holocene evolution of the Ebro Delta

    Directory of Open Access Journals (Sweden)

    J. H. Nienhuis

    2017-09-01

    Full Text Available The distinctive plan-view shape of the Ebro Delta coast reveals a rich morphologic history. The degree to which the form and depositional history of the Ebro and other deltas represent autogenic (internal) dynamics or allogenic (external) forcing remains a prominent challenge for paleo-environmental reconstructions. Here we use simple coastal and fluvial morphodynamic models to quantify paleo-environmental changes affecting the Ebro Delta over the late Holocene. Our findings show that these models are able to broadly reproduce the Ebro Delta morphology with simple fluvial and wave-climate histories. Based on numerical model experiments and the preserved and modern shape of the Ebro Delta plain, we estimate that a phase of rapid shoreline progradation began approximately 2100 years BP, requiring approximately a doubling in the coarse-grained fluvial sediment supply to the delta. River profile simulations suggest that an instantaneous and sustained increase in coarse-grained sediment supply to the delta requires a combined increase in both flood discharge and sediment supply from the drainage basin. The persistence of rapid delta progradation throughout the last 2100 years suggests an anthropogenic control on sediment supply and flood intensity. Using proxy records of the North Atlantic Oscillation, we do not find evidence that changes in wave climate aided this delta expansion. Our findings highlight how scenario-based investigations of deltaic systems using simple models can assist first-order quantitative paleo-environmental reconstructions, elucidating the effects of past human influence and climate change, and allowing a better understanding of the future of deltaic landforms.

  17. Reliability constrained decision model for energy service provider incorporating demand response programs

    International Nuclear Information System (INIS)

    Mahboubi-Moghaddam, Esmaeil; Nayeripour, Majid; Aghaei, Jamshid

    2016-01-01

    Highlights: • The operation of Energy Service Providers (ESPs) in electricity markets is modeled. • Demand response as the cost-effective solution is used for energy service provider. • The market price uncertainty is modeled using the robust optimization technique. • The reliability of the distribution network is embedded into the framework. • The simulation results demonstrate the benefits of robust framework for ESPs. - Abstract: Demand response (DR) programs are becoming a critical concept for the efficiency of current electric power industries. Therefore, its various capabilities and barriers have to be investigated. In this paper, an effective decision model is presented for the strategic behavior of energy service providers (ESPs) to demonstrate how to participate in the day-ahead electricity market and how to allocate demand in the smart distribution network. Since market price affects DR and vice versa, a new two-step sequential framework is proposed, in which unit commitment problem (UC) is solved to forecast the expected locational marginal prices (LMPs), and successively DR program is applied to optimize the total cost of providing energy for the distribution network customers. This total cost includes the cost of purchased power from the market and distributed generation (DG) units, incentive cost paid to the customers, and compensation cost of power interruptions. To obtain compensation cost, the reliability evaluation of the distribution network is embedded into the framework using some innovative constraints. Furthermore, to consider the unexpected behaviors of the other market participants, the LMP prices are modeled as the uncertainty parameters using the robust optimization technique, which is more practical compared to the conventional stochastic approach. The simulation results demonstrate the significant benefits of the presented framework for the strategic performance of ESPs.

  18. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.; Held, Isaac M.; Stenchikov, Georgiy L.; Zeng, Fanrong; Horowitz, Larry W.

    2014-01-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1% yr^-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.
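
    The logic of comparing volcanic pulses with abrupt CO2 steps can be illustrated with a toy two-layer energy balance model (in the generic spirit of such models, not the GFDL CM2.1 configuration); all parameter values below are typical illustrative numbers.

```python
import numpy as np

# Two-layer energy balance model; parameter values are illustrative.
lam, gamma = 1.2, 0.7    # W m^-2 K^-1: radiative restoring, ocean heat exchange
C, C0 = 8.0, 100.0       # W yr m^-2 K^-1: mixed-layer and deep-ocean heat capacities
dt, years = 0.01, 50
n = int(years / dt)

def integrate(forcing):
    """Integrate upper-layer (T) and deep-ocean (T0) temperatures."""
    T, T0 = 0.0, 0.0
    out = np.empty(n)
    for i in range(n):
        F = forcing(i * dt)
        dT = (F - lam * T - gamma * (T - T0)) / C
        dT0 = gamma * (T - T0) / C0
        T, T0 = T + dt * dT, T0 + dt * dT0
        out[i] = T
    return out

step = integrate(lambda t: 3.7)                          # abrupt 2xCO2 forcing
volcano = integrate(lambda t: -15.0 * np.exp(-t / 1.0))  # ~1 yr volcanic pulse
print(step[-1], volcano[-1])   # fast-response warming vs. recovered cooling
```

    In this picture the short-time response is governed by the mixed layer (C) and the combined restoring λ + γ, which is why a volcanic pulse probes a "fast" sensitivity that bounds the TCR from below.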

  19. Constraining Transient Climate Sensitivity Using Coupled Climate Model Simulations of Volcanic Eruptions

    KAUST Repository

    Merlis, Timothy M.

    2014-10-01

    Coupled climate model simulations of volcanic eruptions and abrupt changes in CO2 concentration are compared in multiple realizations of the Geophysical Fluid Dynamics Laboratory Climate Model, version 2.1 (GFDL CM2.1). The change in global-mean surface temperature (GMST) is analyzed to determine whether a fast component of the climate sensitivity of relevance to the transient climate response (TCR; defined with the 1% yr^-1 CO2-increase scenario) can be estimated from shorter-time-scale climate changes. The fast component of the climate sensitivity estimated from the response of the climate model to volcanic forcing is similar to that of the simulations forced by abrupt CO2 changes but is 5%-15% smaller than the TCR. In addition, the partition between the top-of-atmosphere radiative restoring and ocean heat uptake is similar across radiative forcing agents. The possible asymmetry between warming and cooling climate perturbations, which may affect the utility of volcanic eruptions for estimating the TCR, is assessed by comparing simulations of abrupt CO2 doubling to abrupt CO2 halving. There is slightly less (~5%) GMST change in 0.5 × CO2 simulations than in 2 × CO2 simulations on the short (~10 yr) time scales relevant to the fast component of the volcanic signal. However, inferring the TCR from volcanic eruptions is more sensitive to uncertainties from internal climate variability and the estimation procedure. The response of the GMST to volcanic eruptions is similar in GFDL CM2.1 and GFDL Climate Model, version 3 (CM3), even though the latter has a higher TCR associated with a multidecadal time scale in its response. This is consistent with the expectation that the fast component of the climate sensitivity inferred from volcanic eruptions is a lower bound for the TCR.

  20. Maximizing time from the constraining European Working Time Directive (EWTD): The Heidelberg New Working Time Model.

    Science.gov (United States)

    Schimmack, Simon; Hinz, Ulf; Wagner, Andreas; Schmidt, Thomas; Strothmann, Hendrik; Büchler, Markus W; Schmitz-Winnenthal, Hubertus

    2014-01-01

    The introduction of the European Working Time Directive (EWTD) has greatly reduced the training hours of surgical residents, which translates into 30% less surgical and clinical experience. Such a dramatic drop in attendance has serious implications, such as compromised quality of medical care. At the surgical department of the University of Heidelberg, our goal was to establish a model that was compliant with the EWTD while avoiding any reduction in the quality of patient care and surgical training. We first performed workload analyses and performance statistics for all working areas of our department (operating theater, emergency room, specialized consultations, surgical wards and on-call duties) using personal interviews, time cards, medical documentation software, as well as data from the financial and personnel controlling sectors of our administration. Using that information, we specifically designed an EWTD-compatible work model and implemented it. Surgical wards and operating rooms (ORs) were not compliant with the EWTD. Between 5 pm and 8 pm, three ORs were still operating two-thirds of the time. By creating an extended work shift (7:30 am-7:30 pm), we effectively reduced the workload to less than 49% between 4 pm and 8 am, allowing the combination of an eight-hour working day with a 16-hour on-call duty; thus maximizing surgical resident training and ensuring continuity of patient care while maintaining EWTD compliance. A precise workload analysis is the key to success. The Heidelberg New Working Time Model provides a legal model which, by avoiding rotating work shifts, assures quality of patient care and surgical training.

  1. Modeling and analysis of strategic forward contracting in transmission constrained power markets

    International Nuclear Information System (INIS)

    Yu, C.W.; Chung, T.S.; Zhang, S.H.; Wang, X.

    2010-01-01

    Taking the effects of the transmission network into account, strategic forward contracting induced by the interaction of generation firms' strategies in the spot and forward markets is investigated. A two-stage game model is proposed to describe generation firms' strategic forward contracting and spot market competition. In the spot market, generation firms behave strategically by submitting bids at their nodes in the form of a linear supply function (LSF), and there are arbitrageurs who buy and resell power at different nodes where price differences exceed the costs of transmission. The owner of the grid is assumed to ration limited transmission line capacity to maximize the value of the transmission services in the spot market. Cournot-type competition is assumed for the strategic forward contract market. This two-stage model is formulated as an equilibrium problem with equilibrium constraints (EPEC), in which each firm's optimization problem in the forward market is a mathematical program with equilibrium constraints (MPEC) with the parameter-dependent spot market equilibrium as the inner problem. A nonlinear complementarity method is employed to solve this EPEC model. (author)

  2. Constraining volcanic inflation at Three Sisters Volcanic Field in Oregon, USA, through microgravity and deformation modeling

    Science.gov (United States)

    Zurek, Jeffrey; William-Jones, Glyn; Johnson, Dan; Eggers, Al

    2012-10-01

    Microgravity data were collected between 2002 and 2009 at the Three Sisters Volcanic Complex, Oregon, to investigate the causes of an ongoing deformation event west of South Sister volcano. Three different conceptual models have been proposed as the causal mechanism for the deformation event: (1) hydraulic uplift due to continual injection of magma at depth, (2) pressurization of hydrothermal systems and (3) viscoelastic response to an initial pressurization at depth. The gravitational effect of continual magma injection was modeled to be 20 to 33 μGal at the center of the deformation field, with volumes based on previous deformation studies. The gravity time series, however, did not detect a mass increase, suggesting that a viscoelastic response of the crust is the most likely cause of the deformation from 2002 to 2009. The crust deeper than 3 km in the Three Sisters region was modeled as a Maxwell viscoelastic material, and the results suggest a dynamic viscosity between 10^18 and 5 × 10^19 Pa s. This low crustal viscosity suggests that magma emplacement or stall depth is controlled by density and not by the brittle-ductile transition zone. Furthermore, these crustal properties and the observed geochemical composition gaps at Three Sisters can be best explained by different melt sources and limited magma mixing rather than by fractional crystallization. More generally, low intrusion rates, low crustal viscosity, and multiple melt sources could also explain the whole-rock compositional gaps observed at other arc volcanoes.
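
    The gravity argument reduces to a point-source estimate: a mass change ΔM at depth r produces Δg = GΔM/r² directly above it. The sketch below inverts a ~25 μGal signal for the implied mass change; the 5 km source depth is an assumption for illustration.

```python
G = 6.674e-11                      # m^3 kg^-1 s^-2

def delta_g_microgal(delta_mass_kg, depth_m):
    """Surface gravity change directly above a buried point mass."""
    return G * delta_mass_kg / depth_m**2 * 1e8   # 1 m/s^2 = 1e8 microGal

# Mass change needed to produce 25 microGal at an assumed 5 km depth:
needed = 25e-8 / G * 5000.0**2
print(f"{needed:.2e} kg")          # ~9e10 kg; its absence argues against injection
```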

  3. Modeling and Economic Analysis of Power Grid Operations in a Water Constrained System

    Science.gov (United States)

    Zhou, Z.; Xia, Y.; Veselka, T.; Yan, E.; Betrie, G.; Qiu, F.

    2016-12-01

    The power sector is the largest water user in the United States. Depending on the cooling technology employed at a facility, steam-electric power stations withdraw and consume large amounts of water for each megawatt hour of electricity generated. The amounts are dependent on many factors, including ambient air and water temperatures, cooling technology, etc. Water demands from most economic sectors are typically highest during summertime. For most systems, this coincides with peak electricity demand and consequently a high demand for thermal power plant cooling water. Supplies however are sometimes limited due to seasonal precipitation fluctuations, including sporadic droughts that lead to water scarcity. When this occurs, there is an impact on both unit commitments and the real-time dispatch. In this work, we model the cooling efficiency of several different types of thermal power generation technologies as a function of power output level and daily temperature profiles. Unit-specific relationships are then integrated in a power grid operational model that minimizes total grid production cost while reliably meeting hourly loads. Grid operation is subject to power plant physical constraints, transmission limitations, water availability and environmental constraints such as power plant water exit temperature limits. The model is applied to a standard IEEE-118 bus system under various water availability scenarios. Results show that water availability has a significant impact on power grid economics.
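
    A toy version of the dispatch problem described above can be written as a linear program with a shared cooling-water budget; the unit costs, capacities, and water-use coefficients below are illustrative, not from the study.

```python
import numpy as np
from scipy.optimize import linprog

cost = np.array([20.0, 35.0, 50.0])      # $/MWh marginal costs (illustrative)
pmax = np.array([400.0, 300.0, 300.0])   # MW unit capacities
water = np.array([2.0, 1.0, 0.5])        # m^3 of cooling water per MWh (illustrative)
load, water_cap = 700.0, 1000.0          # MW demand; m^3 of water available this hour

res = linprog(
    c=cost,
    A_ub=[water], b_ub=[water_cap],      # total water withdrawal limit
    A_eq=[np.ones(3)], b_eq=[load],      # supply must meet demand
    bounds=list(zip(np.zeros(3), pmax)),
)
# With the cap binding, dispatch shifts from the cheap water-intensive unit
# toward water-lean units: roughly [333, 300, 67] MW here.
print(res.x, res.fun)
```

    Tightening water_cap reproduces the qualitative result of the study: scarcity re-orders the merit list and raises total production cost.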

  4. Coupling geophysical investigation with hydrothermal modeling to constrain the enthalpy classification of a potential geothermal resource.

    Science.gov (United States)

    White, Jeremy T.; Karakhanian, Arkadi; Connor, Chuck; Connor, Laura; Hughes, Joseph D.; Malservisi, Rocco; Wetmore, Paul

    2015-01-01

    An appreciable challenge in volcanology and geothermal resource development is to understand the relationships between volcanic systems and low-enthalpy geothermal resources. The enthalpy of an undeveloped geothermal resource in the Karckar region of Armenia is investigated by coupling geophysical and hydrothermal modeling. The results of 3-dimensional inversion of gravity data provide key inputs into a hydrothermal circulation model of the system and associated hot springs, which is used to evaluate possible geothermal system configurations. Hydraulic and thermal properties are specified using maximum a priori estimates. Limited constraints provided by temperature data collected from an existing down-gradient borehole indicate that the geothermal system can most likely be classified as low-enthalpy and liquid dominated. We find the heat source for the system is likely cooling quartz monzonite intrusions in the shallow subsurface and that meteoric recharge in the pull-apart basin circulates to depth, rises along basin-bounding faults and discharges at the hot springs. While other combinations of subsurface properties and geothermal system configurations may fit the temperature distribution equally well, we demonstrate that the low-enthalpy system is reasonably explained based largely on interpretation of surface geophysical data and relatively simple models.

  5. Constraining the Timescales of Rehydration in Nominally Anhydrous Minerals Using 3D Numerical Diffusion Models

    Science.gov (United States)

    Lynn, K. J.; Warren, J. M.

    2017-12-01

    Nominally anhydrous minerals (NAMs) are important for characterizing deep-Earth water reservoirs, but the water contents of olivine (ol), orthopyroxene (opx), and clinopyroxene (cpx) in peridotites generally do not reflect mantle equilibrium conditions. Ol is typically "dry" and decoupled from H in cpx and opx, which is inconsistent with models of partial melting and/or diffusive loss of H during upwelling beneath mid-ocean ridges. The rehydration of mantle pyroxenes via late-stage re-fertilization has been invoked to explain their relatively high water contents. Here, we use sophisticated 3D diffusion models (after Shea et al., 2015, Am Min) of H in ol, opx, and cpx to investigate the timescales of rehydration across a range of conditions relevant for melt-rock interaction and serpentinization of peridotites. Numerical crystals with 1 mm c-axis lengths and realistic crystal morphologies are modeled using recent H diffusivities that account for compositional variation and diffusion anisotropy. Models were run over timescales of minutes to millions of years and temperatures from 300 to 1200°C. Our 3D models show that, at the high-T end of the range, H concentrations in the cores of NAMs are partially re-equilibrated in as little as a few minutes, and completely re-equilibrated within hours to weeks. At low-T (300°C), serpentinization can induce considerable diffusion in cpx and opx. H contents are 30% re-equilibrated after continuous exposure to hydrothermal fluids for 10^2 and 10^5 years, respectively, which is inconsistent with previous interpretations that there is no effect on H in opx under similar conditions. Ol is unaffected after 1 Myr due to the slower diffusivity of the proton-vacancy mechanism at 300°C (2-4 log units lower than for opx). In the middle of the T range (700-1000°C), rehydration of opx and cpx occurs over hours to days, while ol is somewhat slower to respond (days to weeks), potentially allowing the decoupling observed in natural samples to
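
    The timescale contrasts reported above follow from the characteristic diffusive re-equilibration time t ≈ a²/(π²D) with an Arrhenius diffusivity. The sketch below evaluates it at magmatic and serpentinization temperatures; the kinetic parameters are round placeholder numbers, not the calibrations used in the study.

```python
import numpy as np

R = 8.314   # J mol^-1 K^-1

def tau_years(a_m, D0, Ea, T_K):
    """Characteristic re-equilibration time t ~ a^2 / (pi^2 D) for a grain
    of radius a_m, with Arrhenius D = D0 exp(-Ea / RT). D0 (m^2/s) and
    Ea (J/mol) here are round placeholders, not study calibrations."""
    D = D0 * np.exp(-Ea / (R * T_K))
    return a_m**2 / (np.pi**2 * D) / 3.15e7   # seconds -> years

for T_K in (1473.0, 573.0):   # ~1200 C (magmatic) vs. ~300 C (serpentinization)
    print(T_K, tau_years(5e-4, 1e-5, 200e3, T_K))   # hours vs. >1e8 yr
```

    The steep exponential temperature dependence is why rehydration takes minutes to weeks at magmatic temperatures but can leave olivine untouched over 1 Myr at 300°C.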

  6. Present mantle flow in North China Craton constrained by seismic anisotropy and numerical modelling

    Science.gov (United States)

    Qu, W.; Guo, Z.; Zhang, H.; Chen, Y. J.

    2017-12-01

    The North China Craton (NCC) has undergone complicated geodynamic processes during the Cenozoic, including the westward subduction of the Pacific plate to its east and the collision of the India-Eurasia plates to its southwest. Shear wave splitting measurements in the NCC reveal distinct seismic anisotropy patterns at different tectonic blocks, that is, the predominantly NW-SE trending alignment of fast directions in the western NCC and eastern NCC, weak anisotropy within the Ordos block, and N-S fast polarization beneath the Trans-North China Orogen (TNCO). To better understand the origin of seismic anisotropy from SKS splitting in the NCC, we obtain a high-resolution dynamic model that incorporates multiple geophysical observations and state-of-the-art numerical methods. We calculate the mantle flow using the most recent version of the software ASPECT (Kronbichler et al., 2012) with high-resolution temperature and density structures from a recent 3-D thermal-chemical model by Guo et al. (2016). The thermal-chemical model is obtained by multi-observable probabilistic inversion using high-quality surface wave measurements, potential fields, topography, and surface heat flow (Guo et al., 2016). The viscosity is then estimated by combining dislocation creep, diffusion creep, and plasticity, and depends on temperature, pressure, and chemical composition. We then calculate the seismic anisotropy from the shear deformation of the mantle flow using D-REX, and predict the fast direction and delay time of SKS splitting. We find that when complex boundary conditions are applied, including the far-field effects of the deep subduction of the Pacific plate and the eastward escape of the Tibetan Plateau, our model can successfully predict the observed shear wave splitting patterns. Our model indicates that the seismic anisotropy revealed by SKS is primarily a result of the LPO of olivine due to shear deformation by asthenospheric flow. We suggest that two branches of mantle flow may contribute to the

  7. CONSTRAINING A MODEL OF TURBULENT CORONAL HEATING FOR AU MICROSCOPII WITH X-RAY, RADIO, AND MILLIMETER OBSERVATIONS

    International Nuclear Information System (INIS)

    Cranmer, Steven R.; Wilner, David J.; MacGregor, Meredith A.

    2013-01-01

    Many low-mass pre-main-sequence stars exhibit strong magnetic activity and coronal X-ray emission. Even after the primordial accretion disk has been cleared out, the star's high-energy radiation continues to affect the formation and evolution of dust, planetesimals, and large planets. Young stars with debris disks are thus ideal environments for studying the earliest stages of non-accretion-driven coronae. In this paper we simulate the corona of AU Mic, a nearby active M dwarf with an edge-on debris disk. We apply a self-consistent model of coronal loop heating that was derived from numerical simulations of solar field-line tangling and magnetohydrodynamic turbulence. We also synthesize the modeled star's X-ray luminosity and thermal radio/millimeter continuum emission. A realistic set of parameter choices for AU Mic produces simulated observations that agree with all existing measurements and upper limits. This coronal model thus represents an alternative explanation for a recently discovered ALMA central emission peak that was suggested to be the result of an inner 'asteroid belt' within 3 AU of the star. However, it is also possible that the central 1.3 mm peak is caused by a combination of active coronal emission and a bright inner source of dusty debris. Additional observations of this source's spatial extent and spectral energy distribution at millimeter and radio wavelengths will better constrain the relative contributions of the proposed mechanisms

  8. Constrained minimization problems for the reproduction number in meta-population models.

    Science.gov (United States)

    Poghotanyan, Gayane; Feng, Zhilan; Glasser, John W; Hill, Andrew N

    2018-02-14

    The basic reproduction number (R0) can be considerably higher in an SIR model with heterogeneous mixing compared to that from a corresponding model with homogeneous mixing. For example, in the case of measles, mumps and rubella in San Diego, CA, Glasser et al. (Lancet Infect Dis 16(5):599-605, 2016. https://doi.org/10.1016/S1473-3099(16)00004-9), reported an increase of 70% in R0 when heterogeneity was accounted for. Meta-population models with simple heterogeneous mixing functions, e.g., proportionate mixing, have been employed to identify optimal vaccination strategies using an approach based on the gradient of the effective reproduction number (Rv), which consists of partial derivatives of Rv with respect to the proportions immune p_i in sub-groups i (Feng et al. in J Theor Biol 386:177-187, 2015. https://doi.org/10.1016/j.jtbi.2015.09.006; Math Biosci 287:93-104, 2017. https://doi.org/10.1016/j.mbs.2016.09.013). These papers consider cases in which an optimal vaccination strategy exists. However, in general, the optimal solution identified using the gradient may not be feasible for some parameter values (i.e., vaccination coverages outside the unit interval). In this paper, we derive the analytic conditions under which the optimal solution is feasible. Explicit expressions for the optimal solutions in the case of n = 2 sub-populations are obtained, and the bounds for optimal solutions are derived for n > 2 sub-populations. This is done for general mixing functions, and examples of proportionate and preferential mixing are presented. Of special significance is the result that for general mixing schemes, both R0 and Rv are bounded below and above by their corresponding expressions when mixing is proportionate and isolated, respectively.
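
    A minimal sketch of the quantities involved: for an n-group SIR model with proportionate mixing, the effective reproduction number is the spectral radius of the next-generation matrix, and vaccination enters through the proportions immune p_i. The activity rates, group sizes, and epidemiological parameters below are illustrative assumptions, not fitted values.

```python
import numpy as np

def effective_R(a, N, p, beta=1.0, gamma=0.5):
    """Effective reproduction number of an n-group SIR model with
    proportionate mixing, via the next-generation matrix. a: contact
    activity rates; N: group sizes; p: proportions immune. beta and
    gamma are illustrative transmission and recovery rates."""
    a, N, p = map(np.asarray, (a, N, p))
    S = (1.0 - p) * N                             # susceptibles per group
    C = np.sum(a * N)                             # total contact supply
    K = (beta / gamma) * np.outer(a * S, a) / C   # K[i,j]: cases in i per case in j
    return np.max(np.abs(np.linalg.eigvals(K)))

a, N = [3.0, 1.0], [2e5, 8e5]
print(effective_R(a, N, p=[0.0, 0.0]))   # R0 ~ 3.7, above the homogeneous 2.8
print(effective_R(a, N, p=[0.5, 0.5]))   # Rv under uniform 50% coverage
```

    Differentiating Rv with respect to the p_i (numerically here) gives the gradient used to steer coverage toward the groups with the greatest marginal effect, subject to 0 ≤ p_i ≤ 1.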

  9. Modeling of thin-walled structures interacting with acoustic media as constrained two-dimensional continua

    Science.gov (United States)

    Rabinskiy, L. N.; Zhavoronok, S. I.

    2018-04-01

    The transient interaction of acoustic media and elastic shells is considered on the basis of the transition function approach. The three-dimensional hyperbolic initial boundary-value problem is reduced to a two-dimensional problem of shell theory with integral operators approximating the effect of the acoustic medium on the shell dynamics. The kernels of these integral operators are determined by the elementary solution of the problem of acoustic wave diffraction at a rigid obstacle with the same boundary shape as the wetted shell surface. The closed-form elementary solution for arbitrary convex obstacles can be obtained at the initial interaction stages under the so-called "thin layer hypothesis". The shell-wave interaction model, defined by integro-differential dynamic equations with analytically determined kernels of the integral operators, thus becomes two-dimensional but nonlocal in time. On the other hand, the initial interaction stage results in localized dynamic loadings and consequently in complex strain and stress states that require higher-order shell theories. Here the modified theory of I. N. Vekua–A. A. Amosov type is formulated in terms of analytical continuum dynamics. The shell model is constructed on a two-dimensional manifold within a set of field variables, a Lagrangian density, and constraint equations following from the boundary conditions "shifted" from the shell faces to its base surface. Such an approach allows one to construct consistent low-order shell models within a unified formal hierarchy. The equations of the Nth-order shell theory are singularly perturbed and contain second-order partial derivatives with respect to time and the surface coordinates, whereas the numerical integration of systems of first-order equations is more efficient. Such systems can be obtained as Hamilton-de Donder-Weyl-type equations for the Lagrangian dynamical system. The Hamiltonian formulation of the elementary Nth-order shell theory is

  10. Constraining Carbonaceous Aerosol Climate Forcing by Bridging Laboratory, Field and Modeling Studies

    Science.gov (United States)

    Dubey, M. K.; Aiken, A. C.; Liu, S.; Saleh, R.; Cappa, C. D.; Williams, L. R.; Donahue, N. M.; Gorkowski, K.; Ng, N. L.; Mazzoleni, C.; China, S.; Sharma, N.; Yokelson, R. J.; Allan, J. D.; Liu, D.

    2014-12-01

    Biomass and fossil fuel combustion emits black (BC) and brown carbon (BrC) aerosols that absorb sunlight to warm climate, and organic carbon (OC) aerosols that scatter sunlight to cool climate. The net forcing depends strongly on the composition, mixing state and transformations of these carbonaceous aerosols. Complexities arising from the large variability of fuel types, combustion conditions and aging processes have confounded their treatment in models. We analyse recent laboratory and field measurements to uncover the fundamental mechanisms that control the chemical, optical and microphysical properties of carbonaceous aerosols, elaborated below: The wavelength dependence of absorption and the single scattering albedo (ω) of fresh biomass burning aerosols produced from many fuels during FLAME-4 was analysed to determine the factors that control the variability in ω. Results show that ω varies strongly with fire-integrated modified combustion efficiency (MCEFI): higher MCEFI results in lower ω values and a greater spectral dependence of ω (Liu et al GRL 2014). A parameterization of ω as a function of MCEFI for fresh BB aerosols is derived from the laboratory data and is evaluated against field data, including BBOP. Our laboratory studies also demonstrate that BrC production correlates with BC, indicating that they are produced by a common mechanism that is driven by MCEFI (Saleh et al NGeo 2014). We show that BrC absorption is concentrated in the extremely low volatility component that favours long-range transport. We observe substantial absorption enhancement for internally mixed BC from diesel and wood combustion near London during ClearFlo. While the absorption enhancement is due to BC particles coated by co-emitted OC in urban regions, it increases with photochemical age in rural areas and is simulated by core-shell models. We measure BrC absorption that is concentrated in the extremely low volatility components and attribute it to wood burning. Our results support
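
    The ω-MCEFI parameterization can be illustrated schematically: fit the per-fire single scattering albedo against fire-integrated MCEFI and clip to the physical range. The data points below are synthetic stand-ins, and the linear form is an illustrative choice; the actual functional form and coefficients are those fit to FLAME-4 data in Liu et al. (GRL 2014).

```python
import numpy as np

# Synthetic stand-ins for per-fire averages; the real fit uses FLAME-4 data.
mcefi = np.array([0.90, 0.92, 0.94, 0.96, 0.98])
ssa = np.array([0.95, 0.88, 0.75, 0.55, 0.35])   # ssa falls as MCEFI rises

slope, intercept = np.polyfit(mcefi, ssa, 1)

def ssa_param(m):
    """Illustrative linear parameterization of fresh-smoke ssa vs. MCEFI."""
    return float(np.clip(slope * m + intercept, 0.0, 1.0))

print(ssa_param(0.95))
```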

  11. Constraining Gamma-Ray Pulsar Gap Models with a Simulated Pulsar Population

    Science.gov (United States)

    Pierbattista, Marco; Grenier, I. A.; Harding, A. K.; Gonthier, P. L.

    2012-01-01

    With the large sample of young gamma-ray pulsars discovered by the Fermi Large Area Telescope (LAT), population synthesis has become a powerful tool for comparing their collective properties with model predictions. We synthesised a pulsar population based on a radio emission model and four gamma-ray gap models (Polar Cap, Slot Gap, Outer Gap, and One Pole Caustic). Applying gamma-ray and radio visibility criteria, we normalise the simulation to the number of detected radio pulsars by a select group of ten radio surveys. The luminosity and the wide beams from the outer gaps can easily account for the number of Fermi detections in 2 years of observations. The wide slot-gap beam requires an increase by a factor of 10 of the predicted luminosity to produce a reasonable number of gamma-ray pulsars. Such large increases in the luminosity may be accommodated by implementing offset polar caps. The narrow polar-cap beams contribute at most only a handful of LAT pulsars. Using standard distributions in birth location and pulsar spin-down power (E), we skew the initial magnetic field and period distributions in a an attempt to account for the high E Fermi pulsars. While we compromise the agreement between simulated and detected distributions of radio pulsars, the simulations fail to reproduce the LAT findings: all models under-predict the number of LAT pulsars with high E , and they cannot explain the high probability of detecting both the radio and gamma-ray beams at high E. The beaming factor remains close to 1.0 over 4 decades in E evolution for the slot gap whereas it significantly decreases with increasing age for the outer gaps. The evolution of the enhanced slot-gap luminosity with E is compatible with the large dispersion of gamma-ray luminosity seen in the LAT data. The stronger evolution predicted for the outer gap, which is linked to the polar cap heating by the return current, is apparently not supported by the LAT data. The LAT sample of gamma-ray pulsars

  12. Functional and phylogenetic ecology in R

    CERN Document Server

    Swenson, Nathan G

    2014-01-01

    Functional and Phylogenetic Ecology in R is designed to teach readers to use R for phylogenetic and functional trait analyses. Over the past decade, a dizzying array of tools and methods were generated to incorporate phylogenetic and functional information into traditional ecological analyses. Increasingly these tools are implemented in R, thus greatly expanding their impact. Researchers getting started in R can use this volume as a step-by-step entryway into phylogenetic and functional analyses for ecology in R. More advanced users will be able to use this volume as a quick reference to understand particular analyses. The volume begins with an introduction to the R environment and handling relevant data in R. Chapters then cover phylogenetic and functional metrics of biodiversity; null modeling and randomizations for phylogenetic and functional trait analyses; integrating phylogenetic and functional trait information; and interfacing the R environment with a popular C-based program. This book presents a uni...
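
    The book's material is in R; as a language-neutral illustration of one of its core recipes (a tip-shuffling null model for Faith's phylogenetic diversity), here is a self-contained Python sketch on a toy tree.

```python
import random
import statistics

# Toy rooted tree: node -> (parent, length of the branch to the parent).
tree = {
    "A": ("n1", 1.0), "B": ("n1", 1.0),
    "C": ("n2", 2.0), "D": ("n2", 2.0),
    "n1": ("root", 3.0), "n2": ("root", 1.0),
}
tips = ["A", "B", "C", "D"]

def faith_pd(sample):
    """Faith's PD: total branch length connecting the sample to the root."""
    nodes = set()
    for tip in sample:
        node = tip
        while node in tree:          # walk to the root, collecting branches
            nodes.add(node)
            node = tree[node][0]
    return sum(tree[n][1] for n in nodes)

observed = faith_pd(["A", "C"])                                # community PD
null = [faith_pd(random.sample(tips, 2)) for _ in range(999)]  # tip-shuffling null
ses = (observed - statistics.mean(null)) / statistics.pstdev(null)
print(observed, ses)   # standardized effect size vs. the null distribution
```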

  13. The Next Generation of Numerical Modeling in Mergers- Constraining the Star Formation Law

    Science.gov (United States)

    Chien, Li-Hsin

    2010-09-01

    Spectacular images of colliding galaxies like the "Antennae", taken with the Hubble Space Telescope, have revealed that a burst of star/cluster formation occurs whenever gas-rich galaxies interact. The ages and locations of these clusters reveal the interaction history and provide crucial clues to the process of star formation in galaxies. We propose to carry out state-of-the-art numerical simulations to model six nearby galaxy mergers (Arp 256, NGC 7469, NGC 4038/39, NGC 520, NGC 2623, NGC 3256), hence increasing the number with this level of sophistication by a factor of 3. These simulations provide specific predictions for the age and spatial distributions of young star clusters. The comparison between these simulation results and the observations will allow us to answer a number of fundamental questions, including: 1) is shock-induced or density-dependent star formation the dominant mechanism; 2) are the demographics (i.e. mass and age distributions) of the clusters in different mergers similar, i.e. "universal", or very different; and 3) will it be necessary to include other mechanisms, e.g., locally triggered star formation, in the models to better match the observations?

  14. CONSTRAINING THE NFW POTENTIAL WITH OBSERVATIONS AND MODELING OF LOW SURFACE BRIGHTNESS GALAXY VELOCITY FIELDS

    International Nuclear Information System (INIS)

    Kuzio de Naray, Rachel; McGaugh, Stacy S.; Mihos, J. Christopher

    2009-01-01

    We model the Navarro-Frenk-White (NFW) potential to determine if, and under what conditions, the NFW halo appears consistent with the observed velocity fields of low surface brightness (LSB) galaxies. We present mock DensePak Integral Field Unit (IFU) velocity fields and rotation curves of axisymmetric and nonaxisymmetric potentials that are well matched to the spatial resolution and velocity range of our sample galaxies. We find that the DensePak IFU can accurately reconstruct the velocity field produced by an axisymmetric NFW potential and that a tilted-ring fitting program can successfully recover the corresponding NFW rotation curve. We also find that nonaxisymmetric potentials with fixed axis ratios change only the normalization of the mock velocity fields and rotation curves and not their shape. The shape of the modeled NFW rotation curves does not reproduce the data: these potentials are unable to simultaneously bring the mock data at both small and large radii into agreement with observations. Indeed, to match the slow rise of LSB galaxy rotation curves, a specific viewing angle of the nonaxisymmetric potential is required. For each of the simulated LSB galaxies, the observer's line of sight must be along the minor axis of the potential, an arrangement that is inconsistent with a random distribution of halo orientations on the sky.
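
    For reference, the model rotation curves above derive from the NFW circular-velocity profile, sketched below; the halo parameters in the example are arbitrary illustrative choices.

```python
import numpy as np

def v_nfw(r, v200, c, r200):
    """NFW circular velocity (Navarro, Frenk & White 1996).
    r and r200 in kpc, v200 in km/s, c the halo concentration."""
    x = np.asarray(r) / r200
    mu = lambda y: np.log(1.0 + y) - y / (1.0 + y)
    return v200 * np.sqrt(mu(c * x) / (x * mu(c)))

r = np.linspace(0.5, 30.0, 60)                 # kpc
v = v_nfw(r, v200=100.0, c=8.0, r200=100.0)    # illustrative halo parameters
print(v[:3])   # steep inner rise characteristic of cuspy NFW haloes
```

    The tension described in the record is visible already in this functional form: the NFW curve rises steeply at small r, whereas observed LSB rotation curves rise slowly.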

  15. Constraining SUSY models with Fittino using measurements before, with and beyond the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Bechtle, Philip [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Desch, Klaus; Uhlenbrock, Mathias; Wienemann, Peter [Bonn Univ. (Germany). Physikalisches Inst.

    2009-07-15

    We investigate the constraints on Supersymmetry (SUSY) arising from available precision measurements using a global fit approach. When interpreted within minimal supergravity (mSUGRA), the data provide significant constraints on the masses of supersymmetric particles (sparticles), which are predicted to be light enough for an early discovery at the Large Hadron Collider (LHC). We provide predicted mass spectra including, for the first time, full uncertainty bands. The most stringent constraint comes from the measurement of the anomalous magnetic moment of the muon. Using the results of these fits, we investigate to what precision mSUGRA and more general MSSM parameters can be measured by the LHC experiments with three different integrated luminosities, for a parameter point which lies approximately in the region preferred by current data. The impact of the already available measurements on these precisions, when combined with LHC data, is also studied. We develop a method to treat ambiguities arising from different interpretations of the data within one model and provide a way to differentiate between values of different discrete parameters of a model (e.g. sign(μ) within mSUGRA). Finally, we show how measurements at a linear collider with up to 1 TeV centre-of-mass energy will help to improve the precision by an order of magnitude. (orig.)

  16. DNA and dispersal models highlight constrained connectivity in a migratory marine megavertebrate

    Science.gov (United States)

    Naro-Maciel, Eugenia; Hart, Kristen M.; Cruciata, Rossana; Putman, Nathan F.

    2016-01-01

    Population structure and spatial distribution are fundamentally important fields within ecology, evolution, and conservation biology. To investigate pan-Atlantic connectivity of globally endangered green turtles (Chelonia mydas) from two National Parks in Florida, USA, we applied a multidisciplinary approach comparing genetic analysis and ocean circulation modeling. The Everglades (EP) is a juvenile feeding ground, whereas the Dry Tortugas (DT) is used for courtship, breeding, and feeding by adults and juveniles. We sequenced two mitochondrial segments from 138 turtles sampled there from 2006-2015, and simulated oceanic transport to estimate their origins. Genetic and ocean connectivity data revealed northwestern Atlantic rookeries as the major natal sources, while southern and eastern Atlantic contributions were negligible. However, specific rookery estimates differed between genetic and ocean transport models. The combined analyses suggest that post-hatchling drift via ocean currents poorly explains the distribution of neritic juveniles and adults, but juvenile natal homing and population history likely play important roles. DT and EP were genetically similar to feeding grounds along the southern US coast, but highly differentiated from most other Atlantic groups. Despite expanded mitogenomic analysis and correspondingly increased ability to detect genetic variation, no significant differentiation between DT and EP, or among years, sexes or stages was observed. This first genetic analysis of a North Atlantic green turtle courtship area provides rare data supporting local movements and male philopatry. The study highlights the applications of multidisciplinary approaches for ecological research and conservation.

  17. Lightning NOx emissions over the USA constrained by TES ozone observations and the GEOS-Chem model

    Science.gov (United States)

    Jourdain, L.; Kulawik, S. S.; Worden, H. M.; Pickering, K. E.; Worden, J.; Thompson, A. M.

    2010-01-01

    Improved estimates of NOx from lightning sources are required to understand tropospheric NOx and ozone distributions, the oxidising capacity of the troposphere and corresponding feedbacks between chemistry and climate change. In this paper, we report new satellite ozone observations from the Tropospheric Emission Spectrometer (TES) instrument that can be used to test and constrain the parameterization of the lightning source of NOx in global models. Using the National Lightning Detection Network (NLDN) and the Long Range Lightning Detection Network (LRLDN) data as well as the HYSPLIT transport and dispersion model, we show that TES provides direct observations of ozone-enhanced layers downwind of convective events over the USA in July 2006. We find that the GEOS-Chem global chemistry-transport model with a parameterization based on cloud top height, scaled regionally and monthly to the OTD/LIS (Optical Transient Detector/Lightning Imaging Sensor) climatology, captures the ozone enhancements seen by TES. We show that the model's ability to reproduce the location of the enhancements is due to the fact that it reproduces the pattern of convective event occurrence on a daily basis during the summer of 2006 over the USA, even though it does not represent the relative distribution of lightning intensities well. However, this model with a value of 6 Tg N/yr for the lightning source (i.e., with a mean production of 260 moles NO/flash over the USA in summer) underestimates the intensities of the ozone enhancements seen by TES. By imposing a production of 520 moles NO/flash for lightning occurring in midlatitudes, which better agrees with the values proposed by the most recent studies, we decrease the bias between TES and GEOS-Chem ozone over the USA in July 2006 by 40%. However, our conclusion on the strength of the lightning source of NOx is limited by the fact that the contribution from the stratosphere is underestimated in the GEOS-Chem simulations.
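
    A rough consistency check on the numbers quoted above: combining a global climatological flash rate (~44 flashes per second from OTD/LIS) with a per-flash NO yield converts directly to Tg N per year. Note the study's 260 mol NO/flash is a US-summer value, so this is only an order-of-magnitude check.

```python
# Rough global consistency check on the source-strength numbers.
flash_rate = 44.0          # flashes per second (OTD/LIS global mean)
moles_per_flash = 260.0    # mol NO per flash
M_N = 14.0                 # g of N per mol of NO

tg_N_per_year = flash_rate * 3.156e7 * moles_per_flash * M_N / 1e12
print(f"{tg_N_per_year:.1f} Tg N/yr")   # ~5 Tg N/yr, near the 6 Tg N/yr source
```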

  18. Modeling of the Inner Coma of Comet 67P/Churyumov-Gerasimenko Constrained by VIRTIS and ROSINA Observations

    Science.gov (United States)

    Fougere, N.; Combi, M. R.; Tenishev, V.; Bieler, A. M.; Migliorini, A.; Bockelée-Morvan, D.; Toth, G.; Huang, Z.; Gombosi, T. I.; Hansen, K. C.; Capaccioni, F.; Filacchione, G.; Piccioni, G.; Debout, V.; Erard, S.; Leyrat, C.; Fink, U.; Rubin, M.; Altwegg, K.; Tzou, C. Y.; Le Roy, L.; Calmonte, U.; Berthelier, J. J.; Rème, H.; Hässig, M.; Fuselier, S. A.; Fiethe, B.; De Keyser, J.

    2015-12-01

    As it orbits around comet 67P/Churyumov-Gerasimenko (CG), the Rosetta spacecraft acquires more information about its main target. The numerous observations made at various geometries and at different times enable a good spatial and temporal coverage of the evolution of CG's cometary coma. However, the question regarding the link between the coma measurements and the nucleus activity remains relatively open, notably due to gas expansion and strong kinetic effects in the comet's rarefied atmosphere. In this work, we use coma observations made by the ROSINA-DFMS instrument to constrain the activity at the surface of the nucleus. The distribution of the H2O and CO2 outgassing is described with the use of spherical harmonics. The coordinates in the orthogonal system represented by the spherical harmonics are computed using a least-squares method, minimizing the sum of the squared residuals between an analytical coma model and the DFMS data. Then, the previously deduced activity distributions are used in a Direct Simulation Monte Carlo (DSMC) model to compute a full description of the H2O and CO2 coma of comet CG from the nucleus' surface up to several hundreds of kilometers. The DSMC outputs are used to create synthetic images, which can be directly compared with VIRTIS measurements. The good agreement between the VIRTIS observations and the DSMC model, itself constrained with ROSINA data, provides a compelling juxtaposition of the measurements from these two instruments. Acknowledgements Work at UofM was supported by contracts JPL#1266313, JPL#1266314 and NASA grant NNX09AB59G. Work at UoB was funded by the State of Bern, the Swiss National Science Foundation and by the ESA PRODEX Program. Work at Southwest Research institute was supported by subcontract #1496541 from the JPL. Work at BIRA-IASB was supported by the Belgian Science Policy Office via PRODEX/ROSINA PEA 90020. The authors would like to thank ASI, CNES, DLR, NASA for supporting this research. VIRTIS was built
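
    A minimal sketch of the fitting step described above: build a real spherical-harmonic design matrix over surface-facet angles and solve for the activity coefficients by least squares. The synthetic "observations" below are placeholders; the study fits ROSINA-DFMS data against an analytical coma model.

```python
import numpy as np
from scipy.special import sph_harm

def design_matrix(theta, phi, lmax):
    """Real spherical-harmonic basis; theta: azimuth, phi: colatitude."""
    cols = []
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, theta, phi)
            if m < 0:
                cols.append(np.sqrt(2.0) * Y.imag)
            elif m > 0:
                cols.append(np.sqrt(2.0) * Y.real)
            else:
                cols.append(Y.real)
    return np.column_stack(cols)

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)         # facet azimuths
phi = np.arccos(rng.uniform(-1.0, 1.0, 500))       # facet colatitudes
A = design_matrix(theta, phi, lmax=2)

truth = rng.normal(size=A.shape[1])                # synthetic activity pattern
obs = A @ truth + 0.05 * rng.normal(size=500)      # noisy synthetic "data"

coeffs, *_ = np.linalg.lstsq(A, obs, rcond=None)   # least-squares coefficients
print(np.round(coeffs - truth, 2))                 # recovery error ~ noise level
```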

  19. Constraining Methane Emissions from Natural Gas Production in Northeastern Pennsylvania Using Aircraft Observations and Mesoscale Modeling

    Science.gov (United States)

    Barkley, Z.; Davis, K.; Lauvaux, T.; Miles, N.; Richardson, S.; Martins, D. K.; Deng, A.; Cao, Y.; Sweeney, C.; Karion, A.; Smith, M. L.; Kort, E. A.; Schwietzke, S.

    2015-12-01

    Leaks in natural gas infrastructure release methane (CH4), a potent greenhouse gas, into the atmosphere. The estimated fugitive emission rate associated with the production phase varies greatly between studies, hindering our understanding of the natural gas energy efficiency. This study presents a new application of inverse methodology for estimating regional fugitive emission rates from natural gas production. Methane observations across the Marcellus region in northeastern Pennsylvania were obtained during a three week flight campaign in May 2015 performed by a team from the National Oceanic and Atmospheric Administration (NOAA) Global Monitoring Division and the University of Michigan. In addition to these data, CH4 observations were obtained from automobile campaigns during various periods from 2013-2015. An inventory of CH4 emissions was then created for various sources in Pennsylvania, including coalmines, enteric fermentation, industry, waste management, and unconventional and conventional wells. As a first-guess emission rate for natural gas activity, a leakage rate equal to 2% of the natural gas production was emitted at the locations of unconventional wells across PA. These emission rates were coupled to the Weather Research and Forecasting model with the chemistry module (WRF-Chem) and atmospheric CH4 concentration fields at 1km resolution were generated. Projected atmospheric enhancements from WRF-Chem were compared to observations, and the emission rate from unconventional wells was adjusted to minimize errors between observations and simulation. We show that the modeled CH4 plume structures match observed plumes downwind of unconventional wells, providing confidence in the methodology. In all cases, the fugitive emission rate was found to be lower than our first guess. In this initial emission configuration, each well has been assigned the same fugitive emission rate, which can potentially impair our ability to match the observed spatial variability
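
    The final adjustment step can be reduced to a one-parameter least-squares scaling between observed and simulated enhancements, as in the sketch below; the numbers are synthetic placeholders for the campaign data, and the closed-form scale s = Σ(obs·sim)/Σ(sim²) minimizes the squared misfit.

```python
import numpy as np

# Observed and simulated CH4 enhancements (ppb) along flight transects;
# synthetic placeholder numbers stand in for the campaign data.
obs = np.array([12.0, 30.0, 18.0, 44.0, 25.0])
sim_2pct = np.array([20.0, 55.0, 33.0, 80.0, 41.0])   # model run with a 2% leak rate

scale = np.sum(obs * sim_2pct) / np.sum(sim_2pct**2)  # least-squares scale factor
print(f"best-fit leak rate ~ {2.0 * scale:.2f}% of production")
```

    Because the transport model is linear in the emission rate, rescaling the simulated enhancements is equivalent to rescaling the assumed leak rate, which is why a scale factor below one implies a fugitive emission rate below the 2% first guess.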

  20. Dynamical phase diagrams of a love capacity constrained prey-predator model

    Science.gov (United States)

    Simin, P. Toranj; Jafari, Gholam Reza; Ausloos, Marcel; Caiafa, Cesar Federico; Caram, Facundo; Sonubi, Adeyemi; Arcagni, Alberto; Stefani, Silvana

    2018-02-01

    One interesting question in love relationships is: what, finally, ends a love relationship, and when? Using a prey-predator Verhulst-Lotka-Volterra (VLV) model, we incorporate tendencies toward cooperation and competition between people in order to describe a "love dilemma game". We select the simplest yet already complex case for studying the resulting set of nonlinear differential equations, i.e. one involving three persons, each being at the same time prey and predator. We describe four different scenarios in such a love game, containing either a one-way love or a love triangle. Our results show that it is hard to love more than one person simultaneously; moreover, loving several people simultaneously is an unstable state. We find conditions under which persons tend to have a friendly relationship and love someone in spite of their antagonistic interaction. We demonstrate the dynamics by displaying flow diagrams.
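
    A minimal sketch of a three-person VLV system of the kind described, integrated with SciPy; the growth rates and the coupling matrix below are hypothetical choices, not the authors' coefficients.

        # Sketch: three coupled Verhulst-Lotka-Volterra equations in which each
        # "feeling" x_i grows logistically and is coupled, prey-predator style,
        # to the other two. All coefficients are illustrative only.
        import numpy as np
        from scipy.integrate import solve_ivp

        A = np.array([[0.0, -0.6,  0.3],
                      [0.5,  0.0, -0.4],
                      [-0.2, 0.7,  0.0]])   # antagonistic/cooperative couplings
        r = np.array([1.0, 0.8, 0.9])       # intrinsic growth rates

        def vlv(t, x):
            return x * (r * (1 - x) + A @ x)  # Verhulst term plus interactions

        sol = solve_ivp(vlv, (0, 100), [0.2, 0.4, 0.1], dense_output=True)
        print("long-run state:", np.round(sol.y[:, -1], 3))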

  1. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    Science.gov (United States)

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Nonfragile Robust Model Predictive Control for Uncertain Constrained Systems with Time-Delay Compensation

    Directory of Open Access Journals (Sweden)

    Wei Jiang

    2016-01-01

    Full Text Available This study investigates the problem of asymptotic stabilization for a class of discrete-time linear uncertain time-delayed systems with input constraints. Parametric uncertainty is assumed to be structured, and the delay is assumed to be known. Within the framework of Lyapunov stability theory, two schemes for synthesizing nonfragile robust model predictive control (RMPC) with time-delay compensation are put forward, in which additive and multiplicative gain perturbations are, respectively, considered. First, by designing appropriate Lyapunov-Krasovskii (L-K) functions, the robust performance index is defined via optimization problems that minimize upper bounds of the infinite-horizon cost function. Then, to guarantee closed-loop stability, sufficient conditions for the existence of the desired nonfragile RMPC are obtained in terms of linear matrix inequalities (LMIs). Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approaches.
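
    Stability conditions in such schemes are expressed as LMIs. As a minimal, illustrative stand-in for the paper's synthesis conditions, the sketch below checks a discrete-time Lyapunov LMI (P positive definite with A^T P A - P negative definite) using CVXPY; the system matrix is invented.

        # Sketch: LMI feasibility test of the kind underlying RMPC synthesis,
        # reduced here to a plain discrete-time Lyapunov inequality.
        import cvxpy as cp
        import numpy as np

        A = np.array([[0.9, 0.2],
                      [0.0, 0.7]])           # stable example dynamics
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        eps = 1e-6
        constraints = [P >> eps * np.eye(n),
                       A.T @ P @ A - P << -eps * np.eye(n)]
        prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility
        prob.solve()
        print("LMI feasible:", prob.status == cp.OPTIMAL)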

  3. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  4. Visualizing phylogenetic tree landscapes.

    Science.gov (United States)

    Wilgenbusch, James C; Huang, Wen; Gallivan, Kyle A

    2017-02-02

    Genomic-scale sequence alignments are increasingly used to infer phylogenies in order to better understand the processes and patterns of evolution. Different partitions within these new alignments (e.g., genes, codon positions, and structural features) often favor hundreds if not thousands of competing phylogenies. Summarizing and comparing phylogenies obtained from multi-source data sets using current consensus tree methods discards valuable information and can disguise potential methodological problems. Efficient and accurate dimensionality reduction methods that display in 2 or 3 dimensions the relationships among these competing phylogenies will help practitioners diagnose the limits of current evolutionary models and potential problems with phylogenetic reconstruction methods when analyzing large multi-source data sets. We introduce several dimensionality reduction methods to visualize in 2 and 3 dimensions the relationships among competing phylogenies obtained from gene partitions found in three mid- to large-size mitochondrial genome alignments. We test the performance of these dimensionality reduction methods by applying several goodness-of-fit measures. The intrinsic dimensionality of each data set is also estimated to determine whether projections in 2 and 3 dimensions can be expected to reveal meaningful relationships among trees from different data partitions. Several new approaches to aid in the comparison of different phylogenetic landscapes are presented. Curvilinear Components Analysis (CCA) and a stochastic gradient descent (SGD) optimization method give the best representation of the original tree-to-tree distance matrix for each of the three mitochondrial genome alignments and greatly outperformed the method currently used to visualize tree landscapes. The CCA + SGD method converged at least as fast as previously applied methods for visualizing tree landscapes. We demonstrate for all three mtDNA alignments that 3D
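
    For readers wanting to experiment, a projection of a tree-to-tree distance matrix can be prototyped in a few lines. The sketch below uses scikit-learn's metric MDS as a simple stand-in for the CCA + SGD approach favored in the paper; the 4 x 4 distance matrix is invented.

        # Sketch: embed competing phylogenies in 2-D from their pairwise
        # tree-to-tree distances (e.g., Robinson-Foulds); values are made up.
        import numpy as np
        from sklearn.manifold import MDS

        D = np.array([[0.0, 2.0, 6.0, 10.0],
                      [2.0, 0.0, 5.0,  9.0],
                      [6.0, 5.0, 0.0,  4.0],
                      [10.0, 9.0, 4.0, 0.0]])

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(D)          # one 2-D point per phylogeny
        print(np.round(coords, 2))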

  5. Water-Constrained Electric Sector Capacity Expansion Modeling Under Climate Change Scenarios

    Science.gov (United States)

    Cohen, S. M.; Macknick, J.; Miara, A.; Vorosmarty, C. J.; Averyt, K.; Meldrum, J.; Corsi, F.; Prousevitch, A.; Rangwala, I.

    2015-12-01

    Over 80% of U.S. electricity generation uses a thermoelectric process, which requires significant quantities of water for power plant cooling. This water requirement exposes the electric sector to vulnerabilities related to shifts in water availability driven by climate change as well as reductions in power plant efficiencies. Electricity demand is also sensitive to climate change, which in most of the United States leads to warming temperatures that increase total cooling-degree days. The resulting demand increase is typically greater for peak demand periods. This work examines the sensitivity of the development and operations of the U.S. electric sector to the impacts of climate change using an electric sector capacity expansion model that endogenously represents seasonal and local water resource availability as well as climate impacts on water availability, electricity demand, and electricity system performance. Capacity expansion portfolios and water resource implications from 2010 to 2050 are shown at high spatial resolution under a series of climate scenarios. Results demonstrate the importance of water availability for future electric sector capacity planning and operations, especially under more extreme hotter and drier climate scenarios. In addition, region-specific changes in electricity demand and water resources require region-specific responses that depend on local renewable resource availability and electricity market conditions. Climate change and the associated impacts on water availability and temperature can affect the types of power plants that are built, their location, and their impact on regional water resources.

  6. A transition-constrained discrete hidden Markov model for automatic sleep staging

    Directory of Open Access Journals (Sweden)

    Pan Shing-Tai

    2012-08-01

    Full Text Available Abstract Background Approximately one-third of the human lifespan is spent sleeping. To diagnose sleep problems, all-night polysomnographic (PSG) recordings, including electroencephalograms (EEGs), electrooculograms (EOGs) and electromyograms (EMGs), are usually acquired from the patient and scored by a well-trained expert according to Rechtschaffen & Kales (R&K) rules. Visual sleep scoring is a time-consuming and subjective process; therefore, the development of an automatic sleep scoring method is desirable. Method The EEG, EOG and EMG signals from twenty subjects were measured. In addition to selecting sleep characteristics based on the 1968 R&K rules, features utilized in other research were collected. Thirteen features were utilized, including temporal and spectral analyses of the EEG, EOG and EMG signals, and a total of 158 hours of sleep data were recorded. Ten subjects were used to train the Discrete Hidden Markov Model (DHMM), and the remaining ten were tested by the trained DHMM for recognition. Furthermore, 2-fold cross-validation was performed during this experiment. Results Overall agreement between the expert and the presented results is 85.29%. With the exception of S1, the sensitivities of each stage were more than 81%. The most accurately classified stage was SWS (94.9%), and the least accurately classified stage was S1. Conclusion The results of the experiments demonstrate that the proposed method significantly enhances the recognition rate when compared with prior studies.
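
    The core of a transition-constrained discrete HMM decoder can be sketched compactly: zero entries in the transition matrix forbid implausible stage changes, and Viterbi decoding recovers the most probable stage sequence. All probabilities below are illustrative, not the paper's trained values.

        # Sketch: Viterbi decoding with a transition-constrained HMM.
        import numpy as np

        states = ["Wake", "S1", "S2", "SWS", "REM"]
        A = np.array([[.7, .3,  0,  0,  0],    # zero entries encode forbidden
                      [.1, .5, .4,  0,  0],    # transitions between stages
                      [ 0, .1, .6, .2, .1],
                      [ 0,  0, .3, .7,  0],
                      [.1, .1, .2,  0, .6]])
        pi = np.array([.6, .4, 0, 0, 0])
        B = np.random.default_rng(1).dirichlet(np.ones(4), size=5)  # obs model

        def viterbi(obs):
            logd = np.log(pi + 1e-300) + np.log(B[:, obs[0]] + 1e-300)
            back = []
            for o in obs[1:]:
                step = logd[:, None] + np.log(A + 1e-300)  # [i, j]: from i to j
                back.append(step.argmax(axis=0))           # best predecessor
                logd = step.max(axis=0) + np.log(B[:, o] + 1e-300)
            path = [int(logd.argmax())]
            for bp in reversed(back):                      # backtrack
                path.append(int(bp[path[-1]]))
            return [states[s] for s in reversed(path)]

        print(viterbi([0, 1, 1, 3, 2]))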

  7. Constraining the Physics of AM Canum Venaticorum Systems with the Accretion Disk Instability Model

    Science.gov (United States)

    Cannizzo, John K.; Nelemans, Gijs

    2015-01-01

    Recent work by Levitan et al. has expanded the long-term photometric database for AM CVn stars. In particular, their outburst properties are well correlated with orbital period and allow constraints to be placed on the secular mass transfer rate between secondary and primary if one adopts the disk instability model for the outbursts. We use the observed range of outbursting behavior for AM CVn systems as a function of orbital period to place a constraint on mass transfer rate versus orbital period. We infer a rate of approximately 5 × 10^-9 M_⊙ yr^-1 (P_orb/1000 s)^-5.2. We show that the functional form so obtained is consistent with the recurrence time-orbital period relation found by Levitan et al. using a simple theory for the recurrence time. Also, we predict that their steep dependence of outburst duration on orbital period will flatten considerably once the longer orbital period systems have more complete observations.
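
    The quoted scaling relation is straightforward to evaluate; a small sketch, with units as stated above:

        # Sketch: the inferred secular mass-transfer scaling quoted above
        # (about 5e-9 solar masses per year at P_orb = 1000 s, falling as
        # the -5.2 power of the orbital period).
        def mdot_solar_per_yr(p_orb_seconds):
            return 5e-9 * (p_orb_seconds / 1000.0) ** -5.2

        for p in (600, 1000, 2000):
            print(f"P_orb = {p:5d} s  ->  M_dot ~ {mdot_solar_per_yr(p):.2e} Msun/yr")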

  8. 3Es System Optimization under Uncertainty Using Hybrid Intelligent Algorithm: A Fuzzy Chance-Constrained Programming Model

    Directory of Open Access Journals (Sweden)

    Jiekun Song

    2016-01-01

    Full Text Available Harmonious development of the 3Es (economy-energy-environment) system is the key to realizing regional sustainable development. The structure and components of the 3Es system are analyzed. Based on the analysis of a causality diagram, GDP and industrial structure are selected as the target parameters of the economy subsystem, energy consumption intensity is selected as the target parameter of the energy subsystem, and the emissions of COD, ammonia nitrogen, SO2 and NOX, together with CO2 emission intensity, are selected as the target parameters of the environment subsystem. Fixed-asset investment of the three industries, total energy consumption, and investment in environmental pollution control are selected as the decision variables. By regarding the parameters of the 3Es system optimization as fuzzy numbers, a fuzzy chance-constrained goal programming (FCCGP) model is constructed, and a hybrid intelligent algorithm including fuzzy simulation and a genetic algorithm is proposed for solving it. The results of an empirical analysis of Shandong province, China, show that the FCCGP model can reflect the inherent relationships and evolution law of the 3Es system and provide effective decision-making support for its optimization.

  9. Constraining the JULES land-surface model for different land-use types using citizen-science generated hydrological data

    Science.gov (United States)

    Chou, H. K.; Ochoa-Tocachi, B. F.; Buytaert, W.

    2017-12-01

    Community land surface models such as JULES are increasingly used for hydrological assessment because of their state-of-the-art representation of land-surface processes. However, a major weakness of JULES and other land surface models is the limited number of land surface parameterizations that is available. Therefore, this study explores the use of data from a network of catchments under homogeneous land-use to generate parameter "libraries" that extend the land surface parameterizations of JULES. The network (called iMHEA) is part of a grassroots initiative to characterise the hydrological response of different Andean ecosystems, and collects data on streamflow, precipitation, and several weather variables at a high temporal resolution. The tropical Andes are a useful case study because of the complexity of meteorological and geographical conditions combined with extremely heterogeneous land-use, which results in a wide range of hydrological responses. We then calibrated JULES for each land-use represented in the iMHEA dataset. For the individual land-use types, the results show improved simulations of streamflow when using the calibrated parameters with respect to default values. In particular, the partitioning between surface and subsurface flows can be improved. On a regional scale, hydrological modelling also benefited greatly from constraining parameters with such distributed, citizen-science-generated streamflow data. This study demonstrates regional hydrological modelling and prediction that integrate citizen science with a land surface model; in data-scarce contexts, this framework can indeed alleviate the limitations of hydrological study. Improved predictions of such impacts could be leveraged by catchment managers to guide watershed interventions, to evaluate their effectiveness, and to minimize risks.

  10. Constraining the Dynamics of Periodic Behavior at Mt. Semeru, Indonesia, Combining Numerical Modeling and Field Measurements of Gas emission

    Science.gov (United States)

    Smekens, J.; Clarke, A. B.; De'Michieli Vitturi, M.; Moore, G. M.

    2012-12-01

    Mt. Semeru is one of the most active explosive volcanoes on the island of Java in Indonesia. The current eruption style consists of small but frequent explosions and/or gas releases (several times a day) accompanied by continuous lava effusion that sporadically produces block-and-ash flows down the SE flank of the volcano. Semeru presents a unique opportunity to investigate the magma ascent conditions that produce this kind of persistent periodic behavior and the coexistence of explosive and effusive eruptions. In this work we use DOMEFLOW, a 1.5D transient isothermal numerical model, to investigate the dynamics of lava extrusion at Semeru. Petrologic observations from tephra and ballistic samples collected at the summit help us constrain the initial conditions of the system. Preliminary model runs produced periodic lava extrusion and pulses of gas release at the vent, with a cycle period on the order of hours, even though a steady magma supply rate was prescribed at the bottom of the conduit. Enhanced shallow permeability implemented in the model appears to create a dense plug in the shallow subsurface, which in turn plays a critical role in creating and controlling the observed periodic behavior. We measured SO2 fluxes just above the vent, using a custom UV imaging system. The device consists of two high-sensitivity CCD cameras with narrow UV filters centered at 310 and 330 nm, and a USB2000+ spectrometer for calibration and distance correction. The method produces high-frequency flux series with an accurate determination of the wind speed and plume geometry. The model results, when combined with gas measurements, and measurements of sulfur in both the groundmass and melt inclusions in eruptive products, could be used to create a volatile budget of the system. Furthermore, a well-calibrated model of the system will ultimately allow the characteristic periodicity and corresponding gas flux to be used as a proxy for magma supply rate.

  11. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    Science.gov (United States)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-10-01

    Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide location and strength for BB aerosol sources. Our previous work shows that to first order, satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the smoke source strength. We now refine the satellite-snapshot method and investigate where applying simple multiplicative emission adjustment factors alone to the widely used Global Fire Emission Database version 3 emission inventory can achieve regional-scale consistency between Moderate Resolution Imaging Spectroradiometer (MODIS) AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport model. The model and satellite AOD are compared globally, over a set of BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. Regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. We refine our approach to address physically based limitations of our earlier work (1) by expanding the number of fire cases from 124 to almost 900, (2) by using scaled reanalysis-model simulations to fill missing AOD retrievals in the MODIS observations, (3) by distinguishing the BB components of the total aerosol load from background aerosol in the near-source regions, and (4) by including emissions from fires too small to be identified explicitly in the satellite observations. The small-fire emission adjustment shows the complementary nature of correcting for source strength and adding geographically distinct missing sources. Our analysis indicates that the method works best for fire cases where the BB fraction of total AOD is high, primarily evergreen or deciduous forests. In heavily polluted or agricultural burning regions, where smoke and background AOD values tend to be comparable, this approach

  12. Use of stratigraphic models as soft information to constrain stochastic modeling of rock properties: Development of the GSLIB-Lynx integration module

    International Nuclear Information System (INIS)

    Cromer, M.V.; Rautman, C.A.

    1995-10-01

    Rock properties in volcanic units at Yucca Mountain are controlled largely by relatively deterministic geologic processes related to the emplacement, cooling, and alteration history of the tuffaceous lithologic sequence. Differences in the lithologic character of the rocks have been used to subdivide the rock sequence into stratigraphic units, and the deterministic nature of the processes responsible for the character of the different units can be used to infer the rock material properties likely to exist in unsampled regions. This report proposes a quantitative, theoretically justified method of integrating interpretive geometric models, showing the three-dimensional distribution of different stratigraphic units, with numerical stochastic simulation techniques drawn from geostatistics. This integration of soft, constraining geologic information with hard, quantitative measurements of various material properties can produce geologically reasonable, spatially correlated models of rock properties that are free from stochastic artifacts for use in subsequent physical-process modeling, such as the numerical representation of ground-water flow and radionuclide transport. Prototype modeling conducted using the GSLIB-Lynx Integration Module computer program, known as GLINTMOD, has successfully demonstrated the proposed integration technique. The method involves the selection of stratigraphic-unit-specific material-property expected values that are then used to constrain the probability function from which a material property of interest at an unsampled location is simulated.

  13. Constraining the heat flux between Enceladus’ tiger stripes: numerical modeling of funiscular plains formation

    Science.gov (United States)

    Bland, Michael T.; McKinnon, William B.; Schenk, Paul M.

    2015-01-01

    The Cassini spacecraft’s Composite Infrared Spectrometer (CIRS) has observed at least 5 GW of thermal emission at Enceladus’ south pole. The vast majority of this emission is localized on the four long, parallel, evenly-spaced fractures dubbed tiger stripes. However, the thermal emission from regions between the tiger stripes has not been determined. These spatially localized regions have a unique morphology consisting of short-wavelength (∼1 km) ridges and troughs with topographic amplitudes of ∼100 m, and a generally ropy appearance that has led to them being referred to as “funiscular terrain.” Previous analysis pursued the hypothesis that the funiscular terrain formed via thin-skinned folding, analogous to that occurring on a pahoehoe flow top (Barr, A.C., Preuss, L.J. [2010]. Icarus 208, 499–503). Here we use finite element modeling of lithospheric shortening to further explore this hypothesis. Our best-case simulations reproduce funiscular-like morphologies, although our simulated fold wavelengths after 10% shortening are 30% longer than those observed. Reproducing short-wavelength folds requires high effective surface temperatures (∼185 K), an ice lithosphere (or high-viscosity layer) with a low thermal conductivity (one-half to one-third that of intact ice or lower), and very high heat fluxes (perhaps as great as 400 mW m−2). These conditions are driven by the requirement that the high-viscosity layer remain extremely thin (≲200 m). Whereas the required conditions are extreme, they can be met if a layer of fine grained plume material 1–10 m thick, or a highly fractured ice layer >50 m thick insulates the surface, and the lithosphere is fractured throughout as well. The source of the necessary heat flux (a factor of two greater than previous estimates) is less obvious. We also present evidence for an unusual color/spectral character of the ropy terrain, possibly related to its unique surface texture. Our simulations demonstrate

  14. Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution

    International Nuclear Information System (INIS)

    Beretta, Gian Paolo

    2006-01-01

    We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i, i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = -p_i ln p_i - α(t) p_i - β(t) e_i p_i, where α(t) and β(t) are functionals of the p_i(t) that maintain invariant the mean energy E = Σ_{i=1}^N e_i p_i(t) and the normalization condition 1 = Σ_{i=1}^N p_i(t). The entropy S(t) = -k_B Σ_{i=1}^N p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest entropy state. By casting the rate equations in terms not of the p_i's but of their positive square roots √(p_i), they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72 (2002)]. We also interpret the functionals k_B α(t) and k_B β(t) as nonequilibrium generalizations of the thermodynamic-equilibrium Massieu
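
    These rate equations are straightforward to integrate numerically: at each instant, α(t) and β(t) follow from a 2 x 2 linear system expressing the invariance of the normalization and of the mean energy. A sketch with arbitrary energy levels and initial probabilities:

        # Sketch: steepest-entropy-ascent relaxation under the constraints above.
        import numpy as np
        from scipy.integrate import solve_ivp

        e = np.array([0.0, 1.0, 2.0, 3.0])     # energy levels e_i (arbitrary)
        tau = 1.0

        def rhs(t, p):
            f = -p * np.log(np.clip(p, 1e-300, None))   # the -p_i ln p_i term
            # Solve for alpha, beta so that sum(dp) = 0 and sum(e*dp) = 0:
            M = np.array([[p.sum(), (e * p).sum()],
                          [(e * p).sum(), (e**2 * p).sum()]])
            alpha, beta = np.linalg.solve(M, [f.sum(), (e * f).sum()])
            return (f - alpha * p - beta * e * p) / tau

        p0 = np.array([0.7, 0.2, 0.08, 0.02])
        sol = solve_ivp(rhs, (0, 50), p0, rtol=1e-10, atol=1e-12)
        p_final = sol.y[:, -1]                 # Boltzmann-like distribution
        print("final distribution:", np.round(p_final, 4))
        print("energy conserved:", np.isclose(e @ p0, e @ p_final))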

  15. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone.

  16. Geologic modeling constrained by seismic and dynamical data; Modelisation geologique contrainte par les donnees sismiques et dynamiques

    Energy Technology Data Exchange (ETDEWEB)

    Pianelo, L.

    2001-09-01

    Matching procedures are often used in reservoir production to improve geological models. In reservoir engineering, history matching leads to updating petrophysical parameters in fluid flow simulators to fit the results of the calculations with observed data. In the same line, seismic parameters are inverted to allow the numerical recovery of seismic acquisitions. However, it is well known that these inverse problems are poorly constrained. The idea of this original work is to simultaneously match both the permeability and the acoustic impedance of the reservoir, for an enhancement of the resulting geological model. To do so, both parameters are linked using either observed relations and/or the classic Wyllie (porosity-impedance) and Carman-Kozeny (porosity-permeability) relationships. Hence production data are added to the seismic match, and seismic observations are used for the permeability recovery. The work consists in developing numerical prototypes of a 3-D fluid flow simulator and a 3-D seismic acquisition simulator, and then in implementing the coupled inversion loop for the permeability and the acoustic impedance of the two models. We can hence test our theory on a realistic 3-D case. Comparison of the coupled matching with the two classical ones demonstrates the efficiency of our method: we significantly reduce the number of possible solutions, and hence the number of scenarios. In addition, the extra information leads to a natural improvement of the obtained models, especially in the spatial localization of the permeability contrasts. The improvement is significant both in the distribution of the two inverted parameters and in the speed of the procedure. This work is an important step toward data integration, and leads to a better reservoir characterization. This original algorithm could also be useful in reservoir monitoring, history matching and in optimization of production. This new and original method is patented and

  17. Constraining Parameters in Pulsar Models of Repeating FRB 121102 with High-energy Follow-up Observations

    International Nuclear Information System (INIS)

    Xiao, Di; Dai, Zi-Gao

    2017-01-01

    Recently, a precise (sub-arcsecond) localization of the repeating fast radio burst (FRB) 121102 led to the discovery of persistent radio and optical counterparts, the identification of a host dwarf galaxy at a redshift of z = 0.193, and several campaigns of searches for higher-frequency counterparts, which gave only upper limits on the emission flux. Although the origin of FRBs remains unknown, most of the existing theoretical models are associated with pulsars, or more specifically, magnetars. In this paper, we explore persistent high-energy emission from a rapidly rotating highly magnetized pulsar associated with FRB 121102 if internal gradual magnetic dissipation occurs in the pulsar wind. We find that the efficiency of converting the spin-down luminosity to the high-energy (e.g., X-ray) luminosity is generally much smaller than unity, even for a millisecond magnetar. This provides an explanation for the non-detection of high-energy counterparts to FRB 121102. We further constrain the spin period and surface magnetic field strength of the pulsar with the current high-energy observations. In addition, we compare our results with the constraints given by the other methods in previous works and expect to apply our new method to some other open issues in the future.

  18. Constraining Parameters in Pulsar Models of Repeating FRB 121102 with High-energy Follow-up Observations

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Di; Dai, Zi-Gao, E-mail: dzg@nju.edu.cn [School of Astronomy and Space Science, Nanjing University, Nanjing 210093 (China)]

    2017-09-10

    Recently, a precise (sub-arcsecond) localization of the repeating fast radio burst (FRB) 121102 led to the discovery of persistent radio and optical counterparts, the identification of a host dwarf galaxy at a redshift of z = 0.193, and several campaigns of searches for higher-frequency counterparts, which gave only upper limits on the emission flux. Although the origin of FRBs remains unknown, most of the existing theoretical models are associated with pulsars, or more specifically, magnetars. In this paper, we explore persistent high-energy emission from a rapidly rotating highly magnetized pulsar associated with FRB 121102 if internal gradual magnetic dissipation occurs in the pulsar wind. We find that the efficiency of converting the spin-down luminosity to the high-energy (e.g., X-ray) luminosity is generally much smaller than unity, even for a millisecond magnetar. This provides an explanation for the non-detection of high-energy counterparts to FRB 121102. We further constrain the spin period and surface magnetic field strength of the pulsar with the current high-energy observations. In addition, we compare our results with the constraints given by the other methods in previous works and expect to apply our new method to some other open issues in the future.

  19. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    Science.gov (United States)

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  20. Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging Data to Constrain a Positron Emission Tomography Kinetic Model: Theory and Simulations

    Directory of Open Access Journals (Sweden)

    Jacob U. Fluckiger

    2013-01-01

    Full Text Available We show how dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data can constrain a compartmental model for analyzing dynamic positron emission tomography (PET) data. We first develop the theory that enables the use of DCE-MRI data to separate whole-tissue time activity curves (TACs) available from dynamic PET data into individual TACs associated with the blood space, the extravascular-extracellular space (EES), and the extravascular-intracellular space (EIS). Then we simulate whole-tissue TACs over a range of physiologically relevant kinetic parameter values and show that using appropriate DCE-MRI data can separate the PET TAC into the three components with accuracy that is noise dependent. The simulations show that accurate blood, EES, and EIS TACs can be obtained, as evidenced by concordance correlation coefficients >0.9 between the true and estimated TACs. Additionally, provided that the estimated DCE-MRI parameters are within 10% of their true values, the errors in the PET kinetic parameters are within approximately 20% of their true values. The parameters returned by this approach may provide new information on the transport of a tracer in a variety of dynamic PET studies.

  1. An Interval Fuzzy-Stochastic Chance-Constrained Programming Based Energy-Water Nexus Model for Planning Electric Power Systems

    Directory of Open Access Journals (Sweden)

    Jing Liu

    2017-11-01

    Full Text Available In this study, an interval fuzzy-stochastic chance-constrained programming based energy-water nexus (IFSCP-WEN) model is developed for planning electric power systems (EPS). The IFSCP-WEN model can tackle uncertainties expressed as possibility and probability distributions, as well as interval values. Different credibility (i.e., γ) levels and probability (i.e., q_i) levels are set to reflect relationships among water supply, electricity generation, system cost, and constraint-violation risk. Results reveal that different γ and q_i levels can lead to changed system cost, imported electricity, electricity generation, and water supply. Results also disclose that the studied EPS would tend to transition from coal-dominated to clean-energy-dominated generation. Gas-fired units would be the main electric utility supplying electricity at the end of the planning horizon, occupying [28.47, 30.34]% (where 28.47% and 30.34% represent the lower and upper bounds of the interval value, respectively) of the total electricity generation. Correspondingly, water allocated to gas-fired units would reach the highest share, occupying [33.92, 34.72]% of the total water supply. Surface water would be the main water source, accounting for more than [40.96, 43.44]% of the total water supply. The ratio of recycled water to total water supply would increase by about [11.37, 14.85]%. The results of the IFSCP-WEN model demonstrate its potential for sustainable EPS planning by co-optimizing energy and water resources.

  2. Constraining drivers of basin exhumation in the Molasse Basin by combining low-temperature thermochronology, thermal history and kinematic modeling

    Science.gov (United States)

    Luijendijk, Elco; von Hagke, Christoph; Hindle, David

    2017-04-01

    Due to a wealth of geological and thermochronology data, the northern foreland basin of the European Alps is an ideal natural laboratory for understanding the dynamics of foreland basins and their interaction with surface and geodynamic processes. The northern foreland basin of the Alps has been exhumed since the Miocene. The timing, rate and cause of this phase of exhumation are still enigmatic. We compile all available thermochronology and organic maturity data and use a new thermal history model, PyBasin, to quantify the rate and timing of exhumation that can explain these data. In addition, we quantify the amount of tectonic exhumation using a new kinematic model for the part of the basin that is passively moved above the detachment of the Jura Mountains. Our results show that the vitrinite reflectance, apatite fission track data and cooling rates show no clear difference between the thrusted and folded part of the foreland basin and the undeformed part of the foreland basin. The undeformed plateau Molasse experienced strong Neogene cooling of 40 to 100 °C, equivalent to more than 1.0 km of exhumation. Calculated rates of exhumation suggest that drainage reorganization can explain only a small part of the observed exhumation and cooling. Similarly, tectonic transport over a detachment ramp cannot explain the magnitude, timing and wavelength of the observed cooling signal. We conclude that the observed cooling rates suggest long-wavelength exhumation that is probably caused by lithospheric-scale processes. In contrast to previous studies, we find that the timing of exhumation is poorly constrained. Uncertainty analysis shows that models with exhumation starting as early as 12 Ma or as late as 2 Ma can all explain the observed data.

  3. Estimation of p,p'-DDT degradation in soil by modeling and constraining hydrological and biogeochemical controls.

    Science.gov (United States)

    Sanka, Ondrej; Kalina, Jiri; Lin, Yan; Deutscher, Jan; Futter, Martyn; Butterfield, Dan; Melymuk, Lisa; Brabec, Karel; Nizzetto, Luca

    2018-08-01

    Despite not being used for decades in most countries, DDT remains ubiquitous in soils due to its persistence and intense past usage, and it is therefore still a pollutant of high global concern. Assessing the long-term dissipation of DDT from this reservoir is fundamental to understanding future environmental and human exposure. Despite a large research effort, key properties controlling fate in soil (in particular, the degradation half-life, τ_soil) are far from being fully quantified. This paper describes a case study in a large central European catchment where hundreds of measurements of p,p'-DDT concentrations in air, soil, river water and sediment are available for the last two decades. The goal was to deliver an integrated estimate of τ_soil by constraining a state-of-the-art hydrobiogeochemical multimedia fate model of the catchment against the full body of empirical data available for this area. The INCA-Contaminants model was used for this purpose. Good predictive performance against an (external) dataset of water and sediment concentrations was achieved with partitioning properties taken from the literature and τ_soil estimates obtained by forcing the model against empirical historical data of p,p'-DDT in the catchment's multiple compartments. This approach allowed estimation of p,p'-DDT degradation in soil after adequate consideration of losses due to runoff and volatilization. Estimated τ_soil ranged from 3000 to 3800 days. Degradation was the most important loss process, accounting on a yearly basis for more than 90% of the total dissipation. The total dissipation flux from the catchment soils was one order of magnitude higher than the total current atmospheric input estimated from atmospheric concentrations, suggesting that the bulk of p,p'-DDT currently being remobilized or lost is essentially that accumulated over two decades ago. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. p53 constrains progression to anaplastic thyroid carcinoma in a Braf-mutant mouse model of papillary thyroid cancer

    Science.gov (United States)

    McFadden, David G.; Vernon, Amanda; Santiago, Philip M.; Martinez-McFaline, Raul; Bhutkar, Arjun; Crowley, Denise M.; McMahon, Martin; Sadow, Peter M.; Jacks, Tyler

    2014-01-01

    Anaplastic thyroid carcinoma (ATC) has among the worst prognoses of any solid malignancy. The low incidence of the disease has in part precluded systematic clinical trials and tissue collection, and there has been little progress in developing effective therapies. v-raf murine sarcoma viral oncogene homolog B (BRAF) and tumor protein p53 (TP53) mutations co-occur in a high proportion of ATCs, particularly those associated with a precursor papillary thyroid carcinoma (PTC). To develop an adult-onset model of BRAF-mutant ATC, we generated a thyroid-specific CreER transgenic mouse. We used a Cre-regulated BrafV600E mouse and a conditional Trp53 allelic series to demonstrate that p53 constrains progression from PTC to ATC. Gene expression and immunohistochemical analyses of murine tumors identified the cardinal features of human ATC including loss of differentiation, local invasion, distant metastasis, and rapid lethality. We used small-animal ultrasound imaging to monitor autochthonous tumors and showed that treatment with the selective BRAF inhibitor PLX4720 improved survival but did not lead to tumor regression or suppress signaling through the MAPK pathway. The combination of PLX4720 and the MAPK/ERK kinase (MEK) inhibitor PD0325901 more completely suppressed MAPK pathway activation in mouse and human ATC cell lines and improved the structural response and survival of ATC-bearing animals. This model expands the limited repertoire of autochthonous models of clinically aggressive thyroid cancer, and these data suggest that small-molecule MAPK pathway inhibitors hold clinical promise in the treatment of advanced thyroid carcinoma. PMID:24711431

  5. Systematic Constraint Selection Strategy for Rate-Controlled Constrained-Equilibrium Modeling of Complex Nonequilibrium Chemical Kinetics

    Science.gov (United States)

    Beretta, Gian Paolo; Rivadossi, Luca; Janbozorgi, Mohammad

    2018-04-01

    Rate-Controlled Constrained-Equilibrium (RCCE) modeling of complex chemical kinetics provides acceptable accuracies with many fewer differential equations than the fully Detailed Kinetic Model (DKM). Since its introduction by James C. Keck, a drawback of the RCCE scheme has been the absence of an automatable, systematic procedure to identify the constraints that most effectively warrant a desired level of approximation for a given range of initial, boundary, and thermodynamic conditions. An optimal constraint identification procedure has recently been proposed. Given a DKM with S species, E elements, and R reactions, the procedure starts by running a probe DKM simulation to compute an S-vector that we call the overall degree of disequilibrium (ODoD), because its scalar product with the S-vector formed by the stoichiometric coefficients of any reaction yields that reaction's degree of disequilibrium (DoD). The ODoD vector evolves in the same (S-E)-dimensional stoichiometric subspace spanned by the R stoichiometric S-vectors. Next we construct the rank-(S-E) matrix of ODoD traces obtained from the probe DKM numerical simulation and compute its singular value decomposition (SVD). By retaining only the C largest singular values of the SVD and setting all the others to zero, we obtain the best rank-C approximation of the matrix of ODoD traces, whereby its columns span a C-dimensional subspace of the stoichiometric subspace. This in turn yields the best approximation of the evolution of the ODoD vector in terms of only C parameters that we call the constraint potentials. The resulting order-C RCCE approximate model reduces the number of independent differential equations related to species, mass, and energy balances from S+2 to C+E+2, with substantial computational savings when C ≪ S-E.
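
    The rank-truncation step is a plain SVD computation. A sketch with a random stand-in for the matrix of ODoD traces (a real application would take the traces from the probe DKM simulation):

        # Sketch: best rank-C approximation of the matrix of ODoD traces.
        import numpy as np

        rng = np.random.default_rng(2)
        S_minus_E, n_times, C = 8, 200, 3
        ODoD = rng.normal(size=(n_times, S_minus_E))   # stand-in traces

        U, s, Vt = np.linalg.svd(ODoD, full_matrices=False)
        ODoD_rankC = (U[:, :C] * s[:C]) @ Vt[:C]       # keep C singular values

        # The retained right singular vectors span the C-dimensional constraint
        # subspace; the C "constraint potential" trajectories are projections:
        potentials = ODoD @ Vt[:C].T
        print("potential trajectories:", potentials.shape)
        print("relative approximation error:",
              np.linalg.norm(ODoD - ODoD_rankC) / np.linalg.norm(ODoD))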

  6. Parametrization consequences of constraining soil organic matter models by total carbon and radiocarbon using long-term field data

    Science.gov (United States)

    Menichetti, Lorenzo; Kätterer, Thomas; Leifeld, Jens

    2016-05-01

    Soil organic carbon (SOC) dynamics result from different interacting processes and controls on spatial scales from sub-aggregate to pedon to the whole ecosystem. These complex dynamics are translated into models as abundant degrees of freedom. This high number of not directly measurable variables, combined with the very limited data available, results in equifinality and parameter uncertainty. Carbon radioisotope measurements are a proxy for SOC age both at annual to decadal (bomb peak based) and centennial to millennial timescales (radio decay based), and thus can be used in addition to total organic C for constraining SOC models. By considering this additional information, uncertainties in model structure and parameters may be reduced. To test this hypothesis we studied SOC dynamics and their defining kinetic parameters in the Zürich Organic Fertilization Experiment (ZOFE), a >60-year-old controlled cropland experiment in Switzerland, by utilizing SOC and SO14C time series. To represent different processes we applied five model structures, all stemming from a simple mother model (Introductory Carbon Balance Model - ICBM): (I) two decomposing pools, (II) an inert pool added, (III) three decomposing pools, (IV) two decomposing pools with a substrate control feedback on decomposition, (V) as IV but also with an inert pool. These structures were extended to explicitly represent total SOC and 14C pools. The use of different model structures allowed us to explore model structural uncertainty and the impact of 14C on kinetic parameters. We considered parameter uncertainty by calibrating in a formal Bayesian framework. By varying the relative importance of total SOC and SO14C data in the calibration, we could quantify the effect of the information from these two data streams on estimated model parameters. The weighing of the two data streams was crucial for determining model outcomes, and we suggest including it in future modeling efforts whenever SO14C
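
    As a point of reference, the ICBM mother model itself is only two ordinary differential equations: a "young" pool fed by inputs and a "old" pool fed by a humified fraction of the young pool's decay. A sketch with illustrative (not ZOFE-calibrated) parameter values, stepped annually:

        # Sketch: the two-pool ICBM structure, stepped with annual Euler steps.
        i, k_y, k_o, h = 0.2, 0.8, 0.006, 0.13   # input (kg C m-2 yr-1), rates (yr-1),
                                                 # and humification fraction (all illustrative)
        Y, O = 0.3, 4.0                          # initial pool sizes (kg C m-2)

        for year in range(60):
            dY = i - k_y * Y                     # young pool: input minus decay
            dO = h * k_y * Y - k_o * O           # old pool: humified flux minus decay
            Y, O = Y + dY, O + dO

        print(f"after 60 yr: Y = {Y:.2f}, O = {O:.2f}, total SOC = {Y + O:.2f}")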

  7. Molecular Phylogenetics: Concepts for a Newcomer.

    Science.gov (United States)

    Ajawatanawong, Pravech

    Molecular phylogenetics is the study of evolutionary relationships among organisms using molecular sequence data. The aim of this review is to introduce the important terminology and general concepts of tree reconstruction to biologists who lack a strong background in the field of molecular evolution. Some modern phylogenetic programs are easy to use because of their user-friendly interfaces, but understanding the phylogenetic algorithms and substitution models, which are based on advanced statistics, is still important for sound analysis and interpretation. Briefly, there are five general steps in carrying out a phylogenetic analysis: (1) sequence data preparation, (2) sequence alignment, (3) choosing a phylogenetic reconstruction method, (4) identification of the best tree, and (5) evaluating the tree. The concepts in this review enable biologists to grasp the basic ideas behind phylogenetic analysis and provide a sound basis for discussions with expert phylogeneticists.
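
    A toy illustration of steps (3) and (4) using Biopython's distance-based tools, here neighbor-joining on a small invented alignment; real analyses would typically choose likelihood-based or Bayesian software in step (3).

        # Sketch: distance matrix + neighbor-joining tree with Biopython.
        from Bio.Align import MultipleSeqAlignment
        from Bio.Seq import Seq
        from Bio.SeqRecord import SeqRecord
        from Bio import Phylo
        from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

        aln = MultipleSeqAlignment([
            SeqRecord(Seq("ACTGCTAGCTAG"), id="taxonA"),
            SeqRecord(Seq("ACTGCTAGCTGG"), id="taxonB"),
            SeqRecord(Seq("ACTGCTATCTAG"), id="taxonC"),
            SeqRecord(Seq("ACTACTAGCAAG"), id="taxonD"),
        ])

        dm = DistanceCalculator("identity").get_distance(aln)  # pairwise distances
        tree = DistanceTreeConstructor().nj(dm)                # neighbor-joining tree
        Phylo.draw_ascii(tree)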

  8. Phylogenetic trees in bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L. [Los Alamos National Laboratory]

    2008-01-01

    Genetic data are often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTUs). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of even a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on the aspect of bioinformatics that includes the study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, the available methods and software, and identifies areas for additional research and development.
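
    The combinatorial explosion mentioned above is concrete: the number of distinct unrooted binary tree topologies for n OTUs is (2n - 5)!!, the product of the odd numbers up to 2n - 5. A small sketch:

        # Sketch: counting unrooted binary tree topologies for n taxa.
        def n_unrooted_trees(n_taxa):
            count = 1
            for k in range(3, 2 * n_taxa - 4, 2):  # odd numbers 3..(2n-5)
                count *= k
            return count

        for n in (5, 10, 20):
            print(f"{n:2d} taxa -> {n_unrooted_trees(n):,} unrooted trees")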

  9. Evaluating terrestrial water storage variations from regionally constrained GRACE mascon data and hydrological models over Southern Africa – Preliminary results

    DEFF Research Database (Denmark)

    Krogh, Pernille Engelbredt; Andersen, Ole Baltazar; Michailovsky, Claire Irene B.

    2010-01-01

    In this paper we explore an experimental set of regionally constrained mascon blocks over Southern Africa, where a system of 1.25° × 1.5° and 1.5° × 1.5° blocks has been designed. The blocks are divided into hydrological regions based on drainage patterns of the largest river basins, and are constrained ... Malawi with water level from altimetry. Results show that weak constraints across regions, in addition to intra-regional constraints, are necessary to reach reasonable mass variations.

  10. Modeling biomass burning over the South, South East and East Asian Monsoon regions using a new, satellite constrained approach

    Science.gov (United States)

    Lan, R.; Cohen, J. B.

    2017-12-01

    Biomass burning over the South, South East and East Asian monsoon regions is a crucial contributor to the total local aerosol loading. Furthermore, the ITCZ and monsoonal circulation patterns, coupled with complex topography, have a prominent impact on the aerosol loading throughout much of the Northern Hemisphere. However, at the present time, biomass burning emissions are highly underestimated over this region, in part due to under-reported emissions in space and time, and in part due to an incomplete understanding of the physics and chemistry of the aerosols emitted in fires and formed downwind from them. Hence, a better understanding of the four-dimensional source distribution, plume rise, and in-situ processing, in particular in regions with significant quantities of urban air pollutants, is essential to advance our knowledge of this problem. This work uses a new modeling methodology based on the simultaneous constraints of measured AOD and some trace gases over the region. The results of the 4-D constrained emissions are further expanded upon using different fire plume height rise and in-situ processing assumptions. Comparisons between the results and additional ground-based and remotely sensed measurements, including AERONET, CALIOP, and NOAA and other ground networks, are included. The end results reveal three insights into the nonlinear processes most important for understanding the impacts of biomass burning in this part of the world. Model-measurement comparisons are found to be consistent during 2016, a typical burning year. First, the model performs better under the new emissions representations than it does using any of the standard hotspot-based approaches currently employed by the community. Second, long-range transport and mixing between the boundary layer and free troposphere contribute to the spatial-temporal variations. Third, we indicate some source regions that are new, either because of increased urbanization, or of

  11. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    Science.gov (United States)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

    This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha^-1 a^-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha^-1 a^-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete model processes (e.g. respiration rates after harvest). This concept further elucidates the identification of missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
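
    The Latin-hypercube screening used to retain behavioural runs can be sketched generically: sample the parameter space, score each run against observations, and keep those below an acceptability threshold. The stand-in model, bounds, and threshold below are invented; LandscapeDNDC itself is far more complex.

        # Sketch: Latin-hypercube sampling and behavioural-run selection.
        import numpy as np
        from scipy.stats import qmc

        obs = np.array([4.5, 0.4, 0.1])                 # e.g. annual N2O fluxes
        def model(p):                                   # hypothetical 2-parameter model
            return p[0] * obs + p[1]

        sampler = qmc.LatinHypercube(d=2, seed=0)
        samples = qmc.scale(sampler.random(1000), [0.5, -0.5], [1.5, 0.5])

        rmse = np.array([np.sqrt(np.mean((model(p) - obs) ** 2)) for p in samples])
        behavioural = samples[rmse < 0.2]               # keep acceptable runs only
        print(f"{len(behavioural)} behavioural parameter sets of 1000")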

  12. Constraining the Timing of Lobate Debris Apron Emplacement at Martian Mid-Latitudes Using a Numerical Model of Ice Flow

    Science.gov (United States)

    Parsons, R. A.; Nimmo, F.

    2010-03-01

    SHARAD observations constrain the thickness and dust content of lobate debris aprons (LDAs). Simulations of dust-free ice-sheet flow over a flat surface at 205 K for 10-100 m.y. give LDA lengths and thicknesses that are consistent with observations.

  13. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level (custodial technicolor) and argue that these models...

  14. An Architecturally Constrained Model of Random Number Generation and its Application to Modelling the Effect of Generation Rate

    Directory of Open Access Journals (Sweden)

    Nicholas J. Sexton

    2014-07-01

    Random number generation (RNG) is a complex cognitive task for human subjects, requiring deliberative control to avoid production of habitual, stereotyped sequences. Under various manipulations (e.g., speeded responding, transcranial magnetic stimulation, or neurological damage), the performance of human subjects deteriorates, as reflected in a number of qualitatively distinct, dissociable biases. For example, the intrusion of stereotyped behaviour (e.g., counting) increases at faster rates of generation. Theoretical accounts of the task postulate that it requires the integrated operation of multiple, computationally heterogeneous cognitive control ('executive') processes. We present a computational model of RNG, within the framework of a novel, neuropsychologically-inspired cognitive architecture, ESPro. Manipulating the rate of sequence generation in the model reproduced a number of key effects observed in empirical studies, including increasing sequence stereotypy at faster rates. Within the model, this was due to time limitations on the interaction of supervisory control processes, namely, task setting, proposal of responses, monitoring, and response inhibition. The model thus supports the fractionation of executive function into multiple, computationally heterogeneous processes.
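
    One of the stereotypy biases above, counting, is easy to quantify. The Python sketch below scores a digit sequence by the fraction of successive responses that differ by +/-1 (mod 10); a uniformly random generator scores about 0.2, while habitual counting pushes the index toward 1. The 0.8 continuation probability and sequence length are invented for illustration and are not parameters of the ESPro model.

        import numpy as np

        def counting_index(seq):
            # Fraction of steps that continue an ascending or descending
            # count on the digits 0-9 (wrapping at the ends).
            diffs = np.diff(seq) % 10
            return float(np.mean((diffs == 1) | (diffs == 9)))

        rng = np.random.default_rng(7)
        random_seq = rng.integers(0, 10, 200)

        # A counting-biased "subject": continue the count with p = 0.8,
        # otherwise emit a fresh random digit.
        seq = [0]
        for _ in range(199):
            if rng.random() < 0.8:
                seq.append((seq[-1] + 1) % 10)
            else:
                seq.append(int(rng.integers(0, 10)))

        print("random generator :", counting_index(random_seq))
        print("counting-biased  :", counting_index(np.array(seq)))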

  15. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    Science.gov (United States)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

    Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study develops an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling a regional MSW management problem in a fuzzy environment, where waste generation amounts are treated as type 2 fuzzy variables and the treatment capacities of facilities as type 1 fuzzy variables. This evaluation and expression of uncertainty overcomes the drawbacks of describing fuzzy possibility distributions in oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management of the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers generate reasonable waste-allocation alternatives under uncertainty.
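
    The key mechanical step, converting a chance constraint into a crisp equivalent at a chosen confidence level, can be illustrated with a deliberately simplified stand-in: a normally distributed landfill capacity instead of the paper's type 1/type 2 fuzzy variables. All costs, capacities and waste amounts in this Python sketch are invented.

        import numpy as np
        from scipy.optimize import linprog
        from scipy.stats import norm

        mu, sigma = 500.0, 40.0      # uncertain landfill capacity, t/day
        alpha = 0.95                 # required confidence level
        total_waste = 470.0          # waste to be treated, t/day

        # Crisp equivalent of Pr(x_land <= C) >= alpha:
        # x_land <= F_C^{-1}(1 - alpha), the (1 - alpha)-quantile of C.
        cap = mu + sigma * norm.ppf(1.0 - alpha)

        cost = [8.0, 25.0]           # landfill vs incinerator cost per tonne
        res = linprog(c=cost,
                      A_ub=[[1.0, 0.0]], b_ub=[cap],          # chance constraint
                      A_eq=[[1.0, 1.0]], b_eq=[total_waste],  # treat everything
                      bounds=[(0.0, None), (0.0, None)])
        print(f"crisp capacity at alpha={alpha}: {cap:.1f} t/day")
        print(f"landfill {res.x[0]:.1f} t/day, incinerator {res.x[1]:.1f} t/day")

    Raising alpha tightens the crisp capacity and shifts waste to the expensive facility, which is exactly the economy-versus-reliability trade-off described above.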

  16. Hybrid Active/Passive Control of Sound Radiation from Panels with Constrained Layer Damping and Model Predictive Feedback Control

    Science.gov (United States)

    Cabell, Randolph H.; Gibbs, Gary P.

    2000-01-01

    make the controller adaptive. For example, a mathematical model of the plant could be periodically updated as the plant changes, and the feedback gains recomputed from the updated model. To be practical, this approach requires a simple plant model that can be updated quickly with reasonable computational requirements. A recent paper by the authors discussed one way to simplify a feedback controller, by reducing the number of actuators and sensors needed for good performance. The work was done on a tensioned aircraft-style panel excited on one side by TBL flow in a low speed wind tunnel. Actuation was provided by a piezoelectric (PZT) actuator mounted on the center of the panel. For sensing, the responses of four accelerometers, positioned to approximate the response of the first radiation mode of the panel, were summed and fed back through the controller. This single input-single output topology was found to have nearly the same noise reduction performance as a controller with fifteen accelerometers and three PZT patches. This paper extends the previous results by looking at how constrained layer damping (CLD) on a panel can be used to enhance the performance of the feedback controller, thus providing a more robust and efficient hybrid active/passive system. The eventual goal is to use the CLD to reduce sound radiation at high frequencies, then implement a very simple, reduced-order, low-sample-rate adaptive controller to attenuate sound radiation at low frequencies. Additionally, the added damping smooths phase transitions over the bandwidth, which promotes robustness to natural frequency shifts. Experiments were conducted in a transmission loss facility on a clamped-clamped aluminum panel driven on one side by a loudspeaker. A generalized predictive control (GPC) algorithm, which is suited to online adaptation of its parameters, was used in single input-single output and multiple input-single output configurations. Because this was a preliminary look at the potential

  17. Modeling the Land Use/Cover Change in an Arid Region Oasis City Constrained by Water Resource and Environmental Policy Change using Cellular Automata Model

    Science.gov (United States)

    Hu, X.; Li, X.; Lu, L.

    2017-12-01

    Land use/cover change (LUCC) is an important subject in research on global environmental change and sustainable development, and spatial simulation of land use/cover change is one of its key and most difficult components owing to the complexity of the system. The cellular automata (CA) model has an irreplaceable role in simulating land use/cover change processes due to its powerful spatial computing capability. However, the majority of current CA land use/cover models are binary-state models that cannot provide more general information about the overall spatial pattern of land use/cover change. Here, a multi-state logistic-regression-based Markov cellular automata (MLRMCA) model and a multi-state artificial-neural-network-based Markov cellular automata (MANNMCA) model were developed and used to simulate the complex land use/cover evolutionary process in an arid region oasis city constrained by water resources and environmental policy change, the Zhangye city, during the period 1990-2010. The results indicated that the MANNMCA model was superior to the MLRMCA model in simulation accuracy, suggesting that combining an artificial neural network with CA can more effectively capture the complex relationships between land use/cover change and a set of spatial variables. Although the MLRMCA model also has some advantages, the MANNMCA model was more appropriate for simulating complex land use/cover dynamics. The two proposed models are effective and reliable, and can reflect the spatial evolution of regional land use/cover changes. They also have potential implications for the impact assessment of water resources, ecological restoration, and sustainable urban development in arid areas.
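
    The Markov-CA core shared by both models can be sketched compactly in Python. In the toy below, a cell's transition potential is the product of a Markov transition probability, an 8-neighbourhood enrichment factor and a per-class suitability layer (the natural place for water-resource or policy constraints to enter). The three land-use classes, the transition matrix and the random suitability surfaces are all invented, and no logistic-regression or neural-network calibration is attempted.

        import numpy as np

        rng = np.random.default_rng(0)
        grid = rng.choice(3, size=(50, 50), p=[0.6, 0.3, 0.1])  # 0 bare, 1 farm, 2 urban
        suitability = rng.random((3, 50, 50))   # per-class constraint layers
        P = np.array([[0.90, 0.08, 0.02],       # illustrative Markov matrix
                      [0.05, 0.85, 0.10],
                      [0.00, 0.02, 0.98]])

        def neighbour_share(grid, cls):
            # Fraction of the 8-neighbourhood in class cls (toroidal edges).
            mask = (grid == cls).astype(float)
            total = np.zeros_like(mask)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy or dx:
                        total += np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            return total / 8.0

        def step(grid):
            potential = np.stack([
                P[grid, cls] * (0.2 + neighbour_share(grid, cls)) * suitability[cls]
                for cls in range(3)
            ])
            return potential.argmax(axis=0)

        for _ in range(10):
            grid = step(grid)
        print("class shares:", np.bincount(grid.ravel(), minlength=3) / grid.size)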

  18. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model - Part 1: Assessing the influence of constrained multi-generational ageing

    Science.gov (United States)

    Jathar, S. H.; Cappa, C. D.; Wexler, A. S.; Seinfeld, J. H.; Kleeman, M. J.

    2016-02-01

    Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under the conditions

  19. Modelling the flooding capacity of a Polish Carpathian river: A comparison of constrained and free channel conditions

    Science.gov (United States)

    Czech, Wiktoria; Radecki-Pawlik, Artur; Wyżga, Bartłomiej; Hajdukiewicz, Hanna

    2016-11-01

    The gravel-bed Biała River, Polish Carpathians, was heavily affected by channelization and channel incision in the twentieth century. Not only were these impacts detrimental to the ecological state of the river, but they also adversely modified the conditions of floodwater retention and flood wave passage. Therefore, a few years ago an erodible corridor was delimited in two sections of the Biała to enable restoration of the river. In these sections, short, channelized reaches located in the vicinity of bridges alternate with longer, unmanaged channel reaches, which either avoided channelization or in which the channel has widened after the channelization scheme ceased to be maintained. Effects of these alternating channel morphologies on the conditions for flood flows were investigated in a study of 10 pairs of neighbouring river cross sections with constrained and freely developed morphology. Discharges of particular recurrence intervals were determined for each cross section using an empirical formula. The morphology of the cross sections together with data about channel slope and roughness of particular parts of the cross sections were used as input data to the hydraulic modelling performed with the one-dimensional steady-flow HEC-RAS software. The results indicated that freely developed cross sections, usually with multithread morphology, are typified by significantly lower water depth but larger width and cross-sectional flow area at particular discharges than single-thread, channelized cross sections. They also exhibit significantly lower average flow velocity, unit stream power, and bed shear stress. The pattern of differences in the hydraulic parameters of flood flows apparent between the two types of river cross sections varies with the discharges of different frequency, and the contrasts in hydraulic parameters between unmanaged and channelized cross sections are most pronounced at low-frequency, high-magnitude floods. However, because of the deep
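
    The hydraulic contrasts reported above follow directly from cross-section geometry via Manning's equation. The Python sketch below compares an idealised narrow, channelized section with a wide, freely developed one at the same slope; all widths, depths, roughness coefficients and the slope are invented for illustration and are not the Biała River survey data.

        import numpy as np

        rho, g = 1000.0, 9.81  # water density (kg/m3), gravity (m/s2)

        def hydraulics(width, depth, n, slope):
            area = width * depth                          # rectangular approximation
            radius = area / (width + 2.0 * depth)         # hydraulic radius A/P
            v = radius ** (2.0 / 3.0) * slope ** 0.5 / n  # Manning velocity, m/s
            q = v * area                                  # discharge, m3/s
            tau = rho * g * radius * slope                # bed shear stress, Pa
            omega = rho * g * q * slope / width           # unit stream power, W/m2
            return v, q, tau, omega

        sections = {"channelized": (20.0, 3.0, 0.030),    # narrow, deep, smooth
                    "multithread": (60.0, 1.1, 0.045)}    # wide, shallow, rough
        for label, (w, d, n) in sections.items():
            v, q, tau, omega = hydraulics(w, d, n, slope=0.004)
            print(f"{label:12s} v={v:4.2f} m/s  tau={tau:5.1f} Pa  omega={omega:6.1f} W/m2")

    Even this crude comparison reproduces the reported pattern: the wide, rough section carries flow at markedly lower velocity, bed shear stress and unit stream power than the channelized one.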

  20. Explaining evolution via constrained persistent perfect phylogeny

    Science.gov (United States)

    2014-01-01

    Background The perfect phylogeny is an often used model in phylogenetics, since it provides an efficient basic procedure for representing the evolution of genomic binary characters in several frameworks, such as, for example, haplotype inference. The model, which is conceptually the simplest, is based on the infinite sites assumption, that is, no character can mutate more than once in the whole tree. A main open problem regarding the model is finding generalizations that retain the computational tractability of the original model but are more flexible in modeling biological data when the infinite sites assumption is violated because of, for example, back mutations. A special case of back mutations that has been considered in the study of the evolution of protein domains (where a domain is acquired and then lost) is persistence, that is, a character is allowed to return to the ancestral state. In this model, characters can be gained and lost at most once. In this paper we consider the computational problem of explaining binary data by the Persistent Perfect Phylogeny model (referred to as PPP) and for this purpose we investigate the problem of reconstructing an evolution where some constraints are imposed on the paths of the tree. Results We define a natural generalization of the PPP problem obtained by requiring that for some pairs (character, species), neither the species nor any of its ancestors can have the character. In other words, some characters cannot be persistent for some species. This new problem is called Constrained PPP (CPPP). Based on a graph formulation of the CPPP problem, we are able to provide a polynomial time solution for the CPPP problem for matrices whose conflict graph has no edges. Using this result, we develop a parameterized algorithm for solving the CPPP problem where the parameter is the number of characters. Conclusions A preliminary experimental analysis shows that the constrained persistent perfect phylogeny model allows to
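
    For orientation, the conflict graph mentioned above has an edge between two characters exactly when their columns exhibit all three gametes (0,1), (1,0) and (1,1); a binary matrix admits a classical perfect phylogeny iff this graph has no edges. The Python sketch below implements that standard pairwise test (the persistent and constrained variants need the paper's dedicated machinery); the example matrices are invented.

        from itertools import combinations

        def conflicting_pairs(matrix):
            # Edges of the conflict graph: character pairs showing all
            # three gametes (0,1), (1,0), (1,1) across the species (rows).
            n_chars = len(matrix[0])
            edges = []
            for i, j in combinations(range(n_chars), 2):
                gametes = {(row[i], row[j]) for row in matrix}
                if {(0, 1), (1, 0), (1, 1)} <= gametes:
                    edges.append((i, j))
            return edges

        def admits_perfect_phylogeny(matrix):
            return not conflicting_pairs(matrix)

        M_ok = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
        M_bad = [[1, 1, 0], [1, 0, 0], [0, 1, 0]]
        print(admits_perfect_phylogeny(M_ok), conflicting_pairs(M_ok))    # True []
        print(admits_perfect_phylogeny(M_bad), conflicting_pairs(M_bad))  # False [(0, 1)]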

  1. Using finite element modelling to examine the flow process and temperature evolution in HPT under different constraining conditions

    International Nuclear Information System (INIS)

    Pereira, P H R; Langdon, T G; Figueiredo, R B; Cetlin, P R

    2014-01-01

    High-pressure torsion (HPT) is a metal-working technique used to impose severe plastic deformation on disc-shaped samples under high hydrostatic pressures. Different HPT facilities have been developed, and they may be divided into three distinct categories depending upon the configuration of the anvils and the restriction imposed on the lateral flow of the samples. In the present paper, finite element simulations were performed to compare the flow process and the temperature, strain and hydrostatic stress distributions under unconstrained, quasi-constrained and constrained conditions. It is shown that there are distinct strain distributions in the samples depending on the facility configuration, and a similar trend in the temperature rise of the HPT workpieces.

  2. Phylogenetic analysis and protein structure modelling identifies distinct Ca(2+)/Cation antiporters and conservation of gene family structure within Arabidopsis and rice species.

    Science.gov (United States)

    Pittman, Jon K; Hirschi, Kendal D

    2016-12-01

    The Ca(2+)/Cation Antiporter (CaCA) superfamily is an ancient and widespread family of ion-coupled cation transporters found in nearly all kingdoms of life. In animals, K(+)-dependent and K(+)-independent Na(+)/Ca(2+) exchangers (NCKX and NCX) are important CaCA members. Recently it was proposed that all rice and Arabidopsis CaCA proteins should be classified as NCX proteins. Here we performed phylogenetic analysis of CaCA genes and protein structure homology modelling to further characterise members of this transporter superfamily. Phylogenetic analysis of rice and Arabidopsis CaCAs in comparison with selected CaCA members from non-plant species demonstrated that these genes form clearly distinct families, with the H(+)/Cation exchanger (CAX) and cation/Ca(2+) exchanger (CCX) families dominant in higher plants but the NCKX and NCX families absent. NCX-related Mg(2+)/H(+) exchanger (MHX) and CAX-related Na(+)/Ca(2+) exchanger-like (NCL) proteins are instead present. Analysis of genomes of ten closely-related rice species and four Arabidopsis-related species found that CaCA gene family structures are highly conserved within related plants, apart from minor variation. Protein structures were modelled for OsCAX1a and OsMHX1. Despite exhibiting broad structural conservation, there are clear structural differences observed between the different CaCA types. Members of the CaCA superfamily form clearly distinct families with different phylogenetic, structural and functional characteristics, and therefore should not be simply classified as NCX proteins, which should remain as a separate gene family.

  3. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user...
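
    The difference between Occam-type smoothness and minimum-gradient-support (MGS) focusing can be seen in a toy 1-D inversion. In the Python sketch below the "forward operator" is just a Gaussian blur of a blocky model, and the regularization weight, support parameter beta and noise level are invented; the point is only that the MGS stabilizer penalises the number of model jumps rather than their size, so a recovered boundary can stay sharp.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n = 60
        true = np.where(np.arange(n) < 30, 1.0, 3.0)       # sharp layer boundary
        x = np.arange(n)
        G = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 4.0) ** 2)
        G /= G.sum(axis=1, keepdims=True)                  # smoothing forward operator
        data = G @ true + rng.normal(0.0, 0.02, n)

        def objective(m, kind, lam=0.5, beta=0.05):
            misfit = np.sum((G @ m - data) ** 2)
            dm = np.diff(m)
            if kind == "smooth":                           # Occam-type stabilizer
                reg = np.sum(dm ** 2)
            else:                                          # minimum gradient support
                reg = np.sum(dm ** 2 / (dm ** 2 + beta ** 2))
            return misfit + lam * reg

        m0 = np.full(n, 2.0)
        for kind in ("smooth", "mgs"):
            m = minimize(objective, m0, args=(kind,), method="L-BFGS-B").x
            print(kind, "largest model jump:", round(np.abs(np.diff(m)).max(), 2))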

  4. Evaluation of HOx sources and cycling using measurement-constrained model calculations in a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated ecosystem

    Directory of Open Access Journals (Sweden)

    S. B. Henry

    2013-02-01

    We present a detailed analysis of OH observations from the BEACHON (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen)-ROCS (Rocky Mountain Organic Carbon Study) 2010 field campaign at the Manitou Forest Observatory (MFO), which is a 2-methyl-3-butene-2-ol (MBO) and monoterpene (MT) dominated forest environment. A comprehensive suite of measurements was used to constrain primary production of OH via ozone photolysis, OH recycling from HO2, and OH chemical loss rates, in order to estimate the steady-state concentration of OH. In addition, the University of Washington Chemical Model (UWCM) was used to evaluate the performance of a near-explicit chemical mechanism. The diurnal cycle in OH from the steady-state calculations is in good agreement with measurement. A comparison between the photolytic production rates and the recycling rates from the HO2 + NO reaction shows that recycling rates are ~20 times faster than the photolytic OH production rates from ozone. Thus, we find that direct measurement of the recycling rates and the OH loss rates can provide accurate predictions of OH concentrations. More importantly, we also conclude that a conventional OH recycling pathway (HO2 + NO) can explain the observed OH levels in this non-isoprene environment. This is in contrast to observations in isoprene-dominated regions, where investigators have observed significant underestimation of OH and have speculated that unknown sources of OH are responsible. The highly-constrained UWCM calculation under-predicts observed HO2 by as much as a factor of 8. As HO2 maintains oxidation capacity by recycling to OH, UWCM underestimates observed OH by as much as a factor of 4. When the UWCM calculation is constrained by measured HO2, model-calculated OH is in better agreement with the observed OH levels. Conversely, constraining the model to observed OH only slightly reduces the model-measurement HO2 discrepancy, implying unknown HO2
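
    The steady-state OH estimate at the heart of this analysis is simple arithmetic: [OH]_ss = (P_primary + k_(HO2+NO)[HO2][NO]) / k'_loss. The Python snippet below evaluates it with round illustrative numbers; the rate constant for HO2 + NO -> OH + NO2 is taken as roughly 8e-12 cm3 molecule^-1 s^-1 near 298 K, and the concentrations and OH reactivity are invented, not BEACHON-ROCS values.

        k_ho2_no = 8.0e-12      # cm3 molecule-1 s-1, approximate 298 K value
        p_primary = 5.0e5       # OH from ozone photolysis, molecules cm-3 s-1
        ho2 = 3.0e8             # molecules cm-3 (~12 pptv at sea level)
        no = 2.5e9              # molecules cm-3 (~0.1 ppbv)
        k_loss = 12.0           # total first-order OH loss (OH reactivity), s-1

        recycling = k_ho2_no * ho2 * no
        oh_ss = (p_primary + recycling) / k_loss
        print(f"recycling / primary production = {recycling / p_primary:.0f}")
        print(f"[OH]_ss = {oh_ss:.2e} molecules cm-3")

    With these round numbers, recycling dominates primary production by an order of magnitude, which is the same qualitative conclusion the campaign analysis reaches with measured rates.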

  5. A constrained dispersive optical model for the neutron-nucleus interaction from -80 to +80 MeV for the mass region 27≤A≤32

    International Nuclear Information System (INIS)

    Al-Ohali, M.A.; Howell, C.R.; Tornow, W.; Walter, R.L.

    1995-01-01

    A Constrained Dispersive Optical Model (CDOM) analysis was performed for the neutron-nucleus interaction in the energy domain from -80 to 80 MeV for three nuclei in the center of the 2s-1d shell. The CDOM incorporates the dispersion relation which connects the real and imaginary parts of the nuclear mean field. Parameters for the model were derived by fitting the neutron differential elastic cross-section, the total cross-section, and the analyzing power data for 27Al, 28Si, and 32S. The parameters were also adjusted slightly to improve overall agreement with single-particle bound-state energies.

  6. The space of ultrametric phylogenetic trees.

    Science.gov (United States)

    Gavryushkin, Alex; Drummond, Alexei J

    2016-08-21

    The reliability of a phylogenetic inference method from genomic sequence data is ensured by its statistical consistency. Bayesian inference methods produce a sample of phylogenetic trees from the posterior distribution given sequence data. Hence the question of statistical consistency of such methods is equivalent to the consistency of the summary of the sample. More generally, statistical consistency is ensured by the tree space used to analyse the sample. In this paper, we consider two standard parameterisations of phylogenetic time-trees used in evolutionary models: inter-coalescent interval lengths and absolute times of divergence events. For each of these parameterisations we introduce a natural metric space on ultrametric phylogenetic trees. We compare the introduced spaces with existing models of tree space and formulate several formal requirements that a metric space on phylogenetic trees must possess in order to be a satisfactory space for statistical analysis, and justify them. We show that only a few known constructions of the space of phylogenetic trees satisfy these requirements. However, our results suggest that these basic requirements are not enough to distinguish between the two metric spaces we introduce, and that the choice between metric spaces requires additional properties to be considered. In particular, the summary tree minimising the square distance to the trees from the sample might be different for different parameterisations. This suggests that further fundamental insight is needed into the problem of statistical consistency of phylogenetic inference methods.
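
    The two parameterisations are related by a simple cumulative-sum/difference map, but distances measured in one coordinate system do not translate into the other, which is why the induced metric spaces, and hence summary trees, can differ. A minimal numeric illustration in Python, with invented divergence times:

        import numpy as np

        node_times = np.array([0.8, 2.1, 3.5])    # absolute divergence times
        intervals = np.diff(np.concatenate([[0.0], node_times]))
        print("interval lengths:", intervals)      # [0.8 1.3 1.4]

        # Perturb only the youngest divergence time by +0.5 ...
        perturbed = node_times + np.array([0.5, 0.0, 0.0])
        new_intervals = np.diff(np.concatenate([[0.0], perturbed]))
        # ... one coordinate moves in the time parameterisation, but two
        # coordinates move in the interval parameterisation:
        print("interval changes:", new_intervals - intervals)  # [ 0.5 -0.5  0. ]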

  7. Constraining the dynamics of the water budget at high spatial resolution in the world's water towers using models and remote sensing data; Snake River Basin, USA

    Science.gov (United States)

    Watson, K. A.; Masarik, M. T.; Flores, A. N.

    2016-12-01

    Mountainous, snow-dominated basins are often referred to as the water towers of the world because they store precipitation in seasonal snowpacks, which gradually melt and provide water supplies to downstream communities. Yet significant uncertainties remain in terms of quantifying the stores and fluxes of water in these regions as well as the associated energy exchanges. Constraining these stores and fluxes is crucial for advancing process understanding and managing these water resources in a changing climate. Remote sensing data are particularly important to these efforts due to the remoteness of these landscapes and high spatial variability in water budget components. We have developed a high resolution regional climate dataset extending from 1986 to the present for the Snake River Basin in the northwestern USA. The Snake River Basin is the largest tributary of the Columbia River by volume and a critically important basin for regional economies and communities. The core of the dataset was developed using a regional climate model, forced by reanalysis data. Specifically the Weather Research and Forecasting (WRF) model was used to dynamically downscale the North American Regional Reanalysis (NARR) over the region at 3 km horizontal resolution for the period of interest. A suite of satellite remote sensing products provide independent, albeit uncertain, constraint on a number of components of the water and energy budgets for the region across a range of spatial and temporal scales. For example, GRACE data are used to constrain basinwide terrestrial water storage and MODIS products are used to constrain the spatial and temporal evolution of evapotranspiration and snow cover. The joint use of both models and remote sensing products allows for both better understanding of water cycle dynamics and associated hydrometeorologic processes, and identification of limitations in both the remote sensing products and regional climate simulations.

  8. Constrained creation of poetic forms during theme-driven exploration of a domain defined by an N-gram model

    Science.gov (United States)

    Gervás, Pablo

    2016-04-01

    Most poetry-generation systems apply opportunistic approaches where algorithmic procedures are applied to explore the conceptual space defined by a given knowledge resource in search of solutions that might be aesthetically valuable. Aesthetic value is assumed to arise from compliance with a given poetic form - such as rhyme or metrical regularity - or from evidence of semantic relations between the words in the resulting poems that can be interpreted as rhetorical tropes - such as similes, analogies, or metaphors. This approach tends to fix a priori the aesthetic parameters of the results, and imposes no constraints on the message to be conveyed. The present paper describes an attempt to initiate a shift in this balance, introducing means for constraining the output to certain topics and allowing a looser mechanism for constraining form. This goal arose from the need to produce poems for a themed collection commissioned for inclusion in a book. The solution adopted explores an approach to creativity where the goals are not solely aesthetic and where the results may be surprising in their poetic form. An existing computer poet, originally developed to produce poems in a given form but with no specific constraints on their content, is put to the task of producing a set of poems with explicit restrictions on content, while allowing for an exploration of poetic form. Alternative generation methods are devised to overcome the difficulties, and the various insights arising from these new methods and the impact they have on the set of resulting poems are discussed in terms of their potential contribution to better poetry-generation systems.

  9. On Tree-Based Phylogenetic Networks.

    Science.gov (United States)

    Zhang, Louxin

    2016-07-01

    A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. We present a simple necessary and sufficient condition for tree-based networks and prove that a universal tree-based network exists for any number of taxa that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems posted by Francis and Steel recently. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model.

  10. Phylogenetic molecular function annotation

    International Nuclear Information System (INIS)

    Engelhardt, Barbara E; Jordan, Michael I; Repo, Susanna T; Brenner, Steven E

    2009-01-01

    It is now easier to discover thousands of protein sequences in a new microbial genome than it is to biochemically characterize the specific activity of a single protein of unknown function. The molecular functions of protein sequences have typically been predicted using homology-based computational methods, which rely on the principle that homologous proteins share a similar function. However, some protein families include groups of proteins with different molecular functions. A phylogenetic approach for predicting molecular function (sometimes called 'phylogenomics') is an effective means to predict protein molecular function. These methods incorporate functional evidence from all members of a family that have functional characterizations using the evolutionary history of the protein family to make robust predictions for the uncharacterized proteins. However, they are often difficult to apply on a genome-wide scale because of the time-consuming step of reconstructing the phylogenies of each protein to be annotated. Our automated approach for function annotation using phylogeny, the SIFTER (Statistical Inference of Function Through Evolutionary Relationships) methodology, uses a statistical graphical model to compute the probabilities of molecular functions for unannotated proteins. Our benchmark tests showed that SIFTER provides accurate functional predictions on various protein families, outperforming other available methods.

  11. Early cosmology constrained

    Energy Technology Data Exchange (ETDEWEB)

    Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2017-04-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω_MR < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N_eff < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r_s = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r_s = 150 ± 5 Mpc.

  12. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    International Nuclear Information System (INIS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt%) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Here, compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping the extrusion rate at the observed value.

  13. Phylogenetically Acquired Representations and Evolutionary Algorithms.

    OpenAIRE

    Wozniak, Adrianna

    2006-01-01

    First, we explain why Genetic Algorithms (GAs), inspired by the Modern Synthesis, do not accurately model biological evolution, being an algorithmic version of artificial, rather than natural, selection. Focusing on optimisation, we propose two improvements to GAs, with the aim of successfully generating adapted, desired behaviour. The first one concerns phylogenetic grounding of meaning, a way to avoid the Symbol Grounding Problem. We give a definition of Phylogenetically Acquired Re...

  14. Constraining Secluded Dark Matter models with the public data from the 79-string IceCube search for dark matter in the Sun

    Energy Technology Data Exchange (ETDEWEB)

    Ardid, M.; Felis, I.; Martínez-Mora, J.A. [Institut d' Investigació per a la Gestió Integrada de les Zones Costaneres (IGIC), Universitat Politècnica de València, C/Paranimf 1, 46730 Gandia (Spain); Herrero, A., E-mail: mardid@fis.upv.es, E-mail: ivfeen@upv.es, E-mail: aherrero@mat.upv.es, E-mail: jmmora@fis.upv.es [Institut de Matemàtica Multidisciplinar, Universitat Politècnica de València, Camí de Vera s/n, 46022 València (Spain)

    2017-04-01

    The 79-string IceCube search for dark matter in the Sun public data is used to test Secluded Dark Matter models. No significant excess over background is observed and constraints on the parameters of the models are derived. Moreover, the search is also used to constrain the dark photon model in the region of the parameter space with dark photon masses between 0.22 and ∼1 GeV and a kinetic mixing parameter ε ∼ 10^-9, which was previously unconstrained. These are the first constraints on dark photons from neutrino telescopes. It is expected that neutrino telescopes will be efficient tools to test dark photons by means of different searches in the Sun, Earth and Galactic Center, which could complement constraints from direct detection, accelerators, astrophysics and indirect detection with other messengers, such as gamma rays or antiparticles.

  15. Thermal-based modeling of coupled carbon, water, and energy fluxes using nominal light use efficiencies constrained by leaf chlorophyll observations

    KAUST Repository

    Schull, M. A.

    2015-03-11

    Recent studies have shown that estimates of leaf chlorophyll content (Chl), defined as the combined mass of chlorophyll a and chlorophyll b per unit leaf area, can be useful for constraining estimates of canopy light use efficiency (LUE). Canopy LUE describes the amount of carbon assimilated by a vegetative canopy for a given amount of absorbed photosynthetically active radiation (APAR) and is a key parameter for modeling land-surface carbon fluxes. A carbon-enabled version of the remote-sensing-based two-source energy balance (TSEB) model simulates coupled canopy transpiration and carbon assimilation using an analytical sub-model of canopy resistance constrained by inputs of nominal LUE (βn), which is modulated within the model in response to varying conditions in light, humidity, ambient CO2 concentration, and temperature. Soil moisture constraints on water and carbon exchange are conveyed to the TSEB-LUE indirectly through thermal infrared measurements of land-surface temperature. We investigate the capability of using Chl estimates for capturing seasonal trends in the canopy βn from in situ measurements of Chl acquired in irrigated and rain-fed fields of soybean and maize near Mead, Nebraska. The results show that field-measured Chl is nonlinearly related to βn, with variability primarily related to phenological changes during early growth and senescence. Utilizing seasonally varying βn inputs based on an empirical relationship with in situ measured Chl resulted in improvements in carbon flux estimates from the TSEB model, while adjusting the partitioning of total water loss between plant transpiration and soil evaporation. The observed Chl-βn relationship provides a functional mechanism for integrating remotely sensed Chl into the TSEB model, with the potential for improved mapping of coupled carbon, water, and energy fluxes across vegetated landscapes.

  16. Phylogenetic diversity and biodiversity indices on phylogenetic networks.

    Science.gov (United States)

    Wicke, Kristina; Fischer, Mareike

    2018-04-01

    In biodiversity conservation it is often necessary to prioritize the species to conserve. Existing approaches to prioritization, e.g. the Fair Proportion Index and the Shapley Value, are based on phylogenetic trees and rank species according to their contribution to overall phylogenetic diversity. However, in many cases evolution is not treelike and thus, phylogenetic networks have been developed as a generalization of phylogenetic trees, allowing for the representation of non-treelike evolutionary events, such as hybridization. Here, we extend the concepts of phylogenetic diversity and phylogenetic diversity indices from phylogenetic trees to phylogenetic networks. On the one hand, we consider the treelike content of a phylogenetic network, e.g. the (multi)set of phylogenetic trees displayed by a network and the so-called lowest stable ancestor tree associated with it. On the other hand, we derive the phylogenetic diversity of subsets of taxa and biodiversity indices directly from the internal structure of the network. We consider both approaches that are independent of so-called inheritance probabilities as well as approaches that explicitly incorporate these probabilities. Furthermore, we introduce our software package NetDiversity, which is implemented in Perl and allows for the calculation of all generalized measures of phylogenetic diversity and generalized phylogenetic diversity indices established in this note that are independent of inheritance probabilities. We apply our methods to a phylogenetic network representing the evolutionary relationships among swordtails and platyfishes (Xiphophorus: Poeciliidae), a group of species characterized by widespread hybridization.
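
    As background for the generalisation above, the tree case of one of the named indices is easy to compute: the Fair Proportion index splits each edge length equally among the leaves below it. A small Python sketch on an invented four-edge tree, encoded as child -> (parent, edge length):

        tree = {
            "A": ("u", 1.0), "B": ("u", 1.0),
            "u": ("root", 2.0), "C": ("root", 4.0),
        }
        leaves = {"A", "B", "C"}

        def descendant_leaves(node):
            children = [c for c, (p, _) in tree.items() if p == node]
            if not children:
                return {node}
            return set().union(*(descendant_leaves(c) for c in children))

        fp = {leaf: 0.0 for leaf in leaves}
        for child, (parent, length) in tree.items():
            below = descendant_leaves(child)
            for leaf in below:
                fp[leaf] += length / len(below)   # edge shared equally

        print(fp)  # {'A': 2.0, 'B': 2.0, 'C': 4.0} -- sums to total PD (8.0)

    On a network, by contrast, one must first decide what "the leaves below an edge" means (e.g. via displayed trees or inheritance probabilities), which is precisely the choice the paper formalises.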

  17. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty-function-based methodology; multi-objective-based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity nowadays due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  18. Chance-constrained overland flow modeling for improving conceptual distributed hydrologic simulations based on scaling representation of sub-daily rainfall variability

    International Nuclear Information System (INIS)

    Han, Jing-Cheng; Huang, Guohe; Huang, Yuefei; Zhang, Hua; Li, Zhong; Chen, Qiuwen

    2015-01-01

    Lack of hydrologic process representation at the short time-scale would lead to inadequate simulations in distributed hydrological modeling. Especially for complex mountainous watersheds, surface runoff simulations are significantly affected by the overland flow generation, which is closely related to the rainfall characteristics at a sub-time step. In this paper, the sub-daily variability of rainfall intensity was considered using a probability distribution, and a chance-constrained overland flow modeling approach was proposed to capture the generation of overland flow within conceptual distributed hydrologic simulations. The integrated modeling procedures were further demonstrated through a watershed of China Three Gorges Reservoir area, leading to an improved SLURP-TGR hydrologic model based on SLURP. Combined with rainfall thresholds determined to distinguish various magnitudes of daily rainfall totals, three levels of significance were simultaneously employed to examine the hydrologic-response simulation. Results showed that SLURP-TGR could enhance the model performance, and the deviation of runoff simulations was effectively controlled. However, rainfall thresholds were so crucial for reflecting the scaling effect of rainfall intensity that optimal levels of significance and rainfall threshold were 0.05 and 10 mm, respectively. As for the Xiangxi River watershed, the main runoff contribution came from interflow of the fast store. Although slight differences of overland flow simulations between SLURP and SLURP-TGR were derived, SLURP-TGR was found to help improve the simulation of peak flows, and would improve the overall modeling efficiency through adjusting runoff component simulations. Consequently, the developed modeling approach favors efficient representation of hydrological processes and would be expected to have a potential for wide applications. - Highlights: • We develop an improved hydrologic model considering the scaling effect of rainfall. • A

  20. Increased accuracy in mineral and hydrogeophysical modelling of HTEM data via detailed description of system transfer function and constrained inversion

    DEFF Research Database (Denmark)

    Viezzoli, Andrea; Christiansen, Anders Vest; Auken, Esben

    This paper aims to provide more insight into the parameters that need to be modelled during inversion of helicopter TEM data, both for hydrogeophysical and exploration applications. We use synthetic data to show in detail the effect, both in data and in model space...

  1. A Universal Phylogenetic Tree.

    Science.gov (United States)

    Offner, Susan

    2001-01-01

    Presents a universal phylogenetic tree suitable for use in high school and college-level biology classrooms. Illustrates the antiquity of life and that all life is related, even if it dates back 3.5 billion years. Reflects important evolutionary relationships and provides an exciting way to learn about the history of life. (SAH)

  2. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs of transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
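
    On a tree (the reticulation-free special case), the Fitch algorithm referred to above computes the unit-cost parsimony score in linear time. A compact Python version for one binary-tree character site, with an invented example tree:

        def fitch(node):
            # node is either a leaf state (str) or a (left, right) pair.
            # Returns (candidate state set, substitution count), bottom-up.
            if isinstance(node, str):
                return {node}, 0
            left, right = node
            ls, lc = fitch(left)
            rs, rc = fitch(right)
            common = ls & rs
            if common:
                return common, lc + rc         # intersection: no extra change
            return ls | rs, lc + rc + 1        # union: one substitution

        tree = ((("A", "A"), ("C", "G")), ("C", "C"))
        states, score = fitch(tree)
        print(states, score)                   # {'C'} at the root, score 2

    The network version discussed in the paper applies the same union/intersection logic edge by edge but must additionally resolve the assignments at reticulate vertices, which is where the extra cost terms and heuristics come in.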

  3. Using a data-constrained model of home range establishment to predict abundance in spatially heterogeneous habitats.

    Directory of Open Access Journals (Sweden)

    Mark C Vanderwel

    Mechanistic modelling approaches that explicitly translate from individual-scale resource selection to the distribution and abundance of a larger population may be better suited to predicting responses to spatially heterogeneous habitat alteration than commonly-used regression models. We developed an individual-based model of home range establishment that, given a mapped distribution of local habitat values, estimates species abundance by simulating the number and position of viable home ranges that can be maintained across a spatially heterogeneous area. We estimated parameters for this model from data on red-backed vole (Myodes gapperi) abundances in 31 boreal forest sites in Ontario, Canada. The home range model had considerably more support from these data than both non-spatial regression models based on the same original habitat variables and a mean-abundance null model. It had nearly equivalent support to a non-spatial regression model that, like the home range model, scaled an aggregate measure of habitat value from local associations with habitat resources. The home range and habitat-value regression models gave similar predictions for vole abundance under simulations of light- and moderate-intensity partial forest harvesting, but the home range model predicted lower abundances than the regression model under high-intensity disturbance. Empirical regression-based approaches for predicting species abundance may overlook processes that affect habitat use by individuals, and often extrapolate poorly to novel habitat conditions. Mechanistic home range models that can be parameterized against abundance data from different habitats permit appropriate scaling from individual- to population-level habitat relationships, and can potentially provide better insights into responses to disturbance.

  4. Order-Constrained Reference Priors with Implications for Bayesian Isotonic Regression, Analysis of Covariance and Spatial Models

    Science.gov (United States)

    Gong, Maozhen

    Selecting an appropriate prior distribution is a fundamental issue in Bayesian statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models, including Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, and the conditionally autoregressive (CAR) and simultaneous autoregressive (SAR) models with a spatial autoregression parameter ρ. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered.

  5. Constraining models of postglacial rebound using space geodesy: a detailed assessment of model ICE-5G (VM2) and its relatives

    Science.gov (United States)

    Argus, Donald F.; Peltier, W. Richard

    2010-05-01

    Using global positioning system, very long baseline interferometry, satellite laser ranging and Doppler Orbitography and Radiopositioning Integrated by Satellite observations, including the Canadian Base Network and Fennoscandian BIFROST array, we constrain, in models of postglacial rebound, the thickness of the ice sheets as a function of position and time and the viscosity of the mantle as a function of depth. We test model ICE-5G VM2 T90 Rot, which well fits many hundred Holocene relative sea level histories in North America, Europe and worldwide. ICE-5G is the deglaciation history having more ice in western Canada than ICE-4G; VM2 is the mantle viscosity profile having a mean upper mantle viscosity of 0.5 × 10^21 Pa s and a mean uppermost-lower mantle viscosity of 1.6 × 10^21 Pa s; T90 is an elastic lithosphere thickness of 90 km; and Rot designates that the model includes (rotational feedback) Earth's response to the wander of the North Pole of Earth's spin axis towards Canada at a speed of ~1° Myr^-1. The vertical observations in North America show that, relative to ICE-5G, the Laurentide ice sheet at last glacial maximum (LGM) at ~26 ka was (1) much thinner in southern Manitoba, (2) thinner near Yellowknife (Northwest Territories), (3) thicker in eastern and southern Quebec and (4) thicker along the northern British Columbia-Alberta border, or that ice was unloaded from these areas later (thicker) or earlier (thinner) than in ICE-5G. The data indicate that the western Laurentide ice sheet was intermediate in mass between ICE-5G and ICE-4G. The vertical observations and GRACE gravity data together suggest that the western Laurentide ice sheet was nearly as massive as that in ICE-5G but distributed more broadly across northwestern Canada. VM2 poorly fits the horizontal observations in North America, predicting places along the margins of the Laurentide ice sheet to be moving laterally away from the ice centre at 2 mm yr^-1 in ICE-4G and 3 mm yr^-1 in ICE-5G, in

  6. Phylogenetic trees and Euclidean embeddings.

    Science.gov (United States)

    Layer, Mark; Rhodes, John A

    2017-01-01

    It was recently observed by de Vienne et al. (Syst Biol 60(6):826-832, 2011) that a simple square root transformation of distances between taxa on a phylogenetic tree allowed for an embedding of the taxa into Euclidean space. While the justification for this was based on a diffusion model of continuous character evolution along the tree, here we give a direct and elementary explanation for it that provides substantial additional insight. We use this embedding to reinterpret the differences between the NJ and BIONJ tree building algorithms, providing one illustration of how this embedding reflects tree structures in data.
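
    The observation is easy to verify numerically: square-root-transform the patristic distances and check, via the classical MDS double-centering construction, that the resulting Gram matrix is positive semi-definite. A Python sketch with an invented four-taxon tree metric:

        import numpy as np

        # Patristic distances on a quartet tree (pendant edges 2, 1, 1, 2
        # and internal edge 4) -- an additive tree metric by construction.
        D = np.array([[0.0, 3.0, 7.0, 8.0],
                      [3.0, 0.0, 6.0, 7.0],
                      [7.0, 6.0, 0.0, 3.0],
                      [8.0, 7.0, 3.0, 0.0]])
        S = np.sqrt(D)                          # the square-root transform

        n = S.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
        B = -0.5 * J @ (S ** 2) @ J             # Gram matrix of the embedding
        w, V = np.linalg.eigh(B)
        print("Gram eigenvalues:", np.round(w, 6))   # all >= 0: Euclidean

        keep = w > 1e-9
        coords = V[:, keep] * np.sqrt(w[keep])  # explicit Euclidean coordinates
        print(np.round(coords, 3))

    Non-negative eigenvalues confirm the embedding; running the same check on the untransformed D generally produces a negative eigenvalue, i.e. tree distances themselves are usually not Euclidean.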

  7. Robust self-triggered model predictive control for constrained discrete-time LTI systems based on homothetic tubes

    NARCIS (Netherlands)

    Aydiner, E.; Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2015-01-01

    In this paper we present a robust self-triggered model predictive control (MPC) scheme for discrete-time linear time-invariant systems subject to input and state constraints and additive disturbances. In self-triggered model predictive control, at every sampling instant an optimization problem based

  8. How to constrain multi-objective calibrations using water balance components for an improved realism of model results

    Science.gov (United States)

    Accurate discharge simulation is one of the most common objectives of hydrological modeling studies. However, a good simulation of discharge is not necessarily the result of a realistic simulation of hydrological processes within the catchment. To enhance the realism of model results, we propose an ...

  9. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    International Nuclear Information System (INIS)

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    A quantitative local computed tomography approach combined with data-constrained modelling has been developed. The method distinctly improves the spatial and compositional resolution within a sample larger than the field of view, for quantitative characterization of three-dimensional distributions of material compositions and voids. Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution, revealing finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to three-dimensional compositional characterization of other materials.

  10. Peculiarities of inflorescence morphogenesis in model representatives of the Celastraceae R.Br. in the context of molecular phylogenetic data

    Directory of Open Access Journals (Sweden)

    Ivan A. Savinov

    2014-04-01

    Full Text Available The initiation and development of inflorescences in model representatives of the Celastraceae are studied. Characters specific to particular taxa in the developmental rhythm of their generative elements are determined, and morphological markers that coincide fully with molecular characters are identified. These markers provide evidence of close relationships between the following pairs of taxa: Celastrus and Tripterygium, Salacia and Sarawakodendron, and Salacia and Brexia.

  11. Expertly validated models and phylogenetically-controlled analysis suggest responses to climate change are related to species traits in the order Lagomorpha.

    Directory of Open Access Journals (Sweden)

    Katie Leach

    Full Text Available Climate change during the past five decades has impacted significantly on natural ecosystems, and the rate of current climate change is of great concern among conservation biologists. Species Distribution Models (SDMs) have been used widely to project changes in species' bioclimatic envelopes under future climate scenarios. Here, we aimed to advance this technique by assessing future changes in the bioclimatic envelopes of an entire mammalian order, the Lagomorpha, using a novel framework for model validation based jointly on subjective expert evaluation and objective model evaluation statistics. SDMs were built using climatic, topographical, and habitat variables for all 87 lagomorph species under past and current climate scenarios. Expert evaluation and Kappa values were used to validate past and current models, and only those deemed 'modellable' within our framework were projected under future climate scenarios (58 species). Phylogenetically-controlled regressions were used to test whether species traits correlated with predicted responses to climate change. Climate change is likely to impact more than two-thirds of lagomorph species, with leporids (rabbits, hares, and jackrabbits) likely to undertake poleward shifts with little overall change in range extent, whilst pikas are likely to show extreme shifts to higher altitudes associated with marked range declines, including the likely extinction of Kozlov's Pika (Ochotona koslowi). Smaller-bodied species were more likely to exhibit range contractions and elevational increases while showing little poleward movement, and fecund species were more likely to shift latitudinally and elevationally. Our results suggest that species traits may be important indicators of future climate change and we believe multi-species approaches, as demonstrated here, are likely to lead to more effective mitigation measures and conservation management. We strongly advocate studies minimising data gaps in our knowledge of

  12. Integrating satellite retrieved leaf chlorophyll into land surface models for constraining simulations of water and carbon fluxes

    KAUST Repository

    Houborg, Rasmus; Cescatti, Alessandro; Gitelson, Anatoly A.

    2013-01-01

    variability exists. Satellite remote sensing can support modeling efforts by offering distributed information on important land surface characteristics, which would be very difficult to obtain otherwise. This study investigates the utility of satellite based

  13. Dispersion of the HIV-1 Epidemic in Men Who Have Sex with Men in the Netherlands: A Combined Mathematical Model and Phylogenetic Analysis.

    Directory of Open Access Journals (Sweden)

    Daniela Bezemer

    2015-11-01

    Full Text Available The HIV-1 subtype B epidemic amongst men who have sex with men (MSM) is resurgent in many countries despite the widespread use of effective combination antiretroviral therapy (cART). In this combined mathematical and phylogenetic study of observational data, we aimed to find out the extent to which the resurgent epidemic is the result of newly introduced strains or of growth of already circulating strains. As of November 2011, the ATHENA observational HIV cohort of all patients in care in the Netherlands since 1996 included HIV-1 subtype B polymerase sequences from 5,852 patients. Patients who were diagnosed between 1981 and 1995 were included in the cohort if they were still alive in 1996. The ten most similar sequences to each ATHENA sequence were selected from the Los Alamos HIV Sequence Database, and a phylogenetic tree was created of a total of 8,320 sequences. Large transmission clusters that included ≥10 ATHENA sequences were selected, with a local support value ≥0.9 and median pairwise patristic distance below the fifth percentile of distances in the whole tree. Time-varying reproduction numbers of the large MSM-majority clusters were estimated through mathematical modeling. We identified 106 large transmission clusters, including 3,061 (52%) ATHENA and 652 Los Alamos sequences. Half of the HIV sequences from MSM registered in the cohort in the Netherlands (2,128 of 4,288) were included in 91 large MSM-majority clusters. Strikingly, at least 54 (59%) of these 91 MSM-majority clusters were already circulating before 1996, when cART was introduced, and have persisted to the present. Overall, 1,226 (35%) of the 3,460 diagnoses among MSM since 1996 were found in these 54 long-standing clusters. The reproduction numbers of all large MSM-majority clusters were around the epidemic threshold value of one over the whole study period. A tendency towards higher numbers was visible in recent years, especially in the more recently introduced clusters
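
    For illustration, the cluster-selection rule described above (≥10 cohort sequences, local support ≥0.9, median pairwise patristic distance below the tree-wide fifth percentile) can be expressed as a simple filter; the cluster records and the distance distribution below are invented:

```python
# Filter candidate clusters by size, support, and patristic-distance rules.
import numpy as np

# Invented stand-in for the distribution of pairwise patristic distances
# across the whole tree, and a few candidate clusters.
rng = np.random.default_rng(4)
tree_distances = rng.exponential(0.05, 10000)
cutoff = np.percentile(tree_distances, 5)      # fifth-percentile threshold

clusters = [
    {"id": "c1", "n_cohort": 14, "support": 0.95, "median_dist": 0.0015},
    {"id": "c2", "n_cohort": 25, "support": 0.99, "median_dist": 0.0200},
    {"id": "c3", "n_cohort": 8,  "support": 0.97, "median_dist": 0.0010},
]

large = [c for c in clusters
         if c["n_cohort"] >= 10                # >= 10 cohort sequences
         and c["support"] >= 0.9               # local support value
         and c["median_dist"] < cutoff]        # compact within the tree
print([c["id"] for c in large])                # -> ['c1']
```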

  14. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios.

    Science.gov (United States)

    Yen, Haw; White, Michael J; Arnold, Jeffrey G; Keitzer, S Conor; Johnson, Mari-Vaughn V; Atwood, Jay D; Daggupati, Prasad; Herbert, Matthew E; Sowa, Scott P; Ludsin, Stuart A; Robertson, Dale M; Srinivasan, Raghavan; Rewa, Charles A

    2016-11-01

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relationships between land use and water, nutrient, and sediment dynamics. This manuscript evaluated the capacity of the current Soil and Water Assessment Tool (SWAT) to predict hydrological and water quality processes within WLEB at the finest resolution watershed boundary unit (NHDPlus) along with the current conditions and conservation scenarios. The process based SWAT model was capable of the fine-scale computation and complex routing used in this project, as indicated by measured data at five gaging stations. The level of detail required for fine-scale spatial simulation made the use of both hard and soft data necessary in model calibration, alongside other model adaptations. Limitations to the model's predictive capacity were due to a paucity of data in the region at the NHDPlus scale rather than due to SWAT functionality. Results of treatment scenarios demonstrate variable effects of structural practices and nutrient management on sediment and nutrient loss dynamics. Targeting treatment to acres with critical outstanding conservation needs provides the largest return on investment in terms of nutrient loss reduction per dollar spent, relative to treating acres with lower inherent nutrient loss vulnerabilities. Importantly, this research raises considerations about use of models to guide land management decisions at very fine spatial scales. Decision makers using these results should be aware of data limitations that hinder fine-scale model interpretation.

  15. Constraining Controls on the Emplacement of Long Lava Flows on Earth and Mars Through Modeling in ArcGIS

    Science.gov (United States)

    Golder, K.; Burr, D. M.; Tran, L.

    2017-12-01

    Regional volcanic processes shaped many planetary surfaces in the Solar System, often through the emplacement of long, voluminous lava flows. Terrestrial examples of this type of lava flow have been used as analogues for extensive martian flows, including those within the circum-Cerberus outflow channels. This analogy is based on similarities in morphology, extent, and inferred eruptive style between terrestrial and martian flows, which raises the question of how these lava flows appear comparable in size and morphology on different planets. The parameters that influence the areal extent of silicate lavas during emplacement may be categorized as either inherent or external to the lava. The inherent parameters include the lava yield strength, density, composition, water content, crystallinity, exsolved gas content, pressure, and temperature. Each inherent parameter affects the overall viscosity of the lava, and for this work can be considered a subset of the viscosity parameter. External parameters include the effusion rate, total erupted volume, regional slope, and gravity. To investigate which parameter(s) may control the development of long lava flows on Mars, we are applying a computational numerical-modelling approach to reproduce the observed lava flow morphologies. Using a matrix of boundary conditions in the model enables us to investigate the possible range of emplacement conditions that can yield the observed morphologies. We have constructed the basic model framework in Model Builder within ArcMap, including all governing equations and parameters that we seek to test, and initial implementation and calibration have been performed. The base model is currently capable of generating a lava flow that propagates along a pathway governed by the local topography. At AGU, the results of model calibration using the Eldgjá and Laki lava flows in Iceland will be presented, along with the application of the model to lava flows within the Cerberus plains on Mars. We then

  16. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios

    Science.gov (United States)

    Yen, Haw; White, Michael J.; Arnold, Jeffrey G.; Keitzer, S. Conor; Johnson, Mari-Vaughn V; Atwood, Jay D.; Daggupati, Prasad; Herbert, Matthew E.; Sowa, Scott P.; Ludsin, Stuart A.; Robertson, Dale M.; Srinivasan, Raghavan; Rewa, Charles A.

    2016-01-01

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relationships between land use and water, nutrient, and sediment dynamics. This manuscript evaluated the capacity of the current Soil and Water Assessment Tool (SWAT2012) to predict hydrological and water quality processes within WLEB at the finest resolution watershed boundary unit (NHDPlus) along with the current conditions and conservation scenarios. The process based SWAT model was capable of the fine-scale computation and complex routing used in this project, as indicated by measured data at five gaging stations. The level of detail required for fine-scale spatial simulation made the use of both hard and soft data necessary in model calibration, alongside other model adaptations. Limitations to the model's predictive capacity were due to a paucity of data in the region at the NHDPlus scale rather than due to SWAT functionality. Results of treatment scenarios demonstrate variable effects of structural practices and nutrient management on sediment and nutrient loss dynamics. Targeting treatment to acres with critical outstanding conservation needs provides the largest return on investment in terms of nutrient loss reduction per dollar spent, relative to treating acres with lower inherent nutrient loss vulnerabilities. Importantly, this research raises considerations about use of models to guide land management decisions at very fine spatial scales. Decision makers using these results should be aware of data limitations that hinder fine-scale model interpretation.

  17. CONSTRAINING MODELS OF TWIN-PEAK QUASI-PERIODIC OSCILLATIONS WITH REALISTIC NEUTRON STAR EQUATIONS OF STATE

    Energy Technology Data Exchange (ETDEWEB)

    Török, Gabriel; Goluchová, Katerina; Urbanec, Martin, E-mail: gabriel.torok@gmail.com, E-mail: katka.g@seznam.cz, E-mail: martin.urbanec@physics.cz [Research Centre for Computational Physics and Data Processing, Institute of Physics, Faculty of Philosophy and Science, Silesian University in Opava, Bezručovo nám. 13, CZ-746 01 Opava (Czech Republic); and others

    2016-12-20

    Twin-peak quasi-periodic oscillations (QPOs) are observed in the X-ray power-density spectra of several accreting low-mass neutron star (NS) binaries. In our previous work we have considered several QPO models. We have identified and explored mass–angular-momentum relations implied by individual QPO models for the atoll source 4U 1636-53. In this paper we extend our study and confront QPO models with various NS equations of state (EoS). We start with simplified calculations assuming Kerr background geometry and then present results of detailed calculations considering the influence of NS quadrupole moment (related to rotationally induced NS oblateness) assuming Hartle–Thorne spacetimes. We show that the application of concrete EoS together with a particular QPO model yields a specific mass–angular-momentum relation. However, we demonstrate that the degeneracy in mass and angular momentum can be removed when the NS spin frequency inferred from the X-ray burst observations is considered. We inspect a large set of EoS and discuss their compatibility with the considered QPO models. We conclude that when the NS spin frequency in 4U 1636-53 is close to 580 Hz, we can exclude 51 of the 90 considered combinations of EoS and QPO models. We also discuss additional restrictions that may exclude even more combinations. Namely, 13 EoS are compatible with the observed twin-peak QPOs and the relativistic precession model. However, when considering the low-frequency QPOs and Lense–Thirring precession, only 5 EoS are compatible with the model.

  18. GRAVITATIONAL-WAVE OBSERVATIONS MAY CONSTRAIN GAMMA-RAY BURST MODELS: THE CASE OF GW150914–GBM

    Energy Technology Data Exchange (ETDEWEB)

    Veres, P. [CSPAR, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Preece, R. D. [Dept. of Space Science, University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States); Goldstein, A.; Connaughton, V. [Universities Space Research Association, 320 Sparkman Dr. Huntsville, AL 35806 (United States); Mészáros, P. [Dept. of Astronomy and Astrophysics, Pennsylvania State University, 525 Davey Laboratory, University Park, PA 16802 (United States); Burns, E., E-mail: peter.veres@uah.edu [Physics Dept., University of Alabama in Huntsville, 320 Sparkman Dr., Huntsville, AL 35805 (United States)

    2016-08-20

    The possible short gamma-ray burst (GRB) observed by Fermi/GBM in coincidence with the first gravitational-wave (GW) detection offers new ways to test GRB prompt emission models. GW observations provide previously inaccessible physical parameters for the black hole central engine such as its horizon radius and rotation parameter. Using a minimum jet launching radius from the Advanced LIGO measurement of GW150914, we calculate photospheric and internal shock models and find that they are marginally inconsistent with the GBM data, but cannot be definitely ruled out. Dissipative photosphere models, however, have no problem explaining the observations. Based on the peak energy and the observed flux, we find that the external shock model gives a natural explanation, suggesting a low interstellar density (∼10⁻³ cm⁻³) and a high Lorentz factor (∼2000). We only speculate on the exact nature of the system producing the gamma-rays, and study the parameter space of a generic Blandford–Znajek model. If future joint observations confirm the GW–short-GRB association we can provide similar but more detailed tests for prompt emission models.

  19. Optimal interpolation schemes to constrain PM2.5 in regional modeling over the United States

    Science.gov (United States)

    Sousan, Sinan Dhia Jameel

    This thesis presents the use of data assimilation with optimal interpolation (OI) to develop atmospheric aerosol concentration estimates for the United States at high spatial and temporal resolutions. Concentration estimates are highly desirable for a wide range of applications, including visibility, climate, and human health. OI is a viable data assimilation method that can be used to improve Community Multiscale Air Quality (CMAQ) model fine particulate matter (PM2.5) estimates. PM2.5 is the mass of solid and liquid particles with diameters less than or equal to 2.5 µm suspended in the gas phase. OI was employed by combining model estimates with satellite and surface measurements. The satellite data assimilation combined 36 × 36 km aerosol concentrations from CMAQ with aerosol optical depth (AOD) measured by MODIS and AERONET over the continental United States for 2002. Posterior model concentrations generated by the OI algorithm were compared with surface PM2.5 measurements to evaluate a number of possible data assimilation parameters, including model error, observation error, and temporal averaging assumptions. Evaluation was conducted separately for six geographic U.S. regions in 2002. Variability in model error and MODIS biases limited the effectiveness of a single data assimilation system for the entire continental domain. The best combinations of four settings and three averaging schemes led to a domain-averaged improvement in fractional error from 1.2 to 0.97 and from 0.99 to 0.89 at IMPROVE and STN monitoring sites, respectively. For 38% of OI results, MODIS OI degraded the forward model skill due to biases and outliers in MODIS AOD. Surface data assimilation combined 36 × 36 km aerosol concentrations from the CMAQ model with surface PM2.5 measurements over the continental United States for 2002. The model error covariance matrix was constructed by using the observational method. The observation error covariance matrix included site representation that
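
    The OI analysis step underlying this kind of assimilation has a compact closed form, x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b); a toy sketch with invented dimensions and error statistics (not the thesis' actual configuration):

```python
# One optimal-interpolation update of a 1-D background PM2.5 field using
# three point monitors. All numbers are illustrative placeholders.
import numpy as np

nx, ny = 10, 3                              # grid cells, surface monitors
rng = np.random.default_rng(1)

x_b = rng.uniform(5.0, 25.0, nx)            # background PM2.5 field (ug/m3)
H = np.zeros((ny, nx))
H[0, 2] = H[1, 5] = H[2, 8] = 1.0           # monitors observe single cells

# Background error covariance with exponential distance decay; diagonal R
dist = np.abs(np.subtract.outer(np.arange(nx), np.arange(nx)))
B = 4.0 * np.exp(-dist / 3.0)
R = 1.0 * np.eye(ny)

y = np.array([12.0, 18.0, 9.0])             # observed concentrations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                  # analysis (posterior) field
print(np.round(x_a, 2))
```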

  20. Accounting for disturbance history in models: using remote sensing to constrain carbon and nitrogen pool spin-up.

    Science.gov (United States)

    Hanan, Erin J; Tague, Christina; Choate, Janet; Liu, Mingliang; Kolden, Crystal; Adam, Jennifer

    2018-03-24

    Disturbances such as wildfire, insect outbreaks, and forest clearing play an important role in regulating carbon, nitrogen, and hydrologic fluxes in terrestrial watersheds. Evaluating how watersheds respond to disturbance requires understanding mechanisms that interact over multiple spatial and temporal scales. Simulation modeling is a powerful tool for bridging these scales; however, model projections are limited by uncertainties in the initial state of plant carbon and nitrogen stores. Watershed models typically use one of two methods to initialize these stores: spin-up to steady state or remote sensing with allometric relationships. Spin-up involves running a model until vegetation reaches equilibrium based on climate. This approach assumes that vegetation across the watershed has reached maturity and is of uniform age, which fails to account for landscape heterogeneity and non-steady-state conditions. By contrast, remote sensing can provide data for initializing such conditions. However, methods for assimilating remote sensing into model simulations can also be problematic. They often rely on empirical allometric relationships between a single vegetation variable and modeled carbon and nitrogen stores. Because allometric relationships are species- and region-specific, they do not account for the effects of local resource limitation, which can influence carbon allocation (to leaves, stems, roots, etc.). To address this problem, we developed a new initialization approach using the catchment-scale ecohydrologic model RHESSys. The new approach merges the mechanistic stability of spin-up with the spatial fidelity of remote sensing. It uses remote sensing to define spatially explicit targets for one or several vegetation state variables, such as leaf area index, across a watershed. The model then simulates the growth of carbon and nitrogen stores until the defined targets are met for all locations. We evaluated this approach in a mixed pine-dominated watershed in

  1. Modelling carbon and water exchange of a grazed pasture in New Zealand constrained by eddy covariance measurements.

    Science.gov (United States)

    Kirschbaum, Miko U F; Rutledge, Susanna; Kuijper, Isoude A; Mudge, Paul L; Puche, Nicolas; Wall, Aaron M; Roach, Chris G; Schipper, Louis A; Campbell, David I

    2015-04-15

    We used two years of eddy covariance (EC) measurements collected over an intensively grazed dairy pasture to better understand the key drivers of changes in soil organic carbon stocks. Analysing grazing systems with EC measurements poses significant challenges as the respiration from grazing animals can result in large short-term CO2 fluxes. As paddocks are grazed only periodically, EC observations derive from a mosaic of paddocks with very different exchange rates. This violates the assumptions implicit in the use of EC methodology. To test whether these challenges could be overcome, and to develop a tool for wider scenario testing, we compared EC measurements with simulation runs with the detailed ecosystem model CenW 4.1. Simulations were run separately for 26 paddocks around the EC tower and coupled to a footprint analysis to estimate net fluxes at the EC tower. Overall, we obtained good agreement between modelled and measured fluxes, especially for the comparison of evapotranspiration rates, with model efficiency of 0.96 for weekly averaged values of the validation data. For net ecosystem productivity (NEP) comparisons, observations were omitted when cattle grazed the paddocks immediately around the tower. With those points omitted, model efficiencies for weekly averaged values of the validation data were 0.78, 0.67 and 0.54 for daytime, night-time and 24-hour NEP, respectively. While not included for model parameterisation, simulated gross primary production also agreed closely with values inferred from eddy covariance measurements (model efficiency of 0.84 for weekly averages). The study confirmed that CenW simulations could adequately model carbon and water exchange in grazed pastures. It highlighted the critical role of animal respiration for net CO2 fluxes, and showed that EC studies of grazed pastures need to consider the best approach of accounting for this important flux to avoid unbalanced accounting.
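
    The "model efficiency" scores quoted above are presumably Nash-Sutcliffe efficiencies, the usual statistic in this literature; a minimal sketch of the computation on made-up weekly values:

```python
# Nash-Sutcliffe efficiency (NSE): 1 = perfect agreement; <= 0 means the
# model is no better than predicting the observed mean.
import numpy as np

def nse(observed, modelled):
    observed, modelled = np.asarray(observed), np.asarray(modelled)
    return 1.0 - np.sum((observed - modelled) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

obs = [2.1, 3.4, 2.8, 4.0, 3.1]   # e.g. weekly evapotranspiration (made up)
mod = [2.0, 3.6, 2.7, 3.8, 3.3]
print(round(nse(obs, mod), 3))
```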

  2. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less 'media rich' exchange of information between participants than face-to-face collaboration. The participants' freedom to communicate is restricted in means of communication, and rectified in terms of possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problems, we examine, via a design game, the creative communication on an open-ended task in a highly constrained setting. Via an experiment, the relation between communicative constraints and participants' perception of dialogue and creativity is examined. Four batches of students preparing to form semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except ...

  3. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  4. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.; Kadoura, A.; Valstar, J.; Sun, S.; Hoteit, Ibrahim

    2014-01-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach becomes beneficial only after using large enough ensembles.
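
    A toy version of the joint updating scheme: stack states and parameters into one augmented ensemble and apply a stochastic (perturbed-observation) EnKF analysis step. Sizes, matrices, and data below are placeholders, not the paper's compositional flow model:

```python
# Stochastic EnKF analysis step on an augmented state-parameter ensemble.
import numpy as np

rng = np.random.default_rng(2)
n_state, n_param, n_obs, n_ens = 8, 4, 50, 3  # corrected below
n_state, n_param, n_obs, n_ens = 8, 4, 3, 50

# Augmented ensemble: states (e.g. pressures) stacked with parameters
# (e.g. log-permeabilities): the "joint" updating scheme.
A = rng.normal(size=(n_state + n_param, n_ens))

H = np.zeros((n_obs, n_state + n_param))
H[0, 1] = H[1, 4] = H[2, 6] = 1.0          # observe three state entries
R = 0.1 * np.eye(n_obs)
y = rng.normal(size=n_obs)                 # observations (placeholder)

Am = A - A.mean(axis=1, keepdims=True)     # ensemble anomalies
P_HT = Am @ (H @ Am).T / (n_ens - 1)       # cross-covariance P H^T
K = P_HT @ np.linalg.inv(H @ P_HT + R)     # Kalman gain

Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
A += K @ (Y - H @ A)                       # perturbed-observation update
```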

  5. Effects of degraded sensory input on memory for speech: behavioral data and a test of biologically constrained computational models.

    Science.gov (United States)

    Piquado, Tepring; Cousins, Katheryn A Q; Wingfield, Arthur; Miller, Paul

    2010-12-13

    Poor hearing acuity reduces memory for spoken words, even when the words are presented with enough clarity for correct recognition. An "effortful hypothesis" suggests that the perceptual effort needed for recognition draws from resources that would otherwise be available for encoding the word in memory. To assess this hypothesis, we conducted a behavioral task requiring immediate free recall of word-lists, some of which contained an acoustically masked word that was just above perceptual threshold. Results show that masking a word reduces the recall of that word and words prior to it, as well as weakening the linking associations between the masked and prior words. In contrast, recall probabilities of words following the masked word are not affected. To account for this effect we conducted computational simulations testing two classes of models: Associative Linking Models and Short-Term Memory Buffer Models. Only a model that integrated both contextual linking and buffer components matched all of the effects of masking observed in our behavioral data. In this Linking-Buffer Model, the masked word disrupts a short-term memory buffer, causing associative links of words in the buffer to be weakened, affecting memory for the masked word and the word prior to it, while allowing links of words following the masked word to be spared. We suggest that these data account for the so-called "effortful hypothesis", where distorted input has a detrimental impact on prior information stored in short-term memory.

  6. Constraining a compositional flow model with flow-chemical data using an ensemble-based Kalman filter

    KAUST Repository

    Gharamti, M. E.

    2014-03-01

    Isothermal compositional flow models require coupling transient compressible flows and advective transport systems of various chemical species in subsurface porous media. Building such numerical models is quite challenging and may be subject to many sources of uncertainties because of possible incomplete representation of some geological parameters that characterize the system's processes. Advanced data assimilation methods, such as the ensemble Kalman filter (EnKF), can be used to calibrate these models by incorporating available data. In this work, we consider the problem of estimating reservoir permeability using information about phase pressure as well as the chemical properties of fluid components. We carry out state-parameter estimation experiments using joint and dual updating schemes in the context of the EnKF with a two-dimensional single-phase compositional flow model (CFM). Quantitative and statistical analyses are performed to evaluate and compare the performance of the assimilation schemes. Our results indicate that including chemical composition data significantly enhances the accuracy of the permeability estimates. In addition, composition data provide more information to estimate system states and parameters than do standard pressure data. The dual state-parameter estimation scheme provides about 10% more accurate permeability estimates on average than the joint scheme when implemented with the same ensemble members, at the cost of twice as many forward model integrations. At similar computational cost, the dual approach becomes beneficial only after using large enough ensembles.

  7. Improving on hidden Markov models: An articulatorily constrained, maximum likelihood approach to speech recognition and speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Hogden, J.

    1996-11-05

    The goal of the proposed research is to test a statistical model of speech recognition that incorporates the knowledge that speech is produced by relatively slow motions of the tongue, lips, and other speech articulators. This model is called Maximum Likelihood Continuity Mapping (Malcom). Many speech researchers believe that by using constraints imposed by articulator motions, we can improve or replace the current hidden Markov model based speech recognition algorithms. Unfortunately, previous efforts to incorporate information about articulation into speech recognition algorithms have suffered because (1) slight inaccuracies in our knowledge or the formulation of our knowledge about articulation may decrease recognition performance, (2) small changes in the assumptions underlying models of speech production can lead to large changes in the speech derived from the models, and (3) collecting measurements of human articulator positions in sufficient quantity for training a speech recognition algorithm is still impractical. The most interesting (and in fact, unique) quality of Malcom is that, even though Malcom makes use of a mapping between acoustics and articulation, Malcom can be trained to recognize speech using only acoustic data. By learning the mapping between acoustics and articulation using only acoustic data, Malcom avoids the difficulties involved in collecting articulator position measurements and does not require an articulatory synthesizer model to estimate the mapping between vocal tract shapes and speech acoustics. Preliminary experiments that demonstrate that Malcom can learn the mapping between acoustics and articulation are discussed. Potential applications of Malcom aside from speech recognition are also discussed. Finally, specific deliverables resulting from the proposed research are described.

  8. An embodied biologically constrained model of foraging: from classical and operant conditioning to adaptive real-world behavior in DAC-X.

    Science.gov (United States)

    Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J

    2015-12-01

    Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles accounts for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures.

  9. Utilizing the Updated Gamma-Ray Bursts and Type Ia Supernovae to Constrain the Cardassian Expansion Model and Dark Energy

    Directory of Open Access Journals (Sweden)

    Jun-Jie Wei

    2015-01-01

    Full Text Available We update gamma-ray burst (GRB) luminosity relations among certain spectral and light-curve features with 139 GRBs. The distance moduli of 82 GRBs at z > 1.4 can be calibrated with the sample at z ≤ 1.4 by using the cubic spline interpolation method from the Union2.1 Type Ia supernovae (SNe Ia) set. We investigate the joint constraints on the Cardassian expansion model and dark energy with the 580 Union2.1 SNe Ia sample (z ≤ 1.4) and 82 calibrated GRBs' data (1.4 < z ≤ 8.2). For the Cardassian expansion model, the best fit is Ωm = 0.24 (+0.15/−0.15) and n = 0.16 (+0.30/−0.52) (1σ), which is consistent with the ΛCDM cosmology (n = 0) in the 1σ confidence region. We also discuss two dark energy models in which the equation of state w(z) is parameterized as w(z) = w0 and w(z) = w0 + w1 z/(1+z), respectively. Based on our analysis, we see that our universe at higher redshift up to z = 8.2 is consistent with the concordance model within the 1σ confidence level.
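
    The calibration step lends itself to a short sketch: fit a cubic spline to SNe Ia distance moduli over z ≤ 1.4 and evaluate it at GRB redshifts in that range. The (z, μ) points below are invented placeholders, not the Union2.1 data:

```python
# Cubic-spline calibration of distance moduli, in the spirit of the method
# described above.
import numpy as np
from scipy.interpolate import CubicSpline

z_sn = np.array([0.05, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4])
mu_sn = np.array([36.7, 39.9, 41.6, 42.6, 43.3, 43.9, 44.3, 44.7])

spline = CubicSpline(z_sn, mu_sn)          # mu(z) fitted on the SNe sample

z_grb = np.array([0.35, 0.9, 1.3])         # GRBs inside the SNe redshift range
mu_grb = spline(z_grb)                     # calibrated distance moduli
print(np.round(mu_grb, 2))
```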

  10. Modeling and Event-Driven Simulation of Coordinated Multi-Point in LTE-Advanced with Constrained Backhaul

    DEFF Research Database (Denmark)

    Artuso, Matteo; Christiansen, Henrik Lehrmann

    2014-01-01

    multi-point joint transmission (CoMP JT). Field tests are generally considered impractical and costly for CoMP JT, hence the need for a comprehensive and high-fidelity computer model to understand the impact of different design attributes and the applicable use cases. This paper presents

  11. A system identification approach for developing and parameterising an agroforestry system model under constrained availability of data

    NARCIS (Netherlands)

    Keesman, K.J.; Graves, A.; Werf, van der W.; Burgess, P.J.; Palma, J.; Dupraz, C.; Keulen, van H.

    2011-01-01

    This paper introduces a system identification approach to overcome the problem of insufficient data when developing and parameterising an agroforestry system model. Typically, for these complex systems the number of available data points from actual systems is less than the number of parameters in a

  12. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    Science.gov (United States)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainties. Facilities that are located in this article concurrently satisfy both traditional objective functions and reliability considerations in CLSC network designs. To attack this problem, a novel mathematical model is developed that integrates the network design decisions in both forward and reverse supply chain networks. The model also utilizes an effective reliability approach to find a robust network design. In order to make the results of this article more realistic, a CLSC for a case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier. Furthermore, multiple facilities exist in the reverse logistics network leading to high complexities. Since the collection centres play an important role in this network, the reliability concept of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature. The proposed solution methodology is a bi-objective interval fuzzy possibilistic chance-constraint mixed integer linear programming (BOIFPCCMILP). Finally, computational experiments are provided to demonstrate the applicability and suitability of the proposed model in a supply chain environment and to help decision makers facilitate their analyses.

  13. Dynamics of Saxothuringian subduction channel/wedge constrained by phase equilibria modelling and micro-fabric analysis

    Czech Academy of Sciences Publication Activity Database

    Collett, S.; Štípská, P.; Kusbach, Vladimír; Schulmann, K.; Marciniak, G.

    2017-01-01

    Vol. 35, No. 3 (2017), pp. 253-280 ISSN 0263-4929 Institutional support: RVO:67985530 Keywords: eclogite * Bohemian Massif * thermodynamic modelling * micro-fabric analysis * subduction and exhumation dynamics Subject RIV: DB - Geology; Mineralogy OECD field: Geology Impact factor: 3.594, year: 2016

  14. A vertically resolved, global, gap-free ozone database for assessing or constraining global climate model simulations

    Directory of Open Access Journals (Sweden)

    G. E. Bodeker

    2013-02-01

    Full Text Available High vertical resolution ozone measurements from eight different satellite-based instruments have been merged with data from the global ozonesonde network to calculate monthly mean ozone values in 5° latitude zones. These "Tier 0" ozone number densities and ozone mixing ratios are provided on 70 altitude levels (1 to 70 km) and on 70 pressure levels spaced ~1 km apart (878.4 hPa to 0.046 hPa). The Tier 0 data are sparse and do not cover the entire globe or altitude range. To provide a gap-free database, a least squares regression model is fitted to the Tier 0 data and then evaluated globally. The regression model fit coefficients are expanded in Legendre polynomials to account for latitudinal structure, and in Fourier series to account for seasonality. Regression model fit coefficient patterns, which are two-dimensional fields indexed by latitude and month of the year, from the N-th vertical level serve as an initial guess for the fit at the N+1-th vertical level. The initial guess field for the first fit level (20 km/58.2 hPa) was derived by applying the regression model to total column ozone fields. Perturbations away from the initial guess are captured through the Legendre and Fourier expansions. By applying a single fit at each level, and using the approach of allowing the regression fits to change only slightly from one level to the next, the regression is less sensitive to measurement anomalies at individual stations or to individual satellite-based instruments. Particular attention is paid to ensuring that the low ozone abundances in the polar regions are captured. By summing different combinations of contributions from different regression model basis functions, four different "Tier 1" databases have been compiled for different intended uses. This database is suitable for assessing ozone fields from chemistry-climate model simulations or for providing the ozone boundary conditions for global climate model simulations that do not
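
    A reduced sketch of the regression form described above: fit coefficients expanded in Legendre polynomials of latitude crossed with a Fourier series in month of year, via ordinary least squares. Basis sizes and data are illustrative, not the database's actual configuration:

```python
# Tensor-product Legendre-latitude x Fourier-month least-squares fit.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
lat = rng.uniform(-90, 90, 500)
month = rng.integers(1, 13, 500)
ozone = 300 + 40 * np.sin(np.deg2rad(lat)) + rng.normal(0, 5, 500)  # fake DU

x = lat / 90.0                              # map latitude to [-1, 1]
P = legendre.legvander(x, 3)                # Legendre terms, degrees 0..3
theta = 2 * np.pi * month / 12.0
F = np.column_stack([np.ones_like(theta), np.sin(theta), np.cos(theta)])

# Design matrix: every Legendre term multiplied by every Fourier term
X = np.einsum('ij,ik->ijk', P, F).reshape(len(lat), -1)
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print(coef.shape)                           # -> (12,): 4 Legendre x 3 Fourier
```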

  15. New Spectral Model for Constraining Torus Covering Factors from Broadband X-Ray Spectra of Active Galactic Nuclei

    Science.gov (United States)

    Baloković, M.; Brightman, M.; Harrison, F. A.; Comastri, A.; Ricci, C.; Buchner, J.; Gandhi, P.; Farrah, D.; Stern, D.

    2018-02-01

    The basic unified model of active galactic nuclei (AGNs) invokes an anisotropic obscuring structure, usually referred to as a torus, to explain AGN obscuration as an angle-dependent effect. We present a new grid of X-ray spectral templates based on radiative transfer calculations in neutral gas in an approximately toroidal geometry, appropriate for CCD-resolution X-ray spectra (FWHM ≥ 130 eV). Fitting the templates to broadband X-ray spectra of AGNs provides constraints on two important geometrical parameters of the gas distribution around the supermassive black hole: the average column density and the covering factor. Compared to the currently available spectral templates, our model is more flexible, and capable of providing constraints on the main torus parameters in a wider range of AGNs. We demonstrate the application of this model using hard X-ray spectra from NuSTAR (3–79 keV) for four AGNs covering a variety of classifications: 3C 390.3, NGC 2110, IC 5063, and NGC 7582. This small set of examples was chosen to illustrate the range of possible torus configurations, from disk-like to sphere-like geometries with column densities below, as well as above, the Compton-thick threshold. This diversity of torus properties challenges the simple assumption of a standard geometrically and optically thick toroidal structure commonly invoked in the basic form of the unified model of AGNs. Finding broad consistency between our constraints and those from infrared modeling, we discuss how the approach from the X-ray band complements similar measurements of AGN structures at other wavelengths.

  16. Data-constrained models of quiet and storm-time geosynchronous magnetic field based on observations in the near geospace

    Science.gov (United States)

    Andreeva, V. A.; Tsyganenko, N. A.

    2017-12-01

    The geosynchronous orbit is unique in that its nightside segment skims along the boundary, separating the inner magnetosphere with a predominantly dipolar configuration from the magnetotail, where the Earth's magnetic field becomes small relative to the contribution from external sources. The ability to accurately reconstruct the magnetospheric configuration at GEO is important to understand the behavior of plasma and energetic particles, which critically affect space weather in the area densely populated by a host of satellites. To that end, we have developed a dynamical empirical model of the geosynchronous magnetic field with forecasting capability, based on a multi-year set of data taken by THEMIS, Polar, Cluster, Geotail, and Van Allen missions. The model's mathematical structure is devised using a new approach [Andreeva and Tsyganenko, 2016, doi:10.1002/2015JA022242], in which the toroidal/poloidal components of the field are represented using the radial and azimuthal basis functions. The model describes the field as a function of solar-magnetic coordinates, geodipole tilt angle, solar wind pressure, and a set of dynamic variables, quantifying the magnetosphere's response to external driving/loading and internal relaxation/dissipation during the disturbance recovery. The response variables are introduced following the approach by Tsyganenko and Sitnov [2005, doi:10.1029/2004JA010798], in which the electric current dynamics was described as a result of competition between the external energy input and the subsequent internal losses of the injected energy. The model's applicability range extends from quiet to moderately disturbed conditions, with peak Sym-H values down to -150 nT. The obtained results have been validated using independent GOES magnetometer data, taken during the maximum of the 23rd solar cycle and its declining phase.

  17. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    International Nuclear Information System (INIS)

    Wels, Michael; Hornegger, Joachim; Zheng Yefeng; Comaniciu, Dorin; Huber, Martin

    2011-01-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average
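
    Stripped of the discriminative PBT term, the MRF prior, and the INU field, the unsupervised core of such a method is an EM fit of a three-class Gaussian intensity mixture (GM/WM/CSF); a self-contained sketch on synthetic intensities:

```python
# EM for a three-class 1-D Gaussian mixture over voxel intensities.
# Synthetic data; the paper's PBT, MRF, and INU components are omitted.
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(m, 5, 300) for m in (40, 90, 140)])

mu, sd, w = np.array([30., 100., 150.]), np.full(3, 10.), np.full(3, 1 / 3)
for _ in range(50):
    # E-step: responsibilities of each tissue class for each voxel
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update class weights, means, and standard deviations
    Nk = r.sum(axis=0)
    w = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
print(np.round(mu, 1))                      # recovers roughly [40, 90, 140]
```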

  18. Spatial organization of the budding yeast genome in the cell nucleus and identification of specific chromatin interactions from multi-chromosome constrained chromatin model.

    Science.gov (United States)

    Gürsoy, Gamze; Xu, Yun; Liang, Jie

    2017-07-01

    Nuclear landmarks and biochemical factors play important roles in the organization of the yeast genome. The interaction patterns of budding yeast as measured from genome-wide 3C studies are largely recapitulated by model polymer genomes subject to landmark constraints. However, the origin of inter-chromosomal interactions, the specific roles of individual landmarks, and the roles of biochemical factors in yeast genome organization remain unclear. Here we describe a multi-chromosome constrained self-avoiding chromatin model (mC-SAC) to gain understanding of the budding yeast genome organization. With significantly improved sampling of genome structures, both intra- and inter-chromosomal interaction patterns from genome-wide 3C studies are accurately captured in our model at higher resolution than in previous studies. We show that nuclear confinement is a key determinant of the intra-chromosomal interactions, and centromere tethering is responsible for the inter-chromosomal interactions. In addition, important genomic elements such as fragile sites and tRNA genes are found to be clustered spatially, largely due to centromere tethering. We uncovered previously unknown interactions that were not captured by genome-wide 3C studies, which are found to be enriched with tRNA genes, RNAPIII and TFIIS binding. Moreover, we identified specific high-frequency genome-wide 3C interactions that are unaccounted for by polymer effects under landmark constraints. These interactions are enriched with important genes and likely play biological roles.

  19. Fire emissions constrained by the synergistic use of formaldehyde and glyoxal SCIAMACHY columns in a two-compound inverse modelling framework

    Science.gov (United States)

    Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.

    2008-12-01

    Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005 and provided by BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.

  20. A facilitated diffusion model constrained by the probability isotherm: a pedagogical exercise in intuitive non-equilibrium thermodynamics.

    Science.gov (United States)

    Chapman, Brian

    2017-06-01

    This paper seeks to develop a more thermodynamically sound pedagogy for students of biological transport than is currently available from either of the competing schools of linear non-equilibrium thermodynamics (LNET) or Michaelis-Menten kinetics (MMK). To this end, a minimal model of facilitated diffusion was constructed comprising four reversible steps: cis-substrate binding, cis → trans bound-enzyme shuttling, trans-substrate dissociation and trans → cis free-enzyme shuttling. All model parameters were subject to the second law constraint of the probability isotherm, which determined the unidirectional and net rates for each step and for the overall reaction through the law of mass action. Rapid equilibration scenarios require sensitive 'tuning' of the thermodynamic binding parameters to the equilibrium substrate concentration. All non-equilibrium scenarios show sigmoidal force-flux relations, with only a minority of cases having their quasi-linear portions close to equilibrium. Few cases fulfil the expectations of MMK relating reaction rates to enzyme saturation. This new approach illuminates and extends the concept of rate-limiting steps by focusing on the free energy dissipation associated with each reaction step and thereby deducing its respective relative chemical impedance. The crucial importance of an enzyme's being thermodynamically 'tuned' to its particular task, dependent on the cis- and trans-substrate concentrations with which it deals, is consistent with the occurrence of numerous isoforms for enzymes that transport a given substrate in physiologically different circumstances. This approach to kinetic modelling, being aligned with neither MMK nor LNET, is best described as intuitive non-equilibrium thermodynamics, and is recommended as a useful adjunct to the design and interpretation of experiments in biotransport.
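
    The four-step carrier cycle can be solved directly by mass action: build the master-equation rate matrix, find its steady state, and read off the net flux. A sketch with arbitrary rate constants and unequal cis/trans substrate levels chosen to drive a net flux (not the paper's parameterisation):

```python
# Steady state and net flux of a four-state facilitated-diffusion cycle.
import numpy as np

# States: 0 = free carrier (cis), 1 = bound (cis), 2 = bound (trans),
#         3 = free carrier (trans). Rates are illustrative.
S_cis, S_trans = 2.0, 0.5                   # substrate concentrations
k = {
    (0, 1): 1.0 * S_cis,  (1, 0): 0.5,      # cis binding / unbinding
    (1, 2): 1.0,          (2, 1): 1.0,      # bound-enzyme shuttling
    (2, 3): 0.5,          (3, 2): 1.0 * S_trans,  # trans release / rebinding
    (3, 0): 1.0,          (0, 3): 1.0,      # free-enzyme shuttling
}

# Master-equation rate matrix A with d p / dt = A p
A = np.zeros((4, 4))
for (i, j), rate in k.items():
    A[j, i] += rate                         # gain of state j from state i
    A[i, i] -= rate                         # loss from state i

# Steady state: A p = 0 with probabilities summing to 1
M = np.vstack([A, np.ones(4)])
p = np.linalg.lstsq(M, np.array([0., 0., 0., 0., 1.]), rcond=None)[0]

J = k[(1, 2)] * p[1] - k[(2, 1)] * p[2]     # net cis -> trans flux
print(p.round(4), round(J, 4))              # J > 0 since S_cis > S_trans
```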

  1. Constraining Genome-Scale Models to Represent the Bow Tie Structure of Metabolism for 13C Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Tyler W. H. Backman

    2018-01-01

    Full Text Available Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. 13C Metabolic Flux Analysis (13C MFA) and Two-Scale 13C Metabolic Flux Analysis (2S-13C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with 13C MFA or 2S-13C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods. We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
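
    The linear-programming step of the first algorithm can be sketched on a toy network: minimize the flux entering the core subject to steady-state stoichiometry and measured exchange fluxes. The network below is invented for illustration and is unrelated to the JBEI implementation linked above.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network. Metabolites: A (peripheral), B (core entry point).
    # Reactions: v0 uptake -> A (measured), v1 A -> B (flux into the core),
    #            v2 A -> secreted byproduct, v3 B -> biomass (measured growth).
    S = np.array([[1, -1, -1,  0],     # steady state of A: v0 - v1 - v2 = 0
                  [0,  1,  0, -1]])    # steady state of B: v1 - v3 = 0
    c = np.array([0, 1, 0, 0])         # objective: minimize the core influx v1
    bounds = [(10, 10),                # measured uptake flux
              (0, None), (0, None),    # free internal fluxes
              (8, 8)]                  # measured growth demand on the core

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print("minimal core influx v1 =", res.x[1])   # 8.0; the remaining 2.0 is secreted
    ```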

  2. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    OpenAIRE

    Paul Lundgren; M. Nikkhoo; Sergey V. Samsonov; Pietro Milillo; Fernando Gil-Cruz; Jonathan Lazo

    2017-01-01

    Tar files for each of the InSAR time series (interferograms used in the GIAnT time series computation as well as the input files and outputs from using GIAnT). GIAnT is an open source InSAR time series code developed at Caltech. The UAVSAR_*.tgz files contain the interferograms from the UAVSAR airborne system that were used in the analysis. The actual model input files require some additional downsampling using resamptool.m, a Matlab code developed by Prof. R. Lohman, Cornell Univ.

  3. Dynamic Output Feedback Robust Model Predictive Control via Zonotopic Set-Membership Estimation for Constrained Quasi-LPV Systems

    Directory of Open Access Journals (Sweden)

    Xubin Ping

    2015-01-01

    Full Text Available For the quasi-linear parameter varying (quasi-LPV) system with bounded disturbance, a synthesis approach of dynamic output feedback robust model predictive control (OFRMPC) is investigated. The estimation error set is represented by a zonotope and refreshed by the zonotopic set-membership estimation method. By properly refreshing the estimation error set online, the bounds of the true state at the next sampling time can be obtained. Furthermore, the feasibility of the main optimization problem at the next sampling time can be determined at the current time. A numerical example is given to illustrate the effectiveness of the approach.
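
    The zonotopic bookkeeping behind such set-membership estimation can be sketched in a few lines: a zonotope Z = {c + G xi : ||xi||_inf <= 1} is propagated through linear dynamics by mapping its center and generators, with disturbance generators appended at each step. The matrices below are invented for illustration; the paper's method additionally refreshes the set with each measurement and optimizes the output feedback online.

    ```python
    import numpy as np

    # Zonotope Z = {c + G @ xi : ||xi||_inf <= 1}, propagated through x+ = A x + w.
    A = np.array([[1.0, 0.1],
                  [0.0, 0.9]])               # illustrative state matrix
    E = np.diag([0.02, 0.05])                # generators bounding the disturbance w

    c = np.zeros(2)                          # initial center
    G = np.diag([0.1, 0.1])                  # initial generators

    for _ in range(10):
        c = A @ c                            # center follows the nominal dynamics
        G = np.hstack([A @ G, E])            # map generators, append disturbance ones
        # (a practical filter would also intersect with the measurement-consistent
        #  set and periodically reduce the generator count to bound complexity)

    radius = np.abs(G).sum(axis=1)           # componentwise half-widths of Z
    print("state enclosure:", [(c[i] - radius[i], c[i] + radius[i]) for i in range(2)])
    ```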

  4. Use of natural analog and modeling studies to constrain the effects of magmatic activity on long-term geologic repositories

    International Nuclear Information System (INIS)

    Valentine, G.A.; Rosenberg, N.D.; Crowe, B.M.; Perry, F.V.

    1995-01-01

    Examples of the application of natural-analog studies to the estimation of the consequences of a volcanic eruption penetrating a radioactive waste repository are given, including the criteria for analog selection and new data from ongoing studies. Examples of early modeling results focusing on the spatial and temporal scale of subsurface processes are also provided. All of these examples are taken from studies of the potential Yucca Mountain repository, Nevada, but similar approaches could be applied in other areas. In addition, studies of subsurface processes initiated by magmatic events serve as useful analogs for repository thermal loading studies

  5. Effect of time-varying tropospheric models on near-regional and regional infrasound propagation as constrained by observational data

    Science.gov (United States)

    McKenna, Mihan H.; Stump, Brian W.; Hayward, Chris

    2008-06-01

    The Chulwon Seismo-Acoustic Array (CHNAR) is a regional seismo-acoustic array with co-located seismometers and infrasound microphones on the Korean peninsula. Data from forty-two days over the course of a year between October 1999 and August 2000 were analyzed; 2052 infrasound-only arrivals and 23 seismo-acoustic arrivals were observed over the six-week study period. A majority of the signals occur during local working hours, hour 0 to hour 9 UT, and appear to be the result of cultural activity located within a 250 km radius. Atmospheric modeling is presented for four sample days during the study period, one in each of November, February, April, and August. Local meteorological data sampled at six-hour intervals are needed to accurately model the observed arrivals, and these data produced highly temporally variable thermal ducts that propagated infrasound signals within 250 km, matching the temporal variation in the observed arrivals. These ducts change dramatically on the order of hours, and meteorological data from the appropriate sampled time frame were necessary to interpret the observed arrivals.

  6. New approach to study mobility in the vicinity of dynamical arrest; exact application to a kinetically constrained model

    Science.gov (United States)

    DeGregorio, P.; Lawlor, A.; Dawson, K. A.

    2006-04-01

    We introduce a new method to describe systems in the vicinity of dynamical arrest. This involves a map that transforms mobile systems at one length scale to mobile systems at a longer length. This map is capable of capturing the singular behavior accrued across very large length scales, and provides a direct route to the dynamical correlation length and other related quantities. The ideas are immediately applicable in two spatial dimensions, and have been applied to a modified Kob-Andersen type model. For such systems the map may be derived in an exact form, and readily solved numerically. We obtain the asymptotic behavior across the whole physical domain of interest in dynamical arrest.

  7. Correlation-constrained and sparsity-controlled vector autoregressive model for spatio-temporal wind power forecasting

    DEFF Research Database (Denmark)

    Zhao, Yongning; Ye, Lin; Pinson, Pierre

    2018-01-01

    The ever-increasing number of wind farms has brought both challenges and opportunities in the development of wind power forecasting techniques to take advantage of interdependencies between tens to hundreds of spatially distributed wind farms, e.g., over a region. In this paper, a Sparsity-Controlled Vector Autoregressive (SC-VAR) model is considered, which obtains sparse coefficient matrices in a direct manner. However, this original SC-VAR is difficult to implement due to its complicated constraints and the lack of guidelines for setting its parameters. To reduce the complexity of this MINLP and to make it possible to incorporate prior expert knowledge to benefit model building...
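
    The idea of a sparse VAR for spatio-temporal forecasting can be sketched with a generic l1-penalized least-squares fit, which zeroes out coefficients between weakly coupled farms; this stand-in does not reproduce the paper's MINLP formulation or its sparsity constraints.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    T, n = 500, 6                         # time steps, number of wind farms
    A_true = np.diag([0.6] * n)           # illustrative sparse dynamics
    A_true[0, 1] = 0.3                    # one cross-farm dependency

    x = np.zeros((T, n))
    for t in range(1, T):                 # simulate VAR(1) power deviations
        x[t] = A_true @ x[t - 1] + 0.1 * rng.standard_normal(n)

    X, Y = x[:-1], x[1:]
    A_hat = np.zeros((n, n))
    for i in range(n):                    # one l1-penalized regression per farm
        A_hat[i] = Lasso(alpha=1e-3, fit_intercept=False).fit(X, Y[:, i]).coef_

    print("nonzero coefficients recovered:", int(np.sum(np.abs(A_hat) > 1e-2)))
    ```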

  8. Theory and practice in sport psychology and motor behaviour needs to be constrained by integrative modelling of brain and behaviour.

    Science.gov (United States)

    Keil, D; Holmes, P; Bennett, S; Davids, K; Smith, N

    2000-06-01

    Because of advances in technology, the non-invasive study of the human brain has enhanced the knowledge base within the neurosciences, resulting in an increased impact on the psychological study of human behaviour. We argue that application of this knowledge base should be considered in theoretical modelling within sport psychology and motor behaviour alongside existing ideas. We propose that interventions founded on current theoretical and empirical understanding in both psychology and the neurosciences may ultimately lead to greater benefits for athletes during practice and performance. As vehicles for exploring the arguments of a greater integration of psychology and neurosciences research, imagery and perception-action within the sport psychology and motor behaviour domains will serve as exemplars. Current neuroscience evidence will be discussed in relation to theoretical developments; the implications for sport scientists will be considered.

  9. Constraining the climate and ocean pH of the early Earth with a geological carbon cycle model

    Science.gov (United States)

    Krissansen-Totton, Joshua; Arney, Giada N.; Catling, David C.

    2018-04-01

    The early Earth’s environment is controversial. Climatic estimates range from hot to glacial, and inferred marine pH spans strongly alkaline to acidic. Better understanding of early climate and ocean chemistry would improve our knowledge of the origin of life and its coevolution with the environment. Here, we use a geological carbon cycle model with ocean chemistry to calculate self-consistent histories of climate and ocean pH. Our carbon cycle model includes an empirically justified temperature and pH dependence of seafloor weathering, allowing the relative importance of continental and seafloor weathering to be evaluated. We find that the Archean climate was likely temperate (0–50 °C) due to the combined negative feedbacks of continental and seafloor weathering. Ocean pH evolves monotonically from 6.6 (+0.6/-0.4, 2σ) at 4.0 Ga to 7.0 (+0.7/-0.5, 2σ) at the Archean–Proterozoic boundary, and to 7.9 (+0.1/-0.2, 2σ) at the Proterozoic–Phanerozoic boundary. This evolution is driven by the secular decline of pCO2, which in turn is a consequence of increasing solar luminosity, but is moderated by carbonate alkalinity delivered from continental and seafloor weathering. Archean seafloor weathering may have been a comparable carbon sink to continental weathering, but is less dominant than previously assumed, and would not have induced global glaciation. We show how these conclusions are robust to a wide range of scenarios for continental growth, internal heat flow evolution and outgassing history, greenhouse gas abundances, and changes in the biotic enhancement of weathering.

  10. Constraining the climate and ocean pH of the early Earth with a geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Arney, Giada N; Catling, David C

    2018-04-17

    The early Earth's environment is controversial. Climatic estimates range from hot to glacial, and inferred marine pH spans strongly alkaline to acidic. Better understanding of early climate and ocean chemistry would improve our knowledge of the origin of life and its coevolution with the environment. Here, we use a geological carbon cycle model with ocean chemistry to calculate self-consistent histories of climate and ocean pH. Our carbon cycle model includes an empirically justified temperature and pH dependence of seafloor weathering, allowing the relative importance of continental and seafloor weathering to be evaluated. We find that the Archean climate was likely temperate (0-50 °C) due to the combined negative feedbacks of continental and seafloor weathering. Ocean pH evolves monotonically from 6.6 (+0.6/-0.4, 2σ) at 4.0 Ga to 7.0 (+0.7/-0.5, 2σ) at the Archean-Proterozoic boundary, and to 7.9 (+0.1/-0.2, 2σ) at the Proterozoic-Phanerozoic boundary. This evolution is driven by the secular decline of pCO2, which in turn is a consequence of increasing solar luminosity, but is moderated by carbonate alkalinity delivered from continental and seafloor weathering. Archean seafloor weathering may have been a comparable carbon sink to continental weathering, but is less dominant than previously assumed, and would not have induced global glaciation. We show how these conclusions are robust to a wide range of scenarios for continental growth, internal heat flow evolution and outgassing history, greenhouse gas abundances, and changes in the biotic enhancement of weathering. Copyright © 2018 the Author(s). Published by PNAS.

  11. Linking phylogenetic identities of bacteria to starch fermentation in an in vitro model of the large intestine by RNA-based stable isotope probing.

    Science.gov (United States)

    Kovatcheva-Datchary, Petia; Egert, Markus; Maathuis, Annet; Rajilić-Stojanović, Mirjana; de Graaf, Albert A; Smidt, Hauke; de Vos, Willem M; Venema, Koen

    2009-04-01

    Carbohydrates, including starches, are an important energy source for humans, and are known for their interactions with the microbiota in the digestive tract. Largely, those interactions are thought to promote human health. Using 16S ribosomal RNA (rRNA)-based stable isotope probing (SIP), we identified starch-fermenting bacteria under human colon-like conditions. To the microbiota of the TIM-2 in vitro model of the human colon 7.4 g l^-1 of [U-13C]-starch was added. RNA extracted from lumen samples after 0 (control), 2, 4 and 8 h was subjected to density-gradient ultracentrifugation. Terminal-restriction fragment length polymorphism (T-RFLP) fingerprinting and phylogenetic analyses of the labelled and unlabelled 16S rRNA suggested populations related to Ruminococcus bromii, Prevotella spp. and Eubacterium rectale to be involved in starch metabolism. Additionally, 16S rRNA related to that of Bifidobacterium adolescentis was abundant in all analysed fractions. While this might be due to the enrichment of high-GC RNA in high-density fractions, it could also indicate an active role in starch fermentation. Comparison of the T-RFLP fingerprints of experiments performed with labelled and unlabelled starch revealed Ruminococcus bromii as the primary degrader in starch fermentation in the studied model, as it was found to solely predominate in the labelled fractions. LC-MS analyses of the lumen and dialysate samples showed that, for both experiments, starch fermentation primarily yielded acetate, butyrate and propionate. Integration of molecular and metabolite data suggests metabolic cross-feeding in the system, where populations related to Ruminococcus bromii are the primary starch degrader, while those related to Prevotella spp., Bifidobacterium adolescentis and Eubacterium rectale might be further involved in the trophic chain.

  12. Phylogenetic Conservatism in Plant Phenology

    Science.gov (United States)

    Davies, T. Jonathan; Wolkovich, Elizabeth M.; Kraft, Nathan J. B.; Salamin, Nicolas; Allen, Jenica M.; Ault, Toby R.; Betancourt, Julio L.; Bolmgren, Kjell; Cleland, Elsa E.; Cook, Benjamin I.; et al.

    2013-01-01

    Phenological events, defined points in the life cycle of a plant or animal, have been regarded as highly plastic traits, reflecting flexible responses to various environmental cues. The ability of a species to track, via shifts in phenological events, the abiotic environment through time might dictate its vulnerability to future climate change. Understanding the predictors and drivers of phenological change is therefore critical. Here, we evaluated evidence for phylogenetic conservatism, the tendency for closely related species to share similar ecological and biological attributes, in phenological traits across flowering plants. We aggregated published and unpublished data on timing of first flower and first leaf, encompassing 4000 species at 23 sites across the Northern Hemisphere. We reconstructed the phylogeny for the set of included species, first, using the software program Phylomatic, and second, from DNA data. We then quantified phylogenetic conservatism in plant phenology within and across sites. We show that more closely related species tend to flower and leaf at similar times. By contrasting mean flowering times within and across sites, however, we illustrate that it is not the time of year that is conserved, but rather the phenological responses to a common set of abiotic cues. Our findings suggest that species cannot be treated as statistically independent when modelling phenological responses. Closely related species tend to resemble each other in the timing of their life-history events, a likely product of evolutionarily conserved responses to environmental cues. The search for the underlying drivers of phenology must therefore account for species' shared evolutionary histories.

  13. Constraining Dark Matter Models from a Combined Analysis of Milky Way Satellites with the Fermi Large Area Telescope

    Science.gov (United States)

    Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; et al.

    2011-01-01

    Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3/s at 5 GeV to about 5 x 10^-23 cm^3/s at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 x 10^-26 cm^3/s for a purely s-wave cross section), without assuming additional boost factors.

  14. Constraining Dark Matter Models from a Combined Analysis of Milky Way Satellites with the Fermi Large Area Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W.B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R.D.; Bloom, E.D.; Bonamente, E.; Borgland, A.W.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Burnett, T.H.; Buson, S.; and more authors

    2012-09-14

    Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.

  15. The Cluster Lens SDSS 1004+4112: Constraining World Models With its Multiply-Imaged Quasar and Galaxies

    Science.gov (United States)

    Kochanek, C.

    2005-07-01

    We will use deep ACS imaging of the giant (15 arcsec) four-image z_s=1.734 lensed quasar SDSS 1004+4112, and its z_l=0.68 lensing galaxy cluster, to identify many additional multiply-imaged background galaxies. Combining the existing single orbit ACS I-band image with ground based data, we have definitively identified two multiply imaged galaxies with estimated redshifts of 2.6 and 4.3, about 15 probable images of background galaxies, and a point source in the core of the central cD galaxy, which is likely to be the faint, fifth image of the quasar. The new data will provide accurate photometric redshifts, confirm that the candidate fifth image has the same spectral energy distribution as the other quasar images, allow secure identification of additional multiply-lensed galaxies for improving the mass model, and permit identification of faint cluster members. Due to the high lens redshift and the broad redshift distribution of the lensed background sources, we should be able to use the source-redshift scaling of the Einstein radius, which depends on d_ls/d_os, to derive a direct, geometric estimate of Omega_Lambda. The deeper images will also allow a weak lensing analysis to extend the mass distribution to larger radii. Unlike any other cluster lenses, the time delay between the lensed quasar images (already measured for the A-B images, and measurable for the others over the next few years) breaks the so-called kappa-degeneracies that complicate weak-lensing analyses.

  16. Phylogenetic Structure of Foliar Spectral Traits in Tropical Forest Canopies

    Directory of Open Access Journals (Sweden)

    Kelly M. McManus

    2016-02-01

    Full Text Available The Spectranomics approach to tropical forest remote sensing has established a link between foliar reflectance spectra and the phylogenetic composition of tropical canopy tree communities vis-à-vis the taxonomic organization of biochemical trait variation. However, a direct relationship between phylogenetic affiliation and foliar reflectance spectra of species has not been established. We sought to develop this relationship by quantifying the extent to which underlying patterns of phylogenetic structure drive interspecific variation among foliar reflectance spectra within three Neotropical canopy tree communities with varying levels of soil fertility. We interpreted the resulting spectral patterns of phylogenetic signal in the context of foliar biochemical traits that may contribute to the spectral-phylogenetic link. We utilized a multi-model ensemble to elucidate trait-spectral relationships, and quantified phylogenetic signal for spectral wavelengths and traits using Pagel’s lambda statistic. Foliar reflectance spectra showed evidence of phylogenetic influence primarily within the visible and shortwave infrared spectral regions. These regions were also selected by the multi-model ensemble as those most important to the quantitative prediction of several foliar biochemical traits. Patterns of phylogenetic organization of spectra and traits varied across sites and with soil fertility, indicative of the complex interactions between the environmental and phylogenetic controls underlying patterns of biodiversity.

  17. Fourier transform inequalities for phylogenetic trees.

    Science.gov (United States)

    Matsen, Frederick A

    2009-01-01

    Phylogenetic invariants are not the only constraints on site-pattern frequency vectors for phylogenetic trees. A mutation matrix, by its definition, is the exponential of a matrix with non-negative off-diagonal entries; this positivity requirement implies non-trivial constraints on the site-pattern frequency vectors. We call these additional constraints "edge-parameter inequalities". In this paper, we first motivate the edge-parameter inequalities by considering a pathological site-pattern frequency vector corresponding to a quartet tree with a negative internal edge. This site-pattern frequency vector nevertheless satisfies all of the constraints described up to now in the literature. We next describe two complete sets of edge-parameter inequalities for the group-based models; these constraints are square-free monomial inequalities in the Fourier transformed coordinates. These inequalities, along with the phylogenetic invariants, form a complete description of the set of site-pattern frequency vectors corresponding to bona fide trees. Said in mathematical language, this paper explicitly presents two finite lists of inequalities in Fourier coordinates of the form "monomial ≤ 1", each list characterizing the phylogenetically relevant semialgebraic subsets of the phylogenetic varieties.

  18. Inferring phylogenetic trees from the knowledge of rare evolutionary events.

    Science.gov (United States)

    Hellmuth, Marc; Hernandez-Rosales, Maribel; Long, Yangjing; Stadler, Peter F

    2018-06-01

    Rare events have played an increasing role in molecular phylogenetics as potentially homoplasy-poor characters. In this contribution we analyze the phylogenetic information content from a combinatorial point of view by considering the binary relation on the set of taxa defined by the existence of a single event separating two taxa. We show that the graph-representation of this relation must be a tree. Moreover, we characterize completely the relationship between the tree of such relations and the underlying phylogenetic tree. With directed operations such as tandem-duplication-random-loss events in mind we demonstrate how non-symmetric information constrains the position of the root in the partially reconstructed phylogeny.

  19. Constraining the Influence of Natural Variability to Improve Estimates of Global Aerosol Indirect Effects in a Nudged Version of the Community Atmosphere Model 5

    Energy Technology Data Exchange (ETDEWEB)

    Kooperman, G. J.; Pritchard, M. S.; Ghan, Steven J.; Wang, Minghuai; Somerville, Richard C.; Russell, Lynn

    2012-12-11

    Natural modes of variability on many timescales influence aerosol particle distributions and cloud properties such that isolating statistically significant differences in cloud radiative forcing due to anthropogenic aerosol perturbations (indirect effects) typically requires integrating over long simulations. For state-of-the-art global climate models (GCM), especially those in which embedded cloud-resolving models replace conventional statistical parameterizations (i.e. multi-scale modeling framework, MMF), the required long integrations can be prohibitively expensive. Here an alternative approach is explored, which implements Newtonian relaxation (nudging) to constrain simulations with both pre-industrial and present-day aerosol emissions toward identical meteorological conditions, thus reducing differences in natural variability and dampening feedback responses in order to isolate radiative forcing. Ten-year GCM simulations with nudging provide a more stable estimate of the global-annual mean aerosol indirect radiative forcing than do conventional free-running simulations. The estimates have mean values and 95% confidence intervals of -1.54 ± 0.02 W/m2 and -1.63 ± 0.17 W/m2 for nudged and free-running simulations, respectively. Nudging also substantially increases the fraction of the world’s area in which a statistically significant aerosol indirect effect can be detected (68% and 25% of the Earth's surface for nudged and free-running simulations, respectively). One-year MMF simulations with and without nudging provide global-annual mean aerosol indirect radiative forcing estimates of -0.80 W/m2 and -0.56 W/m2, respectively. The one-year nudged results compare well with previous estimates from three-year free-running simulations (-0.77 W/m2), which showed the aerosol-cloud relationship to be in better agreement with observations and high-resolution models than in the results obtained with conventional parameterizations.
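
    Newtonian relaxation itself is a one-line modification of the model tendency: a term pulls the state toward a reference trajectory on a chosen timescale. The toy tendency and timescale below are illustrative, not the CAM5 configuration.

    ```python
    import numpy as np

    def step_nudged(x, x_ref, f, dt, tau):
        """One forward-Euler step of dx/dt = f(x) + (x_ref - x) / tau."""
        return x + dt * (f(x) + (x_ref - x) / tau)

    # Illustrative scalar "model": slow decay, nudged toward a reference value.
    f = lambda x: -0.1 * x
    x, x_ref, dt, tau = 0.0, 1.0, 0.1, 2.0   # tau sets the relaxation timescale
    for _ in range(100):
        x = step_nudged(x, x_ref, f, dt, tau)
    print("nudged state after 10 time units:", round(x, 3))
    ```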

  20. Constraining Aerosol Optical Models Using Ground-Based, Collocated Particle Size and Mass Measurements in Variable Air Mass Regimes During the 7-SEAS/Dongsha Experiment

    Science.gov (United States)

    Bell, Shaun W.; Hansell, Richard A.; Chow, Judith C.; Tsay, Si-Chee; Wang, Sheng-Hsiang; Ji, Qiang; Li, Can; Watson, John G.; Khlystov, Andrey

    2012-01-01

    During the spring of 2010, NASA Goddard's COMMIT ground-based mobile laboratory was stationed on Dongsha Island off the southwest coast of Taiwan, in preparation for the upcoming 2012 7-SEAS field campaign. The measurement period offered a unique opportunity for conducting detailed investigations of the optical properties of aerosols associated with different air mass regimes including background maritime and those contaminated by anthropogenic air pollution and mineral dust. For what appears to be the first time for this region, a shortwave optical closure experiment for both scattering and absorption was attempted over a 12-day period during which aerosols exhibited the most change. Constraints to the optical model included combined SMPS and APS number concentration data for a continuum of fine and coarse-mode particle sizes up to PM2.5. We also take advantage of an IMPROVE chemical sampler to help constrain aerosol composition and mass partitioning of key elemental species including sea-salt, particulate organic matter, soil, non sea-salt sulphate, nitrate, and elemental carbon. Our results demonstrate that the observed aerosol scattering and absorption for these diverse air masses are reasonably captured by the model, where peak aerosol events and transitions between key aerosol types are evident. Signatures of heavily polluted aerosol composed mostly of ammonium and non sea-salt sulphate mixed with some dust, with transitions to background sea-salt conditions, are apparent in the absorption data, which is particularly reassuring owing to the large variability in the imaginary component of the refractive indices. Extinctive features at significantly smaller time scales than the one-day sample period of IMPROVE are more difficult to reproduce, as this requires further knowledge concerning the source apportionment of major chemical components in the model. Consistency between the measured and modeled optical parameters serves as an important link for advancing remote

  1. Constrained minimization in C ++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest possible approach was used: the widely known program FUMILI was reimplemented in C++. Constraints in the form of inequalities φ(θ_i) ≥ a were handled by converting them into equalities φ(θ_i) = t with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data of the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany)
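
    The quadratic-penalty treatment of an equality constraint can be sketched as follows: g(θ) = 0 is folded into the objective as μ·g(θ)², with μ increased until the constraint is met to tolerance. The objective and constraint below are invented for illustration; FUMILI's particular linearization scheme is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    chi2 = lambda th: (th[0] - 3.0) ** 2 + 2.0 * (th[1] + 1.0) ** 2  # toy objective
    g = lambda th: th[0] + th[1] - 1.0                               # equality g = 0

    th = np.zeros(2)
    for mu in [1.0, 10.0, 100.0, 1000.0]:     # increase the penalty weight
        th = minimize(lambda t: chi2(t) + mu * g(t) ** 2, th).x
    print("solution:", th.round(4), "constraint residual:", round(g(th), 5))
    ```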

  2. Phylogenetic signal detection from an ancient rapid radiation: Effects of noise reduction, long-branch attraction, and model selection in crown clade Apocynaceae.

    Science.gov (United States)

    Straub, Shannon C K; Moore, Michael J; Soltis, Pamela S; Soltis, Douglas E; Liston, Aaron; Livshultz, Tatyana

    2014-11-01

    Crown clade Apocynaceae comprise seven primary lineages of lianas, shrubs, and herbs with a diversity of pollen aggregation morphologies including monads, tetrads, and pollinia, making them an ideal group for investigating the evolution and function of pollen packaging. Traditional molecular systematic approaches utilizing small amounts of sequence data have failed to resolve relationships along the spine of the crown clade, a likely ancient rapid radiation. The previous best estimate of the phylogeny was a five-way polytomy, leaving ambiguous the homology of aggregated pollen in two major lineages, the Periplocoideae, which possess pollen tetrads, and the milkweeds (Secamonoideae plus Asclepiadoideae), which possess pollinia. To assess whether greatly increased character sampling would resolve these relationships, a plastome sequence data matrix was assembled for 13 taxa of Apocynaceae, including nine newly generated complete plastomes, one partial new plastome, and three previously reported plastomes, collectively representing all primary crown clade lineages and outgroups. The effects of phylogenetic noise, long-branch attraction, and model selection (linked versus unlinked branch lengths among data partitions) were evaluated in a hypothesis-testing framework based on Shimodaira-Hasegawa tests. Discrimination among alternative crown clade resolutions was affected by all three factors. Exclusion of the noisiest alignment positions and topologies influenced by long-branch attraction resulted in a trichotomy along the spine of the crown clade consisting of Rhabdadenia+the Asian clade, Baisseeae+milkweeds, and Periplocoideae+the New World clade. Parsimony reconstruction on all optimal topologies after noise exclusion unambiguously supports parallel evolution of aggregated pollen in Periplocoideae (tetrads) and milkweeds (pollinia). Our phylogenomic approach has greatly advanced the resolution of one of the most perplexing radiations in Apocynaceae, providing the

  3. Potentials of satellite derived SIF products to constrain GPP simulated by the new ORCHIDEE-FluOR terrestrial model at the global scale

    Science.gov (United States)

    Bacour, C.; Maignan, F.; Porcar-Castell, A.; MacBean, N.; Goulas, Y.; Flexas, J.; Guanter, L.; Joiner, J.; Peylin, P.

    2016-12-01

    A new era for improving our knowledge of the terrestrial carbon cycle at the global scale has begun with recent studies on the relationships between remotely sensed Sun-Induced Fluorescence (SIF) and plant photosynthetic activity (GPP), and the availability of such satellite-derived products now "routinely" produced from GOSAT, GOME-2, or OCO-2 observations. Assimilating SIF data into terrestrial ecosystem models (TEMs) represents a novel opportunity to reduce the uncertainty of their predictions with respect to carbon-climate feedbacks, in particular the uncertainties resulting from inaccurate parameter values. A prerequisite is a correct representation in TEMs of the several drivers of plant fluorescence from the leaf to the canopy scale, and in particular the competing processes of photochemistry and non-photochemical quenching (NPQ). In this study, we present the first results of a global-scale assimilation of GOME-2 SIF products within a new version of the ORCHIDEE land surface model including a physical module of plant fluorescence. At the leaf level, the regulation of fluorescence yield is simulated both by the photosynthesis module of ORCHIDEE to calculate the photochemical yield and by a parametric model to estimate NPQ. The latter has been calibrated on leaf fluorescence measurements performed for boreal coniferous and Mediterranean vegetation species. A parametric representation of the SCOPE radiative transfer model is used to model the plant fluorescence fluxes for PSI and PSII and the scaling up to the canopy level. The ORCHIDEE-FluOR model is first evaluated with respect to in situ measurements of plant fluorescence flux and photochemical yield for scots pine and wheat. The potential of SIF data to constrain the modelled GPP is evaluated by assimilating one year of GOME-2 SIF products within ORCHIDEE-FluOR. We investigate in particular the changes in the spatial patterns of GPP following the optimization of the photosynthesis and phenology parameters

  4. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling, one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. Intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, detailed information can greatly change according to the method used, ranging from smooth, isotropic and short-range modifications to not-so-smooth, non-isotropic and long-range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but for the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution

  5. Intrinsically motivated action-outcome learning and goal-based action recall: a system-level bio-constrained computational model.

    Science.gov (United States)

    Baldassarre, Gianluca; Mannella, Francesco; Fiore, Vincenzo G; Redgrave, Peter; Gurney, Kevin; Mirolli, Marco

    2013-05-01

    Reinforcement (trial-and-error) learning in animals is driven by a multitude of processes. Most animals have evolved several sophisticated systems of 'extrinsic motivations' (EMs) that guide them to acquire behaviours allowing them to maintain their bodies, defend against threat, and reproduce. Animals have also evolved various systems of 'intrinsic motivations' (IMs) that allow them to acquire actions in the absence of extrinsic rewards. These actions are used later to pursue such rewards when they become available. Intrinsic motivations have been studied in Psychology for many decades and their biological substrates are now being elucidated by neuroscientists. In the last two decades, investigators in computational modelling, robotics and machine learning have proposed various mechanisms that capture certain aspects of IMs. However, we still lack models of IMs that attempt to integrate all key aspects of intrinsically motivated learning and behaviour while taking into account the relevant neurobiological constraints. This paper proposes a bio-constrained system-level model that contributes a major step towards this integration. The model focusses on three processes related to IMs and on the neural mechanisms underlying them: (a) the acquisition of action-outcome associations (internal models of the agent-environment interaction) driven by phasic dopamine signals caused by sudden, unexpected changes in the environment; (b) the transient focussing of visual gaze and actions on salient portions of the environment; (c) the subsequent recall of actions to pursue extrinsic rewards based on goal-directed reactivation of the representations of their outcomes. The tests of the model, including a series of selective lesions, show how the focussing processes lead to a faster learning of action-outcome associations, and how these associations can be recruited for accomplishing goal-directed behaviours. The model, together with the background knowledge reviewed in the paper

  6. Incompletely resolved phylogenetic trees inflate estimates of phylogenetic conservatism.

    Science.gov (United States)

    Davies, T Jonathan; Kraft, Nathan J B; Salamin, Nicolas; Wolkovich, Elizabeth M

    2012-02-01

    The tendency for more closely related species to share similar traits and ecological strategies can be explained by their longer shared evolutionary histories and represents phylogenetic conservatism. How strongly species traits co-vary with phylogeny can significantly impact how we analyze cross-species data and can influence our interpretation of assembly rules in the rapidly expanding field of community phylogenetics. Phylogenetic conservatism is typically quantified by analyzing the distribution of species values on the phylogenetic tree that connects them. Many phylogenetic approaches, however, assume a completely sampled phylogeny: while we have good estimates of deeper phylogenetic relationships for many species-rich groups, such as birds and flowering plants, we often lack information on more recent interspecific relationships (i.e., within a genus). A common solution has been to represent these relationships as polytomies on trees using taxonomy as a guide. Here we show that such trees can dramatically inflate estimates of phylogenetic conservatism quantified using S. P. Blomberg et al.'s K statistic. Using simulations, we show that even randomly generated traits can appear to be phylogenetically conserved on poorly resolved trees. We provide a simple rarefaction-based solution that can reliably retrieve unbiased estimates of K, and we illustrate our method using data on first flowering times from Thoreau's woods (Concord, Massachusetts, USA).
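
    Blomberg's K compares the observed trait variance to that expected under Brownian motion on the tree. A minimal numpy sketch, with inputs assumed to be a trait vector x and the n × n phylogenetic covariance matrix C implied by the tree, is:

    ```python
    import numpy as np

    def blomberg_k(x, C):
        """Blomberg et al.'s K for trait vector x (n taxa) and the Brownian-motion
        covariance matrix C implied by a fully resolved phylogeny."""
        n = len(x)
        Cinv = np.linalg.inv(C)
        one = np.ones(n)
        a = one @ Cinv @ x / (one @ Cinv @ one)      # phylogenetic GLS mean
        r = x - a
        mse0 = r @ r / (n - 1)                       # variance ignoring phylogeny
        mse = r @ Cinv @ r / (n - 1)                 # variance given phylogeny
        expected = (np.trace(C) - n / (one @ Cinv @ one)) / (n - 1)
        return (mse0 / mse) / expected               # K = 1 matches Brownian motion
    ```

    A rarefaction check in the spirit of the paper would repeatedly resolve polytomies at random, recompute C, and average the resulting K values.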

  7. Transforming phylogenetic networks: Moving beyond tree space

    OpenAIRE

    Huber, Katharina T.; Moulton, Vincent; Wu, Taoyang

    2016-01-01

    Phylogenetic networks are a generalization of phylogenetic trees that are used to represent reticulate evolution. Unrooted phylogenetic networks form a special class of such networks, which naturally generalize unrooted phylogenetic trees. In this paper we define two operations on unrooted phylogenetic networks, one of which is a generalization of the well-known nearest-neighbor interchange (NNI) operation on phylogenetic trees. We show that any unrooted phylogenetic network can be transforme...

  8. The lithospheric-scale 3D structural configuration of the North Alpine Foreland Basin constrained by gravity modelling and the calculation of the 3D load distribution

    Science.gov (United States)

    Przybycin, Anna M.; Scheck-Wenderoth, Magdalena; Schneider, Michael

    2014-05-01

    The North Alpine Foreland Basin is situated in the northern front of the European Alps and extends over parts of France, Switzerland, Germany and Austria. It formed as a wedge-shaped depression since the Tertiary in consequence of the Euro-Adriatic continental collision and the Alpine orogeny. The basin is filled with clastic sediments, the Molasse, originating from erosional processes of the Alps and underlain by Mesozoic sedimentary successions and a Paleozoic crystalline crust. For our study we have focused on the German part of the basin. To investigate the deep structure, the isostatic state and the load distribution of this region, we have constructed a 3D structural model of the basin and the Alpine area using available depth and thickness maps, regional-scale 3D structural models as well as seismic and well data for the sedimentary part. The crust (from the top Paleozoic down to the Moho (Grad et al. 2008)) has been considered as two-parted, with a lighter upper crust and a denser lower crust; the partition has been calculated following the approach of isostatic equilibrium of Pratt (1855). By implementing a seismic Lithosphere-Asthenosphere-Boundary (LAB) (Tesauro 2009), the crustal-scale model has been extended to the lithospheric scale. The layer geometry and the assigned bulk densities of this starting model have been constrained by means of 3D gravity modelling (BGI, 2012). Afterwards the 3D load distribution has been calculated using a 3D finite element method. Our results show that the North Alpine Foreland Basin is not isostatically balanced and that the configuration of the crystalline crust strongly controls the gravity field in this area. Furthermore, our results show that the basin area is influenced by varying lateral load differences down to a depth of more than 150 km, which allows a first-order estimate of the compensating horizontal stress required to prevent gravitational collapse of the system. BGI (2012). The International

  9. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q. This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...
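
    The definition translates directly into a naive quadratic-time sketch: the smallest circle enclosing two points p and q is the circle with diameter pq, so a pair qualifies when no other point falls strictly inside that circle. The implementation below is illustrative and ignores the indexing techniques an efficient RCJ algorithm would use.

    ```python
    import numpy as np

    def ring_constrained_join(P, Q):
        """Naive RCJ: pairs (p, q) whose smallest enclosing circle (the circle
        with diameter pq) contains no other point of P or Q strictly inside."""
        pts = np.vstack([P, Q])
        result = []
        for p in P:
            for q in Q:
                c = (p + q) / 2.0                       # circle center
                r = np.linalg.norm(p - q) / 2.0         # circle radius
                d = np.linalg.norm(pts - c, axis=1)
                if np.sum(d < r - 1e-12) == 0:          # p, q lie on the boundary
                    result.append((tuple(p), tuple(q)))
        return result

    P = np.array([[0.0, 0.0], [2.0, 0.0]])
    Q = np.array([[1.0, 0.1], [3.0, 3.0]])
    print(ring_constrained_join(P, Q))
    ```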

  10. Remarkable phylogenetic resolution of the most complex clade of Cyprinidae (Teleostei: Cypriniformes): a proof of concept of homology assessment and partitioning sequence data integrated with mixed model Bayesian analyses.

    Science.gov (United States)

    Tao, Wenjing; Mayden, Richard L; He, Shunping

    2013-03-01

    Despite many efforts to resolve evolutionary relationships among major clades of Cyprinidae, some nodes have been especially problematic and remain unresolved. In this study, we employ four nuclear gene fragments (3.3 kb) to infer interrelationships of the Cyprinidae. A reconstruction of the phylogenetic relationships within the family using maximum parsimony, maximum likelihood, and Bayesian analyses is presented. Among the taxa within the monophyletic Cyprinidae, Rasborinae is the basal-most lineage; Cyprininae is sister to Leuciscinae. The monophyly of the subfamilies Gobioninae, Leuciscinae and Acheilognathinae was resolved with high nodal support. Although our results do not completely resolve relationships within Cyprinidae, this study presents novel and significant findings having major implications for a highly diverse and enigmatic clade of East-Asian cyprinids. Within this monophyletic group five closely-related subgroups are identified. Tinca tinca, one of the most phylogenetically enigmatic genera in the family, is strongly supported as having evolutionary affinities with this East-Asian clade; an established yet remarkable association because of the natural variation in phenotypes and generalized ecological niches occupied by these taxa. Our results clearly argue that the choice of partitioning strategies has significant impacts on the phylogenetic reconstructions, especially when multiple genes are being considered. The most highly partitioned model (partitioned by codon positions within genes) extracts the strongest phylogenetic signals and performs better than any other partitioning scheme, as supported by the strongest 2Δln Bayes factor. Future studies should include higher levels of taxon sampling and partitioned, model-based analyses. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Ultrafast Approximation for Phylogenetic Bootstrap

    NARCIS (Netherlands)

    Bui Quang Minh; Nguyen, Thi; von Haeseler, Arndt

    Nonparametric bootstrap has been a widely used tool in phylogenetic analysis to assess the clade support of phylogenetic trees. However, with the rapidly growing amount of data, this task remains a computational bottleneck. Recently, approximation methods such as the RAxML rapid bootstrap (RBS) and

  12. Petrologically-constrained thermo-chemical modelling of cratonic upper mantle consistent with elevation, geoid, surface heat flow, seismic surface waves and MT data

    Science.gov (United States)

    Jones, A. G.; Afonso, J. C.

    2015-12-01

    The Earth comprises a single physio-chemical system that we interrogate from its surface and/or from space making observations related to various physical and chemical parameters. A change in one of those parameters affects many of the others; for example a change in velocity is almost always indicative of a concomitant change in density, which results in changes to elevation, gravity and geoid observations. Similarly, a change in oxide chemistry affects almost all physical parameters to a greater or lesser extent. We have now developed sophisticated tools to model/invert data in our individual disciplines to such an extent that we are obtaining high resolution, robust models from our datasets. However, in the vast majority of cases the different datasets are modelled/inverted independently of each other, and often even without considering other data in a qualitative sense. The LitMod framework of Afonso and colleagues presents integrated inversion of geoscientific data to yield thermo-chemical models that are petrologically consistent and constrained. Input data can comprise any combination of elevation, geoid, surface heat flow, seismic surface wave (Rayleigh and Love) data and receiver function data, and MT data. The basis of LitMod is characterization of the upper mantle in terms of five oxides in the CFMAS system and a thermal structure that is conductive to the LAB and convective along the adiabat below the LAB to the 410 km discontinuity. Candidate solutions are chosen from prior distributions of the oxides. For the crust, candidate solutions are chosen from distributions of crustal layering, velocity and density parameters. Those candidate solutions that fit the data within prescribed error limits are kept, and are used to establish broad posterior distributions from which new candidate solutions are chosen. Examples will be shown of application of this approach fitting data from the Kaapvaal Craton in South Africa and the Rae Craton in northern Canada.

  13. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA) along with other improvements to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analysis/reanalysis all tend to underestimate the sub-grid scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observation specified, we compare SCM results and observations and find that models have large biases on cloud properties which could not be fully explained by the uncertainty from the large-scale forcing

  14. A format for phylogenetic placements.

    Directory of Open Access Journals (Sweden)

    Frederick A Matsen

    Full Text Available We have developed a unified format for phylogenetic placements, that is, mappings of environmental sequence data (e.g., short reads) into a phylogenetic tree. We are motivated to do so by the growing number of tools for computing and post-processing phylogenetic placements, and the lack of an established standard for storing them. The format is lightweight, versatile, extensible, and is based on the JSON format, which can be parsed by most modern programming languages. Our format is already implemented in several tools for computing and post-processing parsimony- and likelihood-based phylogenetic placements and has worked well in practice. We believe that establishing a standard format for analyzing read placements at this early stage will lead to a more efficient development of powerful and portable post-analysis tools for the growing applications of phylogenetic placement.
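
    A placement file in this spirit might look like the following; the field names follow the published jplace specification as we understand it, while the tree and placement values are invented for illustration.

    ```python
    import json

    # Hypothetical minimal placement document; field names follow the published
    # jplace specification as we understand it, content is invented.
    placement_doc = {
        "version": 3,
        "tree": "((A:0.2{0},B:0.3{1}):0.1{2},C:0.4{3}){4};",   # edge numbers in {}
        "fields": ["edge_num", "likelihood", "like_weight_ratio",
                   "distal_length", "pendant_length"],
        "placements": [
            {"p": [[2, -1234.56, 0.87, 0.05, 0.11]],           # one placement row
             "n": ["read_0001"]}                               # the placed read
        ],
        "metadata": {"invocation": "toy example, not produced by any real tool"}
    }

    print(json.dumps(placement_doc, indent=2))
    ```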

  15. Cloning, Phylogenetic Analysis and 3D Modeling of a Putative Lysosomal Acid Lipase from the Camel, Camelus dromedarius

    Directory of Open Access Journals (Sweden)

    Farid Shokry Ataya

    2012-08-01

    Full Text Available Acid lipase belongs to a family of enzymes that is mainly present in lysosomes of different organs and the stomach. It is characterized by its capacity to withstand acidic conditions while maintaining high lipolytic activity. We cloned for the first time the full coding sequence of the camel's lysosomal acid lipase, cLIPA, using the RT-PCR technique (GenBank accession numbers JF803951 and AEG75815 for the nucleotide and amino acid sequences, respectively). The cDNA sequencing revealed an open reading frame of 1,197 nucleotides that encodes a protein of 399 amino acids, which was similar to that from other related mammalian species. Bioinformatic analysis was used to determine the amino acid sequence, 3D structure and phylogeny of cLIPA. Bioinformatics analysis suggested the molecular weight of the translated protein to be 45.57 kDa, which could be decreased to 43.16 kDa after the removal of a signal peptide comprising the first 21 amino acids. The deduced cLIPA sequences exhibited high identity with Equus caballus (86%), Nomascus leucogenys (85%), Homo sapiens (84%), Sus scrofa (84%), Bos taurus (82%) and Ovis aries (81%). cLIPA shows high amino acid sequence identity with human and dog gastric lipases (58% and 59%, respectively), which makes it relevant to build a 3D structure model for cLIPA. The comparison confirms the presence of the catalytic triad and the oxyanion hole in cLIPA. Phylogenetic analysis revealed that camel cLIPA is grouped with monkey, human, pig, cow and goat. The level of expression of cLIPA in five camel tissues was examined using Real Time-PCR. The highest level of cLIPA transcript was found in the camel testis (162%), followed by spleen (129%), liver (100%), kidney (20.5%) and lung (17.4%).

  16. Phylogenetic Inference of HIV Transmission Clusters

    Directory of Open Access Journals (Sweden)

    Vlad Novitsky

    2017-10-01

    Full Text Available Better understanding the structure and dynamics of HIV transmission networks is essential for designing the most efficient interventions to prevent new HIV transmissions, and ultimately for gaining control of the HIV epidemic. The inference of phylogenetic relationships and the interpretation of results rely on the definition of the HIV transmission cluster. The definition of the HIV cluster is complex and dependent on multiple factors, including the design of sampling, accuracy of sequencing, precision of sequence alignment, evolutionary models, the phylogenetic method of inference, and specified thresholds for cluster support. While the majority of studies focus on clusters, non-clustered cases could also be highly informative. A new dimension in the analysis of the global and local HIV epidemics is the concept of phylogenetically distinct HIV sub-epidemics. The identification of active HIV sub-epidemics reveals spreading viral lineages and may help in the design of targeted interventions. HIV clustering can also be affected by sampling density. Obtaining a proper sampling density may increase statistical power and reduce sampling bias, so sampling density should be taken into account in study design and in interpretation of phylogenetic results. Finally, recent advances in long-range genotyping may enable more accurate inference of HIV transmission networks. If performed in real time, it could both inform public-health strategies and be clinically relevant (e.g., drug-resistance testing).

  17. The best of both worlds: Phylogenetic eigenvector regression and mapping

    Directory of Open Access Journals (Sweden)

    José Alexandre Felizola Diniz Filho

    2015-09-01

    Full Text Available Eigenfunction analyses have been widely used to model patterns of autocorrelation in time, space and phylogeny. In a phylogenetic context, Diniz-Filho et al. (1998) proposed what they called Phylogenetic Eigenvector Regression (PVR), in which pairwise phylogenetic distances among species are submitted to a Principal Coordinate Analysis, and eigenvectors are then used as explanatory variables in regression, correlation or ANOVAs. More recently, a new approach called Phylogenetic Eigenvector Mapping (PEM) was proposed, with the main advantage of explicitly incorporating a model-based warping of phylogenetic distance, in which an Ornstein-Uhlenbeck (O-U) process is fitted to the data before eigenvector extraction. Here we compared PVR and PEM with respect to estimated phylogenetic signal, correlated evolution under alternative evolutionary models and phylogenetic imputation, using simulated data. Despite the similarity between the two approaches, PEM has a slightly higher prediction ability and is more general than the original PVR. Even so, in a conceptual sense, PEM may provide a technique in the best of both worlds, combining the flexibility of data-driven and empirical eigenfunction analyses and the sound insights provided by evolutionary models well known in comparative analyses.
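
    The PVR recipe described above reduces to a few lines of linear algebra. A minimal sketch of ours, in which random symmetric distances stand in for a real phylogenetic distance matrix:

    ```python
    import numpy as np

    def pvr_eigenvectors(D, k):
        """PVR steps 1-2: double-centre the squared pairwise phylogenetic
        distances (principal coordinate analysis) and return the first k
        eigenvectors, scaled by the square roots of their eigenvalues."""
        n = D.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        G = -0.5 * J @ (D ** 2) @ J              # Gower double-centring
        w, V = np.linalg.eigh(G)
        order = np.argsort(w)[::-1]              # largest eigenvalues first
        return V[:, order[:k]] * np.sqrt(np.maximum(w[order[:k]], 0.0))

    # PVR step 3: use the eigenvectors as explanatory variables.
    rng = np.random.default_rng(0)
    D = rng.uniform(1.0, 10.0, (8, 8))
    D = (D + D.T) / 2.0
    np.fill_diagonal(D, 0.0)
    X = pvr_eigenvectors(D, k=3)
    y = rng.normal(size=8)                       # trait values for 8 species
    design = np.column_stack([np.ones(8), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    print(beta)                                  # intercept + 3 slopes
    ```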

  18. Coalescent methods for estimating phylogenetic trees.

    Science.gov (United States)

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

  19. Could a secular increase in organic burial explain the rise of oxygen? Insights from a geological carbon cycle model constrained by the carbon isotope record

    Science.gov (United States)

    Krissansen-Totton, J.; Kipp, M.; Catling, D. C.

    2017-12-01

    The stable isotopes of carbon in marine sedimentary rock provide a window into the evolution of the Earth system. Conventionally, a relatively constant carbon isotope ratio in marine sedimentary rocks has been interpreted as implying constant organic carbon burial relative to total carbon burial. Because organic carbon burial corresponds to net oxygen production from photosynthesis, it follows that secular changes in the oxygen source flux cannot explain the dramatic rise of oxygen over Earth history. Instead, secular declines in oxygen sink fluxes are often invoked as causes for the rise of oxygen. However, constant fractional organic burial is difficult to reconcile with tentative evidence for low phosphate concentrations in the Archean ocean, which would imply lower marine productivity and—all else being equal—less organic carbon burial than today. The conventional interpretation of the carbon isotope record rests on the untested assumption that the isotopic ratio of carbon inputs into the ocean reflects mantle isotopic values throughout Earth history. In practice, differing rates of carbonate and organic weathering will allow for changes in isotopic inputs, as suggested by [1] and [2]. However, these inputs cannot vary freely because large changes in isotopic inputs would induce secular trends in carbon reservoirs, which are not observed in the isotope record. We apply a geological carbon cycle model to all of Earth history, tracking carbon isotopes in crustal, mantle, and ocean reservoirs. Our model is constrained by the carbon isotope record such that we can determine the extent to which large changes in organic burial are permitted. We find that both constant organic burial and 3-5 fold increases in organic burial since 4.0 Ga can be reconciled with the carbon isotope record. Changes in the oxygen source flux thus need to be reconsidered as a possible contributor to Earth's oxygenation. [1] L. A. Derry, Organic carbon cycling and the lithosphere, in Treatise on
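
    The conventional mass-balance interpretation that this abstract calls into question is a one-line computation. A sketch with typical textbook values; the specific numbers are illustrative, not the study's:

    ```python
    # Steady-state carbon isotope mass balance:
    #   delta_in = f_org * delta_org + (1 - f_org) * delta_carb,
    # and with photosynthetic fractionation eps = delta_carb - delta_org,
    # the organic burial fraction follows from the carbonate record:
    def f_org(delta_in=-5.0, delta_carb=0.0, eps=25.0):
        """Organic burial fraction implied by per-mil isotope values,
        *assuming* inputs stay at the mantle value delta_in."""
        return (delta_carb - delta_in) / eps

    print(f_org())   # ~0.2: the canonical "constant ~20% organic burial"
    # The study's point: if delta_in is allowed to drift from the mantle
    # value (differing carbonate vs organic weathering inputs), f_org is
    # no longer pinned down, and larger secular changes become admissible.
    ```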

  20. On Nakhleh's metric for reduced phylogenetic networks

    OpenAIRE

    Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente Feruglio, Gabriel Alejandro

    2009-01-01

    We prove that Nakhleh’s metric for reduced phylogenetic networks is also a metric on the classes of tree-child phylogenetic networks, semibinary tree-sibling time consistent phylogenetic networks, and multilabeled phylogenetic trees. We also prove that it separates distinguishable phylogenetic networks. In this way, it becomes the strongest dissimilarity measure for phylogenetic networks available so far. Furthermore, we propose a generalization of that metric that separates arbitrary phyl...

  1. Bayesian phylogenetic estimation of fossil ages.

    Science.gov (United States)

    Drummond, Alexei J; Stadler, Tanja

    2016-07-19

    Recent advances have allowed for both morphological fossil evidence and molecular sequences to be integrated into a single combined inference of divergence dates under the rule of Bayesian probability. In particular, the fossilized birth-death tree prior and the Lewis-Mk model of discrete morphological evolution allow for the estimation of both divergence times and phylogenetic relationships between fossil and extant taxa. We exploit this statistical framework to investigate the internal consistency of these models by producing phylogenetic estimates of the age of each fossil in turn, within two rich and well-characterized datasets of fossil and extant species (penguins and canids). We find that the estimation accuracy of fossil ages is generally high with credible intervals seldom excluding the true age and median relative error in the two datasets of 5.7% and 13.2%, respectively. The median relative standard deviation (RSD) was 9.2% and 7.2%, respectively, suggesting good precision, although with some outliers. In fact, in the two datasets we analyse, the phylogenetic estimate of fossil age is on average less than 2 Myr from the mid-point age of the geological strata from which it was excavated. The high level of internal consistency found in our analyses suggests that the Bayesian statistical model employed is an adequate fit for both the geological and morphological data, and provides evidence from real data that the framework used can accurately model the evolution of discrete morphological traits coded from fossil and extant taxa. We anticipate that this approach will have diverse applications beyond divergence time dating, including dating fossils that are temporally unconstrained, testing of the 'morphological clock', and for uncovering potential model misspecification and/or data errors when controversial phylogenetic hypotheses are obtained based on combined divergence dating analyses.This article is part of the themed issue 'Dating species divergences using

  2. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    International Nuclear Information System (INIS)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J.

    2010-01-01

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration was to minimize the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to

  3. Constraining magma physical properties and its temporal evolution from InSAR and topographic data only: a physics-based eruption model for the effusive phase of the Cordon Caulle 2011-2012 rhyodacitic eruption

    Science.gov (United States)

    Delgado, F.; Kubanek, J.; Anderson, K. R.; Lundgren, P.; Pritchard, M. E.

    2017-12-01

    The 2011-2012 eruption of Cordón Caulle volcano in Chile is the best scientifically observed rhyodacitic eruption and is thus a key place to understand the dynamics of these rare but powerful explosive rhyodacitic eruptions. Because the volatile phase controls both the eruption temporal evolution and the eruptive style, either explosive or effusive, it is important to constrain the physical parameters that drive these eruptions. The eruption began explosively and after two weeks evolved into a hybrid explosive - lava flow effusion whose volume-time evolution we constrain with a series of TanDEM-X Digital Elevation Models. Our data show the intrusion of a large-volume laccolith or cryptodome during the first 2.5 months of the eruption and lava flow effusion only afterwards, with a total volume of 1.4 km3. InSAR data from the ENVISAT and TerraSAR-X missions show more than 2 m of subsidence during the effusive eruption phase produced by deflation of a finite spheroidal source at a depth of 5 km. In order to constrain the magma total H2O content, crystal cargo, and reservoir pressure drop, we numerically solve the coupled set of equations of a pressurized magma reservoir, magma conduit flow and time-dependent density, volatile exsolution and viscosity, which we use to invert the InSAR and topographic data time series. We compare the best-fit model parameters with independent estimates of magma viscosity and total gas content measured from lava samples. Preliminary modeling shows that although it is not possible to model both the InSAR and the topographic data during the onset of the laccolith emplacement, it is possible to constrain the magma H2O and crystal content to 4 wt% and 30%, respectively, which agree well with published literature values.

  4. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40)

    Science.gov (United States)

    Ciarelli, Giancarlo; El Haddad, Imad; Bruns, Emily; Aksoyoglu, Sebnem; Möhler, Ottmar; Baltensperger, Urs; Prévôt, André S. H.

    2017-06-01

    In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ˜ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ˜ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30% after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10-11 to 4.0 × 10-11 cm3 molec-1 s-1. The average enthalpy of vaporization of secondary organic aerosol

  5. Constraining a hybrid volatility basis-set model for aging of wood-burning emissions using smog chamber experiments: a box-model study based on the VBS scheme of the CAMx model (v5.40

    Directory of Open Access Journals (Sweden)

    G. Ciarelli

    2017-06-01

    Full Text Available In this study, novel wood combustion aging experiments performed at different temperatures (263 and 288 K) in a ∼ 7 m3 smog chamber were modelled using a hybrid volatility basis set (VBS) box model, representing the emission partitioning and their oxidation against OH. We combine aerosol-chemistry box-model simulations with unprecedented measurements of non-traditional volatile organic compounds (NTVOCs) from a high-resolution proton transfer reaction mass spectrometer (PTR-MS) and with organic aerosol measurements from an aerosol mass spectrometer (AMS). Due to this, we are able to observationally constrain the amounts of different NTVOC aerosol precursors (in the model) relative to low volatility and semi-volatile primary organic material (OMsv), which is partitioned based on current published volatility distribution data. By comparing the NTVOC / OMsv ratios at different temperatures, we determine the enthalpies of vaporization of primary biomass-burning organic aerosols. Further, the developed model allows for evaluating the evolution of oxidation products of the semi-volatile and volatile precursors with aging. More than 30 000 box-model simulations were performed to retrieve the combination of parameters that best fit the observed organic aerosol mass and O : C ratios. The parameters investigated include the NTVOC reaction rates and yields as well as enthalpies of vaporization and the O : C of secondary organic aerosol surrogates. Our results suggest an average ratio of NTVOCs to the sum of non-volatile and semi-volatile organic compounds of ∼ 4.75. The mass yields of these compounds determined for a wide range of atmospherically relevant temperatures and organic aerosol (OA) concentrations were predicted to vary between 8 and 30% after 5 h of continuous aging. Based on the reaction scheme used, reaction rates of the NTVOC mixture range from 3.0 × 10−11 to 4.0 × 10−11 cm3 molec−1 s−1.
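
    The equilibrium gas/particle partitioning calculation at the core of any VBS box model is compact enough to sketch. This is a generic illustration of the standard VBS partitioning equations, not the CAMx scheme itself; the bin values are invented:

    ```python
    import numpy as np

    def vbs_partition(C_tot, C_star, n_iter=200):
        """Equilibrium partitioning over volatility bins.  The
        particle-phase fraction of bin i is xi_i = 1/(1 + C*_i/C_OA);
        C_OA appears on both sides, so iterate to a fixed point."""
        C_OA = 1.0                                # initial guess, ug m^-3
        for _ in range(n_iter):
            xi = 1.0 / (1.0 + C_star / C_OA)
            C_OA = float(np.sum(C_tot * xi))
        return C_OA, xi

    C_star = np.array([0.1, 1.0, 10.0, 100.0])    # saturation conc., ug m^-3
    C_tot = np.array([0.5, 1.0, 2.0, 4.0])        # total mass per bin
    C_OA, xi = vbs_partition(C_tot, C_star)
    print(round(C_OA, 3), np.round(xi, 3))        # OA mass and bin fractions
    ```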

  6. A less-constrained (2,0) super-Yang-Mills model: the coupling to non-linear σ-models

    International Nuclear Information System (INIS)

    Almeida, C.A.S.; Doria, R.M.

    1990-01-01

    Considering a class of (2,0) super-Yang-Mills multiplets characterised by the appearance of a pair of independent gauge potentials, we present here their coupling to non-linear σ-models in (2,0)-superspace. Contrary to the case of the coupling to (2,0) matter superfields, the extra gauge potential present in the Yang-Mills sector does not decouple from the theory in the case where one gauges isometry groups of σ-models. (author)

  7. Point estimates in phylogenetic reconstructions

    OpenAIRE

    Benner, Philipp; Bacak, Miroslav; Bourguignon, Pierre-Yves

    2013-01-01

    Motivation: The construction of statistics for summarizing posterior samples returned by a Bayesian phylogenetic study has so far been hindered by the poor geometric insights available into the space of phylogenetic trees, and ad hoc methods such as the derivation of a consensus tree make up for the ill-definition of the usual concept of posterior mean, while bootstrap methods mitigate the absence of a sound concept of variance. Yielding satisfactory results with sufficiently concentrated pos...

  8. Phylogenetic lineages in Pseudocercospora.

    Science.gov (United States)

    Crous, P W; Braun, U; Hunter, G C; Wingfield, M J; Verkley, G J M; Shin, H-D; Nakashima, C; Groenewald, J Z

    2013-06-30

    Pseudocercospora is a large cosmopolitan genus of plant pathogenic fungi that are commonly associated with leaf and fruit spots as well as blights on a wide range of plant hosts. They occur in arid as well as wet environments and in a wide range of climates including cool temperate, sub-tropical and tropical regions. Pseudocercospora is now treated as a genus in its own right, although formerly recognised as either an anamorphic state of Mycosphaerella or having mycosphaerella-like teleomorphs. The aim of this study was to sequence the partial 28S nuclear ribosomal RNA gene of a selected set of isolates to resolve phylogenetic generic limits within the Pseudocercospora complex. From these data, 14 clades are recognised, six of which cluster in Mycosphaerellaceae. Pseudocercospora s. str. represents a distinct clade, sister to Passalora eucalypti, and a clade representing the genera Scolecostigmina, Trochophora and Pallidocercospora gen. nov., taxa formerly accommodated in the Mycosphaerella heimii complex and characterised by smooth, pale brown conidia, as well as the formation of red crystals in agar media. Other clades in Mycosphaerellaceae include Sonderhenia, Microcyclosporella, and Paracercospora. Pseudocercosporella resides in a large clade along with Phloeospora, Miuraea, Cercospora and Septoria. Additional clades represent Dissoconiaceae, Teratosphaeriaceae, Cladosporiaceae, and the genera Xenostigmina, Strelitziana, Cyphellophora and Thedgonia. The genus Phaeomycocentrospora is introduced to accommodate Mycocentrospora cantuariensis, primarily distinguished from Pseudocercospora based on its hyaline hyphae, broad conidiogenous loci and hila. Host specificity was considered for 146 species of Pseudocercospora occurring on 115 host genera from 33 countries. Partial nucleotide sequence data for three gene loci, ITS, EF-1α, and ACT suggest that the majority of these species are host specific. Species identified on the basis of host, symptomatology and general

  9. Simulating secondary organic aerosol in a regional air quality model using the statistical oxidation model – Part 1: Assessing the influence of constrained multi-generational ageing

    Directory of Open Access Journals (Sweden)

    S. H. Jathar

    2016-02-01

    Full Text Available Multi-generational oxidation of volatile organic compound (VOC) oxidation products can significantly alter the mass, chemical composition and properties of secondary organic aerosol (SOA) compared to calculations that consider only the first few generations of oxidation reactions. However, the most commonly used state-of-the-science schemes in 3-D regional or global models that account for multi-generational oxidation (1) consider only functionalization reactions but do not consider fragmentation reactions, (2) have not been constrained to experimental data and (3) are added on top of existing parameterizations. The incomplete description of multi-generational oxidation in these models has the potential to bias source apportionment and control calculations for SOA. In this work, we used the statistical oxidation model (SOM) of Cappa and Wilson (2012), constrained by experimental laboratory chamber data, to evaluate the regional implications of multi-generational oxidation considering both functionalization and fragmentation reactions. SOM was implemented into the regional University of California at Davis / California Institute of Technology (UCD/CIT) air quality model and applied to air quality episodes in California and the eastern USA. The mass, composition and properties of SOA predicted using SOM were compared to SOA predictions generated by a traditional two-product model to fully investigate the impact of explicit and self-consistent accounting of multi-generational oxidation. Results show that SOA mass concentrations predicted by the UCD/CIT-SOM model are very similar to those predicted by a two-product model when both models use parameters that are derived from the same chamber data. Since the two-product model does not explicitly resolve multi-generational oxidation reactions, this finding suggests that the chamber data used to parameterize the models captures the majority of the SOA mass formation from multi-generational oxidation under

  10. Phylogenetic classification of bony fishes.

    Science.gov (United States)

    Betancur-R, Ricardo; Wiley, Edward O; Arratia, Gloria; Acero, Arturo; Bailly, Nicolas; Miya, Masaki; Lecointre, Guillaume; Ortí, Guillermo

    2017-07-06

    Fish classifications, as those of most other taxonomic groups, are being transformed drastically as new molecular phylogenies provide support for natural groups that were unanticipated by previous studies. A brief review of the main criteria used by ichthyologists to define their classifications during the last 50 years, however, reveals slow progress towards using an explicit phylogenetic framework. Instead, the trend has been to rely, in varying degrees, on deep-rooted anatomical concepts and authority, often mixing taxa with explicit phylogenetic support with arbitrary groupings. Two leading sources in ichthyology frequently used for fish classifications (JS Nelson's volumes of Fishes of the World and W. Eschmeyer's Catalog of Fishes) fail to adopt a global phylogenetic framework despite much recent progress made towards the resolution of the fish Tree of Life. The first explicit phylogenetic classification of bony fishes was published in 2013, based on a comprehensive molecular phylogeny (www.deepfin.org). We here update the first version of that classification by incorporating the most recent phylogenetic results. The updated classification presented here is based on phylogenies inferred using molecular and genomic data for nearly 2000 fishes. A total of 72 orders (and 79 suborders) are recognized in this version, compared with 66 orders in version 1. The phylogeny resolves placement of 410 families, or ~80% of the total of 514 families of bony fishes currently recognized. The ordinal status of 30 percomorph families included in this study, however, remains uncertain (incertae sedis in the series Carangaria, Ovalentaria, or Eupercaria). Comments to support taxonomic decisions and comparisons with conflicting taxonomic groups proposed by others are presented. We also highlight cases where morphological support exists for the groups being classified. This version of the phylogenetic classification of bony fishes is substantially improved, providing resolution

  11. Locating a tree in a phylogenetic network

    NARCIS (Netherlands)

    Iersel, van L.J.J.; Semple, C.; Steel, M.A.

    2010-01-01

    Phylogenetic trees and networks are leaf-labelled graphs that are used to describe evolutionary histories of species. The Tree Containment problem asks whether a given phylogenetic tree is embedded in a given phylogenetic network. Given a phylogenetic network and a cluster of species, the Cluster

  12. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that, based on a set of conditional probabilities, can help in choosing the large number of parameters of the model in order to obtain a stationary model. Explicit results are given for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme, thus obtaining lower bounds on the entropy of the fields considered. These lower bounds are very tight for the run-length-limited fields. Explicit bounds are given for the diamond constrained field as well.
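
    As a one-dimensional illustration of the bit-stuffing idea that such work extends to two-dimensional checkerboard fields, here is a minimal (0, k) run-length-limited encoder/decoder pair. This is our sketch of the generic technique, not the thesis's scheme:

    ```python
    def stuff(bits, k):
        """Bit-stuffing encoder for the 1-D (0, k) constraint: no more
        than k consecutive zeros.  After k zeros in a row, a '1' is
        inserted; the decoder removes it deterministically."""
        out, run = [], 0
        for b in bits:
            out.append(b)
            run = run + 1 if b == 0 else 0
            if run == k:
                out.append(1)      # stuffed bit
                run = 0
        return out

    def unstuff(bits, k):
        out, run, i = [], 0, 0
        while i < len(bits):
            out.append(bits[i])
            run = run + 1 if bits[i] == 0 else 0
            if run == k:
                i += 1             # skip the stuffed '1'
                run = 0
            i += 1
        return out

    data = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
    coded = stuff(data, k=3)
    assert unstuff(coded, k=3) == data
    print(coded)   # [0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0]
    ```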

  13. Structure-constrained sparse canonical correlation analysis with an application to microbiome data analysis.

    Science.gov (United States)

    Chen, Jun; Bushman, Frederic D; Lewis, James D; Wu, Gary D; Li, Hongzhe

    2013-04-01

    Motivated by studying the association between nutrient intake and human gut microbiome composition, we developed a method for structure-constrained sparse canonical correlation analysis (ssCCA) in a high-dimensional setting. ssCCA takes into account the phylogenetic relationships among bacteria, which provides important prior knowledge on evolutionary relationships among bacterial taxa. Our ssCCA formulation utilizes a phylogenetic structure-constrained penalty function to impose certain smoothness on the linear coefficients according to the phylogenetic relationships among the taxa. An efficient coordinate descent algorithm is developed for optimization. A human gut microbiome data set is used to illustrate this method. Both simulations and real data applications show that ssCCA performs better than the standard sparse CCA in identifying meaningful variables when there are structures in the data.
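
    The structure-constrained penalty idea can be sketched in a few lines: build a similarity graph from phylogenetic distances and penalise coefficient vectors that vary sharply across related taxa. This illustrates the penalty term only, not the full ssCCA optimisation; the distances and loadings are random stand-ins:

    ```python
    import numpy as np

    def phylo_laplacian(D, sigma=1.0):
        """Graph Laplacian of a taxon-similarity graph derived from
        pairwise phylogenetic distances D."""
        W = np.exp(-(D / sigma) ** 2)        # similarity from distances
        np.fill_diagonal(W, 0.0)
        return np.diag(W.sum(axis=1)) - W

    def smoothness_penalty(beta, L):
        return beta @ L @ beta               # small when related taxa agree

    rng = np.random.default_rng(1)
    D = rng.uniform(0.5, 3.0, (6, 6))
    D = (D + D.T) / 2.0
    np.fill_diagonal(D, 0.0)
    L = phylo_laplacian(D)
    smooth = np.ones(6)                      # identical loadings
    rough = rng.normal(size=6)               # arbitrary loadings
    print(smoothness_penalty(smooth, L))     # ~0: maximally smooth
    print(smoothness_penalty(rough, L))      # larger
    ```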

  14. Coherent and incoherent giant dipole resonance γ-ray emission induced by heavy ion collisions: Study of the 40Ca+48Ca system by means of the constrained molecular dynamics model

    International Nuclear Information System (INIS)

    Papa, Massimo; Cardella, Giuseppe; Bonanno, Antonio; Pappalardo, Giuseppe; Rizzo, Francesca; Amorini, Francesca; Bonasera, Aldo; Di Pietro, Alessia; Figuera, Pier Paolo; Tudisco, Salvatore; Maruyama, Toshiki

    2003-01-01

    Coherent and incoherent dipolar γ-ray emission is studied in a fully dynamical approach by means of the constrained molecular dynamics model. The study is focused on the system 40Ca + 48Ca, for which experimental data have recently been collected at 25 MeV/nucleon. The approach allows us to explain the experimental results in a self-consistent way without using statistical or hybrid models. Moreover, calculations performed at higher energy show interesting correlations between the fragment formation process, the degree of collectivity, and the coherence degree of the γ-ray emission process.

  15. Secondary Structural Models (16S rRNA) of Polyhydroxyalkanoates Producing Bacillus Species Isolated from Different Rhizospheric Soil: Phylogenetics and Chemical Analysis

    Directory of Open Access Journals (Sweden)

    Swati Mohapatra

    2016-09-01

    Full Text Available Bacterial isolates producing polyhydroxyalkanoates (PHAs) are gaining more importance over the world due to the synthesis of a biodegradable polymer which is extremely desirable as a substitute for synthetic plastics. PHAs are produced by various microorganisms under certain stress conditions. In this study, sixteen bacterial isolates characterized previously by partial 16S rRNA gene sequencing (NCBI Accession No. KF626466 to KF626481) were again stained with Nile red after three years of preservation in order to confirm their ability to accumulate PHAs. Phylogenetic analysis carried out in the present investigation provided evidence that bacterial species belonging to the genus Bacillus are the dominant flora of the rhizospheric region, with the potential to produce the biodegradable polymer (PHAs). In addition, RNA secondary structure prediction suggested that there is no direct correlation between RNA folding pattern stability and the rate of PHA production among the selected isolates of the genus Bacillus.

  16. Locating a tree in a phylogenetic network

    OpenAIRE

    van Iersel, Leo; Semple, Charles; Steel, Mike

    2010-01-01

    Phylogenetic trees and networks are leaf-labelled graphs that are used to describe evolutionary histories of species. The Tree Containment problem asks whether a given phylogenetic tree is embedded in a given phylogenetic network. Given a phylogenetic network and a cluster of species, the Cluster Containment problem asks whether the given cluster is a cluster of some phylogenetic tree embedded in the network. Both problems are known to be NP-complete in general. In this article, we consider t...

  17. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  18. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
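
    A toy one-predictor version of the two-stage logic (maximise the ordinal fit first, then least squares among the ordinal optima) can be sketched as follows. In one dimension the ordinal stage merely fixes the sign of the slope, but the structure of the algorithm is visible; this is our sketch, not the published OCLO code:

    ```python
    import numpy as np

    def kendall_tau(a, b):
        """O(n^2) Kendall rank correlation."""
        n = len(a)
        s = sum(np.sign(a[i] - a[j]) * np.sign(b[i] - b[j])
                for i in range(n) for j in range(i + 1, n))
        return s / (n * (n - 1) / 2)

    def oclo_1d(x, y, grid):
        """Among candidate slopes that maximise the ordinal fit (tau
        between fitted and observed values), return the one with the
        best least-squares fit."""
        taus = [kendall_tau(b * x, y) for b in grid]
        t_max = max(taus)
        candidates = [b for b, t in zip(grid, taus) if t == t_max]
        sse = [np.sum((y - b * x) ** 2) for b in candidates]
        return candidates[int(np.argmin(sse))]

    rng = np.random.default_rng(2)
    x = rng.normal(size=30)
    y = 2.0 * x + rng.normal(scale=0.5, size=30)
    print(oclo_1d(x, y, grid=np.linspace(-5, 5, 201)))   # close to 2.0
    ```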

  19. A Consistent Phylogenetic Backbone for the Fungi

    Science.gov (United States)

    Ebersberger, Ingo; de Matos Simoes, Ricardo; Kupczok, Anne; Gube, Matthias; Kothe, Erika; Voigt, Kerstin; von Haeseler, Arndt

    2012-01-01

    The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST encoded data—a common practice in phylogenomic analyses—introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses. PMID:22114356

  20. Phylogenetic reconstruction methods: an overview.

    Science.gov (United States)

    De Bruyn, Alexandre; Martin, Darren P; Lefeuvre, Pierre

    2014-01-01

    Initially designed to infer evolutionary relationships based on morphological and physiological characters, phylogenetic reconstruction methods have greatly benefited from recent developments in molecular biology and sequencing technologies with a number of powerful methods having been developed specifically to infer phylogenies from macromolecular data. This chapter, while presenting an overview of basic concepts and methods used in phylogenetic reconstruction, is primarily intended as a simplified step-by-step guide to the construction of phylogenetic trees from nucleotide sequences using fairly up-to-date maximum likelihood methods implemented in freely available computer programs. While the analysis of chloroplast sequences from various Vanilla species is used as an illustrative example, the techniques covered here are relevant to the comparative analysis of homologous sequences datasets sampled from any group of organisms.

  1. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  2. Interpreting the universal phylogenetic tree

    Science.gov (United States)

    Woese, C. R.

    2000-01-01

    The universal phylogenetic tree not only spans all extant life, but its root and earliest branchings represent stages in the evolutionary process before modern cell types had come into being. The evolution of the cell is an interplay between vertically derived and horizontally acquired variation. Primitive cellular entities were necessarily simpler and more modular in design than are modern cells. Consequently, horizontal gene transfer early on was pervasive, dominating the evolutionary dynamic. The root of the universal phylogenetic tree represents the first stage in cellular evolution when the evolving cell became sufficiently integrated and stable to the erosive effects of horizontal gene transfer that true organismal lineages could exist.

  3. Phylogenetic comparative methods complement discriminant function analysis in ecomorphology.

    Science.gov (United States)

    Barr, W Andrew; Scott, Robert S

    2014-04-01

    In ecomorphology, Discriminant Function Analysis (DFA) has been used as evidence for the presence of functional links between morphometric variables and ecological categories. Here we conduct simulations of characters containing phylogenetic signal to explore the performance of DFA under a variety of conditions. Characters were simulated using a phylogeny of extant antelope species from known habitats. Characters were modeled with no biomechanical relationship to the habitat category; the only sources of variation were body mass, phylogenetic signal, or random "noise." DFA on the discriminability of habitat categories was performed using subsets of the simulated characters, and Phylogenetic Generalized Least Squares (PGLS) was performed for each character. Analyses were repeated with randomized habitat assignments. When simulated characters lacked phylogenetic signal and/or habitat assignments were random, […] ecomorphology. Copyright © 2013 Wiley Periodicals, Inc.

  4. Efficient Detection of Repeating Sites to Accelerate Phylogenetic Likelihood Calculations.

    Science.gov (United States)

    Kobert, K; Stamatakis, A; Flouri, T

    2017-03-01

    The phylogenetic likelihood function (PLF) is the major computational bottleneck in several applications of evolutionary biology such as phylogenetic inference, species delimitation, model selection, and divergence times estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted for improving run-time and, using appropriate data structures, reducing memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory savings attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 12-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the PLF currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation. [Algorithms; maximum likelihood; phylogenetic likelihood function; phylogenetics]. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
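
    The simplest form of the redundancy being exploited is whole-column repetition, which can be removed before any likelihood computation; the method above generalises this to repeats within subtrees. A minimal sketch of ours for the column-level version:

    ```python
    from collections import Counter

    def compress_sites(alignment):
        """Collapse identical alignment columns ('site patterns') so the
        conditional likelihood of each distinct pattern is computed once
        and weighted by its count."""
        columns = zip(*alignment)                 # iterate over sites
        counts = Counter(columns)
        patterns = list(counts.keys())
        weights = [counts[p] for p in patterns]
        return patterns, weights

    aln = ["ACGTACGA",       # one string per taxon, one char per site
           "ACGTACGA",
           "ACTTACTA"]
    patterns, weights = compress_sites(aln)
    print(len(aln[0]), "sites ->", len(patterns), "patterns", weights)
    # 8 sites -> 4 patterns [3, 2, 2, 1]
    ```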

  5. Recursive algorithms for phylogenetic tree counting.

    Science.gov (United States)

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm that is polynomial in the number of sampled individuals for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
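
    For the easy case mentioned above, contemporaneous samples with no constraints, the count of fully ranked trees has a classic closed form that a short function reproduces; the paper's recursions generalise this to constraint trees and sampled ancestors:

    ```python
    from math import comb

    def n_ranked_trees(n):
        """Classic count of labelled, fully ranked binary trees when all
        n tips are sampled at the same time: going backwards in time,
        the k-th coalescence merges one of C(k, 2) pairs, so the total
        is the product of C(k, 2) for k = n, n-1, ..., 2."""
        total = 1
        for k in range(2, n + 1):
            total *= comb(k, 2)
        return total

    for n in (2, 3, 4, 10):
        print(n, n_ranked_trees(n))   # 1, 3, 18, ...
    ```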

  6. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource-constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags, where cost and energy constraints drastically limit the solution complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices.

  7. Climate-driven extinctions shape the phylogenetic structure of temperate tree floras.

    Science.gov (United States)

    Eiserhardt, Wolf L; Borchsenius, Finn; Plum, Christoffer M; Ordonez, Alejandro; Svenning, Jens-Christian

    2015-03-01

    When taxa go extinct, unique evolutionary history is lost. If extinction is selective, and the intrinsic vulnerabilities of taxa show phylogenetic signal, more evolutionary history may be lost than expected under random extinction. Under what conditions this occurs is insufficiently known. We show that late Cenozoic climate change induced phylogenetically selective regional extinction of northern temperate trees because of phylogenetic signal in cold tolerance, leading to significantly and substantially larger than random losses of phylogenetic diversity (PD). The surviving floras in regions that experienced stronger extinction are phylogenetically more clustered, indicating that non-random losses of PD are of increasing concern with increasing extinction severity. Using simulations, we show that a simple threshold model of survival given a physiological trait with phylogenetic signal reproduces our findings. Our results send a strong warning that we may expect future assemblages to be phylogenetically and possibly functionally depauperate if anthropogenic climate change affects taxa similarly. © 2015 John Wiley & Sons Ltd/CNRS.
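
    The threshold model described above is easy to reproduce in miniature: simulate a Brownian-motion trait on a tree (so the trait carries phylogenetic signal), remove the tips below a tolerance threshold, and compare the surviving phylogenetic diversity with that left by random extinction of the same number of taxa. A toy sketch of ours, using a fixed 8-taxon balanced tree with unit branch lengths, not the study's simulations:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Balanced 8-taxon tree as parent pointers; node 14 is the root,
    # nodes 0-7 are tips, every edge has length 1.
    parent = {0: 8, 1: 8, 2: 9, 3: 9, 4: 10, 5: 10, 6: 11, 7: 11,
              8: 12, 9: 12, 10: 13, 11: 13, 12: 14, 13: 14}

    def bm_tip_values(sigma=1.0):
        """Brownian-motion trait: independent increments on every edge,
        so closely related tips end up with similar values, i.e. the
        trait shows phylogenetic signal."""
        val = {14: 0.0}
        for node in sorted(parent, reverse=True):   # parents before children
            val[node] = val[parent[node]] + rng.normal(0, sigma)
        return np.array([val[i] for i in range(8)])

    def pd(survivors):
        """Phylogenetic diversity: edges connecting survivors to the root."""
        used = set()
        for tip in survivors:
            n = tip
            while n in parent:
                used.add((n, parent[n]))
                n = parent[n]
        return len(used)

    trait = bm_tip_values()
    threshold = np.median(trait)                   # tolerance cut-off
    selective = [i for i in range(8) if trait[i] >= threshold]
    random_set = list(rng.choice(8, size=len(selective), replace=False))
    print("PD after selective extinction:", pd(selective))
    print("PD after random extinction:  ", pd(random_set))
    ```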

  8. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...
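
    The first scenario has a simple information-theoretic flavour that is easy to demonstrate: reordering n labelled resources can convey up to log2(n!) bits, and any constraint imposed by the primary system shrinks the usable permutation set. A sketch of ours with an invented toy constraint (each packet may shift by at most one slot), not the paper's model:

    ```python
    from math import factorial, log2
    from itertools import permutations

    def count_constrained(n, max_shift=1):
        """Permutations of n slots in which no element moves more than
        max_shift positions from its original slot."""
        return sum(1 for p in permutations(range(n))
                   if all(abs(p[i] - i) <= max_shift for i in range(n)))

    for n in range(2, 7):
        print(n, round(log2(factorial(n)), 2), "bits unconstrained;",
              round(log2(count_constrained(n)), 2), "bits constrained")
    ```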

  9. Phylogenetic relationships among Maloideae species

    Science.gov (United States)

    The Maloideae is a highly diverse sub-family of the Rosaceae containing several agronomically important species (Malus sp. and Pyrus sp.) and their wild relatives. Previous phylogenetic work within the group has revealed extensive intergeneric hybridization and polyploidization. In order to develop...

  10. Phylogenetics of neotropical Platymiscium (Leguminosae

    DEFF Research Database (Denmark)

    Saslis-Lagoudakis, C. Haris; Chase, Mark W; Robinson, Daniel N

    2008-01-01

    Platymiscium is a neotropical legume genus of forest trees in the Pterocarpus clade of the pantropical "dalbergioid" clade. It comprises 19 species (29 taxa), distributed from Mexico to southern Brazil. This study presents a molecular phylogenetic analysis of Platymiscium and allies inferred from...

  11. Slope constrained Topology Optimization

    DEFF Research Database (Denmark)

    Petersson, J.; Sigmund, Ole

    1998-01-01

    The problem of minimum compliance topology optimization of an elastic continuum is considered. A general continuous density-energy relation is assumed, including variable thickness sheet models and artificial power laws. To ensure existence of solutions, the design set is restricted by enforcing...

  12. Quantity Constrained General Equilibrium

    NARCIS (Netherlands)

    Babenko, R.; Talman, A.J.J.

    2006-01-01

    In a standard general equilibrium model it is assumed that there are no price restrictions and that prices adjust infinitely fast to their equilibrium values. In case of price restrictions a general equilibrium may not exist and rationing on net demands or supplies is needed to clear the markets. In

  13. Constraining curvatonic reheating

    Energy Technology Data Exchange (ETDEWEB)

    Hardwick, Robert J.; Vennin, Vincent; Koyama, Kazuya; Wands, David, E-mail: robert.hardwick@port.ac.uk, E-mail: vincent.vennin@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: david.wands@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom)

    2016-08-01

    We derive the first systematic observational constraints on reheating in models of inflation where an additional light scalar field contributes to primordial density perturbations and affects the expansion history during reheating. This encompasses the original curvaton model but also covers a larger class of scenarios. We find that, compared to the single-field case, lower values of the energy density at the end of inflation and of the reheating temperature are preferred when an additional scalar field is introduced. For instance, if inflation is driven by a quartic potential, which is one of the most favoured models when a light scalar field is added, the upper bound T_reh < 5 × 10^4 GeV on the reheating temperature T_reh is derived, and the implications of this value on post-inflationary physics are discussed. The information gained about reheating is also quantified and it is found that it remains modest in plateau inflation (though still larger than in the single-field version of the model) but can become substantial in quartic inflation. The role played by the vev of the additional scalar field at the end of inflation is highlighted, and opens interesting possibilities for exploring stochastic inflation effects that could determine its distribution.

  14. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values up to a factor of 2, and change mass ratios of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  15. Capacity Constrained Routing Algorithms for Evacuation Route Planning

    National Research Council Canada - National Science Library

    Lu, Qingsong; George, Betsy; Shekhar, Shashi

    2006-01-01

    .... In this paper, we propose a new approach, namely a capacity constrained routing planner which models capacity as a time series and generalizes shortest path algorithms to incorporate capacity constraints...

  16. Using High Resolution Simulations with WRF/SSiB Regional Climate Model Constrained by In Situ Observations to Assess the Impacts of Dust in Snow in the Upper Colorado River Basin

    Science.gov (United States)

    Oaida, C. M.; Skiles, M.; Painter, T. H.; Xue, Y.

    2015-12-01

    The mountain snowpack is an essential resource for both the environment and society. Observational and energy-balance modeling work has shown that dust on snow (DOS) in the western U.S. (WUS) is a major contributor to snow processes, including snowmelt timing and runoff amount, in regions like the Upper Colorado River Basin (UCRB). In order to accurately estimate the impact of DOS on the hydrologic cycle and water resources, now and under a changing climate, we need to be able to (1) adequately simulate the snowpack (accumulation), and (2) realistically represent DOS processes in models. Energy-balance models do not capture the impact on a broader local or regional scale, nor the land-atmosphere feedbacks, while GCM studies cannot resolve orographic precipitation processes, and therefore snowpack accumulation, owing to coarse spatial resolution and smoother terrain. All this implies that the impacts of dust on snow on the mountain snowpack and other hydrologic processes are likely not well captured in current modeling studies. Recent increases in computing power allow RCMs to be used at higher spatial resolutions, while recent in situ observations of dust-in-snow properties can help constrain modeling simulations. Therefore, in the work presented here, we take advantage of these latest resources to address some of the challenges outlined above. We employ the newly enhanced WRF/SSiB regional climate model at 4 km horizontal resolution. This scale has been shown by others to be adequate in capturing orographic processes over the WUS. We also constrain the magnitude of dust deposition provided by a global chemistry and transport model with in situ measurements taken at sites in the UCRB. Furthermore, we adjust the dust absorptive properties based on observed values at these sites, as opposed to generic global ones. This study aims to improve simulation of the impact of dust in snow on the hydrologic cycle and related water resources.

  17. Phylogenetic Signal in AFLP Data Sets

    NARCIS (Netherlands)

    Koopman, W.J.M.

    2005-01-01

    AFLP markers provide a potential source of phylogenetic information for molecular systematic studies. However, there are properties of restriction fragment data that limit phylogenetic interpretation of AFLPs. These are (a) possible nonindependence of fragments, (b) problems of homology assignment

  18. Phylogenetic Position of Barbus lacerta Heckel, 1843

    Directory of Open Access Journals (Sweden)

    Mustafa Korkmaz

    2015-11-01

The phylogenetic reconstruction recovered five clades, and in the resulting phylogenetic tree Barbus lacerta was determined to be the sister group of the Barbus macedonicus, Barbus oligolepis and Barbus plebejus complex.

19. In vitro-analysis of kinematics and intradiscal pressures in cervical arthroplasty versus fusion--A biomechanical study in a sheep model with two semi-constrained prostheses.

    Science.gov (United States)

    Daentzer, Dorothea; Welke, Bastian; Hurschler, Christof; Husmann, Nathalie; Jansen, Christina; Flamme, Christian Heinrich; Richter, Berna Ida

    2015-03-24

As an alternative technique to arthrodesis of the cervical spine, total disc replacement (TDR) has increasingly been used with the aim of restoring the physiological function of the treated and adjacent motion segments. The purpose of this experimental study was to analyze the kinematics of the target level as well as of the adjacent segments, and to measure the pressures in the proximal and distal discs after arthrodesis as well as after arthroplasty with two different semi-constrained types of prosthesis. Twelve cadaveric ovine cervical spines underwent polysegmental (C2-5) multidirectional flexibility testing with a sensor-guided industrial serial robot. Additionally, pressures were recorded in the proximal and distal disc. The following three conditions were tested: (1) intact specimen, (2) single-level arthrodesis C3/4, (3) single-level TDR C3/4 using the Discover® in the first six specimens and the activ® C in the other six cadavers. Statistical analysis was performed for the total range of motion (ROM), the intervertebral ROM (iROM) and the intradiscal pressures (IDP), comparing the three conditions as well as the two disc prostheses with each other. The relative iROM in the target level was always lowered after fusion in the three directions of motion, and the relative iROM of the adjacent segments was almost always higher than in the physiologic condition. After arthroplasty, we found increased relative iROM in the treated level compared with the intact state in almost all cases, with the relative iROM in the adjacent segments observed to be lower in almost all situations. The IDP in both adjacent discs always increased in flexion and extension after arthrodesis. In all but five cases, the IDP in each of the adjacent levels decreased below the values of the intact specimens after TDR. Overall, no statistically significant differences between the two types of prostheses were found in any of the analyzed parameters.

  20. A program to compute the soft Robinson-Foulds distance between phylogenetic networks.

    Science.gov (United States)

    Lu, Bingxin; Zhang, Louxin; Leong, Hon Wai

    2017-03-14

Over the past two decades, phylogenetic networks have been studied to model reticulate evolutionary events. The relationships among phylogenetic networks, phylogenetic trees and clusters serve as the basis for reconstruction and comparison of phylogenetic networks. To understand these relationships, two problems arise: the tree containment problem, which asks whether a phylogenetic tree is displayed in a phylogenetic network, and the cluster containment problem, which asks whether a cluster is represented at a node in a phylogenetic network. Both problems are NP-complete. A fast exponential-time algorithm for the cluster containment problem on arbitrary networks is developed and implemented in C. The resulting program is further extended into a computer program for fast computation of the soft Robinson-Foulds distance between phylogenetic networks. Two computer programs are developed for facilitating reconstruction and validation of phylogenetic network models in evolutionary and comparative genomics. Our simulation tests indicate that they are fast enough for use in practice. Additionally, our simulation data show that the distribution of the soft Robinson-Foulds distance between phylogenetic networks is unlikely to be normal.
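
    As context for the distance being computed, Robinson-Foulds-style measures compare the cluster sets that two structures display. A minimal sketch for trees (where the soft and hardwired cluster sets coincide) is shown below; the nested-tuple tree encoding is an assumption of this sketch, not the paper's C implementation.

```python
from itertools import chain

def clusters(tree):
    """Collect the leaf cluster below every internal node.
    Trees are nested tuples, e.g. (("a", "b"), ("c", "d"))."""
    out = set()
    def walk(node):
        if isinstance(node, tuple):
            s = frozenset(chain.from_iterable(walk(c) for c in node))
            out.add(s)
            return s
        return frozenset([node])
    walk(tree)
    return out

def rf_distance(t1, t2):
    """Cluster-based Robinson-Foulds distance: the size of the
    symmetric difference of the two cluster sets."""
    return len(clusters(t1) ^ clusters(t2))

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(rf_distance(t1, t2))  # 4: each tree has two clusters the other lacks
```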

  1. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed for underdetermined problems and for scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate, through analysis of the gravity or magnetic field, some or all of the following source parameters: the depth to the top of the source, the structural index, the horizontal position of the source body edges and their dip. The second step is to incorporate the information related to these constraints into the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential-field-based constraints, such as the structural index and source boundaries, are usually enough to obtain substantial improvement in the density and magnetization models.
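
    Depth weighting of this kind is commonly implemented in the Li-Oldenburg style, where a weight that grows toward the surface counteracts the shallow bias of decaying potential-field kernels. A minimal sketch follows; the linear forward operator G, the exponent beta and the regularization weight alpha are placeholders, not values from the paper.

```python
import numpy as np

def depth_weights(z, z0=1.0, beta=2.0):
    """w(z) = (z + z0)^(-beta/2): larger at shallow depth, so the model
    norm penalizes shallow cells more and structure can sit deeper."""
    return (z + z0) ** (-beta / 2.0)

def self_constrained_inversion(G, d, z, alpha=1e-2):
    """Regularized least squares: minimize ||G m - d||^2 + alpha ||W m||^2,
    with W built from the depth weighting. Solved in closed form."""
    W = np.diag(depth_weights(z))
    A = G.T @ G + alpha * (W.T @ W)
    return np.linalg.solve(A, G.T @ d)

# toy example: 3 observations, 4 cells at increasing depth
rng = np.random.default_rng(0)
G = rng.random((3, 4))
d = rng.random(3)
z = np.array([1.0, 2.0, 3.0, 4.0])
print(self_constrained_inversion(G, d, z))
```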

  2. The transposition distance for phylogenetic trees

    OpenAIRE

    Rossello, Francesc; Valiente, Gabriel

    2006-01-01

    The search for similarity and dissimilarity measures on phylogenetic trees has been motivated by the computation of consensus trees, the search by similarity in phylogenetic databases, and the assessment of clustering results in bioinformatics. The transposition distance for fully resolved phylogenetic trees is a recent addition to the extensive collection of available metrics for comparing phylogenetic trees. In this paper, we generalize the transposition distance from fully resolved to arbi...

  3. Constraining monodromy inflation

    International Nuclear Information System (INIS)

    Peiris, Hiranya V.; Easther, Richard; Flauger, Raphael

    2013-01-01

We use cosmic microwave background (CMB) data from the 9-year WMAP release to derive constraints on monodromy inflation, which is characterized by a linear inflaton potential with a periodic modulation. We identify two possible periodic modulations that significantly improve the fit, lowering χ² by approximately 10 and 20, respectively. However, standard Bayesian model selection criteria assign roughly equal odds to the modulated potential and the unmodulated case. A modulated inflationary potential can generate substantial primordial non-Gaussianity with a specific and characteristic form. For the best-fit parameters to the WMAP angular power spectrum, the corresponding non-Gaussianity might be detectable in upcoming CMB data, allowing nontrivial consistency checks on the predictions of a modulated inflationary potential.
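
    For reference, the linear potential with periodic modulation that defines this model is conventionally written as (a standard form from the monodromy literature, not quoted from this record):

```latex
V(\phi) = \mu^{3}\phi + \Lambda^{4}\cos\!\left(\frac{\phi}{f}\right)
```

    Here μ sets the slope of the linear term, Λ the amplitude of the modulation, and f its period (the axion decay constant).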

  4. Phylogenetic footprints in organizational behavior

    OpenAIRE

    Witt, Ulrich; Schwesinger, Georg

    2012-01-01

    An evolutionary tool kit is applied in this paper to explain how innate social behavior traits evolved in early human groups. These traits were adapted to the particular production requirements of the group in human phylogeny. They shaped the group members' attitudes towards contributing to the group's goals and towards other group members. We argue that these attitudes are still present in modern humans and leave their phylogenetic footprints also in present-day organizational life. We discu...

  5. Using eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and PhenoCams to constrain a process-based biogeochemical model for carbon market-funded wetland restoration

    Science.gov (United States)

    Oikawa, P. Y.; Baldocchi, D. D.; Knox, S. H.; Sturtevant, C. S.; Verfaillie, J. G.; Dronova, I.; Jenerette, D.; Poindexter, C.; Huang, Y. W.

    2015-12-01

We use multiple data streams in a model-data fusion approach to reduce uncertainty in predicting CO2 and CH4 exchange in drained and flooded peatlands. Drained peatlands in the Sacramento-San Joaquin River Delta, California, are a strong source of CO2 to the atmosphere, and flooded peatlands or wetlands are a strong CO2 sink. However, wetlands are also large sources of CH4 that can offset the greenhouse gas mitigation potential of wetland restoration. Reducing uncertainty in model predictions of annual CO2 and CH4 budgets is critical for including wetland restoration in Cap-and-Trade programs. We have developed and parameterized the Peatland Ecosystem Photosynthesis, Respiration, and Methane Transport model (PEPRMT) in a drained agricultural peatland and a restored wetland. Both ecosystem respiration (Reco) and CH4 production are functions of two soil carbon (C) pools (i.e. recently fixed C and soil organic C), temperature, and water table height. Photosynthesis is predicted using a light use efficiency model. To estimate parameters, we use a Markov Chain Monte Carlo approach with an adaptive Metropolis-Hastings algorithm. Multiple data streams are used to constrain model parameters, including eddy covariance of CO2, 13CO2 and CH4, continuous soil respiration measurements, and digital photography. Digital photography is used to estimate leaf area index, an important input variable for the photosynthesis model. Soil respiration and 13CO2 fluxes allow partitioning of eddy covariance data between Reco and photosynthesis. Partitioned fluxes of CO2 with associated uncertainty are used to parameterize the Reco and photosynthesis models within PEPRMT. Overall, PEPRMT model performance is high. For example, we observe high data-model agreement between modeled and observed partitioned Reco (r2 = 0.68; slope = 1; RMSE = 0.59 g C-CO2 m-2 d-1). Model validation demonstrated the model's ability to accurately predict annual budgets of CO2 and CH4 in a wetland system (within 14% and 1
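
    The parameter estimation step the record describes is, at its core, a random-walk Metropolis-Hastings loop. A minimal sketch follows, with a made-up one-parameter exponential respiration model standing in for PEPRMT and a flat positivity prior; none of the numbers come from the paper.

```python
import numpy as np

def log_posterior(theta, t_soil, reco_obs, sigma=0.5):
    """Gaussian log-likelihood of a toy respiration model
    Reco = theta * exp(0.1 * Tsoil); flat prior on theta > 0."""
    if theta <= 0:
        return -np.inf
    pred = theta * np.exp(0.1 * t_soil)
    return -0.5 * np.sum((reco_obs - pred) ** 2) / sigma**2

def metropolis(t_soil, reco_obs, n_iter=5000, step=0.1, theta0=1.0):
    rng = np.random.default_rng(42)
    chain = [theta0]
    lp = log_posterior(theta0, t_soil, reco_obs)
    for _ in range(n_iter):
        prop = chain[-1] + step * rng.normal()   # random-walk proposal
        lp_prop = log_posterior(prop, t_soil, reco_obs)
        # accept with probability min(1, posterior ratio)
        if np.log(rng.uniform()) < lp_prop - lp:
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# synthetic "observations" with true theta = 2.0
t_soil = np.linspace(5, 25, 50)
reco = 2.0 * np.exp(0.1 * t_soil) + np.random.default_rng(1).normal(0, 0.5, 50)
print(metropolis(t_soil, reco)[2500:].mean())  # posterior mean, near 2.0
```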

  6. Two results on expected values of imbalance indices of phylogenetic trees

    OpenAIRE

    Mir, Arnau; Rossello, Francesc

    2012-01-01

    We compute an explicit formula for the expected value of the Colless index of a phylogenetic tree generated under the Yule model, and an explicit formula for the expected value of the Sackin index of a phylogenetic tree generated under the uniform model.
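
    The paper's contribution is the closed-form expressions themselves; as a quick independent cross-check, the Yule-model expectation of the Colless index can be computed from the standard recursion that exploits the fact that, under Yule, the size of one daughter clade at the root is uniform on 1..n-1. A sketch, not the authors' formula:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expected_colless_yule(n):
    """E[Colless] under Yule: the root split is uniform on 1..n-1, so
    E[C_n] = mean over i of |n - 2i| + E[C_i] + E[C_{n-i}]."""
    if n <= 2:
        return 0.0
    return sum(abs(n - 2 * i)
               + expected_colless_yule(i)
               + expected_colless_yule(n - i)
               for i in range(1, n)) / (n - 1)

print([expected_colless_yule(n) for n in range(2, 7)])
# [0.0, 1.0, 2.0, 3.5, 5.0]
```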

  7. Predicting rates of interspecific interaction from phylogenetic trees.

    Science.gov (United States)

    Nuismer, Scott L; Harmon, Luke J

    2015-01-01

    Integrating phylogenetic information can potentially improve our ability to explain species' traits, patterns of community assembly, the network structure of communities, and ecosystem function. In this study, we use mathematical models to explore the ecological and evolutionary factors that modulate the explanatory power of phylogenetic information for communities of species that interact within a single trophic level. We find that phylogenetic relationships among species can influence trait evolution and rates of interaction among species, but only under particular models of species interaction. For example, when interactions within communities are mediated by a mechanism of phenotype matching, phylogenetic trees make specific predictions about trait evolution and rates of interaction. In contrast, if interactions within a community depend on a mechanism of phenotype differences, phylogenetic information has little, if any, predictive power for trait evolution and interaction rate. Together, these results make clear and testable predictions for when and how evolutionary history is expected to influence contemporary rates of species interaction. © 2014 John Wiley & Sons Ltd/CNRS.
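
    The two interaction mechanisms contrasted here are commonly formalized along the following lines (standard forms from this literature, assumed rather than quoted from the abstract), with x_i, x_j the trait values of two interacting species and α a sensitivity parameter:

```latex
P^{\mathrm{match}}_{ij} \propto \exp\!\big(-\alpha\,(x_i - x_j)^2\big),
\qquad
P^{\mathrm{diff}}_{ij} = \frac{1}{1 + \exp\!\big(-\alpha\,(x_i - x_j)\big)}
```

    Under the matching form, interaction is driven by trait similarity, which is phylogenetically heritable - consistent with the abstract's finding that phylogeny is informative only for matching-type interactions.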

  8. Transforming phylogenetic networks: Moving beyond tree space.

    Science.gov (United States)

    Huber, Katharina T; Moulton, Vincent; Wu, Taoyang

    2016-09-07

    Phylogenetic networks are a generalization of phylogenetic trees that are used to represent reticulate evolution. Unrooted phylogenetic networks form a special class of such networks, which naturally generalize unrooted phylogenetic trees. In this paper we define two operations on unrooted phylogenetic networks, one of which is a generalization of the well-known nearest-neighbor interchange (NNI) operation on phylogenetic trees. We show that any unrooted phylogenetic network can be transformed into any other such network using only these operations. This generalizes the well-known fact that any phylogenetic tree can be transformed into any other such tree using only NNI operations. It also allows us to define a generalization of tree space and to define some new metrics on unrooted phylogenetic networks. To prove our main results, we employ some fascinating new connections between phylogenetic networks and cubic graphs that we have recently discovered. Our results should be useful in developing new strategies to search for optimal phylogenetic networks, a topic that has recently generated some interest in the literature, as well as for providing new ways to compare networks. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Nonbinary Tree-Based Phylogenetic Networks.

    Science.gov (United States)

    Jetten, Laura; van Iersel, Leo

    2018-01-01

    Rooted phylogenetic networks are used to describe evolutionary histories that contain non-treelike evolutionary events such as hybridization and horizontal gene transfer. In some cases, such histories can be described by a phylogenetic base-tree with additional linking arcs, which can, for example, represent gene transfer events. Such phylogenetic networks are called tree-based. Here, we consider two possible generalizations of this concept to nonbinary networks, which we call tree-based and strictly-tree-based nonbinary phylogenetic networks. We give simple graph-theoretic characterizations of tree-based and strictly-tree-based nonbinary phylogenetic networks. Moreover, we show for each of these two classes that it can be decided in polynomial time whether a given network is contained in the class. Our approach also provides a new view on tree-based binary phylogenetic networks. Finally, we discuss two examples of nonbinary phylogenetic networks in biology and show how our results can be applied to them.

  10. Assessment of Anthropogenic and Climatic Impacts on the Global Carbon Cycle Using a 3-D Model Constrained by Isotopic Carbon Measurements and Remote Sensing of Vegetation

    Science.gov (United States)

    Keeling, Charles D.; Piper, S. C.

    1998-01-01

Our original proposal called for improved modeling of the terrestrial biospheric carbon cycle, specifically using biome-specific process models to account for both the energy and water budgets of plant growth, to facilitate investigations into recent changes in global atmospheric CO2 abundance and regional distribution. The carbon fluxes predicted by these models were to be incorporated into a global model of CO2 transport to establish large-scale regional fluxes of CO2 to and from the terrestrial biosphere, subject to constraints imposed by direct measurements of atmospheric CO2 and its 13C/12C isotopic ratio. Our work was coordinated with a NASA project (NASA NAGW-3151) at the University of Montana under the direction of Steven Running, and was partially funded by the Electric Power Research Institute. The primary objective of this project was to develop and test the Biome-BGC model, a global biological process model with a daily time step which simulates the water, energy and carbon budgets of plant growth. The primary product, a unique global gridded daily land temperature and precipitation data set that was used to drive the process model, is described. The Biome-BGC model was tested by comparison with a simpler biological model driven by satellite-derived Normalized Difference Vegetation Index (NDVI) and Photosynthetically Active Radiation (PAR) data, and by comparison with atmospheric CO2 observations. The simple NDVI model is also described. To facilitate the comparison with atmospheric CO2 observations, a three-dimensional atmospheric transport model was used to produce predictions of atmospheric CO2 variations given CO2 fluxes owing to Net Primary Productivity (NPP) and heterotrophic respiration produced by the Biome-BGC model and by the NDVI model. The transport model that we used in this project, and the errors associated with transport simulations, were characterized by a comparison of 12 transport models.
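
    NDVI-driven production models of the kind described typically follow Monteith's light-use-efficiency form (a standard formulation assumed here, not quoted from the record):

```latex
\mathrm{NPP} = \varepsilon \cdot f_{\mathrm{APAR}}(\mathrm{NDVI}) \cdot \mathrm{PAR}
```

    where ε is the light-use efficiency and f_APAR, the fraction of absorbed PAR, is estimated from NDVI.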

  11. Phylogenetic inference with weighted codon evolutionary distances.

    Science.gov (United States)

    Criscuolo, Alexis; Michel, Christian J

    2009-04-01

We develop a new approach to estimating a matrix of pairwise evolutionary distances from a codon-based alignment, under a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. These three distance matrices are then weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a single distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices with significantly better treelikeness than those obtained from standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to evolve at a fast rate. Simulation results show that fast distance-based tree reconstruction algorithms applied to distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not better than, those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed, showing that our codon evolutionary distances allow building a phylogenetic tree similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and significantly improved over standard nucleotide evolutionary distance estimates.
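
    The combination step is a weighted average of three position-specific distance matrices. A minimal sketch of just that step is below; how the per-position weights are estimated (and whether a noisy position such as the third is down-weighted, as in the authors' alternative scheme) is the substantive part of the method and is only stubbed here with placeholder rates.

```python
import numpy as np

def weighted_codon_distance(d1, d2, d3, rates):
    """Combine per-codon-position distance matrices into one matrix,
    weighting each position by (an estimate of) its global rate."""
    w = np.asarray(rates, dtype=float)
    w = w / w.sum()                       # normalize weights to sum to 1
    return w[0] * d1 + w[1] * d2 + w[2] * d3

# toy 3-taxon matrices for the three codon positions
d1 = np.array([[0, .10, .20], [.10, 0, .15], [.20, .15, 0]])
d2 = np.array([[0, .12, .22], [.12, 0, .16], [.22, .16, 0]])
d3 = np.array([[0, .50, .90], [.50, 0, .70], [.90, .70, 0]])  # fast third position
print(weighted_codon_distance(d1, d2, d3, rates=[1.0, 0.8, 0.2]))
```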

  12. A Distance Measure for Genome Phylogenetic Analysis

    Science.gov (United States)

    Cao, Minh Duc; Allison, Lloyd; Dix, Trevor

Phylogenetic analyses of species based on single genes or parts of the genomes are often inconsistent because of factors such as variable rates of evolution and horizontal gene transfer. The availability of more and more sequenced genomes allows phylogeny construction from complete genomes, which is less sensitive to such inconsistency. For such long sequences, construction methods like maximum parsimony and maximum likelihood are often not feasible due to their intensive computational requirements. Another class of tree construction methods, namely distance-based methods, requires a measure of the distance between any two genomes. Some measures, such as the evolutionary edit distance of gene order and gene content, are computationally expensive or do not perform well when the gene contents of the organisms are similar. This study presents an information-theoretic measure of genetic distance between genomes based on the biological compression algorithm expert model. We demonstrate that our distance measure can be applied to reconstruct the consensus phylogenetic tree of a number of Plasmodium parasites from their genomes, whose statistical bias would mislead conventional analysis methods. Our approach is also used to successfully construct a plausible evolutionary tree for the γ-Proteobacteria group, whose genomes are known to contain many horizontally transferred genes.
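
    Compression-based distances of this family are typically instances of the normalized compression distance (NCD). The sketch below uses zlib as a stand-in compressor; the paper's expert-model compressor and its exact distance definition differ, so this only illustrates the shape of the computation.

```python
import zlib

def c(data: bytes) -> int:
    """Compressed size in bytes; zlib stands in for the expert-model
    compressor used in the paper (an assumption of this sketch)."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a practical approximation of
    the (uncomputable) normalized information distance."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTACGTACGT" * 50
b = b"ACGTACGAACGTACGT" * 50    # one substitution per repeat
d = b"TTTTGGGGCCCCAAAA" * 50
print(ncd(a, b), ncd(a, d))     # the related pair should score lower
```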

  13. Robustness of ancestral sequence reconstruction to phylogenetic uncertainty.

    Science.gov (United States)

    Hanson-Smith, Victor; Kolaczkowski, Bryan; Thornton, Joseph W

    2010-09-01

Ancestral sequence reconstruction (ASR) is widely used to formulate and test hypotheses about the sequences, functions, and structures of ancient genes. Ancestral sequences are usually inferred from an alignment of extant sequences using a maximum likelihood (ML) phylogenetic algorithm, which calculates the most likely ancestral sequence assuming a probabilistic model of sequence evolution and a specific phylogeny--typically the ML tree. The true phylogeny is seldom known with certainty, however. ML methods ignore this uncertainty, whereas Bayesian methods incorporate it by integrating the likelihood of each ancestral state over a distribution of possible trees. It is not known whether Bayesian approaches to phylogenetic uncertainty improve the accuracy of inferred ancestral sequences. Here, we use simulation-based experiments under both simplified and empirically derived conditions to compare the accuracy of ASR carried out using ML and Bayesian approaches. We show that incorporating phylogenetic uncertainty by integrating over topologies very rarely changes the inferred ancestral state and does not improve the accuracy of the reconstructed ancestral sequence. Ancestral state reconstructions are robust to uncertainty about the underlying tree because the conditions that produce phylogenetic uncertainty also make the ancestral state identical across plausible trees; conversely, the conditions under which different phylogenies yield different inferred ancestral states produce little or no ambiguity about the true phylogeny. Our results suggest that ML can produce accurate ASRs, even in the face of phylogenetic uncertainty. Using Bayesian integration to incorporate this uncertainty is neither necessary nor beneficial.

  14. An Agent-Based Model for Analyzing Control Policies and the Dynamic Service-Time Performance of a Capacity-Constrained Air Traffic Management Facility

    Science.gov (United States)

    Conway, Sheila R.

    2006-01-01

Simple agent-based models may be useful for investigating air traffic control strategies as a precursory screening for more costly, higher-fidelity simulation. Of concern is the ability of such models to capture the essence of the system and provide insight into system behavior in a timely manner, without breaking the bank. The method is put to the test with the development of a model addressing situations where capacity is overburdened and the resultant delay may propagate through later flights via flight dependencies. The resultant model includes primitive representations of the principal air traffic system attributes, namely system capacity, demand, airline schedules and strategy, and aircraft capability. It affords a venue to explore their interdependence in a time-dependent, dynamic system simulation. The scope of the research question and the carefully chosen modeling fidelity allowed the development of an agent-based model in short order. The model predicted non-linear behavior given certain initial conditions and system control strategies. Additionally, a combination of the model and dimensionless techniques borrowed from fluid systems was demonstrated that can predict the system's dynamic behavior across a wide range of parametric settings.
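
    The delay-propagation mechanism described - overburdened capacity pushing flights into later slots, with dependencies carrying the delay forward - can be sketched in a few lines. Everything below (slot granularity, dependency encoding, capacity value) is an illustrative assumption, not the paper's model.

```python
def simulate(flights, capacity_per_slot):
    """Toy capacity-constrained facility: each flight requests a time
    slot; excess demand spills into later slots, and delay propagates
    to any dependent (connecting) flight."""
    slot_load = {}
    delays = {}
    for fid, sched, depends_on in flights:       # sorted by schedule
        ready = sched + delays.get(depends_on, 0)
        t = ready
        while slot_load.get(t, 0) >= capacity_per_slot:
            t += 1                               # pushed to the next slot
        slot_load[t] = slot_load.get(t, 0) + 1
        delays[fid] = t - sched
    return delays

# (id, scheduled slot, id of the flight it waits for, or None)
flights = [("f1", 0, None), ("f2", 0, None), ("f3", 0, None),
           ("f4", 1, "f3")]                      # f4 inherits f3's delay
print(simulate(flights, capacity_per_slot=2))
# {'f1': 0, 'f2': 0, 'f3': 1, 'f4': 1}
```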

  15. The macroecology of phylogenetically structured hummingbird-plant networks

    DEFF Research Database (Denmark)

    González, Ana M. Martín; Dalsgaard, Bo; Nogues, David Bravo

    2015-01-01

Aim To investigate the association between hummingbird–plant network structure and species richness, phylogenetic signal on species' interaction pattern, insularity and historical and current climate. Location Fifty-four communities along a c. 10,000 km latitudinal gradient across the Americas (39... approach, we examined the influence of species richness, phylogenetic signal, insularity and current and historical climate conditions on network structure (null-model-corrected specialization and modularity). Results Phylogenetically related species, especially plants, showed a tendency to interact with a similar array of mutualistic partners. The spatial variation in network structure exhibited a constant association with species phylogeny (R2 = 0.18–0.19); however, network structure showed the strongest association with species richness and environmental factors (R2 = 0.20–0.44 and R2 = 0...

  16. Capturing Hotspots For Constrained Indoor Movement

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

Finding the hotspots in large indoor spaces is very important for identifying overloaded locations and for security, crowd management, and indoor navigation and guidance. The tracking data coming from indoor tracking are huge in volume and not readily usable for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records representing the entry and exit times of an object in a particular location. It then discusses the technique for extracting hotspots from the mapping records.
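
    The mapping step - turning raw positioning samples into per-location entry/exit intervals, from which hotspots are then counted - can be sketched as follows. The record layout and the visit-count hotspot criterion are assumptions of this sketch, not the paper's model.

```python
from collections import defaultdict

def to_mapping_records(tracking):
    """Collapse raw (time, object, location) tracking records into
    mapping records (object, location, entry_time, exit_time)."""
    tracking = sorted(tracking)
    records, current = [], {}
    for t, obj, loc in tracking:
        if obj in current and current[obj][0] != loc:
            prev_loc, entry = current.pop(obj)   # object changed location
            records.append((obj, prev_loc, entry, t))
        current.setdefault(obj, (loc, t))
    for obj, (loc, entry) in current.items():
        records.append((obj, loc, entry, None))  # still inside at the end
    return records

def hotspots(records, min_visits=2):
    """Rank locations by number of visits; keep the busy ones."""
    counts = defaultdict(int)
    for _, loc, _, _ in records:
        counts[loc] += 1
    return {loc: n for loc, n in counts.items() if n >= min_visits}

tracking = [(1, "o1", "A"), (3, "o1", "B"), (2, "o2", "A"), (5, "o2", "A")]
recs = to_mapping_records(tracking)
print(recs)            # [('o1', 'A', 1, 3), ('o1', 'B', 3, None), ('o2', 'A', 2, None)]
print(hotspots(recs))  # {'A': 2}
```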