WorldWideScience

Sample records for genome-scale constraint-based modeling

  1. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Science.gov (United States)

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.

  2. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    Directory of Open Access Journals (Sweden)

    Marko Budinich

    Full Text Available Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.
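The multi-objective trade-offs described in the record above can be illustrated with a toy linear program. This is a minimal sketch of the epsilon-constraint idea behind MO-FBA (two hypothetical species sharing one nutrient budget, solved with `scipy.optimize.linprog`), not the paper's hot-spring model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy "community": growth fluxes v1, v2 of two species share one nutrient
# budget (v1 + v2 <= 10); each species has an individual capacity v_i <= 8.
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([10.0])
bounds = [(0, 8), (0, 8)]

# Epsilon-constraint method: maximize v1 while requiring v2 >= eps, then
# sweep eps to trace the Pareto front between the two growth objectives.
front = []
for eps in [0.0, 2.0, 4.0, 6.0, 8.0]:
    res = linprog(c=[-1.0, 0.0],                 # maximize v1 (linprog minimizes)
                  A_ub=np.vstack([A_ub, [0.0, -1.0]]),
                  b_ub=np.append(b_ub, -eps),    # -v2 <= -eps  <=>  v2 >= eps
                  bounds=bounds)
    front.append((round(-res.fun, 3), eps))

print(front)  # trade-off curve: v1 = min(8, 10 - eps)
```

Each point on the front is one feasible compromise between the two objectives, which is the kind of community-level trade-off surface MO-FBA explores.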

  3. Acorn: A grid computing system for constraint based modeling and visualization of the genome scale metabolic reaction networks via a web interface

    Directory of Open Access Journals (Sweden)

    Bushell Michael E

    2011-05-01

    Full Text Available Abstract Background Constraint-based approaches facilitate the prediction of cellular metabolic capabilities, based, in turn, on predictions of the repertoire of enzymes encoded in the genome. Recently, genome annotations have been used to reconstruct genome scale metabolic reaction networks for numerous species, including Homo sapiens, which allow simulations that provide valuable insights into topics, including predictions of gene essentiality of pathogens, interpretation of genetic polymorphism in metabolic disease syndromes and suggestions for novel approaches to microbial metabolic engineering. These constraint-based simulations are being integrated with the functional genomics portals, an activity that requires efficient implementation of the constraint-based simulations in the web-based environment. Results Here, we present Acorn, an open source (GNU GPL) grid computing system for constraint-based simulations of genome scale metabolic reaction networks within an interactive web environment. The grid-based architecture allows efficient execution of computationally intensive, iterative protocols such as Flux Variability Analysis, which can be readily scaled up as the numbers of models (and users) increase. The web interface uses AJAX, which facilitates efficient model browsing and other search functions, and intuitive implementation of appropriate simulation conditions. Research groups can install Acorn locally and create user accounts. Users can also import models in the familiar SBML format and link reaction formulas to major functional genomics portals of choice. Selected models and simulation results can be shared between different users and made publicly available. Users can construct pathway map layouts and import them into the server using a desktop editor integrated within the system. Pathway maps are then used to visualise numerical results within the web environment.
To illustrate these features we have deployed Acorn and created a
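Flux Variability Analysis, the iterative protocol Acorn parallelizes, amounts to one FBA solve followed by a pair of min/max LPs per flux at fixed optimal growth. A minimal sketch on a toy three-flux network (illustrative, using `scipy` rather than Acorn's own solver stack):

```python
import numpy as np
from scipy.optimize import linprog

# Tiny network: internal fluxes v1, v2 feed a biomass flux v3 (v1 + v2 = v3).
A_eq = np.array([[1.0, 1.0, -1.0]])
b_eq = np.array([0.0])
bounds = [(0, 6), (0, 6), (0, 10)]

# Step 1 (FBA): maximize growth v3.
fba = linprog(c=[0, 0, -1.0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
mu = -fba.fun                              # optimal growth rate

# Step 2 (FVA): pin growth at its optimum, then min/max each flux in turn.
# Each iteration is an independent LP, which is why FVA parallelizes well
# on a grid architecture.
fva_bounds = [(0, 6), (0, 6), (mu, mu)]
ranges = []
for i in range(2):
    lo = linprog(c=[1.0 * (j == i) for j in range(3)],
                 A_eq=A_eq, b_eq=b_eq, bounds=fva_bounds).fun
    hi = -linprog(c=[-1.0 * (j == i) for j in range(3)],
                  A_eq=A_eq, b_eq=b_eq, bounds=fva_bounds).fun
    ranges.append((round(lo, 3), round(hi, 3)))

print(mu, ranges)  # growth 10; v1 and v2 can each range over [4, 6]
```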

  4. Quantitative Assessment of Thermodynamic Constraints on the Solution Space of Genome-Scale Metabolic Models

    Science.gov (United States)

    Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.

    2013-01-01

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272

  5. Quantitative assessment of thermodynamic constraints on the solution space of genome-scale metabolic models.

    Science.gov (United States)

    Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L

    2013-07-16

    Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
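The core thermodynamic relation behind TMFA can be sketched in a few lines: a reaction's transformed free energy is ΔrG' = ΔrG'° + RT·ln(Q), and bounding metabolite concentrations bounds ΔrG', which may or may not fix the reaction direction. The numbers below (a hypothetical reaction and generic concentration bounds) are illustrative, not the paper's E. coli values:

```python
import math

R, T = 8.314e-3, 298.15          # kJ/(mol*K), K

def drG_range(drG0, n_subs, n_prods, c_lo=1e-6, c_hi=2e-2):
    """Range of ΔrG' (kJ/mol) when every metabolite concentration may lie
    anywhere in [c_lo, c_hi] M: ΔrG' = ΔrG'° + RT*ln(Q). Q is smallest with
    products low / substrates high, and largest in the opposite case."""
    g_min = drG0 + R * T * (n_prods * math.log(c_lo) - n_subs * math.log(c_hi))
    g_max = drG0 + R * T * (n_prods * math.log(c_hi) - n_subs * math.log(c_lo))
    return g_min, g_max

# Hypothetical reaction A -> B with standard ΔrG'° = -30 kJ/mol:
g_min, g_max = drG_range(-30.0, n_subs=1, n_prods=1)
print(round(g_min, 2), round(g_max, 2))
# If g_max < 0, the reaction is forward under all allowed concentrations;
# if the range straddles zero, TMFA leaves the direction unconstrained.
print("direction fixed forward:", g_max < 0)
```

TMFA embeds constraints of this form (plus flux-direction coupling) in a genome-scale optimization, so directions emerge from thermodynamics rather than being predefined.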

  6. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi

    Science.gov (United States)

    Giguere, Andrew T.; Murthy, Ganti S.; Bottomley, Peter J.; Sayavedra-Soto, Luis A.

    2018-01-01

    ABSTRACT Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification. PMID:29577088

  7. Genome-Scale, Constraint-Based Modeling of Nitrogen Oxide Fluxes during Coculture of Nitrosomonas europaea and Nitrobacter winogradskyi.

    Science.gov (United States)

    Mellbye, Brett L; Giguere, Andrew T; Murthy, Ganti S; Bottomley, Peter J; Sayavedra-Soto, Luis A; Chaplen, Frank W R

    2018-01-01

    Nitrification, the aerobic oxidation of ammonia to nitrate via nitrite, emits nitrogen (N) oxide gases (NO, NO2, and N2O), which are potentially hazardous compounds that contribute to global warming. To better understand the dynamics of nitrification-derived N oxide production, we conducted culturing experiments and used an integrative genome-scale, constraint-based approach to model N oxide gas sources and sinks during complete nitrification in an aerobic coculture of two model nitrifying bacteria, the ammonia-oxidizing bacterium Nitrosomonas europaea and the nitrite-oxidizing bacterium Nitrobacter winogradskyi. The model includes biotic genome-scale metabolic models (iFC578 and iFC579) for each nitrifier and abiotic N oxide reactions. Modeling suggested both biotic and abiotic reactions are important sources and sinks of N oxides, particularly under microaerobic conditions predicted to occur in coculture. In particular, integrative modeling suggested that previous models might have underestimated gross NO production during nitrification due to not taking into account its rapid oxidation in both aqueous and gas phases. The integrative model may be found at https://github.com/chaplenf/microBiome-v2.1. IMPORTANCE Modern agriculture is sustained by application of inorganic nitrogen (N) fertilizer in the form of ammonium (NH4+). Up to 60% of NH4+-based fertilizer can be lost through leaching of nitrifier-derived nitrate (NO3−), and through the emission of N oxide gases (i.e., nitric oxide [NO], N dioxide [NO2], and nitrous oxide [N2O] gases), the latter being a potent greenhouse gas. Our approach to modeling of nitrification suggests that both biotic and abiotic mechanisms function as important sources and sinks of N oxides during microaerobic conditions and that previous models might have underestimated gross NO production during nitrification.
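The abiotic NO sink the authors emphasize, the oxidation 2 NO + O2 → 2 NO2, follows a third-order rate law, rate = k·[NO]²·[O2]. A minimal forward-Euler sketch of how such an abiotic reaction can consume NO alongside the biotic fluxes (the rate constant and concentrations here are hypothetical, not taken from the paper's model):

```python
# Abiotic NO sink: 2 NO + O2 -> 2 NO2, third-order overall,
# rate = k * [NO]^2 * [O2]. All values below are illustrative.
k = 7.0e3                           # M^-2 s^-1 (hypothetical)
no, o2, no2 = 1e-4, 2.5e-4, 0.0     # mol/L
dt = 0.01                           # s
for _ in range(100_000):            # simple forward-Euler integration
    r = k * no * no * o2
    no  -= 2 * r * dt
    o2  -=     r * dt
    no2 += 2 * r * dt
print(round(no2, 8))                # NO converted abiotically to NO2
```

Because the rate scales with [NO]², neglecting this reaction understates how quickly NO is removed, and hence understates the gross NO production needed to explain observed net fluxes.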

  8. Genome-scale modeling using flux ratio constraints to enable metabolic engineering of clostridial metabolism in silico.

    Science.gov (United States)

    McAnulty, Michael J; Yen, Jiun Y; Freedman, Benjamin G; Senger, Ryan S

    2012-05-14

    Genome-scale metabolic networks and flux models are an effective platform for linking an organism's genotype to its phenotype. However, few modeling approaches offer predictive capabilities to evaluate potential metabolic engineering strategies in silico. A new method called "flux balance analysis with flux ratios (FBrAtio)" was developed in this research and applied to a new genome-scale model of Clostridium acetobutylicum ATCC 824 (iCAC490) that contains 707 metabolites and 794 reactions. FBrAtio was used to model wild-type metabolism and metabolically engineered strains of C. acetobutylicum where only flux ratio constraints and thermodynamic reversibility of reactions were required. The FBrAtio approach allowed solutions to be found through standard linear programming. Five flux ratio constraints were required to achieve a qualitative picture of wild-type metabolism for C. acetobutylicum for the production of: (i) acetate, (ii) lactate, (iii) butyrate, (iv) acetone, (v) butanol, (vi) ethanol, (vii) CO2 and (viii) H2. Results of this simulation study coincide with published experimental results and show that knockdown of the acetoacetyl-CoA transferase increases butanol-to-acetone selectivity, while the simultaneous over-expression of the aldehyde/alcohol dehydrogenase greatly increases ethanol production. FBrAtio is a promising new method for constraining genome-scale models using internal flux ratios. The method was effective for modeling wild-type and engineered strains of C. acetobutylicum.
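A flux ratio constraint stays linear, which is why FBrAtio solves with standard linear programming: forcing flux vA to carry a fraction r of a branch, vA = r·(vA + vB), rearranges to (1−r)·vA − r·vB = 0, an ordinary equality row. A toy branch-point sketch (illustrative network, not iCAC490):

```python
import numpy as np
from scipy.optimize import linprog

# Branch point: v_in splits into vA and vB (v_in = vA + vB).
# FBrAtio-style constraint: vA = r (vA + vB)  <=>  (1 - r) vA - r vB = 0.
r = 0.8
A_eq = np.array([
    [1.0, -1.0, -1.0],      # mass balance: v_in - vA - vB = 0
    [0.0, 1 - r, -r],       # linear flux-ratio row
])
b_eq = np.zeros(2)
bounds = [(0, 10), (0, None), (0, None)]

# Maximize vA (e.g., flux toward the desired product).
res = linprog(c=[0, -1.0, 0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v_in, vA, vB = res.x
print(round(vA, 3), round(vA / (vA + vB), 3))  # vA = 8.0 at ratio 0.8
```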

  9. Deriving metabolic engineering strategies from genome-scale modeling with flux ratio constraints.

    Science.gov (United States)

    Yen, Jiun Y; Nazem-Bokaee, Hadi; Freedman, Benjamin G; Athamneh, Ahmad I M; Senger, Ryan S

    2013-05-01

    Optimized production of bio-based fuels and chemicals from microbial cell factories is a central goal of systems metabolic engineering. To achieve this goal, a new computational method of using flux balance analysis with flux ratios (FBrAtio) was further developed in this research and applied to five case studies to evaluate and design metabolic engineering strategies. The approach was implemented using publicly available genome-scale metabolic flux models. Synthetic pathways were added to these models along with flux ratio constraints by FBrAtio to achieve increased (i) cellulose production from Arabidopsis thaliana; (ii) isobutanol production from Saccharomyces cerevisiae; (iii) acetone production from Synechocystis sp. PCC6803; (iv) H2 production from Escherichia coli MG1655; and (v) isopropanol, butanol, and ethanol (IBE) production from engineered Clostridium acetobutylicum. The FBrAtio approach was applied to each case to simulate a metabolic engineering strategy already implemented experimentally, and flux ratios were continually adjusted to find (i) the end-limit of increased production using the existing strategy, (ii) new potential strategies to increase production, and (iii) the impact of these metabolic engineering strategies on product yield and culture growth. The FBrAtio approach has the potential to design "fine-tuned" metabolic engineering strategies in silico that can be implemented directly with available genomic tools. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
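The "continually adjusted" flux ratios described above can be sketched as a parameter sweep: increase r at an engineered branch point until some other constraint becomes binding, exposing the end-limit of the strategy. This toy model (a capacity-bounded product flux, not any of the five published case models) shows the plateau:

```python
import numpy as np
from scipy.optimize import linprog

# Sweep the flux ratio r at a branch point v_in (<= 10) -> vA + vB,
# where the product flux vA has its own capacity limit (<= 9).
results = []
for r in [0.5, 0.7, 0.8, 0.9, 0.95, 1.0]:
    A_eq = np.array([[1.0, -1.0, -1.0],      # v_in = vA + vB
                     [0.0, 1 - r, -r]])      # vA = r (vA + vB)
    res = linprog(c=[0, -1.0, 0], A_eq=A_eq, b_eq=np.zeros(2),
                  bounds=[(0, 10), (0, 9), (0, None)])
    results.append((r, round(res.x[1], 3)))

print(results)
# Product flux rises with r until the capacity of vA itself takes over:
# beyond r = 0.9 the strategy has reached its end-limit at vA = 9.
```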

  10. Genome-scale comparison and constraint-based metabolic reconstruction of the facultative anaerobic Fe(III)-reducer Rhodoferax ferrireducens

    Directory of Open Access Journals (Sweden)

    Daugherty Sean

    2009-09-01

    Full Text Available Abstract Background Rhodoferax ferrireducens is a metabolically versatile, Fe(III)-reducing, subsurface microorganism that is likely to play an important role in the carbon and metal cycles in the subsurface. It also has the unique ability to convert sugars to electricity, oxidizing the sugars to carbon dioxide with quantitative electron transfer to graphite electrodes in microbial fuel cells. In order to expand our limited knowledge about R. ferrireducens, the complete genome sequence of this organism was further annotated and then the physiology of R. ferrireducens was investigated with a constraint-based, genome-scale in silico metabolic model and laboratory studies. Results The iterative modeling and experimental approach unveiled exciting, previously unknown physiological features, including an expanded range of substrates that support growth, such as cellobiose and citrate, and provided additional insights into important features such as the stoichiometry of the electron transport chain and the ability to grow via fumarate dismutation. Further analysis explained why R. ferrireducens is unable to grow via photosynthesis or fermentation of sugars like other members of this genus and uncovered novel genes for benzoate metabolism. The genome also revealed that R. ferrireducens is well-adapted for growth in the subsurface because it appears to be capable of dealing with a number of environmental insults, including heavy metals, aromatic compounds, nutrient limitation and oxidative stress. Conclusion This study demonstrates that combining genome-scale modeling with the annotation of a new genome sequence can guide experimental studies and accelerate the understanding of the physiology of under-studied yet environmentally relevant microorganisms.

  11. Direct coupling of a genome-scale microbial in silico model and a groundwater reactive transport model

    International Nuclear Information System (INIS)

    Fang, Yilin; Scheibe, Timothy D.; Mahadevan, Radhakrishnan; Garg, Srinath; Long, Philip E.; Lovley, Derek R.

    2011-01-01

    The activity of microorganisms often plays an important role in dynamic natural attenuation or engineered bioremediation of subsurface contaminants, such as chlorinated solvents, metals, and radionuclides. To evaluate and/or design bioremediated systems, quantitative reactive transport models are needed. State-of-the-art reactive transport models often ignore the microbial effects or simulate the microbial effects with static growth yield and constant reaction rate parameters over simulated conditions, while in reality microorganisms can dynamically modify their functionality (such as utilization of alternative respiratory pathways) in response to spatial and temporal variations in environmental conditions. Constraint-based genome-scale microbial in silico models, using genomic data and multiple-pathway reaction networks, have been shown to be able to simulate transient metabolism of some well-studied microorganisms and identify growth rate, substrate uptake rates, and byproduct rates under different growth conditions. These rates can be identified and used to replace specific microbially-mediated reaction rates in a reactive transport model using local geochemical conditions as constraints. We previously demonstrated the potential utility of integrating a constraint-based microbial metabolism model with a reactive transport simulator as applied to bioremediation of uranium in groundwater. However, that work relied on an indirect coupling approach that was effective for initial demonstration but may not be extensible to more complex problems that are of significant interest (e.g., communities of microbial species and multiple constraining variables). Here, we extend that work by presenting and demonstrating a method of directly integrating a reactive transport model (FORTRAN code) with constraint-based in silico models solved with IBM ILOG CPLEX linear optimizer base system (C library). The

  12. Using Genome-scale Models to Predict Biological Capabilities

    DEFF Research Database (Denmark)

    O’Brien, Edward J.; Monk, Jonathan M.; Palsson, Bernhard O.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods at the genome scale have been under development since the first whole-genome sequences appeared in the mid-1990s. A few years ago, this approach began to demonstrate the ability to predict a range of cellular functions, including cellul...

  13. Principles of proteome allocation are revealed using proteomic data and genome-scale models

    DEFF Research Database (Denmark)

    Yang, Laurence; Yurkovich, James T.; Lloyd, Colton J.

    2016-01-01

    Integrating omics data to refine or make context-specific models is an active field of constraint-based modeling. Proteomics now cover over 95% of the Escherichia coli proteome by mass. Genome-scale models of Metabolism and macromolecular Expression (ME) compute proteome allocation linked to metabolism and fitness. Using proteomics data, we formulated allocation constraints for key proteome sectors in the ME model. The resulting calibrated model effectively computed the "generalist" (wild-type) E. coli proteome and phenotype across diverse growth environments. Across 15 growth conditions… of these sectors for the general stress response sigma factor sigma(S). Finally, the sector constraints represent a general formalism for integrating omics data from any experimental condition into constraint-based ME models. The constraints can be fine-grained (individual proteins) or coarse-grained (functionally…

  14. Direct coupling of a genome-scale microbial in silico model and a groundwater reactive transport model.

    Science.gov (United States)

    Fang, Yilin; Scheibe, Timothy D; Mahadevan, Radhakrishnan; Garg, Srinath; Long, Philip E; Lovley, Derek R

    2011-03-25

    The activity of microorganisms often plays an important role in dynamic natural attenuation or engineered bioremediation of subsurface contaminants, such as chlorinated solvents, metals, and radionuclides. To evaluate and/or design bioremediated systems, quantitative reactive transport models are needed. State-of-the-art reactive transport models often ignore the microbial effects or simulate the microbial effects with static growth yield and constant reaction rate parameters over simulated conditions, while in reality microorganisms can dynamically modify their functionality (such as utilization of alternative respiratory pathways) in response to spatial and temporal variations in environmental conditions. Constraint-based genome-scale microbial in silico models, using genomic data and multiple-pathway reaction networks, have been shown to be able to simulate transient metabolism of some well studied microorganisms and identify growth rate, substrate uptake rates, and byproduct rates under different growth conditions. These rates can be identified and used to replace specific microbially-mediated reaction rates in a reactive transport model using local geochemical conditions as constraints. We previously demonstrated the potential utility of integrating a constraint-based microbial metabolism model with a reactive transport simulator as applied to bioremediation of uranium in groundwater. However, that work relied on an indirect coupling approach that was effective for initial demonstration but may not be extensible to more complex problems that are of significant interest (e.g., communities of microbial species and multiple constraining variables). Here, we extend that work by presenting and demonstrating a method of directly integrating a reactive transport model (FORTRAN code) with constraint-based in silico models solved with IBM ILOG CPLEX linear optimizer base system (C library). The models were integrated with BABEL, a language interoperability tool. The

  15. Direct coupling of a genome-scale microbial in silico model and a groundwater reactive transport model

    Science.gov (United States)

    Fang, Yilin; Scheibe, Timothy D.; Mahadevan, Radhakrishnan; Garg, Srinath; Long, Philip E.; Lovley, Derek R.

    2011-03-01

    The activity of microorganisms often plays an important role in dynamic natural attenuation or engineered bioremediation of subsurface contaminants, such as chlorinated solvents, metals, and radionuclides. To evaluate and/or design bioremediated systems, quantitative reactive transport models are needed. State-of-the-art reactive transport models often ignore the microbial effects or simulate the microbial effects with static growth yield and constant reaction rate parameters over simulated conditions, while in reality microorganisms can dynamically modify their functionality (such as utilization of alternative respiratory pathways) in response to spatial and temporal variations in environmental conditions. Constraint-based genome-scale microbial in silico models, using genomic data and multiple-pathway reaction networks, have been shown to be able to simulate transient metabolism of some well studied microorganisms and identify growth rate, substrate uptake rates, and byproduct rates under different growth conditions. These rates can be identified and used to replace specific microbially-mediated reaction rates in a reactive transport model using local geochemical conditions as constraints. We previously demonstrated the potential utility of integrating a constraint-based microbial metabolism model with a reactive transport simulator as applied to bioremediation of uranium in groundwater. However, that work relied on an indirect coupling approach that was effective for initial demonstration but may not be extensible to more complex problems that are of significant interest (e.g., communities of microbial species and multiple constraining variables). Here, we extend that work by presenting and demonstrating a method of directly integrating a reactive transport model (FORTRAN code) with constraint-based in silico models solved with IBM ILOG CPLEX linear optimizer base system (C library). The models were integrated with BABEL, a language interoperability tool. The
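The direct-coupling scheme these three records describe, a transport step alternating with a per-cell metabolism solve, can be sketched with operator splitting. Here the genome-scale LP (solved via CPLEX in the paper) is replaced by a toy stand-in function, and transport is one-dimensional upwind advection; all parameters are hypothetical:

```python
# Conceptual operator-splitting loop: transport, then a local "in silico"
# metabolism solve in every grid cell, repeated each time step.
def metabolism(conc, biomass, dt):
    """Toy stand-in for a constraint-based solve: uptake is limited both by
    capacity (Vmax) and by what is locally available; growth is tied to
    uptake by a fixed yield. Parameters are illustrative."""
    Vmax, Y = 2.0, 0.4
    uptake = min(Vmax * biomass * dt, conc)
    return conc - uptake, biomass + Y * uptake

n, dt, v = 5, 0.1, 1.0                 # cells, time step, pore velocity
conc = [1.0] + [0.0] * (n - 1)         # substrate pulse enters at cell 0
bio  = [0.01] * n
for _ in range(50):
    # transport step: explicit upwind advection (CFL number = v*dt = 0.1)
    flux = [v * dt * c for c in conc]
    conc = [conc[i] - flux[i] + (flux[i - 1] if i else 0.0)
            for i in range(n)]
    # reaction step: local metabolism solve, cell by cell
    for i in range(n):
        conc[i], bio[i] = metabolism(conc[i], bio[i], dt)
print([round(b, 4) for b in bio])      # biomass grows where substrate passed
```

In the actual implementation each `metabolism` call is a full constraint-based optimization constrained by local geochemistry, which is what makes the language-interoperability layer (BABEL bridging FORTRAN and the CPLEX C library) necessary.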

  16. Next-generation genome-scale models for metabolic engineering

    DEFF Research Database (Denmark)

    King, Zachary A.; Lloyd, Colton J.; Feist, Adam M.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict… examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering.

  17. Use of an uncertainty analysis for genome-scale models as a prediction tool for microbial growth processes in subsurface environments.

    Science.gov (United States)

    Klier, Christine

    2012-03-06

    The integration of genome-scale, constraint-based models of microbial cell function into simulations of contaminant transport and fate in complex groundwater systems is a promising approach to help characterize the metabolic activities of microorganisms in natural environments. In constraint-based modeling, the specific uptake flux rates of external metabolites are usually determined by Michaelis-Menten kinetic theory. However, extensive data sets based on experimentally measured values are not always available. In this study, a genome-scale model of Pseudomonas putida was used to study the key issue of uncertainty arising from the parametrization of the influx of two growth-limiting substrates: oxygen and toluene. The results showed that simulated growth rates are highly sensitive to substrate affinity constants and that uncertainties in specific substrate uptake rates have a significant influence on the variability of simulated microbial growth. Michaelis-Menten kinetic theory does not, therefore, seem to be appropriate for descriptions of substrate uptake processes in the genome-scale model of P. putida. Microbial growth rates of P. putida in subsurface environments can only be accurately predicted if the processes of complex substrate transport and microbial uptake regulation are sufficiently understood in natural environments and if data-driven uptake flux constraints can be applied.
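The uncertainty the study quantifies enters through the Michaelis-Menten boundary condition on substrate influx, v = Vmax·S/(Km + S). A few lines show why simulated growth is so sensitive to the affinity constant when ambient concentrations are near Km (the values below are illustrative, not the P. putida parameters):

```python
# Michaelis-Menten uptake kinetics: v = Vmax * S / (Km + S).
def uptake(S, Vmax, Km):
    return Vmax * S / (Km + S)

S = 0.05                         # ambient substrate, mmol/L (illustrative)
Vmax = 10.0                      # mmol/gDW/h (illustrative)
for Km in (0.001, 0.01, 0.1):    # affinity spanning two orders of magnitude
    print(Km, round(uptake(S, Vmax, Km), 2))
# Near S ~ Km the uptake flux (and hence the growth rate the genome-scale
# model predicts) swings by a factor of ~3 across plausible Km values.
```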

  18. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DEFF Research Database (Denmark)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.

    2017-01-01

    Constraint-Based Reconstruction and Analysis (COBRA) is currently the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We have developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging…
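The numerical difficulty described above can be demonstrated in two lines: when values differ by enough orders of magnitude, IEEE double precision silently discards the small term, while exact rational arithmetic (the kind exact simplex solvers use, and quad precision approximates) preserves it:

```python
from fractions import Fraction

# In IEEE double precision, a term 20 orders of magnitude below its partner
# vanishes entirely when the two are combined:
big, tiny = 1e10, 1e-10
lost = (big + tiny) - big
print(lost)                      # 0.0 -- the small flux contribution is gone

# Exact rational arithmetic preserves it:
b, t = Fraction(10**10), Fraction(1, 10**10)
print((b + t) - b == t)          # True
```

ME-model fluxes span a comparable range, which is why a double-precision simplex basis can look "optimal" or "infeasible" spuriously, motivating the Double/Quad (DQQ) cascade.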

  19. Incorporating Protein Biosynthesis into the Saccharomyces cerevisiae Genome-scale Metabolic Model

    DEFF Research Database (Denmark)

    Olivares Hernandez, Roberto

    Based on the stoichiometric biochemical equations that occur in the cell, genome-scale metabolic models can quantify metabolic fluxes, which are regarded as the final representation of the physiological state of the cell. For Saccharomyces cerevisiae, the genome-scale model has been construc…

  20. Constraint-based modeling in microbial food biotechnology

    Science.gov (United States)

    Rau, Martin H.

    2018-01-01

    Genome-scale metabolic network reconstruction offers a means to leverage the value of the exponentially growing genomics data and integrate it with other biological knowledge in a structured format. Constraint-based modeling (CBM) enables both the qualitative and quantitative analyses of the reconstructed networks. The rapid advancements in these areas can benefit both the industrial production of microbial food cultures and their application in food processing. CBM provides several avenues for improving our mechanistic understanding of physiology and genotype–phenotype relationships. This is essential for the rational improvement of industrial strains, which can further be facilitated through various model-guided strain design approaches. CBM of microbial communities offers a valuable tool for the rational design of defined food cultures, where it can catalyze hypothesis generation and provide unintuitive rationales for the development of enhanced community phenotypes and, consequently, novel or improved food products. In the industrial-scale production of microorganisms for food cultures, CBM may enable a knowledge-driven bioprocess optimization by rationally identifying strategies for growth and stability improvement. Through these applications, we believe that CBM can become a powerful tool for guiding the areas of strain development, culture development and process optimization in the production of food cultures. Nevertheless, in order to make the correct choice of the modeling framework for a particular application and to interpret model predictions in a biologically meaningful manner, one should be aware of the current limitations of CBM. PMID:29588387

  1. Network Thermodynamic Curation of Human and Yeast Genome-Scale Metabolic Models

    Science.gov (United States)

    Martínez, Verónica S.; Quek, Lake-Ee; Nielsen, Lars K.

    2014-01-01

    Genome-scale models are used for an ever-widening range of applications. Although there has been much focus on specifying the stoichiometric matrix, the predictive power of genome-scale models equally depends on reaction directions. Two-thirds of reactions in the two eukaryotic reconstructions Homo sapiens Recon 1 and Yeast 5 are specified as irreversible. However, these specifications are mainly based on biochemical textbooks or on their similarity to other organisms and are rarely underpinned by detailed thermodynamic analysis. In this study, a new (to our knowledge) workflow combining network-embedded thermodynamic and flux variability analysis was used to evaluate existing irreversibility constraints in Recon 1 and Yeast 5 and to identify new ones. A total of 27 and 16 new irreversible reactions were identified in Recon 1 and Yeast 5, respectively, whereas only four reactions were found with directions incorrectly specified against thermodynamics (three in Yeast 5 and one in Recon 1). The workflow further identified for both models several isolated internal loops that require further curation. The framework also highlighted the need for substrate channeling (in human) and ATP hydrolysis (in yeast) for the essential reaction catalyzed by phosphoribosylaminoimidazole carboxylase in purine metabolism. Finally, the framework highlighted differences in proline metabolism between yeast (cytosolic anabolism and mitochondrial catabolism) and humans (exclusively mitochondrial metabolism). We conclude that network-embedded thermodynamics facilitates the specification and validation of irreversibility constraints in compartmentalized metabolic models, at the same time providing further insight into network properties. PMID:25028891

  2. A constraint-based model of Scheffersomyces stipitis for improved ethanol production

    Directory of Open Access Journals (Sweden)

    Liu Ting

    2012-09-01

    Background: As one of the best xylose-utilizing microorganisms, Scheffersomyces stipitis exhibits great potential for efficient fermentation of lignocellulosic biomass. Therefore, a comprehensive understanding of its unique physiological and metabolic characteristics is required to further improve its performance in cellulosic ethanol production. Results: A constraint-based genome-scale metabolic model for S. stipitis CBS 6054 was developed on the basis of its genomic, transcriptomic and literature information. The model iTL885 consists of 885 genes, 870 metabolites, and 1240 reactions. During the reconstruction process, 36 putative sugar transporters were reannotated and the metabolism of 7 sugars was illuminated. An essentiality study was conducted to predict essential genes on different growth media. Key factors affecting cell growth and ethanol formation were investigated by constraint-based analysis. Furthermore, the uptake systems and metabolic routes of xylose were elucidated, and optimization strategies for the overproduction of ethanol were proposed from both genetic and environmental perspectives. Conclusions: Systems biology modelling has proven to be a powerful tool for targeting metabolic changes. Thus, this systematic investigation of the metabolism of S. stipitis could be used as a starting point for future experiment designs aimed at identifying the metabolic bottlenecks of this important yeast.
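
The essentiality study mentioned in this record is commonly implemented by constraining a reaction's flux to zero and re-solving the FBA problem; a minimal sketch on a hypothetical toy network (reaction-level knockouts for illustration, not iTL885 itself):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 uptake->A, v2 A->B, v3 B->biomass (objective)
S = np.array([[1, -1, 0], [0, 1, -1]])
c = np.array([0.0, 0.0, -1.0])
base_bounds = [(0, 10), (0, 1000), (0, 1000)]

def growth(bounds):
    """Maximal biomass flux under the given flux bounds."""
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun if res.success else 0.0

wild_type = growth(base_bounds)
for i in range(3):
    ko = list(base_bounds)
    ko[i] = (0.0, 0.0)                 # simulate the knockout
    ratio = growth(ko) / wild_type
    print(f"reaction v{i+1}: growth ratio {ratio:.2f}",
          "(essential)" if ratio < 0.05 else "")
```

In this linear chain every reaction is essential; in a real network, redundancy and alternative pathways make the outcome far less obvious, which is what makes the genome-scale scan informative.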

  3. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed not only to integrate these omics data but also to use them to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
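
One simple flavor of such FBA-based integration (in the spirit of E-Flux; the methods surveyed in this review differ in detail) caps each reaction's flux bound in proportion to the expression of its associated gene. A sketch with hypothetical expression values on a toy network:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (metabolites x reactions): v1 uptake->A, v2 A->B, v3 B->biomass
S = np.array([[1, -1, 0], [0, 1, -1]])
c = np.array([0.0, 0.0, -1.0])          # maximize biomass flux v3

# Hypothetical mRNA levels for the genes behind v1..v3
expression = np.array([50.0, 5.0, 80.0])

# E-Flux-style constraint: cap each flux in proportion to expression
ub = 10.0 * expression / expression.max()
bounds = [(0.0, u) for u in ub]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # the lowly expressed v2 now bottlenecks biomass production
```

The expression vector and the scaling constant are illustrative; real methods normalize expression per gene-protein-reaction rule and handle isoenzymes and complexes explicitly.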

  4. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    NARCIS (Netherlands)

    Olivier, B.G.; Bergmann, F.T.

    2015-01-01

    Constraint-based modeling is a well-established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome-scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux ...

  5. Network thermodynamic curation of human and yeast genome-scale metabolic models.

    Science.gov (United States)

    Martínez, Verónica S; Quek, Lake-Ee; Nielsen, Lars K

    2014-07-15

    Genome-scale models are used for an ever-widening range of applications. Although there has been much focus on specifying the stoichiometric matrix, the predictive power of genome-scale models equally depends on reaction directions. Two-thirds of reactions in the two eukaryotic reconstructions Homo sapiens Recon 1 and Yeast 5 are specified as irreversible. However, these specifications are mainly based on biochemical textbooks or on their similarity to other organisms and are rarely underpinned by detailed thermodynamic analysis. In this study, a new (to our knowledge) workflow combining network-embedded thermodynamic and flux variability analysis was used to evaluate existing irreversibility constraints in Recon 1 and Yeast 5 and to identify new ones. A total of 27 and 16 new irreversible reactions were identified in Recon 1 and Yeast 5, respectively, whereas only four reactions were found with directions incorrectly specified against thermodynamics (three in Yeast 5 and one in Recon 1). The workflow further identified for both models several isolated internal loops that require further curation. The framework also highlighted the need for substrate channeling (in human) and ATP hydrolysis (in yeast) for the essential reaction catalyzed by phosphoribosylaminoimidazole carboxylase in purine metabolism. Finally, the framework highlighted differences in proline metabolism between yeast (cytosolic anabolism and mitochondrial catabolism) and humans (exclusively mitochondrial metabolism). We conclude that network-embedded thermodynamics facilitates the specification and validation of irreversibility constraints in compartmentalized metabolic models, at the same time providing further insight into network properties.
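
The direction checks described here ultimately rest on the sign of the standard transformed reaction energy, computed from formation energies as ΔrG' = Σᵢ nᵢ ΔfG'ᵢ. A minimal sketch with purely illustrative (not measured) formation energies for a hexokinase-like reaction:

```python
# Hypothetical standard transformed formation energies in kJ/mol
# (illustrative numbers, not curated thermodynamic data)
dfG = {"glc": -917.0, "atp": -2298.0, "g6p": -1800.0, "adp": -1428.0}

# Hexokinase: glc + atp -> g6p + adp  (products +, substrates -)
stoich = {"glc": -1, "atp": -1, "g6p": 1, "adp": 1}
drG = sum(n * dfG[m] for m, n in stoich.items())
print(f"dG'r = {drG:.1f} kJ/mol")

# A reaction annotated irreversible in the forward direction is
# thermodynamically consistent only if drG is (sufficiently) negative.
assert drG < 0, "annotated direction conflicts with thermodynamics"
```

The network-embedded version of this check additionally propagates feasible metabolite concentration ranges through the whole network, which is what lets it catch directions that look fine in isolation.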

  6. Modeling Lactococcus lactis using a genome-scale flux model

    Directory of Open Access Journals (Sweden)

    Nielsen Jens

    2005-06-01

    Background: Genome-scale flux models are useful tools for representing and analyzing microbial metabolism. In this work we reconstructed the metabolic network of the lactic acid bacterium Lactococcus lactis and developed a genome-scale flux model able to simulate and analyze network capabilities and whole-cell function under aerobic and anaerobic continuous cultures. Flux balance analysis (FBA) and minimization of metabolic adjustment (MOMA) were used as modeling frameworks. Results: The metabolic network was reconstructed using the annotated genome sequence of L. lactis ssp. lactis IL1403 together with physiological and biochemical information. The established network comprised a total of 621 reactions and 509 metabolites, representing the overall metabolism of L. lactis. Experimental data reported in the literature were used to fit the model to phenotypic observations. Regulatory constraints had to be included to simulate certain metabolic features, such as the shift from homolactic to heterolactic fermentation. A minimal medium for in silico growth was identified, indicating the requirement of four amino acids in addition to a sugar. Remarkably, de novo biosynthesis of four other amino acids was observed even when all amino acids were supplied, which is in good agreement with experimental observations. Additionally, enhanced metabolic engineering strategies for improved diacetyl-producing strains were designed. Conclusion: The L. lactis metabolic network can now be used for a better understanding of lactococcal metabolic capabilities and potential, for the design of enhanced metabolic engineering strategies and for integration with other types of 'omic' data, to assist in finding new information on cellular organization and function.
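
MOMA, one of the two frameworks used in this record, replaces the linear FBA objective with a quadratic one: after a perturbation, find the feasible flux distribution closest (in Euclidean distance) to the wild-type solution. A sketch on a hypothetical toy network with an isoenzyme pair, using SciPy:

```python
import numpy as np
from scipy.optimize import linprog, minimize

# Toy network: v1 uptake->A; v2a, v2b isoenzymes A->B; v3 B->biomass
S = np.array([[1.0, -1.0, -1.0, 0.0],
              [0.0, 1.0, 1.0, -1.0]])
bounds = [(0, 10), (0, 6), (0, 6), (0, 1000)]

# Wild-type reference flux distribution from plain FBA (maximize v3)
wt = linprog([0, 0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds).x

# Knock out isoenzyme v2a, then solve the MOMA quadratic program:
# minimize ||v - wt||^2 subject to S v = 0 and the knockout bounds
ko_bounds = list(bounds)
ko_bounds[1] = (0.0, 0.0)
res = minimize(lambda v: np.sum((v - wt) ** 2), x0=np.zeros(4),
               method="SLSQP", bounds=ko_bounds,
               constraints={"type": "eq", "fun": lambda v: S @ v})
print(res.x)  # flux reroutes through v2b; growth drops from 10 to 6
```

Dedicated QP solvers are used in practice; SLSQP merely keeps this sketch self-contained.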

  7. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    Science.gov (United States)

    Olivier, Brett G; Bergmann, Frank T

    2015-09-04

    Constraint-based modeling is a well-established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome-scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models that can be built upon by the community to meet future needs (e.g. by extending it to cover dynamic FBC models).
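
As a rough sketch of what the FBC package adds to an SBML file, the fragment below encodes flux bounds and an objective (element and attribute names recalled from FBC version 2; consult the specification for the authoritative schema):

```xml
<!-- Reaction-level flux bounds refer to SBML parameters by id -->
<reaction id="R_biomass" reversible="false"
          fbc:lowerFluxBound="zero_bound" fbc:upperFluxBound="default_ub"/>

<!-- The active objective: maximize flux through R_biomass -->
<fbc:listOfObjectives fbc:activeObjective="obj1">
  <fbc:objective fbc:id="obj1" fbc:type="maximize">
    <fbc:listOfFluxObjectives>
      <fbc:fluxObjective fbc:reaction="R_biomass" fbc:coefficient="1"/>
    </fbc:listOfFluxObjectives>
  </fbc:objective>
</fbc:listOfObjectives>
```

Because bounds and objectives are first-class elements rather than free-form annotations, any FBC-aware tool can reconstruct the exact FBA problem without guessing.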

  8. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome-scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome...

  9. Searching for genomic constraints

    Energy Technology Data Exchange (ETDEWEB)

    Liò, P. [Cambridge Univ. (United Kingdom), Genetics Dept.]; Ruffo, S. [Florence Univ. (Italy), Fac. di Ingegneria, Dipt. di Energetica 'S. Stecco']

    1998-01-01

    The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, using different correlation methods. They have distinguished those base-compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors determining base-compositional rules and genome complexity. Three main findings are reported: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase in repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; and word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour.

  10. Searching for genomic constraints

    International Nuclear Information System (INIS)

    Liò, P.; Ruffo, S.

    1998-01-01

    The authors have analyzed general properties of very long DNA sequences belonging to simple and complex organisms, using different correlation methods. They have distinguished those base-compositional rules that concern the entire genome, which they call 'genomic constraints', from the rules that depend on the 'external natural selection' acting on single genes, i.e. protein-centered constraints. They show that G + C content, purine/pyrimidine distributions and the biological complexity of the organism are the most important factors determining base-compositional rules and genome complexity. Three main findings are reported: bacteria with high G + C content have more restrictions on base composition than those with low G + C content; at constant G + C content, more complex organisms, ranging from prokaryotes to higher eukaryotes (e.g. human), display an increase in repeats 10-20 nucleotides long, which are also partly responsible for long-range correlations; and word selection of length 3 to 10 is stronger in human and in bacteria, for two distinct reasons. With respect to previous studies, they have also compared the genomic sequence of the archaeon Methanococcus jannaschii with those of bacteria and eukaryotes: it sometimes shows an intermediate statistical behaviour.
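
The base-compositional statistics discussed in these two records start from quantities like G + C content and k-nucleotide word frequencies, which are straightforward to compute; a minimal sketch on a toy sequence:

```python
from collections import Counter

def gc_content(seq: str) -> float:
    """Fraction of G or C bases in the sequence."""
    return sum(b in "GC" for b in seq) / len(seq)

def word_counts(seq: str, k: int) -> Counter:
    """Occurrences of each overlapping k-nucleotide word."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

seq = "ATGCGCGCATATATGCGC"  # toy sequence, not genomic data
print(round(gc_content(seq), 2))
print(word_counts(seq, 3).most_common(2))
```

Over-represented words relative to a null model with the same G + C content are one signature of the "word selection" the authors describe.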

  11. Improved evidence-based genome-scale metabolic models for maize leaf, embryo, and endosperm

    Energy Technology Data Exchange (ETDEWEB)

    Seaver, Samuel M. D.; Bradbury, Louis M. T.; Frelin, Océane; Zarecki, Raphy; Ruppin, Eytan; Hanson, Andrew D.; Henry, Christopher S.

    2015-03-10

    There is a growing demand for genome-scale metabolic reconstructions for plants, fueled by the need to understand the metabolic basis of crop yield and by progress in genome and transcriptome sequencing. Methods are also required to enable the interpretation of plant transcriptome data to study how cellular metabolic activity varies under different growth conditions or even within different organs, tissues, and developmental stages. Such methods depend extensively on the accuracy with which genes have been mapped to the biochemical reactions in the plant metabolic pathways. Errors in these mappings lead to metabolic reconstructions with an inflated number of reactions and possible generation of unreliable metabolic phenotype predictions. Here we introduce a new evidence-based genome-scale metabolic reconstruction of maize, with significant improvements in the quality of the gene-reaction associations included within our model. We also present a new approach for applying our model to predict active metabolic genes based on transcriptome data. This method includes a minimal set of reactions associated with low-expression genes to enable activity of a maximum number of reactions associated with high-expression genes. We apply this method to construct an organ-specific model for the maize leaf, and tissue-specific models for maize embryo and endosperm cells. We validate our models using fluxomics data for the endosperm and embryo, demonstrating an improved capacity of our models to fit the available fluxomics data. All models are publicly available via the DOE Systems Biology Knowledgebase and PlantSEED, and our new method is generally applicable for analysis of transcript profiles from any plant, paving the way for further in silico studies with a wide variety of plant genomes.

  12. Genome-scale biological models for industrial microbial systems.

    Science.gov (United States)

    Xu, Nan; Ye, Chao; Liu, Liming

    2018-04-01

    The primary aims and challenges associated with microbial fermentation include achieving faster cell growth, higher productivity, and more robust production processes. Genome-scale biological models, predicting the interactions among genetic materials, enzymes, and metabolites, constitute a systematic and comprehensive platform to analyze and optimize microbial growth and the production of biological products. Genome-scale biological models can help optimize microbial growth-associated traits by simulating biomass formation, predicting growth rates, and identifying the requirements for cell growth. With regard to microbial product biosynthesis, genome-scale biological models can be used to design product biosynthetic pathways, accelerate production efficiency, and reduce metabolic side effects, leading to improved production performance. The present review discusses the development of microbial genome-scale biological models since their emergence and emphasizes their pertinent application in improving industrial microbial fermentation of biological products.

  13. Metabolic network reconstruction and genome-scale model of butanol-producing strain Clostridium beijerinckii NCIMB 8052

    Directory of Open Access Journals (Sweden)

    Kim Pan-Jun

    2011-08-01

    Background: Solventogenic clostridia offer a sustainable alternative to petroleum-based production of butanol, an important chemical feedstock and potential fuel additive or replacement. C. beijerinckii is an attractive microorganism for strain design to improve butanol production because it (i) naturally produces the highest recorded butanol concentrations as a byproduct of fermentation, and (ii) can co-ferment pentose and hexose sugars (the primary products of lignocellulosic hydrolysis). Interrogating C. beijerinckii metabolism from a systems viewpoint using constraint-based modeling allows simulation of the global effect of genetic modifications. Results: We present the first genome-scale metabolic model (iCM925) for C. beijerinckii, containing 925 genes, 938 reactions, and 881 metabolites. To build the model we employed a semi-automated procedure that integrated genome annotation information from KEGG, BioCyc, and The SEED, and utilized computational algorithms with manual curation to improve model completeness. Interestingly, we found only a 34% overlap in reactions collected from the three databases, highlighting the importance of evaluating the predictive accuracy of the resulting genome-scale model. To validate iCM925, we conducted fermentation experiments using the NCIMB 8052 strain and evaluated the ability of the model to simulate measured substrate uptake and product production rates. Experimentally observed fermentation profiles were found to lie within the solution space of the model; however, under an optimal growth objective, additional constraints were needed to reproduce the observed profiles, suggesting the existence of selective pressures other than optimal growth. Notably, a significantly enriched fraction of actively utilized reactions in simulations constrained to reflect experimental rates originated from the set of reactions that overlapped between all three databases (P = 3.52 × 10^-9, Fisher's exact test)...
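
The enrichment P-value reported in this record comes from Fisher's exact test on a 2x2 contingency table; a sketch with hypothetical counts (not the iCM925 data) using SciPy:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: reactions actively used in simulation vs. not,
# split by whether they appeared in all three source databases
#                 in all 3 DBs   not in all 3
# active              120             40
# inactive            200            578
table = [[120, 40], [200, 578]]
odds, p = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds:.2f}, P = {p:.2e}")
```

A one-sided alternative is used here because the hypothesis is specifically that database-consensus reactions are over-represented among active ones.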

  14. Genome-scale modeling of yeast: chronology, applications and critical perspectives.

    Science.gov (United States)

    Lopes, Helder; Rocha, Isabel

    2017-08-01

    Over the last 15 years, several genome-scale metabolic models (GSMMs) were developed for different yeast species, aiding both the elucidation of new biological processes and the shift toward a bio-based economy, through the design of in silico inspired cell factories. Here, an historical perspective of the GSMMs built over time for several yeast species is presented and the main inheritance patterns among the metabolic reconstructions are highlighted. We additionally provide a critical perspective on the overall genome-scale modeling procedure, underlining incomplete model validation and evaluation approaches and the quest for the integration of regulatory and kinetic information into yeast GSMMs. A summary of experimentally validated model-based metabolic engineering applications of yeast species is further emphasized, while the main challenges and future perspectives for the field are finally addressed. © FEMS 2017.

  15. Genome-scale metabolic models applied to human health and disease.

    Science.gov (United States)

    Cook, Daniel J; Nielsen, Jens

    2017-11-01

    Advances in genome sequencing, high throughput measurement of gene and protein expression levels, data accessibility, and computational power have allowed genome-scale metabolic models (GEMs) to become a useful tool for understanding metabolic alterations associated with many different diseases. Despite the proven utility of GEMs, researchers confront multiple challenges in the use of GEMs, their application to human health and disease, and their construction and simulation in an organ-specific and disease-specific manner. Several approaches that researchers are taking to address these challenges include using proteomic and transcriptomic-informed methods to build GEMs for individual organs, diseases, and patients and using constraints on model behavior during simulation to match observed metabolic fluxes. We review the challenges facing researchers in the use of GEMs, review the approaches used to address these challenges, and describe advances that are on the horizon and could lead to a better understanding of human metabolism. WIREs Syst Biol Med 2017, 9:e1393. doi: 10.1002/wsbm.1393 For further resources related to this article, please visit the WIREs website. © 2017 Wiley Periodicals, Inc.

  16. Evaluation of a Genome-Scale In Silico Metabolic Model for Geobacter metallireducens by Using Proteomic Data from a Field Biostimulation Experiment

    Science.gov (United States)

    Fang, Yilin; Yabusaki, Steven B.; Lipton, Mary S.; Long, Philip E.

    2012-01-01

    Accurately predicting the interactions between microbial metabolism and the physical subsurface environment is necessary to enhance subsurface energy development, soil and groundwater cleanup, and carbon management. This study was an initial attempt to confirm the metabolic functional roles within an in silico model using environmental proteomic data collected during field experiments. Shotgun global proteomics data collected during a subsurface biostimulation experiment were used to validate a genome-scale metabolic model of Geobacter metallireducens—specifically, the ability of the metabolic model to predict metal reduction, biomass yield, and growth rate under dynamic field conditions. The constraint-based in silico model of G. metallireducens relates an annotated genome sequence to the physiological functions with 697 reactions controlled by 747 enzyme-coding genes. Proteomic analysis showed that 180 of the 637 G. metallireducens proteins detected during the 2008 experiment were associated with specific metabolic reactions in the in silico model. When the field-calibrated Fe(III) terminal electron acceptor process reaction in a reactive transport model for the field experiments was replaced with the genome-scale model, the model predicted that the largest metabolic fluxes through the in silico model reactions generally correspond to the highest abundances of proteins that catalyze those reactions. Central metabolism predicted by the model agrees well with protein abundance profiles inferred from proteomic analysis. Model discrepancies with the proteomic data, such as the relatively low abundances of proteins associated with amino acid transport and metabolism, revealed pathways or flux constraints in the in silico model that could be updated to more accurately predict metabolic processes that occur in the subsurface environment. PMID:23042184

  17. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repo...

  18. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metaboli...... network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction....

  19. Genome-scale metabolic models as platforms for strain design and biological discovery.

    Science.gov (United States)

    Mienda, Bashir Sajo

    2017-07-01

    Genome-scale metabolic models (GEMs) have been developed and used in guiding systems metabolic engineering strategies for strain design and development. This strategy has been used in the fermentative production of bio-based industrial chemicals and fuels from alternative carbon sources. However, computer-aided hypothesis building using established algorithms and software platforms for biological discovery can be integrated into the pipeline for strain design strategy to create superior strains of microorganisms for targeted biosynthetic goals. Here, I describe an integrated workflow strategy using GEMs for strain design and biological discovery. Specific case studies of strain design and biological discovery using the Escherichia coli genome-scale model are presented and discussed. The integrated workflow presented herein, when applied carefully, would help guide future design strategies for high-performance microbial strains that have existing and forthcoming genome-scale metabolic models.

  20. In Silico Genome-Scale Reconstruction and Validation of the Corynebacterium glutamicum Metabolic Network

    DEFF Research Database (Denmark)

    Kjeldsen, Kjeld Raunkjær; Nielsen, J.

    2009-01-01

    A genome-scale metabolic model of the Gram-positive bacterium Corynebacterium glutamicum ATCC 13032 was constructed, comprising 446 reactions and 411 metabolites, based on the annotated genome and available biochemical information. The network was analyzed using constraint-based methods. The model...... was extensively validated against published flux data, and flux distribution values were found to correlate well between simulations and experiments. The split lysine synthesis pathway of C. glutamicum was investigated, and it was found that the direct dehydrogenase variant gave a higher lysine...... yield than the alternative succinyl pathway at high lysine production rates. The NADPH demand of the network was not found to be critical for lysine production until lysine yields exceeded 55% (mmol lysine (mmol glucose)^-1). The model was validated during growth on the organic acids acetate...

  1. A mathematical framework for yield (vs. rate) optimization in constraint-based modeling and applications in metabolic engineering.

    Science.gov (United States)

    Klamt, Steffen; Müller, Stefan; Regensburger, Georg; Zanghellini, Jürgen

    2018-02-07

    The optimization of metabolic rates (as linear objective functions) represents the methodical core of flux-balance analysis techniques which have become a standard tool for the study of genome-scale metabolic models. Besides (growth and synthesis) rates, metabolic yields are key parameters for the characterization of biochemical transformation processes, especially in the context of biotechnological applications. However, yields are ratios of rates, and hence the optimization of yields (as nonlinear objective functions) under arbitrary linear constraints is not possible with current flux-balance analysis techniques. Despite the fundamental importance of yields in constraint-based modeling, a comprehensive mathematical framework for yield optimization is still missing. We present a mathematical theory that allows one to systematically compute and analyze yield-optimal solutions of metabolic models under arbitrary linear constraints. In particular, we formulate yield optimization as a linear-fractional program. For practical computations, we transform the linear-fractional yield optimization problem to a (higher-dimensional) linear problem. Its solutions determine the solutions of the original problem and can be used to predict yield-optimal flux distributions in genome-scale metabolic models. For the theoretical analysis, we consider the linear-fractional problem directly. Most importantly, we show that the yield-optimal solution set (like the rate-optimal solution set) is determined by (yield-optimal) elementary flux vectors of the underlying metabolic model. However, yield- and rate-optimal solutions may differ from each other, and hence optimal (biomass or product) yields are not necessarily obtained at solutions with optimal (growth or synthesis) rates. Moreover, we discuss phase planes/production envelopes and yield spaces, in particular, we prove that yield spaces are convex and provide algorithms for their computation. We illustrate our findings by a small
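
The linear-fractional-to-linear transformation described in this abstract is the classical Charnes-Cooper substitution w = t·v with the denominator flux normalized to one. A sketch on a hypothetical toy network in which a forced "maintenance" flux makes the optimal yield less than one:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 ->A (uptake), v2 A->P (product), v3 A-> (forced drain)
S = np.array([[1.0, -1.0, -1.0]])     # mass balance on A
lb = np.array([0.0, 0.0, 1.0])        # v3 must carry at least 1 unit
ub = np.array([10.0, 10.0, 10.0])
num = np.array([0.0, 1.0, 0.0])       # numerator: product flux v2
den = np.array([1.0, 0.0, 0.0])       # denominator: uptake flux v1

# Charnes-Cooper: with w = t*v and den@w = 1, maximize num@w subject to
# S w = 0, lb*t <= w <= ub*t, t >= 0 -- a plain LP in (w, t)
n = len(lb)
c = np.concatenate([-num, [0.0]])                     # maximize num@w
A_eq = np.vstack([np.hstack([S, np.zeros((1, 1))]),
                  np.append(den, 0.0)[None, :]])
b_eq = np.array([0.0, 1.0])
A_ub = np.vstack([np.hstack([np.eye(n), -ub[:, None]]),   # w <= ub*t
                  np.hstack([-np.eye(n), lb[:, None]])])  # w >= lb*t
b_ub = np.zeros(2 * n)
var_bounds = [(None, None)] * n + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=var_bounds)
w, t = res.x[:n], res.x[n]
print("optimal yield v2/v1 =", -res.fun, "at fluxes v =", w / t)
```

Here the yield optimum (0.9, attained at maximal uptake, where the fixed drain costs proportionally least) cannot be found by maximizing v2 alone, which is exactly the paper's point about yield versus rate objectives.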

  2. Reconstruction and analysis of a genome-scale metabolic model for Scheffersomyces stipitis

    Directory of Open Access Journals (Sweden)

    Balagurunathan Balaji

    2012-02-01

    Background: Fermentation of xylose, the major component of hemicellulose, is essential for economic conversion of lignocellulosic biomass to fuels and chemicals. The yeast Scheffersomyces stipitis (formerly known as Pichia stipitis) has the highest known native capacity for xylose fermentation and possesses several genes for lignocellulose bioconversion in its genome. Understanding the metabolism of this yeast at a global scale, by reconstructing its genome-scale metabolic model, is essential for manipulating its metabolic capabilities and for successful transfer of its capabilities to other industrial microbes. Results: We present a genome-scale metabolic model for Scheffersomyces stipitis, a native xylose-utilizing yeast. The model was reconstructed based on genome sequence annotation, detailed experimental investigation and known yeast physiology. The macromolecular composition of Scheffersomyces stipitis biomass was estimated experimentally, and its ability to grow on different carbon, nitrogen, sulphur and phosphorus sources was determined by phenotype microarrays. The compartmentalized model, developed by an iterative procedure, accounted for 814 genes, 1371 reactions, and 971 metabolites. In silico computed growth rates were compared with high-throughput phenotyping data, and the model could predict the qualitative outcomes for 74% of the substrates investigated. Model simulations were used to identify the biosynthetic requirements for anaerobic growth of Scheffersomyces stipitis on glucose, and the results were validated against published literature. The bottlenecks in the Scheffersomyces stipitis metabolic network for xylose uptake and nucleotide cofactor recycling were identified by in silico flux variability analysis. The scope of the model in enhancing mechanistic understanding of microbial metabolism is demonstrated by identifying a mechanism for mitochondrial respiration and oxidative phosphorylation. Conclusions: The genome-scale...

  3. Analysis of Piscirickettsia salmonis Metabolism Using Genome-Scale Reconstruction, Modeling, and Testing

    Directory of Open Access Journals (Sweden)

    María P. Cortés

    2017-12-01

    Full Text Available Piscirickettsia salmonis is an intracellular bacterial fish pathogen that causes piscirickettsiosis, a disease with a highly adverse impact on the Chilean salmon farming industry. The development of effective treatment and control methods for piscirickettsiosis is still a challenge. To meet this challenge, the number of studies on P. salmonis has grown in the last couple of years, but many aspects of the pathogen’s biology are still poorly understood. Studies on its metabolism are scarce, and only recently was a metabolic model developed for the reference strain LF-89. We present a new genome-scale model for P. salmonis LF-89 with more than twice as many genes as in the previous model and incorporating specific elements of the fish pathogen’s metabolism. Comparative analysis with models of different bacterial pathogens revealed a lower flexibility in the P. salmonis metabolic network. Through constraint-based analysis, we determined essential metabolites required for its growth and showed that it can benefit from different carbon sources tested experimentally in new defined media. We also built an additional model for strain A1-15972, and together with an analysis of the P. salmonis pangenome, we identified metabolic features that differentiate the two main species clades. Both models constitute a knowledge base for P. salmonis metabolism and can be used to guide the efficient culture of the pathogen and the identification of specific drug targets.

  4. Bayesian Hierarchical Scale Mixtures of Log-Normal Models for Inference in Reliability with Stochastic Constraint

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2017-06-01

    Full Text Available This paper develops Bayesian inference in reliability of a class of scale mixtures of log-normal failure time (SMLNFT) models with stochastic (or uncertain) constraint in their reliability measures. The class is comprehensive and includes existing failure time (FT) models (such as log-normal, log-Cauchy, and log-logistic FT models) as well as new models that are robust in terms of heavy-tailed FT observations. Since classical frequentist approaches to reliability analysis based on the SMLNFT model with stochastic constraint are intractable, the Bayesian method is pursued utilizing a Markov chain Monte Carlo (MCMC) sampling-based approach. This paper introduces a two-stage maximum entropy (MaxEnt) prior, which elicits the a priori uncertain constraint, and develops a Bayesian hierarchical SMLNFT model using the prior. The paper also proposes an MCMC method for Bayesian inference in SMLNFT model reliability and calls attention to properties of the MaxEnt prior that are useful for method development. Finally, two data sets are used to illustrate how the proposed methodology works.
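The MCMC sampling the abstract leans on can be illustrated with a deliberately reduced case. The sketch below assumes a known scale σ, a flat prior on the log-mean μ, and synthetic data; it is not the paper's two-stage MaxEnt construction, and all names (`loglik`, `post_mu`) are illustrative.

```python
import math, random

# Metropolis sampler for the log-mean mu of log-normal failure times
# (scale sigma assumed known, flat prior on mu -- hypothetical toy setup)
rng = random.Random(1)
sigma, true_mu = 0.5, 2.0
logs = [rng.gauss(true_mu, sigma) for _ in range(200)]  # log failure times

def loglik(mu):
    # log-likelihood of the log-normal sample, up to an additive constant
    return -sum((y - mu) ** 2 for y in logs) / (2 * sigma ** 2)

mu, chain = 0.0, []
for i in range(5000):
    prop = mu + rng.gauss(0, 0.2)                   # random-walk proposal
    if rng.random() < math.exp(min(0.0, loglik(prop) - loglik(mu))):
        mu = prop                                   # accept
    if i >= 1000:                                   # discard burn-in
        chain.append(mu)

post_mu = sum(chain) / len(chain)
# posterior reliability at t = 10: R(t) = P(T > t) = Phi((mu - ln t) / sigma)
R10 = 0.5 * (1 + math.erf((post_mu - math.log(10.0)) / (sigma * math.sqrt(2))))
print(round(post_mu, 2), round(R10, 2))
```

With 200 observations the posterior concentrates near the sample log-mean, so the chain average recovers μ ≈ 2 and a point estimate of the reliability follows directly.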

  5. A Generalized Radiation Model for Human Mobility: Spatial Scale, Searching Direction and Trip Constraint.

    Directory of Open Access Journals (Sweden)

    Chaogui Kang

    Full Text Available We generalized the recently introduced “radiation model”, in analogy to the generalization of the classic “gravity model”, to consolidate its universality for modeling diverse mobility systems. By imposing an appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicate that the generalized radiation model outperforms alternative mobility models in various empirical analyses.
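For reference, the classic radiation model that the paper generalizes predicts commuting flux from populations alone; the sketch below implements that base form only (the generalized exponent λ and normalization κ are omitted).

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Classic radiation model: expected trips from i to j, where T_i is the
    total trips leaving i, m_i / n_j are origin / destination populations, and
    s_ij is the population inside the circle of radius d(i, j) centered at i,
    excluding the populations of i and j themselves."""
    return T_i * m_i * n_j / ((m_i + s_ij) * (m_i + n_j + s_ij))

# with no intervening population, an equal-sized destination captures half the trips
print(radiation_flux(100, 10, 10, 0))   # -> 50.0
```

Increasing the intervening-opportunity term `s_ij` monotonically reduces the predicted flux, which is the model's parameter-free substitute for an explicit distance-decay function.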

  6. Genome-based microbial ecology of anammox granules in a full-scale wastewater treatment system

    NARCIS (Netherlands)

    Speth, D.R.; Zandt, M.H. in 't; Guerrero Cruz, S.; Dutilh, B.E.; Jetten, M.S.M.

    2016-01-01

    Partial-nitritation anammox (PNA) is a novel wastewater treatment procedure for energy-efficient ammonium removal. Here we use genome-resolved metagenomics to build a genome-based ecological model of the microbial community in a full-scale PNA reactor. Sludge from the bioreactor examined here is used to seed reactors in wastewater treatment plants around the world; however, the role of most of its microbial community in ammonium removal remains unknown.

  7. Thermodynamic analysis of regulation in metabolic networks using constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Mahadevan Radhakrishnan

    2010-05-01

    Full Text Available Abstract Background Geobacter sulfurreducens is a member of the Geobacter species, which are capable of oxidizing organic waste coupled to the reduction of heavy metals and electrodes, with applications in bioremediation and bioenergy generation. While the metabolism of this organism has been studied through the development of a stoichiometry-based genome-scale metabolic model, the associated regulatory network has not yet been well studied. In this manuscript, we report on the implementation of a thermodynamics-based metabolic flux model for Geobacter sulfurreducens. We use this updated model to identify reactions that are subject to regulatory control in the metabolic network of G. sulfurreducens using thermodynamic variability analysis. Findings As a first step, we validated the regulatory sites and bottleneck reactions predicted by the thermodynamic flux analysis in E. coli by evaluating the expression ranges of the corresponding genes. We then identified ten reactions in the metabolic network of G. sulfurreducens that are predicted to be candidates for regulation. We then compared the free energy ranges for these reactions with the corresponding gene expression fold changes under conditions of different environmental and genetic perturbations and show that the model predictions of regulation are consistent with data. In addition, we also identify reactions that operate close to equilibrium and show that the experimentally determined exchange coefficient (a measure of reversibility) is significant for these reactions. Conclusions Application of the thermodynamic constraints resulted in identification of potential bottleneck reactions not only from the central metabolism but also from the nucleotide and amino acid subsystems, thereby showing the highly coupled nature of the thermodynamic constraints. In addition, thermodynamic variability analysis serves as a valuable tool in estimating the ranges of ΔrG' of every reaction in the model.
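The thermodynamic variability idea — computing the feasible range of ΔrG' for a reaction under metabolite concentration bounds — can be sketched for a toy 1:1 reaction; the ΔrG° value and concentration bounds below are hypothetical.

```python
import math

R, T = 8.314e-3, 298.15   # gas constant in kJ/(mol*K), temperature in K

def drg_range(drg0, lb=1e-6, ub=1e-2):
    """Feasible range of DrG' = DrG0 + RT*ln([P]/[S]) for a 1:1 reaction
    S -> P with both concentrations bounded in [lb, ub] (molar)."""
    return (drg0 + R * T * math.log(lb / ub),
            drg0 + R * T * math.log(ub / lb))

lo, hi = drg_range(5.0)   # hypothetical standard energy of +5 kJ/mol
# a range entirely below zero marks an effectively irreversible step;
# a range spanning zero marks a near-equilibrium, regulation-candidate reaction
print(round(lo, 1), round(hi, 1))
```

Here four orders of magnitude of concentration freedom translate into roughly ±23 kJ/mol around ΔrG°, so the reaction's direction is not fixed by thermodynamics alone.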

  8. Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes

    Directory of Open Access Journals (Sweden)

    Putnam Nicholas H

    2011-10-01

    Full Text Available Abstract Background Many metazoan genomes conserve chromosome-scale gene linkage relationships (“macro-synteny”) from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. Results We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model (“DCJ-[C]”), and is available as open source software from http://github.com/putnamlab/dcj-c. Conclusions A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
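A toy simulation of inversion-only rearrangement (one of the moves in the DCJ family) shows how adjacency conservation decays with rearrangement; this is an illustrative sketch, not the authors' DCJ-[C] implementation.

```python
import random

def invert(genome, rng):
    # apply one random inversion: reverse the segment between two cut points
    i, j = sorted(rng.sample(range(len(genome) + 1), 2))
    return genome[:i] + genome[i:j][::-1] + genome[j:]

def conserved_adjacency_fraction(a, b):
    # fraction of gene adjacencies of `a` still adjacent (either order) in `b`
    adj = lambda g: {frozenset(p) for p in zip(g, g[1:])}
    return len(adj(a) & adj(b)) / len(adj(a))

rng = random.Random(0)
ancestor = list(range(100))       # 100 genes on one chromosome
derived = ancestor
for _ in range(30):               # 30 inversions
    derived = invert(derived, rng)
print(conserved_adjacency_fraction(ancestor, derived))
```

Since each inversion disrupts at most two adjacencies, 30 inversions on 99 adjacencies leave at least ~39% conserved; adding a constrained-gene rejection rule to `invert` would mimic the paper's constrained model.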

  9. Genome-based Modeling and Design of Metabolic Interactions in Microbial Communities.

    Science.gov (United States)

    Mahadevan, Radhakrishnan; Henson, Michael A

    2012-01-01

    Biotechnology research is traditionally focused on individual microbial strains that are perceived to have the necessary metabolic functions, or the capability to have these functions introduced, to achieve a particular task. For many important applications, the development of such omnipotent microbes is an extremely challenging if not impossible task. By contrast, nature employs a radically different strategy based on synergistic combinations of different microbial species that collectively achieve the desired task. These natural communities have evolved to exploit the native metabolic capabilities of each species and are highly adaptive to changes in their environments. However, microbial communities have proven difficult to study due to a lack of suitable experimental and computational tools. With the advent of genome sequencing, omics technologies, bioinformatics and genome-scale modeling, researchers now have unprecedented capabilities to analyze and engineer the metabolism of microbial communities. The goal of this review is to summarize recent applications of genome-scale metabolic modeling to microbial communities. A brief introduction to lumped community models is used to motivate the need for genome-level descriptions of individual species and their metabolic interactions. The review of genome-scale models begins with static modeling approaches, which are appropriate for communities where the extracellular environment can be assumed to be time invariant or slowly varying. Dynamic extensions of the static modeling approach are described, and then applications of genome-scale models for design of synthetic microbial communities are reviewed. The review concludes with a summary of metagenomic tools for analyzing community metabolism and an outlook for future research.

  11. Analysis of growth of Lactobacillus plantarum WCFS1 on a complex medium using a genome-scale metabolic model

    NARCIS (Netherlands)

    Teusink, B.; Wiersma, A.; Molenaar, D.; Francke, C.; Vos, de W.M.; Siezen, R.J.; Smid, E.J.

    2006-01-01

    A genome-scale metabolic model of the lactic acid bacterium Lactobacillus plantarum WCFS1 was constructed based on genomic content and experimental data. The complete model includes 721 genes, 643 reactions, and 531 metabolites. Different stoichiometric modeling techniques were used for

  12. Constraints based analysis of extended cybernetic models.

    Science.gov (United States)

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
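The diauxic pattern discussed for growth on glucose-galactose mixtures can be reproduced with a minimal sequential-uptake simulation; all rate, yield and induction parameters below are hypothetical, not fitted to S. cerevisiae.

```python
# toy diauxic growth: glucose is used first; the galactose enzyme e is
# induced only once glucose is exhausted, producing the diauxic lag
mu_glc, mu_gal, Y, k_ind, dt = 0.5, 0.3, 0.5, 1.0, 0.01   # illustrative values
X, S_glc, S_gal, e = 0.01, 2.0, 2.0, 0.0

for _ in range(5000):                                 # 50 time units
    if S_glc > 0:
        mu = mu_glc                                   # glucose phase
    elif S_gal > 0:
        e = min(1.0, e + k_ind * (1.0 - e) * dt)      # enzyme induction (lag)
        mu = mu_gal * e                               # galactose phase
    else:
        mu = 0.0                                      # both substrates gone
    dX = mu * X * dt
    X += dX
    if S_glc > 0:
        S_glc = max(0.0, S_glc - dX / Y)
    else:
        S_gal = max(0.0, S_gal - dX / Y)

print(round(X, 2), S_glc, S_gal)
```

Mass balance fixes the final biomass at roughly X0 + Y·(S_glc + S_gal) ≈ 2.01; shortening the lag (larger `k_ind`) is the toy analog of the reduced diauxic lag the abstract discusses.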

  13. Genome-scale modeling of the protein secretory machinery in yeast

    DEFF Research Database (Denmark)

    Feizi, Amir; Österlund, Tobias; Petranovic, Dina

    2013-01-01

    The protein secretory machinery in Eukarya is involved in post-translational modifications (PTMs) and sorting of the secretory and many transmembrane proteins. While the secretory machinery has been well-studied using classic reductionist approaches, a holistic view of its complex nature is lacking. [...] Here, we present the first genome-scale model for the yeast secretory machinery which captures the knowledge generated through more than 50 years of research. The model is based on the concept of a Protein Specific Information Matrix (PSIM), characterized by seven PTM features. An algorithm [...]

  14. Improved annotation through genome-scale metabolic modeling of Aspergillus oryzae

    DEFF Research Database (Denmark)

    Vongsangnak, Wanwipa; Olsen, Peter; Hansen, Kim

    2008-01-01

    Background: Since ancient times the filamentous fungus Aspergillus oryzae has been used in the fermentation industry for the production of fermented sauces and the production of industrial enzymes. Recently, the genome sequence of A. oryzae with 12,074 annotated genes was released, but the number [...] to a genome-scale metabolic model of A. oryzae. Results: From our assembled EST sequences we identified 1,046 newly predicted genes in the A. oryzae genome. Furthermore, it was possible to assign putative protein functions to 398 of the newly predicted genes. Notably, our annotation strategy resulted [...] model was validated and shown to correctly describe the phenotypic behavior of A. oryzae grown on different carbon sources. Conclusion: A much enhanced annotation of the A. oryzae genome was performed and a genome-scale metabolic model of A. oryzae was reconstructed. The model accurately predicted [...]

  15. TIGER: Toolbox for integrating genome-scale metabolic models, expression data, and transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Jensen Paul A

    2011-09-01

    Full Text Available Abstract Background Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion The TIGER package provides a consistent platform for algorithm development and extending existing genome-scale metabolic models with regulatory networks and high-throughput data.
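The core trick TIGER automates — encoding Boolean GPR rules as mixed-integer inequalities — can be checked in a few lines: over binary variables, `y = a AND b` and `y = a OR b` each become three linear inequalities.

```python
from itertools import product

# standard binary linearization used when GPR rules enter a MILP:
#   y = a AND b  ->  y <= a,  y <= b,  y >= a + b - 1
#   y = a OR  b  ->  y >= a,  y >= b,  y <= a + b
def and_ineqs(y, a, b):
    return y <= a and y <= b and y >= a + b - 1

def or_ineqs(y, a, b):
    return y >= a and y >= b and y <= a + b

# exhaustive check: the inequalities admit exactly the Boolean value of y
for a, b in product((0, 1), repeat=2):
    assert [y for y in (0, 1) if and_ineqs(y, a, b)] == [a & b]
    assert [y for y in (0, 1) if or_ineqs(y, a, b)] == [a | b]
print("AND/OR linearizations match the truth tables")
```

Nesting these templates gene by gene is what turns an arbitrary Boolean GPR rule into rows of a single mixed-integer program.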

  16. Enumeration of smallest intervention strategies in genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Axel von Kamp

    2014-01-01

    Full Text Available One ultimate goal of metabolic network modeling is the rational redesign of biochemical networks to optimize the production of certain compounds by cellular systems. Although several constraint-based optimization techniques have been developed for this purpose, methods for systematic enumeration of intervention strategies in genome-scale metabolic networks are still lacking. In principle, Minimal Cut Sets (MCSs; inclusion-minimal combinations of reaction or gene deletions that lead to the fulfilment of a given intervention goal) provide an exhaustive enumeration approach. However, their disadvantage is the combinatorial explosion in larger networks and the requirement to first compute the elementary modes (EMs), which itself is impractical in genome-scale networks. We present MCSEnumerator, a new method for effective enumeration of the smallest MCSs (with fewest interventions) in genome-scale metabolic network models. For this we combine two approaches, namely (i) the mapping of MCSs to EMs in a dual network, and (ii) a modified algorithm by which shortest EMs can be effectively determined in large networks. In this way, we can identify the smallest MCSs by calculating the shortest EMs in the dual network. Realistic application examples demonstrate that our algorithm is able to list thousands of the most efficient intervention strategies in genome-scale networks for various intervention problems. For instance, for the first time we could enumerate all synthetic lethals in E. coli with combinations of up to 5 reactions. We also applied the new algorithm exemplarily to compute strain designs for growth-coupled synthesis of different products (ethanol, fumarate, serine) by E. coli. We found numerous new engineering strategies partially requiring fewer knockouts and guaranteeing higher product yields (even without the assumption of optimal growth) than reported previously. The strength of the presented approach is that smallest intervention strategies can be
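The MCS concept itself (not the MCSEnumerator dual-network algorithm) can be demonstrated by brute force on a toy network: enumerate deletion sets in order of size and keep those that block the target flux and contain no smaller cut set.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# toy network: R1: -> A, R2: A -> B, R3: A -> B, R4: B -> (target flux)
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  1, -1]])   # metabolite B balance
n, TARGET = S.shape[1], 3

def max_target(knockouts):
    # maximal target flux at steady state with the given reactions removed
    bounds = [(0, 0) if r in knockouts else (0, 10) for r in range(n)]
    c = np.zeros(n); c[TARGET] = -1.0          # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

mcs = []
for size in range(1, n + 1):                   # smallest cut sets first
    for cut in combinations(range(n), size):
        if any(set(m) <= set(cut) for m in mcs):
            continue                           # superset: not inclusion-minimal
        if max_target(cut) < 1e-6:
            mcs.append(cut)
print(sorted(mcs))
```

For this network the minimal cut sets are {R1}, {R4} and {R2, R3}; the combinatorial loop is exactly what becomes intractable at genome scale and what the dual-network mapping avoids.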

  17. Expanding a dynamic flux balance model of yeast fermentation to genome-scale

    Science.gov (United States)

    2011-01-01

    Background Yeast is considered to be a workhorse of the biotechnology industry for the production of many value-added chemicals, alcoholic beverages and biofuels. Optimization of the fermentation is a challenging task that greatly benefits from dynamic models able to accurately describe and predict the fermentation profile and resulting products under different genetic and environmental conditions. In this article, we developed and validated a genome-scale dynamic flux balance model, using experimentally determined kinetic constraints. Results Appropriate equations for maintenance, biomass composition, anaerobic metabolism and nutrient uptake are key to improve model performance, especially for predicting glycerol and ethanol synthesis. Prediction profiles of synthesis and consumption of the main metabolites involved in alcoholic fermentation closely agreed with experimental data obtained from numerous lab and industrial fermentations under different environmental conditions. Finally, fermentation simulations of genetically engineered yeasts closely reproduced previously reported experimental results regarding final concentrations of the main fermentation products such as ethanol and glycerol. Conclusion A useful tool to describe, understand and predict metabolite production in batch yeast cultures was developed. The resulting model, if used wisely, could help to search for new metabolic engineering strategies to manage ethanol content in batch fermentations. PMID:21595919
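A minimal dynamic flux balance loop of the kind the article builds on alternates an LP solve with an Euler update of biomass and substrate; the toy two-reaction network and kinetic parameters below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# toy dynamic FBA: one internal metabolite G (intracellular glucose);
# reactions: v_glc (uptake: -> G) and v_bio (growth: 2 G -> biomass)
S = np.array([[1.0, -2.0]])                   # steady-state balance for G
Vmax, Km, dt = 10.0, 0.5, 0.05                # hypothetical kinetic parameters
X, Glc = 0.01, 10.0                           # biomass and external glucose

for _ in range(200):                          # 10 time units of Euler steps
    ub_glc = Vmax * Glc / (Km + Glc)          # Michaelis-Menten uptake bound
    res = linprog(c=[0.0, -1.0],              # maximize v_bio (linprog minimizes)
                  A_eq=S, b_eq=[0.0],
                  bounds=[(0.0, ub_glc), (0.0, None)])
    v_glc, mu = res.x
    Glc = max(0.0, Glc - v_glc * X * dt)      # substrate depletion
    X *= 1.0 + mu * dt                        # biomass growth

print(round(X, 2), round(Glc, 2))
```

The kinetic bound is the only coupling between the environment and the LP; in a genome-scale version the same loop runs over thousands of reactions and additional uptake bounds.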

  18. Characteristic Model-Based Robust Model Predictive Control for Hypersonic Vehicles with Constraints

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2017-06-01

    Full Text Available Designing robust control for hypersonic vehicles in reentry is difficult, due to features of these vehicles such as strong coupling, non-linearity, and multiple constraints. This paper proposes a characteristic model-based robust model predictive control (MPC) for hypersonic vehicles with reentry constraints. First, the hypersonic vehicle is modeled by a characteristic model composed of a linear time-varying system and a lumped disturbance. Then, the identification data are regenerated by the accumulative-sum idea from gray theory, which weakens the effect of random noise and strengthens the regularity of the identification data. Based on the regenerated data, the time-varying parameters and the disturbance are estimated online by gray identification. Finally, a mixed H2/H∞ robust predictive control law is derived based on linear matrix inequalities (LMIs) and receding-horizon optimization. By actively handling system constraints within the MPC framework, the input and state constraints are satisfied in the closed-loop control system. The validity of the proposed control is verified theoretically using Lyapunov theory and illustrated by simulation results.
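Stripped of the LMI machinery, the receding-horizon idea with input constraints reduces, for a scalar plant and a one-step horizon, to a clipped linear law; the system below is a hypothetical scalar plant, not the paper's hypersonic vehicle model.

```python
# scalar plant x+ = a*x + b*u + d with input constraint |u| <= u_max
# (d is a constant lumped disturbance, unknown to the controller);
# one-step-horizon MPC: minimize the predicted cost (a*x + b*u)^2 over
# the input box, i.e. clip the unconstrained minimizer u* = -a*x/b
a, b, d, u_max = 1.2, 0.8, 0.1, 1.0          # illustrative values

def mpc_step(x):
    u = -a * x / b                           # unconstrained one-step minimizer
    return max(-u_max, min(u_max, u))        # active constraint handling

x = 2.0
for _ in range(50):
    u = mpc_step(x)
    x = a * x + b * u + d                    # plant update

print(round(x, 3))                           # settles at the disturbance offset
```

The controller saturates at −u_max while far from the origin, then cancels the state exactly once inside the unconstrained region; the residual offset equals the unmodeled disturbance d, which is what the paper's online gray identification is meant to estimate away.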

  19. Consistency maintenance for constraint in role-based access control model

    Institute of Scientific and Technical Information of China (English)

    韩伟力; 陈刚; 尹建伟; 董金祥

    2002-01-01

    Constraint is an important aspect of role-based access control and is sometimes argued to be the principal motivation for role-based access control (RBAC). But so far few authors have discussed consistency maintenance for constraints in the RBAC model. Based on research into constraints among roles and the types of inconsistency among constraints, this paper introduces corresponding formal rules and rule-based reasoning methods to detect, avoid and resolve these inconsistencies. Finally, the paper briefly introduces the application of consistency maintenance in ZD-PDM, an enterprise-oriented product data management (PDM) system.

  1. Metingear: a development environment for annotating genome-scale metabolic models.

    Science.gov (United States)

    May, John W; James, A Gordon; Steinbeck, Christoph

    2013-09-01

    Genome-scale metabolic models often lack annotations that would allow them to be used for further analysis. Previous efforts have focused on associating metabolites in the model with a cross reference, but this can be problematic if the reference is not freely available, multiple resources are used or the metabolite is added from a literature review. Associating each metabolite with chemical structure provides unambiguous identification of the components and a more detailed view of the metabolism. We have developed an open-source desktop application that simplifies the process of adding database cross references and chemical structures to genome-scale metabolic models. Annotated models can be exported to the Systems Biology Markup Language open interchange format. Source code, binaries, documentation and tutorials are freely available at http://johnmay.github.com/metingear. The application is implemented in Java with bundles available for MS Windows and Macintosh OS X.

  2. Hysteresis modeling based on saturation operator without constraints

    International Nuclear Information System (INIS)

    Park, Y.W.; Seok, Y.T.; Park, H.J.; Chung, J.Y.

    2007-01-01

    This paper proposes a simple way to model complex hysteresis in a magnetostrictive actuator by employing saturation operators without constraints. Having no constraints causes a singularity problem, i.e. the inverse matrix cannot be obtained when calculating the weights. To overcome this, a pseudoinverse concept is introduced. Simulation results are compared with experimental data based on a Terfenol-D actuator. It is clear that the proposed model is much closer to the experimental data than the modified PI model: the relative error is 12% for the modified PI model and less than 1% for the proposed model.
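The pseudoinverse workaround the abstract describes can be shown directly: when the operator-output matrix is rank-deficient, the ordinary inverse fails but `numpy.linalg.pinv` still returns least-squares weights (the matrix and data below are made up for illustration).

```python
import numpy as np

# rank-deficient operator matrix: the ordinary inverse does not exist,
# but the Moore-Penrose pseudoinverse still gives least-squares weights
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # = 2 x first row, so A is singular
              [1.0, 0.0, 1.0]])
y = np.array([6.0, 12.0, 2.0])   # measurements, consistent with the dependent rows

w = np.linalg.pinv(A) @ y        # minimum-norm least-squares solution
print(np.allclose(A @ w, y))     # the weighted operators reproduce the data
```

For a consistent but singular system the pseudoinverse picks the minimum-norm solution among the infinitely many exact fits, which is why the weight calculation no longer breaks down.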

  3. Identifying anti-growth factors for human cancer cell lines through genome-scale metabolic modeling

    DEFF Research Database (Denmark)

    Ghaffari, Pouyan; Mardinoglu, Adil; Asplund, Anna

    2015-01-01

    Human cancer cell lines are used as important model systems to study molecular mechanisms associated with tumor growth, hereunder how genomic and biological heterogeneity found in primary tumors affects cellular phenotypes. We reconstructed genome-scale metabolic models (GEMs) for eleven cell lines [...] based on RNA-Seq data and validated the functionality of these models with data from metabolite profiling. We used cell line-specific GEMs to analyze the differences in the metabolism of cancer cell lines, and to explore the heterogeneous expression of the metabolic subsystems. Furthermore, we predicted [...] for inhibition of cell growth may provide leads for the development of efficient cancer treatment strategies.

  4. Structural constraints in the packaging of bluetongue virus genomic segments

    OpenAIRE

    Burkhardt, Christiane; Sung, Po-Yu; Celma, Cristina C.; Roy, Polly

    2014-01-01

    The mechanism used by bluetongue virus (BTV) to ensure the sorting and packaging of its 10 genomic segments is still poorly understood. In this study, we investigated the packaging constraints for two BTV genomic segments from two different serotypes. Segment 4 (S4) of BTV serotype 9 was mutated sequentially and packaging of mutant ssRNAs was investigated by two newly developed RNA packaging assay systems, one in vivo and the other in vitro. Modelling of the mutated ssRNA followed by bioche...

  6. solveME: fast and reliable solution of nonlinear ME models

    DEFF Research Database (Denmark)

    Yang, Laurence; Ma, Ding; Ebrahim, Ali

    2016-01-01

    Background: Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Results: Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models [...]
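The source of the growth-rate nonlinearity can be seen in miniature: dilution-style constraints couple fluxes to μ, but for any fixed μ they are linear, so the maximal growth rate can be bracketed by bisection over LP feasibility checks — the standard baseline that solvers like solveME improve upon. The two-constraint model below is a toy stand-in, not an actual ME reconstruction.

```python
from scipy.optimize import linprog

# toy ME-style coupling: metabolic flux v needs enzyme e (v <= k_cat * e),
# and sustaining growth rate mu costs flux scaling with mu and e
# (v >= mu * (1 + cost * e), a stand-in for dilution constraints)
K_CAT, COST = 10.0, 5.0          # hypothetical parameters

def feasible(mu):
    # variables [v, e] >= 0; rows:  v - K_CAT*e <= 0   and
    # -v + mu*COST*e <= -mu   (i.e. v >= mu*(1 + COST*e))
    res = linprog(c=[0.0, 0.0],
                  A_ub=[[1.0, -K_CAT], [-1.0, mu * COST]],
                  b_ub=[0.0, -mu],
                  bounds=[(0.0, None), (0.0, None)])
    return res.status == 0        # 0 = optimal (feasible), 2 = infeasible

lo, hi = 0.0, 10.0                # mu = 0 is feasible, mu = 10 is not
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print(round(lo, 4))               # analytic optimum for this toy model is mu = 2
```

Each bisection step is one LP solve, so the cost of the whole search is a few dozen LPs; quad-precision and better formulations are what make the same idea numerically reliable at ME-model scale.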

  7. Route constraints model based on polychromatic sets

    Science.gov (United States)

    Yin, Xianjun; Cai, Chao; Wang, Houjun; Li, Dongwu

    2018-03-01

    With the development of unmanned aerial vehicle (UAV) technology, its fields of application are constantly expanding. Mission planning is especially important for UAVs, as the planning result directly determines whether the UAV can accomplish its task. To make UAV mission planning results more realistic, it is necessary to consider not only the physical properties of the aircraft but also the constraints among the various equipment on the UAV. However, these constraints are complex, and the equipment has strong diversity and variability, which makes the constraints difficult to describe. To address this problem, this paper presents a mission constraint model for UAVs based on polychromatic sets, drawing on the polychromatic sets theory used in the advanced manufacturing field to describe complex systems.

  8. Selective constraint on noncoding regions of hominid genomes.

    Directory of Open Access Journals (Sweden)

    Eliot C Bush

    2005-12-01

    Full Text Available An important challenge for human evolutionary biology is to understand the genetic basis of human-chimpanzee differences. One influential idea holds that such differences depend, to a large extent, on adaptive changes in gene expression. An important step in assessing this hypothesis involves gaining a better understanding of selective constraint on noncoding regions of hominid genomes. In noncoding sequence, functional elements are frequently small and can be separated by large nonfunctional regions. For this reason, constraint in hominid genomes is likely to be patchy. Here we use conservation in more distantly related mammals and amniotes as a way of identifying small sequence windows that are likely to be functional. We find that putatively functional noncoding elements defined in this manner are subject to significant selective constraint in hominids.
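The windowing idea — scoring small sequence windows by conservation in an alignment to flag putatively functional elements — can be sketched as follows; the sequences and the 90% identity cutoff are invented for illustration.

```python
def window_identity(seq_a, seq_b, width=8):
    # per-window fraction of identical sites in a pairwise alignment
    assert len(seq_a) == len(seq_b)
    return [sum(x == y for x, y in zip(seq_a[i:i + width], seq_b[i:i + width])) / width
            for i in range(len(seq_a) - width + 1)]

a = "ACGTACGTACGTTTTTACGTACGT"   # invented "hominid" noncoding stretch
b = "ACGTACGTACGTAAAATCGTACGT"   # invented distant-outgroup stretch

scores = window_identity(a, b)
# windows above the cutoff are candidate functional elements
candidates = [i for i, s in enumerate(scores) if s >= 0.9]
print(candidates)
```

Only the windows upstream of the diverged run pass the cutoff here; in the study the same logic is applied with alignments to distant mammals and amniotes, and constraint in hominids is then tested within the flagged windows.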

  10. Genome-scale modelling of microbial metabolism with temporal and spatial resolution.

    Science.gov (United States)

    Henson, Michael A

    2015-12-01

    Most natural microbial systems have evolved to function in environments with temporal and spatial variations. A major limitation to understanding such complex systems is the lack of mathematical modelling frameworks that connect the genomes of individual species and temporal and spatial variations in the environment to system behaviour. The goal of this review is to introduce the emerging field of spatiotemporal metabolic modelling based on genome-scale reconstructions of microbial metabolism. The extension of flux balance analysis (FBA) to account for both temporal and spatial variations in the environment is termed spatiotemporal FBA (SFBA). Following a brief overview of FBA and its established dynamic extension, the SFBA problem is introduced and recent progress is described. Three case studies are reviewed to illustrate the current state-of-the-art and possible future research directions are outlined. The author posits that SFBA is the next frontier for microbial metabolic modelling and a rapid increase in methods development and system applications is anticipated. © 2015 Authors; published by Portland Press Limited.
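The dynamic extension of FBA that SFBA builds on alternates between solving a steady-state FBA problem and updating the environment. A minimal sketch of that loop, using a hypothetical one-substrate toy network with assumed Michaelis-Menten uptake kinetics and yield parameters (not any model from the review):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 = substrate uptake, v2 = biomass production.
S = np.array([[1.0, -1.0]])            # steady-state balance on one metabolite
Vmax, Km, Y = 10.0, 0.5, 0.1           # assumed kinetic and yield parameters
X, Sub, dt = 0.01, 10.0, 0.1           # biomass (g/L), substrate (mmol/L), step (h)

for _ in range(100):
    vmax_uptake = Vmax * Sub / (Km + Sub)          # kinetic bound on uptake
    res = linprog(c=[0, -1], A_eq=S, b_eq=[0],
                  bounds=[(0, vmax_uptake), (0, None)])
    growth = Y * (-res.fun)                        # growth rate from optimal flux
    Sub = max(Sub - res.x[0] * X * dt, 0.0)        # substrate depletion
    X *= 1 + growth * dt                           # biomass accumulation
print(round(X, 4), round(Sub, 4))  # biomass rises until the substrate runs out
```

Each pass solves a steady-state LP for the current substrate level, then integrates biomass and substrate one Euler step forward; spatial variation (the "S" in SFBA) would add a transport step between grid points.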

  11. Investigating host-pathogen behavior and their interaction using genome-scale metabolic network models.

    Science.gov (United States)

    Sadhukhan, Priyanka P; Raghunathan, Anu

    2014-01-01

    Genome Scale Metabolic Modeling methods represent one way to compute whole-cell function starting from the genome sequence of an organism, and they contribute towards understanding and predicting the genotype-phenotype relationship. About 80 models spanning all the kingdoms of life from archaea to eukaryotes have been built to date and used to interrogate cell phenotype under varying conditions. These models have been used not only to understand the flux distribution in evolutionarily conserved pathways like glycolysis and the Krebs cycle but also in applications ranging from value-added product formation in Escherichia coli to predicting inborn errors of Homo sapiens metabolism. This chapter describes a protocol that delineates the process of genome scale metabolic modeling for analysing host-pathogen behavior and interaction using flux balance analysis (FBA). The steps discussed in the process include (1) reconstruction of a metabolic network from the genome sequence, (2) its representation in a precise mathematical framework, (3) its translation to a model, and (4) the analysis using linear algebra and optimization. The methods for biological interpretations of computed cell phenotypes in the context of individual host and pathogen models and their integration are also discussed.
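Steps (2)-(4) above amount to solving a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal illustration on a hypothetical three-reaction chain (an assumed toy network, not the chapter's protocol or any real organism):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 uptake -> A, v2: A -> B, v3: B -> biomass.
# Rows of S are internal metabolites (A, B); columns are reactions v1..v3.
S = np.array([
    [1, -1,  0],   # A: made by uptake, consumed by v2
    [0,  1, -1],   # B: made by v2, consumed by the biomass reaction
])
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 flux units

# FBA: maximize v3 (linprog minimizes, hence the -1) subject to S v = 0.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
fluxes = res.x
print(fluxes)   # in this chain the biomass flux is pinned to the uptake cap
```

Real reconstructions have thousands of reactions, but the mathematical structure is exactly this LP.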

  12. Genome-based microbial ecology of anammox granules in a full-scale wastewater treatment system.

    Science.gov (United States)

    Speth, Daan R; In 't Zandt, Michiel H; Guerrero-Cruz, Simon; Dutilh, Bas E; Jetten, Mike S M

    2016-03-31

    Partial-nitritation anammox (PNA) is a novel wastewater treatment procedure for energy-efficient ammonium removal. Here we use genome-resolved metagenomics to build a genome-based ecological model of the microbial community in a full-scale PNA reactor. Sludge from the bioreactor examined here is used to seed reactors in wastewater treatment plants around the world; however, the role of most of its microbial community in ammonium removal remains unknown. Our analysis yielded 23 near-complete draft genomes that together represent the majority of the microbial community. We assign these genomes to distinct anaerobic and aerobic microbial communities. In the aerobic community, nitrifying organisms and heterotrophs predominate. In the anaerobic community, widespread potential for partial denitrification suggests a nitrite loop increases treatment efficiency. Of our genomes, 19 have no previously cultivated or sequenced close relatives and six belong to bacterial phyla without any cultivated members, including the most complete Omnitrophica (formerly OP3) genome to date.

  13. Model-based control strategies for systems with constraints of the program type

    Science.gov (United States)

    Jarzębowska, Elżbieta

    2006-08-01

    The paper presents a model-based tracking control strategy for constrained mechanical systems. The constraints we consider can be material or non-material ones, referred to as program constraints. The program constraint equations represent tasks put upon system motions; they can be differential equations of order higher than one or two, and they can be non-integrable. The tracking control strategy relies upon two dynamic models: a reference model, which is a dynamic model of a system with arbitrary-order differential constraints, and a dynamic control model. The reference model serves as a motion planner, which generates inputs to the dynamic control model. It is based upon the generalized program motion equations (GPME) method. The method enables combining material and program constraints and merging them both into the motion equations. Lagrange's equations with multipliers are a special case of the GPME, since they can be applied to systems with first-order constraints. Our tracking strategy, referred to as a model reference program motion tracking control strategy, enables tracking of any program motion predefined by the program constraints. It extends "trajectory tracking" to "program motion tracking". We also demonstrate that our tracking strategy can be extended to hybrid program motion/force tracking.

  14. The OME Framework for genome-scale systems biology

    Energy Technology Data Exchange (ETDEWEB)

    Palsson, Bernhard O. [Univ. of California, San Diego, CA (United States); Ebrahim, Ali [Univ. of California, San Diego, CA (United States); Federowicz, Steve [Univ. of California, San Diego, CA (United States)

    2014-12-19

    The life sciences are undergoing continuous and accelerating integration with the computational and engineering sciences. The biology that many in the field have been trained on may be hardly recognizable in ten to twenty years. One of the major drivers for this transformation is the blistering pace of advancements in DNA sequencing and synthesis. These advances have resulted in unprecedented amounts of new data, information, and knowledge. Many software tools have been developed to deal with aspects of this transformation, and each is sorely needed [1-3]. However, few of these tools have been forced to deal with the full complexity of genome-scale models along with high-throughput genome-scale data. This particular situation represents a unique challenge, as it is simultaneously necessary to deal with the vast breadth of genome-scale models and the dizzying depth of high-throughput datasets. It has been observed time and again that as the pace of data generation continues to accelerate, the pace of analysis significantly lags behind [4]. It is also evident that, given the plethora of databases and software efforts [5-12], it is still a significant challenge to work with genome-scale metabolic models, let alone next-generation whole-cell models [13-15]. We work at the forefront of model creation and systems-scale data generation [16-18]. The OME Framework was borne out of a practical need to enable genome-scale modeling and data analysis under a unified framework to drive the next generation of genome-scale biological models. Here we present the OME Framework. It exists as a set of Python classes. However, we want to emphasize the importance of the underlying design as an addition to the discussions on specifications of a digital cell. A great deal of work and valuable progress has been made by a number of communities [13, 19-24] towards interchange formats and implementations designed to achieve similar goals. While many software tools exist for handling genome-scale …

  15. Genome scale models of yeast: towards standardized evaluation and consistent omic integration

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Nielsen, Jens

    2015-01-01

    Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently … in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted.

  16. Genome-scale metabolic modeling of Mucor circinelloides and comparative analysis with other oleaginous species.

    Science.gov (United States)

    Vongsangnak, Wanwipa; Klanchui, Amornpan; Tawornsamretkit, Iyarest; Tatiyaborwornchai, Witthawin; Laoteng, Kobkul; Meechai, Asawin

    2016-06-01

    We present a novel genome-scale metabolic model iWV1213 of Mucor circinelloides, which is an oleaginous fungus for industrial applications. The model contains 1213 genes, 1413 metabolites and 1326 metabolic reactions across different compartments. We demonstrate that iWV1213 is able to accurately predict the growth rates of M. circinelloides on various nutrient sources and culture conditions using Flux Balance Analysis and Phenotypic Phase Plane analysis. Comparative analysis of three oleaginous genome-scale models, including M. circinelloides (iWV1213), Mortierella alpina (iCY1106) and Yarrowia lipolytica (iYL619_PCP) revealed that iWV1213 possesses a higher number of genes involved in carbohydrate, amino acid, and lipid metabolisms that might contribute to its versatility in nutrient utilization. Moreover, the identification of unique and common active reactions among the Zygomycetes oleaginous models using Flux Variability Analysis unveiled a set of gene/enzyme candidates as metabolic engineering targets for cellular improvement. Thus, iWV1213 offers a powerful metabolic engineering tool for multi-level omics analysis, enabling strain optimization as a cell factory platform of lipid-based production. Copyright © 2016 Elsevier B.V. All rights reserved.
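Flux Variability Analysis, used above to find unique and common active reactions, re-optimizes each flux in turn while holding the objective at its optimum. A sketch on a hypothetical network with two parallel routes (assumed data, not iWV1213):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network with a parallel pathway so some fluxes are variable:
# v1: uptake -> A (<= 10); v2: A -> B; v3: A -> B (alternative route); v4: B -> biomass.
S = np.array([
    [1, -1, -1,  0],   # metabolite A
    [0,  1,  1, -1],   # metabolite B
])
bounds = [(0, 10)] + [(0, None)] * 3

# Step 1: FBA to find the optimal biomass flux v4.
opt = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
z_opt = -opt.fun

# Step 2: FVA - minimize and maximize each flux with v4 fixed at its optimum.
ranges = []
for j in range(4):
    row = []
    for sign in (1, -1):               # minimize, then maximize, flux j
        c = np.zeros(4); c[j] = sign
        r = linprog(c=c, A_eq=np.vstack([S, [0, 0, 0, 1]]),
                    b_eq=[0, 0, z_opt], bounds=bounds)
        row.append(sign * r.fun)
    ranges.append(tuple(row))

print(ranges)  # v2 and v3 each range over [0, 10]; v1 and v4 are fixed at 10
```

Reactions whose [min, max] range excludes zero are "active" in that model; intersecting such sets across models yields the common and unique reactions used in comparisons like the one above.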

  17. Large-Scale Constraint-Based Pattern Mining

    Science.gov (United States)

    Zhu, Feida

    2009-01-01

    We studied the problem of constraint-based pattern mining for three different data formats, item-set, sequence and graph, and focused on mining patterns of large sizes. Colossal patterns in each data formats are studied to discover pruning properties that are useful for direct mining of these patterns. For item-set data, we observed robustness of…

  18. In Silico Genome-Scale Reconstruction and Validation of the Staphylococcus aureus Metabolic Network

    NARCIS (Netherlands)

    Heinemann, Matthias; Kümmel, Anne; Ruinatscha, Reto; Panke, Sven

    2005-01-01

    A genome-scale metabolic model of the Gram-positive, facultative anaerobic opportunistic pathogen Staphylococcus aureus N315 was constructed based on current genomic data, literature, and physiological information. The model comprises 774 metabolic processes representing approximately 23% of all

  19. Computational Modeling of Human Metabolism and Its Application to Systems Biomedicine.

    Science.gov (United States)

    Aurich, Maike K; Thiele, Ines

    2016-01-01

    Modern high-throughput techniques offer immense opportunities to investigate whole-systems behavior, such as those underlying human diseases. However, the complexity of the data presents challenges in interpretation, and new avenues are needed to address the complexity of both diseases and data. Constraint-based modeling is one formalism applied in systems biology. It relies on a genome-scale reconstruction that captures extensive biochemical knowledge regarding an organism. The human genome-scale metabolic reconstruction is increasingly used to understand normal cellular and disease states because metabolism is an important factor in many human diseases. The application of human genome-scale reconstruction ranges from mere querying of the model as a knowledge base to studies that take advantage of the model's topology and, most notably, to functional predictions based on cell- and condition-specific metabolic models built based on omics data. An increasing number and diversity of biomedical questions are being addressed using constraint-based modeling and metabolic models. One of the most successful biomedical applications to date is cancer metabolism, but constraint-based modeling also holds great potential for inborn errors of metabolism or obesity. In addition, it offers great prospects for individualized approaches to diagnostics and the design of disease prevention and intervention strategies. Metabolic models support this endeavor by providing easy access to complex high-throughput datasets. Personalized metabolic models have been introduced. Finally, constraint-based modeling can be used to model whole-body metabolism, which will enable the elucidation of metabolic interactions between organs and disturbances of these interactions as either causes or consequence of metabolic diseases. This chapter introduces constraint-based modeling and describes some of its contributions to systems biomedicine.

  20. Comprehensive Mapping of Pluripotent Stem Cell Metabolism Using Dynamic Genome-Scale Network Modeling

    Directory of Open Access Journals (Sweden)

    Sriram Chandrasekaran

    2017-12-01

    Full Text Available Summary: Metabolism is an emerging stem cell hallmark tied to cell fate, pluripotency, and self-renewal, yet systems-level understanding of stem cell metabolism has been limited by the lack of genome-scale network models. Here, we develop a systems approach to integrate time-course metabolomics data with a computational model of metabolism to analyze the metabolic state of naive and primed murine pluripotent stem cells. Using this approach, we find that one-carbon metabolism involving phosphoglycerate dehydrogenase, folate synthesis, and nucleotide synthesis is a key pathway that differs between the two states, resulting in differential sensitivity to anti-folates. The model also predicts that the pluripotency factor Lin28 regulates this one-carbon metabolic pathway, which we validate using metabolomics data from Lin28-deficient cells. Moreover, we identify and validate metabolic reactions related to S-adenosyl-methionine production that can differentially impact histone methylation in naive and primed cells. Our network-based approach provides a framework for characterizing metabolic changes influencing pluripotency and cell fate. Chandrasekaran et al. use computational modeling, metabolomics, and metabolic inhibitors to discover metabolic differences between various pluripotent stem cell states and infer their impact on stem cell fate decisions. Keywords: systems biology, stem cell biology, metabolism, genome-scale modeling, pluripotency, histone methylation, naive (ground) state, primed state, cell fate, metabolic network

  1. Genome-scale metabolic representation of Amycolatopsis balhimycina

    DEFF Research Database (Denmark)

    Vongsangnak, Wanwipa; Figueiredo, L. F.; Förster, Jochen

    2012-01-01

    Infection caused by methicillin-resistant Staphylococcus aureus (MRSA) is an increasing societal problem. Typically, glycopeptide antibiotics are used in the treatment of these infections. The most comprehensively studied glycopeptide antibiotic biosynthetic pathway is that of balhimycin … to reconstruct a genome-scale metabolic model for the organism. Here we generated an almost complete A. balhimycina genome sequence comprising 10,562,587 base pairs assembled into 2,153 contigs. The high-GC genome (∼69%) includes 8,585 open reading frames (ORFs). We used our integrative toolbox called SEQTOR …

  2. Biofilm Formation Mechanisms of Pseudomonas aeruginosa Predicted via Genome-Scale Kinetic Models of Bacterial Metabolism

    Science.gov (United States)

    2016-03-15

    A hallmark of Pseudomonas aeruginosa is its ability to establish biofilm-based infections that are difficult to eradicate. Biofilms are less susceptible to host inflammatory and immune responses and have higher antibiotic tolerance than free-living planktonic cells …

  3. The RAVEN Toolbox and Its Use for Generating a Genome-scale Metabolic Model for Penicillium chrysogenum

    Science.gov (United States)

    Agren, Rasmus; Liu, Liming; Shoaie, Saeed; Vongsangnak, Wanwipa; Nookaew, Intawat; Nielsen, Jens

    2013-01-01

    We present the RAVEN (Reconstruction, Analysis and Visualization of Metabolic Networks) Toolbox: a software suite that allows for semi-automated reconstruction of genome-scale models. It makes use of published models and/or the KEGG database, coupled with extensive gap-filling and quality control features. The software suite also contains methods for visualizing simulation results and omics data, as well as a range of methods for performing simulations and analyzing the results. The software is a useful tool for system-wide data analysis in a metabolic context and for streamlined reconstruction of metabolic networks based on protein homology. The RAVEN Toolbox workflow was applied in order to reconstruct a genome-scale metabolic model for the important microbial cell factory Penicillium chrysogenum Wisconsin54-1255. The model was validated in a bibliomic study of 440 references in total, and it comprises 1471 unique biochemical reactions and 1006 ORFs. It was then used to study the roles of ATP and NADPH in the biosynthesis of penicillin, and to identify potential metabolic engineering targets for maximization of penicillin production. PMID:23555215

  4. Genome-scale reconstruction and in silico analysis of the Ralstonia eutropha H16 for polyhydroxyalkanoate synthesis, lithoautotrophic growth, and 2-methyl citric acid production

    Directory of Open Access Journals (Sweden)

    Kim Tae

    2011-06-01

    Full Text Available Abstract Background Ralstonia eutropha H16, found in both soil and water, is a Gram-negative lithoautotrophic bacterium that can utilize CO2 and H2 as its sources of carbon and energy in the absence of organic substrates. R. eutropha H16 can reach high cell densities either under lithoautotrophic or heterotrophic conditions, which makes it suitable for a number of biotechnological applications. It is the best known and most promising producer of polyhydroxyalkanoates (PHAs) from various carbon substrates and is an environmentally important bacterium that can degrade aromatic compounds. In order to make R. eutropha H16 a more efficient and robust biofactory, system-wide metabolic engineering to improve its metabolic performance is essential. Thus, it is necessary to analyze its metabolic characteristics systematically and optimize the entire metabolic network at the systems level. Results We present the lithoautotrophic genome-scale metabolic model of R. eutropha H16 based on the annotated genome with biochemical and physiological information. The stoichiometric model, RehMBEL1391, is composed of 1391 reactions, including 229 transport reactions, and 1171 metabolites. Constraints-based flux analyses were performed to refine and validate the genome-scale metabolic model under environmental and genetic perturbations. First, the lithoautotrophic growth characteristics of R. eutropha H16 were investigated under varying feeding ratios of the gas mixture. Second, the genome-scale metabolic model was used to design strategies for the production of poly[(R)-(−)-3-hydroxybutyrate] (PHB) under different pH values and carbon/nitrogen source uptake ratios. It was also used to analyze the metabolic characteristics of R. eutropha when the phosphofructokinase gene was expressed. Finally, in silico gene knockout simulations were performed to identify targets for metabolic engineering essential for the production of 2-methylcitric acid in R. eutropha H16. Conclusion: The …
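In silico gene knockout simulation, as used in the final step above, amounts to forcing the flux of the affected reaction(s) to zero and re-solving the FBA problem. A toy sketch with a hypothetical isoenzyme pair (an assumed network, not RehMBEL1391):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v1 uptake -> A; v2: A -> B (main route); v3: A -> B (isoenzyme); v4: B -> biomass.
S = np.array([
    [1, -1, -1,  0],
    [0,  1,  1, -1],
])
base_bounds = [(0, 10), (0, None), (0, None), (0, None)]

def max_growth(knockouts=()):
    """FBA with the given reactions knocked out (flux forced to zero)."""
    bounds = list(base_bounds)
    for j in knockouts:
        bounds[j] = (0, 0)
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    return -res.fun

print(max_growth())        # wild type: growth at the uptake cap
print(max_growth((1,)))    # knock out the main route: the isoenzyme rescues it
print(max_growth((1, 2)))  # double knockout: growth drops to zero
```

Scanning single and double knockouts this way is how candidate engineering targets (such as those for 2-methylcitric acid above) are shortlisted.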

  5. Constraint-based reachability

    Directory of Open Access Journals (Sweden)

    Arnaud Gotlieb

    2013-02-01

    Full Text Available Iterative imperative programs can be considered as infinite-state systems computing over possibly unbounded domains. Studying reachability in these systems is challenging as it requires dealing with an infinite number of states using standard backward or forward exploration strategies. An approach that we call Constraint-based reachability is proposed to address reachability problems by exploring program states using a constraint model of the whole program. The key point of the approach is to interpret imperative constructions such as conditionals, loops, and array and memory manipulations with the fundamental notion of a constraint over a computational domain. By combining constraint filtering and abstraction techniques, Constraint-based reachability is able to solve reachability problems which are usually outside the scope of backward or forward exploration strategies. This paper proposes an interpretation of classical filtering consistencies used in Constraint Programming as abstract domain computations, and shows how this approach can be used to produce a constraint solver that efficiently generates solutions for reachability problems that are unsolvable by other approaches.
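For contrast with the constraint-based approach described above, the baseline it improves on is bounded forward exploration: enumerate concrete states until the target is hit or a bound is reached. A tiny illustration on a hypothetical loop (constraint-based reachability would answer the same question symbolically, without unrolling the loop):

```python
def reachable(target, max_iter=100):
    """Can the program below ever make x == target?
         x = 0
         while x < 100: x = x * 2 + 1
    Answered here by bounded forward enumeration of concrete states;
    a constraint-based tool would instead filter a constraint model of
    the whole loop.
    """
    x = 0
    for _ in range(max_iter):
        if x == target:
            return True
        if not (x < 100):
            break
        x = x * 2 + 1
    return x == target

print(reachable(63))   # True: the loop visits 0, 1, 3, 7, 15, 31, 63
print(reachable(64))   # False: x -> 2x + 1 never produces 64
```

The weakness of this baseline is exactly what the paper targets: for unbounded domains, enumeration never terminates with a "no", whereas constraint filtering can prune infeasible states wholesale.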

  6. Constraint-based scheduling applying constraint programming to scheduling problems

    CERN Document Server

    Baptiste, Philippe; Nuijten, Wim

    2001-01-01

    Constraint Programming is a problem-solving paradigm that establishes a clear distinction between two pivotal aspects of a problem: (1) a precise definition of the constraints that define the problem to be solved and (2) the algorithms and heuristics enabling the selection of decisions to solve the problem. It is because of these capabilities that Constraint Programming is increasingly being employed as a problem-solving tool to solve scheduling problems. Hence the development of Constraint-Based Scheduling as a field of study. The aim of this book is to provide an overview of the most widely used Constraint-Based Scheduling techniques. Following the principles of Constraint Programming, the book consists of three distinct parts: The first chapter introduces the basic principles of Constraint Programming and provides a model of the constraints that are the most often encountered in scheduling problems. Chapters 2, 3, 4, and 5 are focused on the propagation of resource constraints, which usually are responsibl...
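The two aspects the book separates — declaring the constraints versus searching for decisions — show up even in a toy unary-resource problem. This sketch states deadline constraints declaratively and searches orderings by brute force; real constraint-based schedulers replace the brute force with propagation (e.g., edge-finding) over the same constraint model. The job data are hypothetical:

```python
from itertools import permutations

# Three jobs on one machine (a unary resource): name -> (duration, deadline).
jobs = {"A": (4, 6), "B": (2, 9), "C": (3, 11)}

def feasible_orders():
    """Return every sequencing of the jobs that meets all deadlines."""
    ok = []
    for order in permutations(jobs):
        t, valid = 0, True
        for name in order:
            dur, deadline = jobs[name]
            t += dur                  # the job occupies [t - dur, t)
            if t > deadline:          # deadline constraint violated
                valid = False
                break
        if valid:
            ok.append(order)
    return ok

print(feasible_orders())  # only orderings meeting every deadline survive
```

With these numbers, three of the six orderings are feasible; a propagation-based solver would deduce, for instance, that A must precede C without ever enumerating the infeasible orders.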

  7. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    … The GBSeq data can be used directly in genomic models in the form of individual SNP allele-frequency estimates (e.g., reference reads/total reads per polymorphic site per individual), but is subject to measurement error due to the low sequencing depth per individual. Due to technical reasons … In the current work we show how the correction for measurement error in GBSeq can also be applied in whole-genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data …
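The measurement-error logic described above can be sketched numerically: with sequencing depth n, the observed allele frequency carries binomial read-sampling variance p(1−p)/n on top of the true genomic variance, and an unbiased plug-in estimate of that error term can be subtracted off. Everything below (the depth, the dosage values, the simple per-site correction) is an illustrative assumption, not the paper's Bayesian model:

```python
import random

random.seed(1)

# Simulate GBSeq-style data: true diploid allele dosages observed through
# only a few sequencing reads per site, then correct the variance estimate.
n_ind, depth = 2000, 4
true_p = [random.choice([0.0, 0.5, 1.0]) for _ in range(n_ind)]
obs_p = [sum(random.random() < p for _ in range(depth)) / depth for p in true_p]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# E[p_hat(1-p_hat)/(n-1)] = p(1-p)/n, so this is an unbiased estimate of the
# per-individual binomial read-sampling variance.
error_var = sum(p * (1 - p) / (depth - 1) for p in obs_p) / n_ind
corrected = var(obs_p) - error_var
print(var(true_p), var(obs_p), corrected)  # corrected ~ true genomic variance
```

The raw variance of the observed frequencies overstates the genomic variance; subtracting the estimated read-sampling term recovers it, which is the moment-level intuition behind the per-SNP correction.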

  8. Enforcement of entailment constraints in distributed service-based business processes.

    Science.gov (United States)

    Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram

    2013-11-01

    A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from the existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from the technical details of entailment constraint enforcement. The developed framework integrates seamlessly with WS-BPEL and the Web …
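The two constraint types above are easy to state precisely: mutual exclusion forbids one subject from performing both tasks, while binding requires the same subject for both. A minimal audit-style check over a task log (hypothetical tasks and subjects; the paper enforces these constraints at runtime via DSL-annotated WS-BPEL rather than post hoc like this):

```python
# Task-based entailment constraints (hypothetical examples):
mutual_exclusion = [("approve_order", "issue_payment")]   # four-eyes principle
binding = [("prepare_contract", "sign_contract")]         # same subject for both

def violations(log):
    """log: list of (task, subject) pairs. Returns the violated constraints."""
    doer = {task: subj for task, subj in log}
    errs = []
    for t1, t2 in mutual_exclusion:
        if t1 in doer and t2 in doer and doer[t1] == doer[t2]:
            errs.append(("mutual-exclusion", t1, t2))
    for t1, t2 in binding:
        if t1 in doer and t2 in doer and doer[t1] != doer[t2]:
            errs.append(("binding", t1, t2))
    return errs

log = [("approve_order", "alice"), ("issue_payment", "alice"),
       ("prepare_contract", "bob"), ("sign_contract", "carol")]
print(violations(log))  # both constraints are violated in this example log
```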

  9. Predicting growth of the healthy infant using a genome scale metabolic model.

    Science.gov (United States)

    Nilsson, Avlant; Mardinoglu, Adil; Nielsen, Jens

    2017-01-01

    An estimated 165 million children globally have stunted growth, and extensive growth data are available. Genome scale metabolic models allow the simulation of molecular flux over each metabolic enzyme, and are well adapted to analyze biological systems. We used a human genome scale metabolic model to simulate the mechanisms of growth and integrate data about breast-milk intake and composition with the infant's biomass and energy expenditure of major organs. The model predicted daily metabolic fluxes from birth to age 6 months, and accurately reproduced standard growth curves and changes in body composition. The model corroborates the finding that essential amino and fatty acids do not limit growth, but that energy is the main growth limiting factor. Disruptions to the supply and demand of energy markedly affected the predicted growth, indicating that elevated energy expenditure may be detrimental. The model was used to simulate the metabolic effect of mineral deficiencies, and showed the greatest growth reduction for deficiencies in copper, iron, and magnesium ions which affect energy production through oxidative phosphorylation. The model and simulation method were integrated into a platform and shared with the research community. The growth model constitutes another step towards the complete representation of human metabolism, and may further help improve the understanding of the mechanisms underlying stunting.

  10. Integration of Genome Scale Metabolic Networks and Gene Regulation of Metabolic Enzymes With Physiologically Based Pharmacokinetics.

    Science.gov (United States)

    Maldonado, Elaina M; Leoncikas, Vytautas; Fisher, Ciarán P; Moore, J Bernadette; Plant, Nick J; Kierzek, Andrzej M

    2017-11-01

    The scope of physiologically based pharmacokinetic (PBPK) modeling can be expanded by assimilation of the mechanistic models of intracellular processes from systems biology field. The genome scale metabolic networks (GSMNs) represent a whole set of metabolic enzymes expressed in human tissues. Dynamic models of the gene regulation of key drug metabolism enzymes are available. Here, we introduce GSMNs and review ongoing work on integration of PBPK, GSMNs, and metabolic gene regulation. We demonstrate example models. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
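The pharmacokinetic side of a PBPK model reduces, in its simplest form, to compartment mass-balance ODEs. A deliberately minimal one-compartment sketch with assumed parameter values (the paper's framework couples many organ compartments to GSMNs and gene-regulation models, which this does not attempt):

```python
# One-compartment PK after an IV bolus: dC/dt = -(CL / V) * C.
# All parameter values below are illustrative assumptions.
dose, V, CL = 100.0, 50.0, 5.0        # mg, litres, litres/hour
C = dose / V                           # initial plasma concentration, mg/L
dt, t_end = 0.01, 10.0                 # Euler step and horizon, hours
t = 0.0
while t < t_end:
    C += -(CL / V) * C * dt            # explicit Euler step
    t += dt
print(round(C, 3))                     # ~ C0 * exp(-(CL/V) * t_end)
```

In a full PBPK model each organ gets such a balance with blood-flow coupling terms, and integration with a GSMN replaces the lumped clearance CL with fluxes through the tissue's metabolic enzymes.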

  11. CyanoBase: the cyanobacteria genome database update 2010

    OpenAIRE

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2009-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to retrieve the data in var...

  12. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    Science.gov (United States)

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and have helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic designs for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic designs. Here, we propose an alternative method to relieve researchers from deciphering complex gene-reaction associations by adding pseudo gene-controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system, named gModel. We showed that gModel allows two seldom-reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel could be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.
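The complex gene-reaction associations that LTM and gModel restructure are Boolean GPR rules: "and" for enzyme complexes, "or" for isoenzymes. Evaluating them directly shows which reactions a gene knockout disables; the rules and gene IDs below are hypothetical:

```python
# Gene-protein-reaction (GPR) rules: "and" = complex, "or" = isoenzymes.
gpr = {
    "R1": lambda g: g["b0001"] and g["b0002"],   # complex: both subunits needed
    "R2": lambda g: g["b0003"] or g["b0004"],    # isoenzymes: either suffices
    "R3": lambda g: g["b0001"] or (g["b0002"] and g["b0004"]),
}

def active_reactions(knocked_out):
    """Evaluate each GPR rule under a set of deleted genes."""
    genes = {g: g not in knocked_out
             for g in ["b0001", "b0002", "b0003", "b0004"]}
    return sorted(r for r, rule in gpr.items() if rule(genes))

print(active_reactions(set()))               # all reactions active in wild type
print(active_reactions({"b0002"}))           # R1 lost; R3 survives via b0001
print(active_reactions({"b0003", "b0004"}))  # R2 lost
```

Transformations like LTM/gModel encode exactly this logic inside the stoichiometric model (via pseudo genes or pseudo reactions) so that an LP-based design algorithm can reason over genes rather than reactions.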

  13. Systems biology of bacterial nitrogen fixation: High-throughput technology and its integrative description with constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Resendis-Antonio Osbaldo

    2011-07-01

    Full Text Available Abstract Background Bacterial nitrogen fixation is the biological process by which atmospheric nitrogen is taken up by bacteroids located in plant root nodules and converted into ammonium through the enzymatic activity of nitrogenase. In practice, this biological process serves as a natural form of fertilization, and its optimization has significant implications in sustainable agricultural programs. Currently, the advent of high-throughput technology supplies valuable data that contribute to understanding the metabolic activity during bacterial nitrogen fixation. This undertaking is not trivial, and the development of computational methods useful in accomplishing an integrative, descriptive and predictive framework is a crucial issue in decoding the principles that regulate the metabolic activity of this biological process. Results In this work we present a systems biology description of the metabolic activity in bacterial nitrogen fixation. This was accomplished by an integrative analysis involving high-throughput data and constraint-based modeling to characterize the metabolic activity in Rhizobium etli bacteroids located at the root nodules of Phaseolus vulgaris (bean plant). Proteome and transcriptome technologies led us to identify 415 proteins and 689 up-regulated genes that orchestrate this biological process. Taking into account these data, we: (1) extended the metabolic reconstruction reported for R. etli; (2) simulated the metabolic activity during symbiotic nitrogen fixation; and (3) evaluated the in silico results in terms of bacterial phenotype. Notably, constraint-based modeling simulated nitrogen fixation activity in such a way that 76.83% of the enzymes and 69.48% of the genes were experimentally justified. Finally, to further assess the predictive scope of the computational model, gene deletion analysis was carried out on nine metabolic enzymes. Our model concluded that an altered metabolic activity on these enzymes induced …

  14. WMAP constraints on the Cardassian model

    International Nuclear Information System (INIS)

    Sen, A.A.; Sen, S.

    2003-01-01

    We investigate the constraints on the Cardassian model using the recent results from the Wilkinson Microwave Anisotropy Probe for the locations of the peaks of the cosmic microwave background (CMB) anisotropy spectrum. We find that the model is consistent with the recent observational data for a certain range of the model parameter n and the cosmological parameters. We find that the Cardassian model is favored compared to the ΛCDM model for a higher spectral index (n_s ≃ 1) together with a lower value of the Hubble parameter h (h ≤ 0.71). But for smaller values of n_s, both ΛCDM and Cardassian models are equally favored. Also, irrespective of supernova constraints, CMB data alone predict the current acceleration of the Universe in this model. We have also studied the constraint on σ_8, the rms density fluctuations at the 8h⁻¹ Mpc scale

  15. Ensembl Genomes 2013: scaling up access to genome-wide data.

    Science.gov (United States)

    Kersey, Paul Julian; Allen, James E; Christensen, Mikkel; Davis, Paul; Falin, Lee J; Grabmueller, Christoph; Hughes, Daniel Seth Toney; Humphrey, Jay; Kerhornou, Arnaud; Khobova, Julia; Langridge, Nicholas; McDowall, Mark D; Maheswari, Uma; Maslen, Gareth; Nuhn, Michael; Ong, Chuang Kee; Paulini, Michael; Pedro, Helder; Toneva, Iliana; Tuli, Mary Ann; Walts, Brandon; Williams, Gareth; Wilson, Derek; Youens-Clark, Ken; Monaco, Marcela K; Stein, Joshua; Wei, Xuehong; Ware, Doreen; Bolser, Daniel M; Howe, Kevin Lee; Kulesha, Eugene; Lawson, Daniel; Staines, Daniel Michael

    2014-01-01

    Ensembl Genomes (http://www.ensemblgenomes.org) is an integrating resource for genome-scale data from non-vertebrate species. The project exploits and extends technologies for genome annotation, analysis and dissemination, developed in the context of the vertebrate-focused Ensembl project, and provides a complementary set of resources for non-vertebrate species through a consistent set of programmatic and interactive interfaces. These provide access to data including reference sequence, gene models, transcriptional data, polymorphisms and comparative analysis. This article provides an update to the previous publications about the resource, with a focus on recent developments. These include the addition of important new genomes (and related data sets) including crop plants, vectors of human disease and eukaryotic pathogens. In addition, the resource has scaled up its representation of bacterial genomes, and now includes the genomes of over 9000 bacteria. Specific extensions to the web and programmatic interfaces have been developed to support users in navigating these large data sets. Looking forward, analytic tools to allow targeted selection of data for visualization and download are likely to become increasingly important as the number of available genomes increases within all domains of life, and some of the challenges faced in representing bacterial data are likely to become commonplace for eukaryotes.

  16. Constraint-based modeling and kinetic analysis of the Smad dependent TGF-beta signaling pathway.

    Directory of Open Access Journals (Sweden)

    Zhike Zi

    Full Text Available BACKGROUND: Investigation of the dynamics and regulation of the TGF-beta signaling pathway is central to the understanding of complex cellular processes such as growth, apoptosis, and differentiation. In this study, we aim to use a systems biology approach to provide a dynamic analysis of this pathway. METHODOLOGY/PRINCIPAL FINDINGS: We proposed a constraint-based modeling method to build a comprehensive mathematical model for the Smad-dependent TGF-beta signaling pathway by fitting the experimental data and incorporating qualitative constraints from the experimental analysis. The performance of the model generated by the constraint-based modeling method is significantly improved compared to the model obtained by fitting only the quantitative data. The model agrees well with the experimental analysis of the TGF-beta pathway, such as the time course of nuclear phosphorylated Smad, the subcellular location of Smad and the signal response of Smad phosphorylation to different doses of TGF-beta. CONCLUSIONS/SIGNIFICANCE: The simulation results indicate that the signal response to TGF-beta is regulated by the balance between clathrin-dependent endocytosis and non-clathrin-mediated endocytosis. This model provides a useful foundation to build upon as more precise experimental data emerge. The constraint-based modeling method can also be applied to quantitative modeling of other signaling pathways.

  17. CyanoBase: the cyanobacteria genome database update 2010.

    Science.gov (United States)

    Nakao, Mitsuteru; Okamoto, Shinobu; Kohara, Mitsuyo; Fujishiro, Tsunakazu; Fujisawa, Takatomo; Sato, Shusei; Tabata, Satoshi; Kaneko, Takakazu; Nakamura, Yasukazu

    2010-01-01

    CyanoBase (http://genome.kazusa.or.jp/cyanobase) is the genome database for cyanobacteria, which are model organisms for photosynthesis. The database houses cyanobacteria species information, complete genome sequences, genome-scale experiment data, gene information, gene annotations and mutant information. In this version, we updated these datasets and improved the navigation and the visual display of the data views. In addition, a web service API now enables users to seamlessly retrieve the data in various formats with other tools.

  18. Reconstruction of genome-scale human metabolic models using omics data

    DEFF Research Database (Denmark)

    Ryu, Jae Yong; Kim, Hyun Uk; Lee, Sang Yup

    2015-01-01

    used to describe metabolic phenotypes of healthy and diseased human tissues and cells, and to predict therapeutic targets. Here we review recent trends in genome-scale human metabolic modeling, including various generic and tissue/cell type-specific human metabolic models developed to date, and methods......, databases and platforms used to construct them. For generic human metabolic models, we pay attention to Recon 2 and HMR 2.0 with emphasis on data sources used to construct them. Draft and high-quality tissue/cell type-specific human metabolic models have been generated using these generic human metabolic...... refined through gap filling, reaction directionality assignment and the subcellular localization of metabolic reactions. We review relevant tools for this model refinement procedure as well. Finally, we suggest the direction of further studies on reconstructing an improved human metabolic model....

  19. Detection of Common Problems in Real-Time and Multicore Systems Using Model-Based Constraints

    Directory of Open Access Journals (Sweden)

    Raphaël Beamonte

    2016-01-01

    Full Text Available Multicore systems are complex in that multiple processes are running concurrently and can interfere with each other. Real-time systems add on top of that time constraints, making results invalid as soon as a deadline has been missed. Tracing is often the most reliable and accurate tool available to study and understand those systems. However, tracing requires that users understand the kernel events and their meaning. It is therefore not very accessible. Using modeling to generate source code or represent applications’ workflow is handy for developers and has emerged as part of the model-driven development methodology. In this paper, we propose a new approach to system analysis using model-based constraints, on top of userspace and kernel traces. We introduce the constraints representation and how traces can be used to follow the application’s workflow and check the constraints we set on the model. We then present a number of common problems that we encountered in real-time and multicore systems and describe how our model-based constraints could have helped to save time by automatically identifying the unwanted behavior.
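
The idea of checking a model constraint against a trace can be sketched in a few lines. The event names, timestamps, and the 10 ms deadline below are hypothetical, and real kernel traces carry far richer context; this is only a toy illustration of a deadline-style constraint check, not the paper's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    timestamp: float  # seconds since trace start

def check_deadline(trace, start, end, deadline):
    """Check a model constraint of the form 'every `start` event must be
    followed by an `end` event within `deadline` seconds'. Returns the
    timestamps of the violating `start` events."""
    violations = []
    pending = None
    for ev in trace:
        if ev.name == start:
            pending = ev.timestamp
        elif ev.name == end and pending is not None:
            if ev.timestamp - pending > deadline:
                violations.append(pending)
            pending = None
    return violations

# Hypothetical trace: two interrupt/handler pairs; the second misses a 10 ms deadline
trace = [Event("irq", 0.000), Event("handled", 0.004),
         Event("irq", 1.000), Event("handled", 1.200)]
late = check_deadline(trace, "irq", "handled", deadline=0.010)
```

Here `late` contains the start time of the violating pair, which is exactly the kind of unwanted behavior such constraints flag automatically.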

  20. Effects of Contingency versus Constraints on the Body-Mass Scaling of Metabolic Rate

    Directory of Open Access Journals (Sweden)

    Douglas S. Glazier

    2018-01-01

    Full Text Available I illustrate the effects of both contingency and constraints on the body-mass scaling of metabolic rate by analyzing the significantly different influences of ambient temperature (Ta on metabolic scaling in ectothermic versus endothermic animals. Interspecific comparisons show that increasing Ta results in decreasing metabolic scaling slopes in ectotherms, but increasing slopes in endotherms, a pattern uniquely predicted by the metabolic-level boundaries hypothesis, as amended to include effects of the scaling of thermal conductance in endotherms outside their thermoneutral zone. No other published theoretical model explicitly predicts this striking variation in metabolic scaling, which I explain in terms of contingent effects of Ta and thermoregulatory strategy in the context of physical and geometric constraints related to the scaling of surface area, volume, and heat flow across surfaces. My analysis shows that theoretical models focused on an ideal 3/4-power law, as explained by a single universally applicable mechanism, are clearly inadequate for explaining the diversity and environmental sensitivity of metabolic scaling. An important challenge is to develop a theory of metabolic scaling that recognizes the contingent effects of multiple mechanisms that are modulated by several extrinsic and intrinsic factors within specified constraints.

  1. Genome-scale modeling for metabolic engineering.

    Science.gov (United States)

    Simeonidis, Evangelos; Price, Nathan D

    2015-03-01

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information, an area which we expect will become increasingly important for metabolic engineering, and present recent developments in the field of metabolic and regulatory integration.
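
As a concrete illustration of the flux balance analysis step these methods build on, the sketch below maximizes a "biomass" flux over a hypothetical three-reaction network with `scipy.optimize.linprog`; the stoichiometry and bounds are invented for the example and are not taken from any record above.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: v0 imports metabolite A, v1 converts A -> B, v2 drains B
# as "biomass". FBA maximizes the biomass flux subject to the steady-state
# constraint S v = 0 and flux bounds (uptake capped at 10 mmol/gDW/h).
S = np.array([[1.0, -1.0,  0.0],   # metabolite A balance
              [0.0,  1.0, -1.0]])  # metabolite B balance
bounds = [(0, 10), (0, 1000), (0, 1000)]

res = linprog(c=[0, 0, -1],        # linprog minimizes, so negate biomass
              A_eq=S, b_eq=[0, 0], bounds=bounds, method="highs")
fluxes = res.x                     # optimal flux distribution
```

The optimum saturates the uptake bound (all three fluxes at 10), the characteristic FBA prediction pattern; a gene deletion is simulated by setting the affected reaction's bounds to (0, 0) and re-solving.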

  2. Multi-scale image segmentation method with visual saliency constraints and its application

    Science.gov (United States)

    Chen, Yan; Yu, Jie; Sun, Kaimin

    2018-03-01

    Object-based image analysis methods have many advantages over pixel-based methods, so they are a current research hotspot. It is very important to obtain the image objects by multi-scale image segmentation in order to carry out object-based image analysis. The current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and produces accurate object boundaries. However, the macro statistical characteristics of the image areas are difficult to take into account, and fragmented segmentation (or over-segmentation) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important, i.e., some specific targets or target groups with particular features deserve more attention than others. To avoid the problem of over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and typical feature extraction methods are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but are locally different are more likely to be assigned to the same object. In addition, due to the constraint of the visual saliency model, the constraining effect of local-macroscopic characteristics can be well controlled during the segmentation process for different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the constraining effect for non-salient background areas. Experiments show that this method works

  3. Genome-scale reconstruction of the metabolic network in Yersinia pestis CO92

    Science.gov (United States)

    Navid, Ali; Almaas, Eivind

    2007-03-01

    The gram-negative bacterium Yersinia pestis is the causative agent of bubonic plague. Using publicly available genomic, biochemical and physiological data, we have developed a constraint-based flux balance model of metabolism in the CO92 strain (biovar Orientalis) of this organism. The metabolic reactions were appropriately compartmentalized, and the model accounts for the exchange of metabolites, as well as the import of nutrients and export of waste products. We have characterized the metabolic capabilities and phenotypes of this organism, after comparing the model predictions with available experimental observations to evaluate accuracy and completeness. We have also begun preliminary studies into how cellular metabolism affects virulence.

  4. Astrophysical constraints on Planck scale dissipative phenomena.

    Science.gov (United States)

    Liberati, Stefano; Maccione, Luca

    2014-04-18

    The emergence of a classical spacetime from any quantum gravity model is still a subtle and only partially understood issue. If indeed spacetime is arising as some sort of large scale condensate of more fundamental objects, then it is natural to expect that matter, being a collective excitation of the spacetime constituents, will present modified kinematics at sufficiently high energies. We consider here the phenomenology of the dissipative effects necessarily arising in such a picture. Adopting dissipative hydrodynamics as a general framework for the description of the energy exchange between collective excitations and the spacetime fundamental degrees of freedom, we discuss how rates of energy loss for elementary particles can be derived from dispersion relations and used to provide strong constraints on the basis of current astrophysical observations of high-energy particles.

  5. A Constraint Model for Constrained Hidden Markov Models

    DEFF Research Database (Denmark)

    Christiansen, Henning; Have, Christian Theil; Lassen, Ole Torp

    2009-01-01

    A Hidden Markov Model (HMM) is a common statistical model which is widely used for analysis of biological sequence data and other sequential phenomena. In the present paper we extend HMMs with constraints and show how the familiar Viterbi algorithm can be generalized, based on constraint solving ...
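
One minimal way to realize "HMMs with constraints" is to restrict which states are permitted at given positions and let Viterbi maximize over the remaining paths. The sketch below does exactly that; the two-state GC-content HMM and all its probabilities are invented for illustration, and the paper's constrained-HMM formalism is more general than this per-position filter.

```python
import math

def constrained_viterbi(obs, states, start_p, trans_p, emit_p, allowed=None):
    """Viterbi decoding where allowed[t], if present, is the set of states
    permitted at position t; disallowed states get log-probability -inf."""
    def permitted(t, s):
        return allowed is None or t not in allowed or s in allowed[t]
    V = [{s: (math.log(start_p[s] * emit_p[s][obs[0]])
              if permitted(0, s) else -math.inf) for s in states}]
    back = [{s: None for s in states}]
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            score = V[t - 1][prev] + math.log(trans_p[prev][s])
            V[t][s] = score + math.log(emit_p[s][obs[t]]) if permitted(t, s) else -math.inf
            back[t][s] = prev
    path = [max(states, key=lambda s: V[-1][s])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Toy two-state HMM (H = GC-rich, L = AT-rich) over a short sequence
states = ["H", "L"]
start = {"H": 0.5, "L": 0.5}
trans = {"H": {"H": 0.9, "L": 0.1}, "L": {"H": 0.1, "L": 0.9}}
emit = {"H": {"A": 0.2, "G": 0.8}, "L": {"A": 0.8, "G": 0.2}}
free = constrained_viterbi("GGAG", states, start, trans, emit)
forced = constrained_viterbi("GGAG", states, start, trans, emit, allowed={2: {"L"}})
```

Without the constraint the decoder stays in H throughout; forcing L at position 2 changes the rest of the optimal path as well, which is why the whole dynamic program, not just one cell, must respect the constraint.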

  6. Large-scale genomic 2D visualization reveals extensive CG-AT skew correlation in bird genomes

    Directory of Open Access Journals (Sweden)

    Deng Xuemei

    2007-11-01

    Full Text Available Abstract Background Bird genomes have a very different compositional structure compared with other warm-blooded animals. The variation in the base skew rules in the vertebrate genomes remains puzzling, but it must relate somehow to large-scale genome evolution. Current research is inclined to relate base skew with mutations and their fixation. Here we wish to explore base skew correlations in bird genomes, to develop methods for displaying and quantifying such correlations at different scales, and to discuss possible explanations for the peculiarities of the bird genomes in skew correlation. Results We have developed a method called Base Skew Double Triangle (BSDT for exhibiting the genome-scale change of AT/CG skew as a two-dimensional square picture, showing base skews at many scales simultaneously in a single image. By this method we found that most chicken chromosomes have high AT/CG skew correlation (symmetry in the 2D picture), except for some microchromosomes. No other organisms studied (18 species) show such high skew correlations. This visualized high correlation was validated by three kinds of quantitative calculations with overlapping and non-overlapping windows, all indicating that chicken and birds in general have a special genome structure. Similar features were also found in some of the mammal genomes, but clearly much weaker than in chickens. We presume that the skew correlation feature evolved near the time that birds separated from other vertebrate lineages. When we eliminated the repeat sequences from the genomes, the AT and CG skew correlations increased for some mammal genomes, but were still clearly lower than in chickens. Conclusion Our results suggest that BSDT is an expressive visualization method for AT and CG skew and enabled the discovery of the very high skew correlation in bird genomes; this peculiarity is worth further study. Computational analysis indicated that this correlation might be a compositional characteristic
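
The underlying skew statistics are simple to compute. The sketch below evaluates per-window AT and CG skew for a sequence; the window size and toy sequence are arbitrary choices, and the BSDT visualization itself (skews compared across many scales at once) is not reproduced here.

```python
def base_skews(seq, window):
    """AT skew (A-T)/(A+T) and CG skew (C-G)/(C+G) over non-overlapping
    windows; windows with no informative bases get a skew of 0.0."""
    at, cg = [], []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window]
        a, t, c, g = (w.count(b) for b in "ATCG")
        at.append((a - t) / (a + t) if a + t else 0.0)
        cg.append((c - g) / (c + g) if c + g else 0.0)
    return at, cg

at, cg = base_skews("AAAACCCC", window=4)  # two maximally skewed windows
```

Correlating the two resulting skew profiles at each window size is what yields the scale-dependent symmetry measure the record describes.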

  7. New Constraints on the running-mass inflation model

    OpenAIRE

    Covi, Laura; Lyth, David H.; Melchiorri, Alessandro

    2002-01-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest Cosmic Microwave Background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space.

  8. Leptogenesis constraints on B - L breaking Higgs boson in TeV scale seesaw models

    Science.gov (United States)

    Dev, P. S. Bhupal; Mohapatra, Rabindra N.; Zhang, Yongchao

    2018-03-01

    In the type-I seesaw mechanism for neutrino masses, there exists a B - L symmetry, whose breaking leads to the lepton number violating mass of the heavy Majorana neutrinos. This would imply the existence of a new neutral scalar associated with the B - L symmetry breaking, analogous to the Higgs boson of the Standard Model. If in such models, the heavy neutrino decays are also responsible for the observed baryon asymmetry of the universe via the leptogenesis mechanism, the new seesaw scalar interactions with the heavy neutrinos will induce additional dilution terms for the heavy neutrino and lepton number densities. We make a detailed study of this dilution effect on the lepton asymmetry in three generic classes of seesaw models with TeV-scale B - L symmetry breaking, namely, in an effective theory framework and in scenarios with global or local U(1) B- L symmetry. We find that requiring successful leptogenesis imposes stringent constraints on the mass and couplings of the new scalar in all three cases, especially when it is lighter than the heavy neutrinos. We also discuss the implications of these new constraints and prospects of testing leptogenesis in the presence of seesaw scalars at colliders.

  9. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Zielinski, Daniel C.; Guarnieri, Michael T.; Antoniewicz, Maciek R.; Zengler, Karsten

    2016-01-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. PMID:27372244

  10. Genome-Scale Metabolic Model for the Green Alga Chlorella vulgaris UTEX 395 Accurately Predicts Phenotypes under Autotrophic, Heterotrophic, and Mixotrophic Growth Conditions.

    Science.gov (United States)

    Zuñiga, Cristal; Li, Chien-Ting; Huelsman, Tyler; Levering, Jennifer; Zielinski, Daniel C; McConnell, Brian O; Long, Christopher P; Knoshaug, Eric P; Guarnieri, Michael T; Antoniewicz, Maciek R; Betenbaugh, Michael J; Zengler, Karsten

    2016-09-01

    The green microalga Chlorella vulgaris has been widely recognized as a promising candidate for biofuel production due to its ability to store high lipid content and its natural metabolic versatility. Compartmentalized genome-scale metabolic models constructed from genome sequences enable quantitative insight into the transport and metabolism of compounds within a target organism. These metabolic models have long been utilized to generate optimized design strategies for an improved production process. Here, we describe the reconstruction, validation, and application of a genome-scale metabolic model for C. vulgaris UTEX 395, iCZ843. The reconstruction represents the most comprehensive model for any eukaryotic photosynthetic organism to date, based on the genome size and number of genes in the reconstruction. The highly curated model accurately predicts phenotypes under photoautotrophic, heterotrophic, and mixotrophic conditions. The model was validated against experimental data and lays the foundation for model-driven strain design and medium alteration to improve yield. Calculated flux distributions under different trophic conditions show that a number of key pathways are affected by nitrogen starvation conditions, including central carbon metabolism and amino acid, nucleotide, and pigment biosynthetic pathways. Furthermore, model prediction of growth rates under various medium compositions and subsequent experimental validation showed an increased growth rate with the addition of tryptophan and methionine. © 2016 American Society of Plant Biologists. All rights reserved.

  11. Constraint-Based Abstraction of a Model Checker for Infinite State Systems

    DEFF Research Database (Denmark)

    Banda, Gourinath; Gallagher, John Patrick

    Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal t...... to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver....

  12. New constraints on the running-mass inflation model

    International Nuclear Information System (INIS)

    Covi, L.; Lyth, D.H.; Melchiorri, A.

    2002-10-01

    We evaluate new observational constraints on the two-parameter scale-dependent spectral index predicted by the running-mass inflation model by combining the latest cosmic microwave background (CMB) anisotropy measurements with the recent 2dFGRS data on the matter power spectrum, with Lyman α forest data and finally with theoretical constraints on the reionization redshift. We find that present data still allow significant scale-dependence of n, which occurs in a physically reasonable regime of parameter space. (orig.)

  13. A Probabilistic Genome-Wide Gene Reading Frame Sequence Model

    DEFF Research Database (Denmark)

    Have, Christian Theil; Mørk, Søren

    We introduce a new type of probabilistic sequence model that models the sequential composition of reading frames of genes in a genome. Our approach extends gene finders with a model of the sequential composition of genes at the genome-level -- effectively producing a sequential genome annotation...... as output. The model can be used to obtain the most probable genome annotation based on a combination of i: a gene finder score of each gene candidate and ii: the sequence of the reading frames of gene candidates through a genome. The model --- as well as a higher order variant --- is developed and tested...... and are evaluated by the effect on prediction performance. Since bacterial gene finding to a large extent is a solved problem it forms an ideal proving ground for evaluating the explicit modeling of larger scale gene sequence composition of genomes. We conclude that the sequential composition of gene reading frames...

  14. Phylogenetic distribution of large-scale genome patchiness

    Directory of Open Access Journals (Sweden)

    Hackenberg Michael

    2008-04-01

    Full Text Available Abstract Background The phylogenetic distribution of large-scale genome structure (i.e. mosaic compositional patchiness) has been explored mainly by analytical ultracentrifugation of bulk DNA. However, with the availability of large, good-quality chromosome sequences, and the recently developed computational methods to directly analyze patchiness on the genome sequence, an evolutionary comparative analysis can be carried out at the sequence level. Results The local variations in the scaling exponent of the Detrended Fluctuation Analysis are used here to analyze large-scale genome structure and directly uncover the characteristic scales present in genome sequences. Furthermore, through shuffling experiments of selected genome regions, computationally-identified, isochore-like regions were identified as the biological source for the uncovered large-scale genome structure. The phylogenetic distribution of short- and large-scale patchiness was determined in the best-sequenced genome assemblies from eleven eukaryotic genomes: mammals (Homo sapiens, Pan troglodytes, Mus musculus, Rattus norvegicus, and Canis familiaris), birds (Gallus gallus), fishes (Danio rerio), invertebrates (Drosophila melanogaster and Caenorhabditis elegans), plants (Arabidopsis thaliana) and yeasts (Saccharomyces cerevisiae). We found large-scale patchiness of genome structure, associated with in silico determined, isochore-like regions, throughout this wide phylogenetic range. Conclusion Large-scale genome structure is detected by directly analyzing DNA sequences in a wide range of eukaryotic chromosome sequences, from human to yeast. In all these genomes, large-scale patchiness can be associated with the isochore-like regions, as directly detected in silico at the sequence level.
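
The Detrended Fluctuation Analysis statistic used here is straightforward to sketch. Below is a minimal DFA-1 on synthetic white noise, for which the scaling exponent should come out near 0.5; the scales and series length are arbitrary choices, and a real genome analysis would first map bases to a numeric series (e.g. GC = +1, AT = -1).

```python
import numpy as np

def dfa(x, scales):
    """DFA-1: integrate the mean-centered series, split it into windows of
    size n, detrend each window with a least-squares line, and return the
    RMS fluctuation F(n) for each scale n. The (local) scaling exponent is
    the slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))
    F = []
    for n in scales:
        segs = [y[i * n:(i + 1) * n] for i in range(len(y) // n)]
        t = np.arange(n)
        resid = [s - np.polyval(np.polyfit(t, s, 1), t) for s in segs]
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.array(F)

rng = np.random.default_rng(0)
scales = [16, 32, 64, 128]
F = dfa(rng.standard_normal(8192), scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]  # near 0.5 for white noise
```

Tracking how this slope varies along a chromosome, rather than globally, is what exposes the characteristic scales and isochore-like regions the record describes.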

  15. Genomic divergences among cattle, dog and human estimated from large-scale alignments of genomic sequences

    Directory of Open Access Journals (Sweden)

    Shade Larry L

    2006-06-01

    Full Text Available Abstract Background Approximately 11 Mb of finished high quality genomic sequences were sampled from cattle, dog and human to estimate genomic divergences and their regional variation among these lineages. Results Optimal three-way multi-species global sequence alignments for 84 cattle clones or loci (each >50 kb of genomic sequence) were constructed using the human and dog genome assemblies as references. Genomic divergences and substitution rates were examined for each clone and for various sequence classes under different functional constraints. Analysis of these alignments revealed that the overall genomic divergences are relatively constant (0.32–0.37 change/site) for pairwise comparisons among cattle, dog and human; however substitution rates vary across genomic regions and among different sequence classes. A neutral mutation rate (2.0–2.2 × 10⁻⁹ change/site/year) was derived from ancestral repetitive sequences, whereas the substitution rate in coding sequences (1.1 × 10⁻⁹ change/site/year) was approximately half of the overall rate (1.9–2.0 × 10⁻⁹ change/site/year). Relative rate tests also indicated that cattle have a significantly faster rate of substitution as compared to dog and that this difference is about 6%. Conclusion This analysis provides a large-scale and unbiased assessment of genomic divergences and regional variation of substitution rates among cattle, dog and human. It is expected that these data will serve as a baseline for future mammalian molecular evolution studies.

  16. Genome-scale model guided design of Propionibacterium for enhanced propionic acid production

    Directory of Open Access Journals (Sweden)

    Laura Navone

    2018-06-01

    Full Text Available Production of propionic acid by fermentation of propionibacteria has gained increasing attention in the past few years. However, biomanufacturing of propionic acid cannot compete with the current oxo-petrochemical synthesis process due to its well-established infrastructure, low oil prices and the high downstream purification costs of microbial production. Strain improvement to increase propionic acid yield is the best alternative to reduce downstream purification costs. The recent generation of genome-scale models for a number of Propionibacterium species facilitates the rational design of metabolic engineering strategies and provides a new opportunity to explore the metabolic potential of the Wood-Werkman cycle. Previous strategies for strain improvement have individually targeted acid tolerance, rate of propionate production or minimisation of by-products. Here we used the P. freudenreichii subsp. shermanii and the pan-Propionibacterium genome-scale metabolic models (GEMs) to simultaneously target these combined issues. This was achieved by focussing on strategies which yield higher energies and directly suppress acetate formation. Using P. freudenreichii subsp. shermanii, two strategies were assessed. The first tested the ability to manipulate the redox balance to favour propionate production by over-expressing the first two enzymes of the pentose-phosphate pathway (PPP), Zwf (glucose-6-phosphate 1-dehydrogenase) and Pgl (6-phosphogluconolactonase). Results showed a 4-fold increase in propionate to acetate ratio during the exponential growth phase. Secondly, the ability to enhance the energy yield from propionate production by over-expressing an ATP-dependent phosphoenolpyruvate carboxykinase (PEPCK) and sodium-pumping methylmalonyl-CoA decarboxylase (MMD) was tested, which extended the exponential growth phase. Together, these strategies demonstrate that in silico design strategies are predictive and can be used to reduce by-product formation in

  17. Effect of amino acid supplementation on titer and glycosylation distribution in hybridoma cell cultures-Systems biology-based interpretation using genome-scale metabolic flux balance model and multivariate data analysis.

    Science.gov (United States)

    Reimonn, Thomas M; Park, Seo-Young; Agarabi, Cyrus D; Brorson, Kurt A; Yoon, Seongkyu

    2016-09-01

    Genome-scale flux balance analysis (FBA) is a powerful systems biology tool to characterize intracellular reaction fluxes during cell cultures. FBA estimates intracellular reaction rates by optimizing an objective function, subject to the constraints of a metabolic model and media uptake/excretion rates. A dynamic extension to FBA, dynamic flux balance analysis (DFBA), can calculate intracellular reaction fluxes as they change during cell cultures. In a previous study by Read et al. (2013), a series of informed amino acid supplementation experiments were performed on twelve parallel murine hybridoma cell cultures, and these data were leveraged for further analysis (Read et al., Biotechnol Prog. 2013;29:745-753). In order to understand the effects of media changes on the model murine hybridoma cell line, a systems biology approach is applied in the current study. Dynamic flux balance analysis was performed using a genome-scale mouse metabolic model, and multivariate data analysis was used for interpretation. The calculated reaction fluxes were examined using partial least squares and partial least squares discriminant analysis. The results indicate that media supplementation increases product yield because it raises nutrient levels, extending the growth phase, and the increased cell density allows for greater culture performance. At the same time, the directed supplementation does not change the overall metabolism of the cells. This supports the conclusion that product quality, as measured by glycoform assays, remains unchanged because the metabolism remains in a similar state. Additionally, the DFBA shows that the metabolic state varies more at the beginning of the culture but less by the middle of the growth phase, possibly due to stress on the cells during inoculation. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1163-1173, 2016. © 2016 American Institute of Chemical Engineers.
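The FBA step described in this record reduces to a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds derived from media uptake rates. A minimal sketch with SciPy, using a hypothetical three-reaction toy network rather than the genome-scale mouse model used in the study:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: v1 nutrient uptake -> A, v2: A -> B, v3: B -> biomass.
# Rows of S are the steady-state mass balances for internal metabolites A and B.
S = np.array([
    [1, -1,  0],   # A: produced by v1, consumed by v2
    [0,  1, -1],   # B: produced by v2, consumed by v3
])
bounds = [(0, 10), (0, None), (0, None)]  # uptake capped at 10 flux units

# linprog minimizes, so negate the biomass flux v3 to maximize it.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)       # optimal flux distribution
print(-res.fun)    # maximal biomass flux -> 10.0
```

Dynamic FBA extends this by re-solving the LP at each time step with the uptake bounds updated from the current extracellular concentrations.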

  18. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice; thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not just one, but all minimum subnetworks satisfying the required properties.
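The core of such MILP formulations is a big-M coupling of each flux to a binary activity indicator, with the sum of indicators minimized. A sketch with SciPy's `milp` (available in SciPy ≥ 1.9), on a made-up four-reaction toy network rather than one of the paper's benchmarks:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical toy network: v1 uptake -> A; v2 and v3 are alternative A -> B
# routes; v4: B -> biomass.  Binary y_i = 1 iff reaction i is active.
# Objective: minimize the number of active reactions with biomass flux >= 1.
n = 4
M = 10.0                                        # big-M flux upper bound
c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum of y_i

S = np.array([[1, -1, -1,  0],                  # metabolite A balance
              [0,  1,  1, -1]])                 # metabolite B balance
steady = LinearConstraint(np.hstack([S, np.zeros((2, n))]), 0, 0)
growth = LinearConstraint(np.hstack([[0, 0, 0, 1], np.zeros(n)]), 1, np.inf)
# v_i - M*y_i <= 0 forces y_i = 1 whenever reaction i carries flux
couple = LinearConstraint(np.hstack([np.eye(n), -M * np.eye(n)]), -np.inf, 0)

res = milp(c=c,
           constraints=[steady, growth, couple],
           integrality=np.concatenate([np.zeros(n), np.ones(n)]),
           bounds=Bounds(0, [M] * n + [1] * n))
print(int(round(res.fun)))   # minimum number of active reactions -> 3
```

The minimum here is 3: uptake, one of the two parallel routes, and the biomass reaction. Enumerating all minimum subnetworks, as the paper does, would additionally require adding a cut excluding each found solution and re-solving.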

  19. Reconstruction and in silico analysis of an Actinoplanes sp. SE50/110 genome-scale metabolic model for acarbose production

    Directory of Open Access Journals (Sweden)

    Yali Wang

    2015-06-01

    Full Text Available Actinoplanes sp. SE50/110 produces the α-glucosidase inhibitor acarbose, which is used to treat type 2 diabetes mellitus. To obtain a comprehensive understanding of its cellular metabolism, a genome-scale metabolic model of strain SE50/110, iYLW1028, was reconstructed on the basis of the genome annotation, biochemical databases, and extensive literature mining. Model iYLW1028 comprises 1028 genes, 1128 metabolites and 1219 reactions. 122 and 81 genes were essential for cell growth on acarbose synthesis and sucrose media, respectively, and the acarbose biosynthetic pathway in SE50/110 was expounded completely. Based on model predictions, the addition of arginine and histidine to the media increased acarbose production by 78% and 59%, respectively. Additionally, dissolved oxygen has a great effect on acarbose production based on model predictions. Furthermore, genes to be overexpressed for the overproduction of acarbose were identified, and the deletion of treY eliminated the formation of by-product component C. Model iYLW1028 is a useful platform for optimization and systems metabolic engineering of acarbose production in Actinoplanes sp. SE50/110.
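The gene-essentiality predictions mentioned here come from knockout simulations: constrain a reaction's flux to zero and re-maximize growth. A minimal sketch with SciPy on a hypothetical four-reaction network (not iYLW1028):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: v1 uptake -> A; v2 and v3 are redundant A -> B
# routes; v4: B -> biomass.  A reaction is "essential" if forcing its flux
# to zero drops the maximal biomass flux below a threshold.
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  1, -1]])   # metabolite B balance
names = ["v1", "v2", "v3", "v4"]
base_bounds = [(0, 10)] * 4
c = np.array([0, 0, 0, -1])       # maximize v4 (biomass); linprog minimizes

def max_growth(bounds):
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    return -res.fun

wild_type = max_growth(base_bounds)
essential = set()
for i, name in enumerate(names):
    ko = list(base_bounds)
    ko[i] = (0, 0)                # simulate the knockout
    if max_growth(ko) < 0.01 * wild_type:
        essential.add(name)
print(essential)                  # essential reactions: v1 and v4
```

The redundant pair v2/v3 is dispensable individually, so only uptake and the biomass reaction come out essential; at genome scale the same loop runs over gene-protein-reaction rules rather than single reactions.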

  20. The Genome-Scale Integrated Networks in Microorganisms

    Directory of Open Access Journals (Sweden)

    Tong Hao

    2018-02-01

    Full Text Available The genome-scale cellular network has become a necessary tool in the systematic analysis of microbes. In a cell, there are several layers (i.e., types) of molecular networks, for example, the genome-scale metabolic network (GMN), transcriptional regulatory network (TRN), and signal transduction network (STN). It has become clear that predictions based on only a single-layer network are limited and can be inaccurate; therefore, integrated networks constructed from these three types have attracted increasing interest. The function of a biological process in living cells is usually performed by interacting biological components, so it is necessary to integrate and analyze all the related components at the systems level to comprehensively and correctly describe physiological function in living organisms. In this review, we discuss three representative genome-scale cellular networks: GMN, TRN, and STN, representing different levels (i.e., metabolism, gene regulation, and cellular signaling) of a cell's activities. Furthermore, we discuss the integration of these three types of networks. With a growing understanding of the complexity of microbial cells, the development of integrated networks has become an inevitable trend in analyzing genome-scale cellular networks of microorganisms.

  1. Augment clinical measurement using a constraint-based esophageal model

    Science.gov (United States)

    Kou, Wenjun; Acharya, Shashank; Kahrilas, Peter; Patankar, Neelesh; Pandolfino, John

    2017-11-01

    Quantifying the mechanical properties of the esophageal wall is crucial to understanding the impairments of trans-esophageal flow characteristic of several esophageal diseases. However, these data are unavailable owing to technological limitations of current clinical diagnostic instruments, which instead display esophageal luminal cross-sectional area based on intraluminal impedance change. In this work, we developed an esophageal model to predict bolus flow and wall properties from clinical measurements. The model uses the constraint-based immersed-boundary method developed previously by our group. Specifically, we first approximate the time-dependent wall geometry from impedance planimetry data on luminal cross-sectional area. We then feed these, along with pressure data, into the model and compute wall tension from the simulated pressure and flow fields, and the material property from the strain-stress relationship. As examples, we applied this model to augment FLIP (Functional Luminal Imaging Probe) measurements in three clinical cases: a normal subject, achalasia, and eosinophilic esophagitis (EoE). Our findings suggest that wall stiffness was greatest in the EoE case, followed by the achalasia case, and then the normal subject. This work is supported by NIH Grants R01 DK56033 and R01 DK079902.

  2. GEMMER: GEnome-wide tool for Multi-scale Modeling data Extraction and Representation for Saccharomyces cerevisiae.

    Science.gov (United States)

    Mondeel, Thierry D G A; Crémazy, Frédéric; Barberis, Matteo

    2018-02-01

    Multi-scale modeling of biological systems requires integration of various information about genes and proteins that are connected together in networks. Spatial, temporal and functional information is available; however, it is still a challenge to retrieve and explore this knowledge in an integrated, quick and user-friendly manner. We present GEMMER (GEnome-wide tool for Multi-scale Modelling data Extraction and Representation), a web-based data-integration tool that facilitates high quality visualization of physical, regulatory and genetic interactions between proteins/genes in Saccharomyces cerevisiae. GEMMER creates network visualizations that integrate information on function, temporal expression, localization and abundance from various existing databases. GEMMER supports modeling efforts by effortlessly gathering this information and providing convenient export options for images and their underlying data. GEMMER is freely available at http://gemmer.barberislab.com. Source code, written in Python, JavaScript library D3js, PHP and JSON, is freely available at https://github.com/barberislab/GEMMER. M.Barberis@uva.nl. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.

  3. Structural constraints in the packaging of bluetongue virus genomic segments.

    Science.gov (United States)

    Burkhardt, Christiane; Sung, Po-Yu; Celma, Cristina C; Roy, Polly

    2014-10-01

    The mechanism used by bluetongue virus (BTV) to ensure the sorting and packaging of its 10 genomic segments is still poorly understood. In this study, we investigated the packaging constraints for two BTV genomic segments from two different serotypes. Segment 4 (S4) of BTV serotype 9 was mutated sequentially and packaging of mutant ssRNAs was investigated by two newly developed RNA packaging assay systems, one in vivo and the other in vitro. Modelling of the mutated ssRNA followed by biochemical data analysis suggested that a conformational motif formed by interaction of the 5' and 3' ends of the molecule was necessary and sufficient for packaging. A similar structural signal was also identified in S8 of BTV serotype 1. Furthermore, the same conformational analysis of secondary structures for positive-sense ssRNAs was used to generate a chimeric segment that maintained the putative packaging motif but contained unrelated internal sequences. This chimeric segment was packaged successfully, confirming that the motif identified directs the correct packaging of the segment. © 2014 The Authors.

  4. Observational Constraints for Modeling Diffuse Molecular Clouds

    Science.gov (United States)

    Federman, S. R.

    2014-02-01

    Ground-based and space-borne observations of diffuse molecular clouds suggest a number of areas where further improvements to modeling efforts are warranted. I will highlight those that have the widest applicability. The range in CO fractionation caused by selective isotope photodissociation, in particular the large ¹²C¹⁶O/¹³C¹⁶O ratios observed toward stars in Ophiuchus, is not reproduced well by current models. Our ongoing laboratory measurements of oscillator strengths and predissociation rates for Rydberg transitions in CO isotopologues may help clarify the situation. The CH+ abundance continues to draw attention. Small-scale structure seen toward ζ Per may provide additional constraints on the possible synthesis routes. The connection between results from optical transitions and those from radio and sub-millimeter wave transitions requires further effort. A study of OH+ and OH toward background stars reveals that these species favor different environments. This brings to focus the need to model each cloud along the line of sight separately, and to allow the physical conditions to vary within an individual cloud, in order to gain further insight into the chemistry. Now that an extensive set of data on molecular excitation is available, the models should seek to reproduce these data to place further constraints on the modeling results.

  5. Constraints and entropy in a model of network evolution

    Science.gov (United States)

    Tee, Philip; Wakeman, Ian; Parisis, George; Dawes, Jonathan; Kiss, István Z.

    2017-11-01

    Barabási-Albert's "Scale Free" model is the starting point for much of the accepted theory of the evolution of real world communication networks. Careful comparison of the theory with a wide range of real world networks, however, indicates that the model is, in some cases, only a rough approximation to the dynamical evolution of real networks. In particular, the exponent γ of the power law distribution of degree is predicted by the model to be exactly 3, whereas in a number of real world networks it has values between 1.2 and 2.9. In addition, the degree distributions of real networks exhibit cutoffs at high node degree, which indicates the existence of maximal node degrees for these networks. In this paper we propose a simple extension to the "Scale Free" model, which offers better agreement with the experimental data. This improvement is satisfying, but the model still does not explain why the attachment probabilities should favor high degree nodes, or indeed how constraints arise in non-physical networks. Using recent advances in the analysis of the entropy of graphs at the node level, we propose a first-principles derivation of the "Scale Free" and "constraints" models from thermodynamic principles, and demonstrate that both preferential attachment and constraints could arise as a natural consequence of the second law of thermodynamics.
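A hard degree cutoff of the kind the abstract attributes to real networks can be grafted onto preferential attachment in a few lines. The construction below is an illustrative sketch under that assumption, not the authors' exact model:

```python
import random

def constrained_ba(n, m=2, k_max=10, seed=42):
    """Preferential attachment where nodes at degree k_max accept no new links."""
    rng = random.Random(seed)
    degree = {i: m for i in range(m + 1)}       # start from a complete graph K_{m+1}
    for new in range(m + 1, n):
        eligible = [v for v in degree if degree[v] < k_max]
        targets = set()
        while len(targets) < m and len(targets) < len(eligible):
            # pick a target with probability proportional to its current degree
            pick = rng.choices(eligible, weights=[degree[v] for v in eligible])[0]
            targets.add(pick)
        for t in targets:
            degree[t] += 1
        degree[new] = len(targets)
    return degree

deg = constrained_ba(500, m=2, k_max=10)
print(max(deg.values()))   # never exceeds k_max
```

Without the cutoff this reduces to the standard Barabási-Albert process; with it, the degree distribution develops exactly the high-degree truncation the abstract describes.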

  6. In silico analysis of human metabolism: Reconstruction, contextualization and application of genome-scale models

    DEFF Research Database (Denmark)

    Geng, Jun; Nielsen, Jens

    2017-01-01

    The rising prevalence of metabolic diseases calls for a holistic approach to analysis of the underlying nature of abnormalities in cellular functions. Through mathematical representation and topological analysis of cellular metabolism, GEnome scale metabolic Models (GEMs) provide a promising framework ... that correctly describe interactions between cells or tissues, and we therefore discuss how GEMs can be integrated with blood circulation models. Finally, we end the review by proposing some possible future research directions.

  7. Safety constraints applied to an adaptive Bayesian condition-based maintenance optimization model

    International Nuclear Information System (INIS)

    Flage, Roger; Coit, David W.; Luxhøj, James T.; Aven, Terje

    2012-01-01

    A model is described that determines an optimal inspection and maintenance scheme for a deteriorating unit with a stochastic degradation process with independent and stationary increments and for which the parameters are uncertain. This model and the resulting maintenance plans offer some distinct benefits compared to prior research, because the uncertainty of the degradation process is accommodated by a Bayesian approach and two new safety constraints have been applied to the problem: (1) with a given subjective probability (degree of belief), the limiting relative frequency of one or more failures during a fixed time interval is bounded; or (2) the subjective probability of one or more failures during a fixed time interval is bounded. In the model, the parameter(s) of a condition-based inspection scheduling function and a preventive replacement threshold are jointly optimized upon each replacement and inspection so as to minimize the expected long-run cost per unit of time, while also satisfying one of the specified safety constraints. A numerical example is included to illustrate the effect of imposing each of the two different safety constraints.

  8. Constraints on Exotic Dipole-Dipole Couplings between Electrons at the Micrometer Scale.

    Science.gov (United States)

    Kotler, Shlomi; Ozeri, Roee; Kimball, Derek F Jackson

    2015-08-21

    New constraints on exotic dipole-dipole interactions between electrons at the micrometer scale are established, based on a recent measurement of the magnetic interaction between two trapped ⁸⁸Sr⁺ ions. For light bosons (mass ≤ 0.1 eV) we obtain a 90% confidence interval for an axial-vector-mediated interaction strength of |g_A^e g_A^e/4πℏc| ≤ 1.2×10⁻¹⁷. Assuming CPT invariance, this constraint is compared to that on anomalous electron-positron interactions, derived from positronium hyperfine spectroscopy. We find that the electron-electron constraint is 6 orders of magnitude more stringent than the electron-positron counterpart. Bounds on pseudoscalar-mediated interactions as well as on torsion gravity are also derived and compared with previous work performed at different length scales. Our constraints benefit from the high controllability of the experimental system, which contained only two trapped particles. It therefore suggests a useful new platform for exotic particle searches, complementing other experimental efforts.

  9. Genome-scale metabolic model of Pichia pastoris with native and humanized glycosylation of recombinant proteins.

    Science.gov (United States)

    Irani, Zahra Azimzadeh; Kerkhoven, Eduard J; Shojaosadati, Seyed Abbas; Nielsen, Jens

    2016-05-01

    Pichia pastoris is used for commercial production of human therapeutic proteins, and genome-scale models of P. pastoris metabolism have been generated in the past to study the metabolism and associated protein production by this yeast. A major challenge with clinical usage of recombinant proteins produced by P. pastoris is the difference in N-glycosylation between proteins produced by humans and by this yeast. However, through metabolic engineering, a P. pastoris strain capable of producing humanized N-glycosylated proteins was constructed. The current genome-scale models of P. pastoris address neither native nor humanized N-glycosylation, and we therefore developed ihGlycopastoris, an extension to the iLC915 model with both native and humanized N-glycosylation for recombinant protein production, as well as an estimate of the N-glycosylation of native P. pastoris proteins. This new model gives a better prediction of protein yield, demonstrates the effect of the different types of N-glycosylation on protein yield, and can be used to predict potential targets for strain improvement. The model represents a step towards a more complete description of protein production in P. pastoris, which is required for using these models to understand and optimize protein production processes. © 2015 Wiley Periodicals, Inc.

  10. Fuzzy Constraint-Based Agent Negotiation

    Institute of Scientific and Technical Information of China (English)

    Menq-Wen Lin; K. Robert Lai; Ting-Jung Yu

    2005-01-01

    Conflicts between two or more parties arise for various reasons and from various perspectives. Thus, resolution of conflicts frequently relies on some form of negotiation. This paper presents a general problem-solving framework for modeling multi-issue multilateral negotiation using fuzzy constraints. Agent negotiation is formulated as a distributed fuzzy constraint satisfaction problem (DFCSP). Fuzzy constraints are thus used to naturally represent each agent's desires involving imprecision and human conceptualization, particularly when lexical imprecision and subjective matters are concerned. Building on fuzzy constraint-based problem-solving, our approach enables an agent not only to systematically relax fuzzy constraints to generate a proposal, but also to employ fuzzy similarity to select the alternative that is subject to its acceptability by the opponents. The goal is to reach an agreement that benefits all agents with a high satisfaction degree of fuzzy constraints, and to move towards the deal more quickly, since the search focuses only on the feasible solution space. An application to multilateral negotiation of travel planning is provided to demonstrate the usefulness and effectiveness of our framework.
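The evaluate-and-relax loop over fuzzy constraints can be sketched as follows; the constraint names, membership shapes, and offer values are illustrative, not taken from the paper:

```python
# Each fuzzy constraint is a membership function returning a satisfaction
# degree in [0, 1]; an offer's overall degree is the minimum across
# constraints, and an agent accepts offers at or above its (gradually
# relaxed) threshold.
def falling(x, full, zero):
    """Satisfaction 1.0 up to `full`, decreasing linearly to 0.0 at `zero`."""
    if x <= full:
        return 1.0
    if x >= zero:
        return 0.0
    return (zero - x) / (zero - full)

def satisfaction(offer, constraints):
    return min(f(offer[k]) for k, f in constraints.items())

constraints = {
    "price": lambda p: falling(p, full=100, zero=150),  # prefer <= 100
    "days":  lambda d: falling(d, full=3,   zero=7),    # prefer <= 3 days
}
offer = {"price": 125, "days": 3}
print(satisfaction(offer, constraints))   # min(0.5, 1.0) -> 0.5
```

Relaxation then corresponds to lowering the acceptance threshold step by step (e.g. 1.0, 0.8, 0.6, ...) until some offer's overall degree clears it for all agents.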

  11. Respiration climacteric in tomato fruits elucidated by constraint-based modelling.

    Science.gov (United States)

    Colombié, Sophie; Beauvoit, Bertrand; Nazaret, Christine; Bénard, Camille; Vercambre, Gilles; Le Gall, Sophie; Biais, Benoit; Cabasson, Cécile; Maucourt, Mickaël; Bernillon, Stéphane; Moing, Annick; Dieuaide-Noubhani, Martine; Mazat, Jean-Pierre; Gibon, Yves

    2017-03-01

    Tomato is a model organism for studying the development of fleshy fruit, including ripening initiation. Unfortunately, few studies deal with the brief phase of accelerated ripening associated with the respiration climacteric, because of the practical problems involved in measuring fruit respiration. Because constraint-based modelling allows accurate prediction of metabolic fluxes, we investigated the respiration and energy dissipation of fruit pericarp at the breaker stage using a detailed stoichiometric model of the respiratory pathway, including alternative oxidase and uncoupling proteins. Assuming steady-state, a metabolic dataset was transformed into constraints to solve the model on a daily basis throughout tomato fruit development. We detected a peak of CO₂ released and an excess of energy dissipated at 40 d post anthesis (DPA), just before the onset of ripening, coinciding with the respiration climacteric. We demonstrated the unbalanced carbon allocation, with a sharp slowdown of accumulation (for syntheses and storage) and the beginning of degradation of starch and cell wall polysaccharides. Experiments with fruits harvested from plants cultivated under stress conditions confirmed the concept. We conclude that modelling with an accurate metabolic dataset is an efficient tool to bypass the difficulty of measuring fruit respiration and to elucidate the underlying mechanisms of ripening. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  12. Multi-scale structural community organisation of the human genome.

    Science.gov (United States)

    Boulos, Rasha E; Tremblay, Nicolas; Arneodo, Alain; Borgnat, Pierre; Audit, Benjamin

    2017-04-11

    Structural interaction frequency matrices between all genome loci are now experimentally achievable thanks to high-throughput chromosome conformation capture technologies. This raises a new methodological challenge for computational biology, which consists in objectively extracting from these data the structural motifs characteristic of genome organisation. We deployed the fast multi-scale community mining algorithm based on spectral graph wavelets to characterise the networks of intra-chromosomal interactions in human cell lines. We observed that there exist structural domains of all sizes up to chromosome length and demonstrated that the set of structural communities forms a hierarchy of chromosome segments. Hence, at all scales, chromosome folding predominantly involves interactions between neighbouring sites rather than the formation of links between distant loci. Multi-scale structural decomposition of human chromosomes provides an original framework to question structural organisation and its relationship to functional regulation across the scales. By construction the proposed methodology is independent of the precise assembly of the reference genome and is thus directly applicable to genomes whose assembly is not fully determined.

  13. When clusters collide: constraints on antimatter on the largest scales

    International Nuclear Information System (INIS)

    Steigman, Gary

    2008-01-01

    Observations have ruled out the presence of significant amounts of antimatter in the Universe on scales ranging from the solar system, to the Galaxy, to groups and clusters of galaxies, and even to distances comparable to the scale of the present horizon. Except for the model-dependent constraints on the largest scales, the most significant upper limits to diffuse antimatter in the Universe are those on the ∼Mpc scale of clusters of galaxies, provided by the EGRET upper bounds on annihilation gamma rays from galaxy clusters whose intracluster gas is revealed through its x-ray emission. On the scale of individual clusters of galaxies, the upper bounds on the fraction of mixed matter and antimatter for the 55 clusters from a flux-limited x-ray survey range from 5 × 10⁻⁹ to 10⁻⁶, strongly suggesting that individual clusters of galaxies are made entirely of matter or of antimatter. X-ray and gamma-ray observations of colliding clusters of galaxies, such as the Bullet Cluster, permit these constraints to be extended to even larger scales. If the observations of the Bullet Cluster, where the upper bound on the antimatter fraction is of order 10⁻⁶, can be generalized to other colliding clusters of galaxies, cosmologically significant amounts of antimatter will be excluded on scales of order ∼20 Mpc (M ∼ 5×10¹⁵ M_sun)

  14. Constraints on Interacting Scalars in 2T Field Theory and No Scale Models in 1T Field Theory

    CERN Document Server

    Bars, Itzhak

    2010-01-01

    In this paper I determine the general form of the physical and mathematical restrictions that arise on the interactions of gravity and scalar fields in the 2T field theory setting, in d+2 dimensions, as well as in the emerging shadows in d dimensions. These constraints on scalar fields follow from an underlying Sp(2,R) gauge symmetry in phase space. Determining these general constraints provides a basis for the construction of 2T supergravity, as well as for physical applications in 1T field theory, which are discussed briefly here and in more detail elsewhere. In particular, no scale models that lead to a vanishing cosmological constant at the classical level emerge naturally in this setting.

  15. Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques

    Directory of Open Access Journals (Sweden)

    Nabila Khodeir

    2018-01-01

    Full Text Available Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge to provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of student errors. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the roles of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student. This allows the student to discover and correct any errors in his/her solution. MAST has been preliminarily evaluated empirically, and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, students using MAST show a significant improvement in post-test exam results, relative to their pre-test results, in comparison to those relying on the textbook.

  16. Genome-scale metabolic model of the fission yeast Schizosaccharomyces pombe and the reconciliation of in silico/in vivo mutant growth

    Science.gov (United States)

    2012-01-01

    Background Over the last decade, genome-scale metabolic models have been playing increasingly important roles in elucidating metabolic characteristics of biological systems for a wide range of applications including, but not limited to, system-wide identification of drug targets and production of high value biochemical compounds. However, these genome-scale metabolic models must be able to first predict known in vivo phenotypes before they are applied to these applications with high confidence. One benchmark for measuring the in silico capability in predicting in vivo phenotypes is the use of single-gene mutant libraries to measure the accuracy of knockout simulations in predicting mutant growth phenotypes. Results Here we employed a systematic and iterative process, designated as Reconciling In silico/in vivo mutaNt Growth (RING), to settle discrepancies between in silico predictions and in vivo observations for a newly reconstructed genome-scale metabolic model of the fission yeast, Schizosaccharomyces pombe, SpoMBEL1693. The predictive capabilities of the genome-scale metabolic model in predicting single-gene mutant growth phenotypes were measured against the single-gene mutant library of S. pombe. The use of RING resulted in improving the overall predictive capability of SpoMBEL1693 by 21.5%, from 61.2% to 82.7% (92.5% of the negative predictions matched the observed growth phenotype and 79.7% of the positive predictions matched the observed growth phenotype). Conclusion This study presents validation and refinement of a newly reconstructed metabolic model of the yeast S. pombe, through improving the metabolic model's predictive capabilities by reconciling the in silico predicted growth phenotypes of single-gene knockout mutants with experimental in vivo growth data. PMID:22631437
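The three agreement figures quoted above (overall, positive-prediction, and negative-prediction match rates) all derive from a 2×2 confusion matrix of predicted versus observed growth. A small sketch with made-up counts, not the S. pombe data:

```python
# Agreement metrics for reconciling in silico knockout predictions with
# in vivo growth phenotypes.  "Positive" = predicted viable, "negative" =
# predicted lethal; tp/fp/tn/fn are toy counts for illustration.
def agreement(tp, fp, tn, fn):
    total = tp + fp + tn + fn
    return {
        "overall":  (tp + tn) / total,   # all predictions that matched
        "positive": tp / (tp + fp),      # predicted-viable that actually grew
        "negative": tn / (tn + fn),      # predicted-lethal that did not grow
    }

print(agreement(tp=60, fp=15, tn=20, fn=5))
```

An iterative reconciliation scheme like RING recomputes these rates after each round of model curation and stops when the mismatched cases are resolved or explained.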

  17. Genome-Scale Analysis of Translation Elongation with a Ribosome Flow Model

    Science.gov (United States)

    Meilijson, Isaac; Kupiec, Martin; Ruppin, Eytan

    2011-01-01

    We describe the first large-scale analysis of gene translation that is based on a model that takes into account the physical and dynamical nature of this process. The Ribosome Flow Model (RFM) predicts fundamental features of the translation process, including translation rates, protein abundance levels, ribosomal densities and the relation between all these variables, better than alternative (‘non-physical’) approaches. In addition, we show that the RFM can be used for accurate inference of various other quantities including genes' initiation rates and translation costs. These quantities could not be inferred by previous predictors. We find that increasing the number of available ribosomes (or equivalently the initiation rate) increases the genomic translation rate and the mean ribosome density only up to a certain point, beyond which both saturate. Strikingly, assuming that the translation system is tuned to work at the pre-saturation point maximizes the predictive power of the model with respect to experimental data. This result suggests that in all organisms that were analyzed (from bacteria to Human), the global initiation rate is optimized to attain the pre-saturation point. The fact that similar results were not observed for heterologous genes indicates that this feature is under selection. Remarkably, the gap between the performance of the RFM and alternative predictors is strikingly large in the case of heterologous genes, testifying to the model's promising biotechnological value in predicting the abundance of heterologous proteins before expressing them in the desired host. PMID:21909250
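The saturation behaviour described above can be reproduced from the RFM equations, in which the ribosome density p_i on site i evolves as dp_i/dt = λ_{i-1} p_{i-1}(1 − p_i) − λ_i p_i(1 − p_{i+1}), with initiation flux λ_0(1 − p_1) and exit flux λ_n p_n. A minimal forward-Euler integration to steady state, with illustrative rates:

```python
# Forward-Euler integration of the Ribosome Flow Model (RFM) to steady state.
# Rates lam0 (initiation) and lam (site elongation rates) are illustrative.
def rfm_steady_state(lam0, lam, dt=0.01, steps=100_000):
    n = len(lam)
    p = [0.0] * n                      # site occupancies start empty
    for _ in range(steps):
        inflow = lam0 * (1 - p[0])     # initiation flux into site 1
        new = p[:]
        for i in range(n):
            # exit flux of site i; the last site empties unconditionally
            out = lam[i] * p[i] * ((1 - p[i + 1]) if i + 1 < n else 1.0)
            new[i] = p[i] + dt * (inflow - out)
            inflow = out               # old-value flux feeds the next site
        p = new
    return p, lam[-1] * p[-1]          # densities and steady production rate

p, rate = rfm_steady_state(lam0=0.8, lam=[1.0] * 5)
```

At steady state the flux through every site equals the production rate, so sweeping `lam0` upward and plotting `rate` exhibits exactly the saturation the abstract describes.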

  18. A Model for the Two-dimensional no Isolated Bits Constraint

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2006-01-01

    A stationary model is presented for the two-dimensional (2-D) no isolated bits (n.i.b.) constraint over an extended alphabet defined by the elements within 1 by 2 blocks. This block-wise model is based on a set of sufficient conditions for a Pickard random field (PRF) over an m-ary alphabet. Iterative techniques are applied as part of determining the model parameters. Given two Markov chains describing a boundary, an algorithm is presented which determines whether a certain PRF consistent with the boundary exists. Iterative scaling is used as part of the algorithm, which also determines…

  19. Estimated allele substitution effects underlying genomic evaluation models depend on the scaling of allele counts

    NARCIS (Netherlands)

    Bouwman, Aniek C.; Hayes, Ben J.; Calus, Mario P.L.

    2017-01-01

    Background: Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of

  20. Annotated Draft Genome Assemblies for the Northern Bobwhite (Colinus virginianus) and the Scaled Quail (Callipepla squamata) Reveal Disparate Estimates of Modern Genome Diversity and Historic Effective Population Size.

    Science.gov (United States)

    Oldeschulte, David L; Halley, Yvette A; Wilson, Miranda L; Bhattarai, Eric K; Brashear, Wesley; Hill, Joshua; Metz, Richard P; Johnson, Charles D; Rollins, Dale; Peterson, Markus J; Bickhart, Derek M; Decker, Jared E; Sewell, John F; Seabury, Christopher M

    2017-09-07

    Northern bobwhite (Colinus virginianus; hereafter bobwhite) and scaled quail (Callipepla squamata) populations have suffered precipitous declines across most of their US ranges. Illumina-based first- (v1.0) and second- (v2.0) generation draft genome assemblies for the scaled quail and the bobwhite produced N50 scaffold sizes of 1.035 and 2.042 Mb, thereby producing a 45-fold improvement in contiguity over the existing bobwhite assembly, and ≥90% of the assembled genomes were captured within 1313 and 8990 scaffolds, respectively. The scaled quail assembly (v1.0 = 1.045 Gb) was ∼20% smaller than the bobwhite (v2.0 = 1.254 Gb), which was supported by kmer-based estimates of genome size. Nevertheless, estimates of GC content (41.72%; 42.66%), genome-wide repetitive content (10.40%; 10.43%), and MAKER-predicted protein coding genes (17,131; 17,165) were similar for the scaled quail (v1.0) and bobwhite (v2.0) assemblies, respectively. BUSCO analyses utilizing 3023 single-copy orthologs revealed a high level of assembly completeness for the scaled quail (v1.0; 84.8%) and the bobwhite (v2.0; 82.5%), as verified by comparison with well-established avian genomes. We also detected 273 putative segmental duplications in the scaled quail genome (v1.0), and 711 in the bobwhite genome (v2.0), including some that were shared among both species. Autosomal variant prediction revealed ∼2.48 and 4.17 heterozygous variants per kilobase within the scaled quail (v1.0) and bobwhite (v2.0) genomes, respectively, and estimates of historic effective population size were uniformly higher for the bobwhite across all time points in a coalescent model. However, large-scale declines were predicted for both species beginning ∼15–20 KYA. Copyright © 2017 Oldeschulte et al.

  1. Annotated Draft Genome Assemblies for the Northern Bobwhite (Colinus virginianus and the Scaled Quail (Callipepla squamata Reveal Disparate Estimates of Modern Genome Diversity and Historic Effective Population Size

    Directory of Open Access Journals (Sweden)

    David L. Oldeschulte

    2017-09-01

    Full Text Available Northern bobwhite (Colinus virginianus; hereafter bobwhite) and scaled quail (Callipepla squamata) populations have suffered precipitous declines across most of their US ranges. Illumina-based first- (v1.0) and second- (v2.0) generation draft genome assemblies for the scaled quail and the bobwhite produced N50 scaffold sizes of 1.035 and 2.042 Mb, thereby producing a 45-fold improvement in contiguity over the existing bobwhite assembly, and ≥90% of the assembled genomes were captured within 1313 and 8990 scaffolds, respectively. The scaled quail assembly (v1.0 = 1.045 Gb) was ∼20% smaller than the bobwhite (v2.0 = 1.254 Gb), which was supported by kmer-based estimates of genome size. Nevertheless, estimates of GC content (41.72%; 42.66%), genome-wide repetitive content (10.40%; 10.43%), and MAKER-predicted protein coding genes (17,131; 17,165) were similar for the scaled quail (v1.0) and bobwhite (v2.0) assemblies, respectively. BUSCO analyses utilizing 3023 single-copy orthologs revealed a high level of assembly completeness for the scaled quail (v1.0; 84.8%) and the bobwhite (v2.0; 82.5%), as verified by comparison with well-established avian genomes. We also detected 273 putative segmental duplications in the scaled quail genome (v1.0), and 711 in the bobwhite genome (v2.0), including some that were shared among both species. Autosomal variant prediction revealed ∼2.48 and 4.17 heterozygous variants per kilobase within the scaled quail (v1.0) and bobwhite (v2.0) genomes, respectively, and estimates of historic effective population size were uniformly higher for the bobwhite across all time points in a coalescent model. However, large-scale declines were predicted for both species beginning ∼15–20 KYA.

  2. Anomalous scaling of structure functions and dynamic constraints on turbulence simulations

    International Nuclear Information System (INIS)

    Yakhot, Victor; Sreenivasan, Katepalli R.

    2006-12-01

    The connection between anomalous scaling of structure functions (intermittency) and numerical methods for turbulence simulations is discussed. It is argued that the computational work for direct numerical simulations (DNS) of fully developed turbulence increases as Re^4, and not as Re^3 as expected from Kolmogorov's theory, where Re is a large-scale Reynolds number. Various relations for the moments of acceleration and velocity derivatives are derived. An infinite set of exact constraints on dynamically consistent subgrid models for Large Eddy Simulations (LES) is derived from the Navier-Stokes equations, and some problems of principle associated with existing LES models are highlighted. (author)

  3. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures

    Science.gov (United States)

    Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico

    2017-06-01

    Even the quantum simulation of an apparently simple molecule such as Fe2S2 requires a considerable number of qubits of the order of 10^6, while more complex molecules such as alanine (C3H7NO2) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage from the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor technology node, so the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency

  4. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Directory of Open Access Journals (Sweden)

    Liu Lili

    2013-06-01

    Full Text Available Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of the different regulatory information levels of an organism is expected to provide a good way of mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice is blocking the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via reconstruction and integration of omics regulatory information. Firstly, we reconstructed a genome-scale metabolic model containing 4,462 function genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Eventually, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice, and for analysis of its molecular regulatory network.

  5. Genome-scale neurogenetics: methodology and meaning.

    Science.gov (United States)

    McCarroll, Steven A; Feng, Guoping; Hyman, Steven E

    2014-06-01

    Genetic analysis is currently offering glimpses into molecular mechanisms underlying such neuropsychiatric disorders as schizophrenia, bipolar disorder and autism. After years of frustration, success in identifying disease-associated DNA sequence variation has followed from new genomic technologies, new genome data resources, and global collaborations that could achieve the scale necessary to find the genes underlying highly polygenic disorders. Here we describe early results from genome-scale studies of large numbers of subjects and the emerging significance of these results for neurobiology.

  6. Ensembl Genomes: an integrative resource for genome-scale data from non-vertebrate species.

    Science.gov (United States)

    Kersey, Paul J; Staines, Daniel M; Lawson, Daniel; Kulesha, Eugene; Derwent, Paul; Humphrey, Jay C; Hughes, Daniel S T; Keenan, Stephan; Kerhornou, Arnaud; Koscielny, Gautier; Langridge, Nicholas; McDowall, Mark D; Megy, Karine; Maheswari, Uma; Nuhn, Michael; Paulini, Michael; Pedro, Helder; Toneva, Iliana; Wilson, Derek; Yates, Andrew; Birney, Ewan

    2012-01-01

    Ensembl Genomes (http://www.ensemblgenomes.org) is an integrative resource for genome-scale data from non-vertebrate species. The project exploits and extends technology (for genome annotation, analysis and dissemination) developed in the context of the (vertebrate-focused) Ensembl project and provides a complementary set of resources for non-vertebrate species through a consistent set of programmatic and interactive interfaces. These provide access to data including reference sequence, gene models, transcriptional data, polymorphisms and comparative analysis. Since its launch in 2009, Ensembl Genomes has undergone rapid expansion, with the goal of providing coverage of all major experimental organisms, and additionally including taxonomic reference points to provide the evolutionary context in which genes can be understood. Against the backdrop of a continuing increase in genome sequencing activities in all parts of the tree of life, we seek to work, wherever possible, with the communities actively generating and using data, and are participants in a growing range of collaborations involved in the annotation and analysis of genomes.

  7. Strongest experimental constraints on SU(5)×U(1) supergravity models

    Science.gov (United States)

    Lopez, Jorge L.; Nanopoulos, D. V.; Park, Gye T.; Zichichi, A.

    1994-01-01

    We consider a class of well-motivated string-inspired flipped SU(5) supergravity models which include four supersymmetry-breaking scenarios: no-scale, strict no-scale, dilaton, and special dilaton, such that only three parameters are needed to describe all new phenomena (mt, tanβ, mg~). We show that the CERN LEP precise measurements of the electroweak parameters in the form of the ɛ1 variable and the CLEO II allowed range for B(b→sγ) are at present the most important experimental constraints on this class of models. For mt ≳ 155 (165) GeV, the ɛ1 constraint [at 90 (95)% C.L.] requires the presence of light charginos (m+/-χ1360 GeV, mq~sγ) constraint excludes a significant fraction of the otherwise allowed region in the (m+/-χ1, tanβ) plane (irrespective of the magnitude of the chargino mass), while future experimental improvements will result in decisive tests of these models. In light of the ɛ1 constraint, we conclude that the outlook for chargino and selectron detection at LEP II and at DESY HERA is quite favorable in this class of models.

  8. Large-scale chromosome folding versus genomic DNA sequences: A discrete double Fourier transform technique.

    Science.gov (United States)

    Chechetkin, V R; Lobzin, V V

    2017-08-07

    State-of-the-art techniques combining imaging methods and high-throughput genomic mapping tools have led to significant progress in detailing the chromosome architecture of various organisms. However, a gap still remains between the rapidly growing structural data on chromosome folding and the large-scale genome organization. Could part of the information on chromosome folding be obtained directly from the underlying genomic DNA sequences abundantly stored in databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT). DDFT serves for the detection of large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied to both genomic DNA sequences and corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to replication and transcription and can also be used for the assessment of temperature or supercoiling effects on chromosome folding. We tested the method on the genome of E. coli K-12 and found good correspondence with the annotated domains/units established experimentally. As a brief illustration of the further abilities of DDFT, we also studied the large-scale genome organization of bacteriophage PHIX174 and the bacterium Caulobacter crescentus. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge on the chromosome architecture and genome organization. Copyright © 2017 Elsevier Ltd. All rights reserved.
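
    The core idea of a double Fourier transform, applying a second DFT to the magnitude spectrum so that equidistant spectral peaks reveal the size of repeated units, can be sketched on a synthetic 0/1 "genome-like" signal. This is an illustrative toy, not the authors' exact procedure:

```python
import numpy as np

# Synthetic sequence of length 512 built from repeated units of period 16
# (1 for the first 4 positions of each unit, 0 elsewhere).
N, period = 512, 16
x = np.array([1.0 if (n % period) < 4 else 0.0 for n in range(N)])

spec = np.abs(np.fft.fft(x))   # magnitude spectrum: peaks spaced N/period apart
spec -= spec.mean()            # remove the DC offset before the second transform
ddft = np.abs(np.fft.fft(spec))

# The strongest non-trivial peak of the second transform sits at a
# multiple of the underlying unit length (here 16).
m = int(np.argmax(ddft[1:N // 2]) + 1)
```

The second transform converts the regular spacing of harmonics in the first spectrum back into the repeat length itself, which is how periodic domain structure could be read off directly from a sequence-derived signal.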

  9. Toward the automated generation of genome-scale metabolic networks in the SEED.

    Science.gov (United States)

    DeJongh, Matthew; Formsma, Kevin; Boillot, Paul; Gould, John; Rycenga, Matthew; Best, Aaron

    2007-04-26

    Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative genome annotation and analysis. 
Our method sets the

  10. Toward the automated generation of genome-scale metabolic networks in the SEED

    Directory of Open Access Journals (Sweden)

    Gould John

    2007-04-01

    Full Text Available Abstract Background Current methods for the automated generation of genome-scale metabolic networks focus on genome annotation and preliminary biochemical reaction network assembly, but do not adequately address the process of identifying and filling gaps in the reaction network, and verifying that the network is suitable for systems level analysis. Thus, current methods are only sufficient for generating draft-quality networks, and refinement of the reaction network is still largely a manual, labor-intensive process. Results We have developed a method for generating genome-scale metabolic networks that produces substantially complete reaction networks, suitable for systems level analysis. Our method partitions the reaction space of central and intermediary metabolism into discrete, interconnected components that can be assembled and verified in isolation from each other, and then integrated and verified at the level of their interconnectivity. We have developed a database of components that are common across organisms, and have created tools for automatically assembling appropriate components for a particular organism based on the metabolic pathways encoded in the organism's genome. This focuses manual efforts on that portion of an organism's metabolism that is not yet represented in the database. We have demonstrated the efficacy of our method by reverse-engineering and automatically regenerating the reaction network from a published genome-scale metabolic model for Staphylococcus aureus. Additionally, we have verified that our method capitalizes on the database of common reaction network components created for S. aureus, by using these components to generate substantially complete reconstructions of the reaction networks from three other published metabolic models (Escherichia coli, Helicobacter pylori, and Lactococcus lactis). We have implemented our tools and database within the SEED, an open-source software environment for comparative

  11. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der [California Univ., San Francisco, CA (United States); Univ. of California, Berkeley, CA (United States)

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  12. Sequence modelling and an extensible data model for genomic database

    Energy Technology Data Exchange (ETDEWEB)

    Li, Peter Wei-Der (California Univ., San Francisco, CA (United States); Lawrence Berkeley Lab., CA (United States))

    1992-01-01

    The Human Genome Project (HGP) plans to sequence the human genome by the beginning of the next century. It will generate DNA sequences of more than 10 billion bases and complex marker sequences (maps) of more than 100 million markers. All of this information will be stored in database management systems (DBMSs). However, existing data models do not have the abstraction mechanism for modelling sequences, and existing DBMSs do not have operations for complex sequences. This work addresses the problem of sequence modelling in the context of the HGP and the more general problem of an extensible object data model that can incorporate the sequence model as well as existing and future data constructs and operators. First, we proposed a general sequence model that is application and implementation independent. This model is used to capture the sequence information found in the HGP at the conceptual level. In addition, abstract and biological sequence operators are defined for manipulating the modelled sequences. Second, we combined many features of semantic and object-oriented data models into an extensible framework, which we called the "Extensible Object Model", to address the need for a modelling framework for incorporating the sequence data model with other types of data constructs and operators. This framework is based on the conceptual separation between constructors and constraints. We then used this modelling framework to integrate the constructs for the conceptual sequence model. The Extensible Object Model is also defined with a graphical representation, which is useful as a tool for database designers. Finally, we defined a query language to support this model and implemented the query processor to demonstrate the feasibility of the extensible framework and the usefulness of the conceptual sequence model.

  13. Power Laws, Scale-Free Networks and Genome Biology

    CERN Document Server

    Koonin, Eugene V; Karev, Georgy P

    2006-01-01

    Power Laws, Scale-free Networks and Genome Biology deals with crucial aspects of the theoretical foundations of systems biology, namely power law distributions and scale-free networks which have emerged as the hallmarks of biological organization in the post-genomic era. The chapters in the book not only describe the interesting mathematical properties of biological networks but moves beyond phenomenology, toward models of evolution capable of explaining the emergence of these features. The collection of chapters, contributed by both physicists and biologists, strives to address the problems in this field in a rigorous but not excessively mathematical manner and to represent different viewpoints, which is crucial in this emerging discipline. Each chapter includes, in addition to technical descriptions of properties of biological networks and evolutionary models, a more general and accessible introduction to the respective problems. Most chapters emphasize the potential of theoretical systems biology for disco...

  14. Density-Based Clustering with Geographical Background Constraints Using a Semantic Expression Model

    Directory of Open Access Journals (Sweden)

    Qingyun Du

    2016-05-01

    Full Text Available A semantics-based method for density-based clustering with constraints imposed by geographical background knowledge is proposed. In this paper, we apply an ontological approach to the DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithm in the form of knowledge representation for constraint clustering. When used in the process of clustering geographic information, semantic reasoning based on a defined ontology and its relationships is primarily intended to overcome the lack of knowledge of the relevant geospatial data. Better constraints on the geographical knowledge yield more reasonable clustering results. This article uses an ontology to describe the four types of semantic constraints for geographical backgrounds: "No Constraints", "Constraints", "Cannot-Link Constraints", and "Must-Link Constraints". This paper also reports the implementation of a prototype clustering program. Based on the proposed approach, DBSCAN can be applied with both obstacle and non-obstacle constraints as a semi-supervised clustering algorithm, and the clustering results are displayed on a digital map.
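
    To make constraint-aware density clustering concrete, here is a minimal sketch (hypothetical code, not the authors' prototype) of plain DBSCAN extended with cannot-link constraints: a point is never added to a cluster that already contains a point it is forbidden to link with.

```python
import numpy as np

def dbscan_with_constraints(X, eps, min_pts, cannot_link=()):
    """Plain DBSCAN plus cannot-link constraints (illustrative sketch).

    A point is skipped when the growing cluster already contains a point
    it has a cannot-link constraint with; it is then left as noise (-1).
    """
    n = len(X)
    forbidden = {i: set() for i in range(n)}
    for a, b in cannot_link:
        forbidden[a].add(b)
        forbidden[b].add(a)

    labels = [-1] * n
    visited = [False] * n
    cluster = -1

    def neighbors(i):
        return [j for j in range(n) if np.linalg.norm(X[i] - X[j]) <= eps]

    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            continue  # not a core point
        cluster += 1
        members = set()

        def try_add(j):
            # Respect cannot-link: j must not conflict with the cluster.
            if labels[j] == -1 and not (forbidden[j] & members):
                labels[j] = cluster
                members.add(j)

        try_add(i)
        queue = list(seeds)
        while queue:
            j = queue.pop()
            if not visited[j]:
                visited[j] = True
                nj = neighbors(j)
                if len(nj) >= min_pts:  # j is a core point: keep expanding
                    queue.extend(nj)
            try_add(j)
    return labels

# Two well-separated blobs: without constraints they form two clusters;
# a cannot-link between points 0 and 1 keeps point 1 out of cluster 0.
X = np.array([[0, 0], [0, .5], [.5, 0], [.5, .5],
              [10, 10], [10, 10.5], [10.5, 10], [10.5, 10.5]], float)
labels_free = dbscan_with_constraints(X, eps=1.0, min_pts=3)
labels_cl = dbscan_with_constraints(X, eps=1.0, min_pts=3, cannot_link=[(0, 1)])
```

Obstacle constraints would follow the same pattern, with the neighborhood query rejecting pairs whose connecting segment crosses an obstacle.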

  15. Capturing the response of Clostridium acetobutylicum to chemical stressors using a regulated genome-scale metabolic model

    International Nuclear Information System (INIS)

    Dash, Satyakam; Mueller, Thomas J.; Venkataramanan, Keerthi P.; Papoutsakis, Eleftherios T.; Maranas, Costas D.

    2014-01-01

    Clostridia are anaerobic Gram-positive Firmicutes containing broad and flexible systems for substrate utilization, which have been used successfully to produce a range of industrial compounds. Clostridium acetobutylicum has been used to produce butanol on an industrial scale through acetone-butanol-ethanol (ABE) fermentation. A genome-scale metabolic (GSM) model is a powerful tool for understanding the metabolic capacities of an organism and developing metabolic engineering strategies for strain development. The integration of stress related specific transcriptomics information with the GSM model provides opportunities for elucidating the focal points of regulation

  16. PROBING THE INFLATON: SMALL-SCALE POWER SPECTRUM CONSTRAINTS FROM MEASUREMENTS OF THE COSMIC MICROWAVE BACKGROUND ENERGY SPECTRUM

    International Nuclear Information System (INIS)

    Chluba, Jens; Erickcek, Adrienne L.; Ben-Dayan, Ido

    2012-01-01

    In the early universe, energy stored in small-scale density perturbations is quickly dissipated by Silk damping, a process that inevitably generates μ- and y-type spectral distortions of the cosmic microwave background (CMB). These spectral distortions depend on the shape and amplitude of the primordial power spectrum at wavenumbers k ≲ 10^4 Mpc^-1. Here, we study constraints on the primordial power spectrum derived from COBE/FIRAS and forecasted for PIXIE. We show that measurements of μ and y impose strong bounds on the integrated small-scale power, and we demonstrate how to compute these constraints using k-space window functions that account for the effects of thermalization and dissipation physics. We show that COBE/FIRAS places a robust upper limit on the amplitude of the small-scale power spectrum. This limit is about three orders of magnitude stronger than the one derived from primordial black holes in the same scale range. Furthermore, this limit could be improved by another three orders of magnitude with PIXIE, potentially opening up a new window to early universe physics. To illustrate the power of these constraints, we consider several generic models for the small-scale power spectrum predicted by different inflation scenarios, including running-mass inflation models and inflation scenarios with episodes of particle production. PIXIE could place very tight constraints on these scenarios, potentially even ruling out running-mass inflation models if no distortion is detected. We also show that inflation models with sub-Planckian field excursion that generate detectable tensor perturbations should simultaneously produce a large CMB spectral distortion, a link that could potentially be established with PIXIE.

  17. Systems Biology Approach to Bioremediation of Nitroaromatics: Constraint-Based Analysis of 2,4,6-Trinitrotoluene Biotransformation by Escherichia coli

    Directory of Open Access Journals (Sweden)

    Maryam Iman

    2017-08-01

    Full Text Available Microbial remediation of nitroaromatic compounds (NACs) is a promising, environmentally friendly and cost-effective approach to the removal of these life-threatening agents. Escherichia coli (E. coli) has shown remarkable capability for the biotransformation of 2,4,6-trinitrotoluene (TNT). Efforts to develop E. coli as an efficient TNT-degrading biocatalyst will benefit from a holistic flux-level description of the interactions between the multiple TNT-transforming pathways operating in the strain. To gain such an insight, we extended the genome-scale constraint-based model of E. coli to account for a curated version of the major TNT transformation pathways known, or plausibly hypothesized, to be active in E. coli in the presence of TNT. Using constraint-based analysis (CBA) methods, we then performed several series of in silico experiments to elucidate the contribution of these pathways, individually or in combination, to the TNT transformation capacity of E. coli. Results of our analyses were validated by replicating several experimentally observed TNT degradation phenotypes in E. coli cultures. We further used the extended model to explore the influence of process parameters, including aeration regime, TNT concentration, cell density, and carbon source, on TNT degradation efficiency. We also conducted an in silico metabolic engineering study to design a series of E. coli mutants capable of degrading TNT at higher yield compared with the wild-type strain. Our study, therefore, extends the application of CBA to the bioremediation of nitroaromatics and demonstrates the usefulness of this approach to inform bioremediation research.
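
    At its core, this kind of constraint-based analysis is linear programming over a stoichiometric matrix. A toy sketch (an invented four-reaction network, not the paper's E. coli model) using SciPy shows the growth/degradation trade-off: forcing flux through a degradation branch lowers the attainable biomass flux.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (invented for illustration):
#   R1: -> A          substrate uptake, capped at 10
#   R2: A -> B        growth-supporting branch
#   R3: A -> (waste)  degradation branch, forced to carry >= 2 units
#   R4: B ->          biomass formation
# Rows of S: internal metabolites A and B; steady state requires S v = 0.
S = np.array([
    [1, -1, -1,  0],   # A: produced by R1, consumed by R2 and R3
    [0,  1,  0, -1],   # B: produced by R2, consumed by R4
], dtype=float)
bounds = [(0, 10), (0, None), (2, None), (0, None)]
c = np.array([0, 0, 0, -1.0])  # maximize v4 <=> minimize -v4

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
biomass = res.x[3]  # 10 units in, 2 diverted, so at most 8 reach biomass
```

Scanning the lower bound on the degradation branch (here fixed at 2) would trace out the trade-off curve between degradation yield and growth that such in silico experiments explore.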

  18. Ocean biogeochemistry modeled with emergent trait-based genomics

    Science.gov (United States)

    Coles, V. J.; Stukel, M. R.; Brooks, M. T.; Burd, A.; Crump, B. C.; Moran, M. A.; Paul, J. H.; Satinsky, B. M.; Yager, P. L.; Zielinski, B. L.; Hood, R. R.

    2017-12-01

    Marine ecosystem models have advanced to incorporate metabolic pathways discovered with genomic sequencing, but direct comparisons between models and “omics” data are lacking. We developed a model that directly simulates metagenomes and metatranscriptomes for comparison with observations. Model microbes were randomly assigned genes for specialized functions, and communities of 68 species were simulated in the Atlantic Ocean. Unfit organisms were replaced, and the model self-organized to develop community genomes and transcriptomes. Emergent communities from simulations that were initialized with different cohorts of randomly generated microbes all produced realistic vertical and horizontal ocean nutrient, genome, and transcriptome gradients. Thus, the library of gene functions available to the community, rather than the distribution of functions among specific organisms, drove community assembly and biogeochemical gradients in the model ocean.

  19. A site specific model and analysis of the neutral somatic mutation rate in whole-genome cancer data.

    Science.gov (United States)

    Bertl, Johanna; Guo, Qianyun; Juul, Malene; Besenbacher, Søren; Nielsen, Morten Muhlig; Hornshøj, Henrik; Pedersen, Jakob Skou; Hobolth, Asger

    2018-04-19

Detailed modelling of the neutral mutational process in cancer cells is crucial for identifying driver mutations and understanding the mutational mechanisms that act during cancer development. The neutral mutational process is very complex: whole-genome analyses have revealed that the mutation rate differs between cancer types, between patients and along the genome depending on the genetic and epigenetic context. Therefore, methods that predict the number of different types of mutations in regions or specific genomic elements must consider local genomic explanatory variables. A major drawback of most methods is the need to average the explanatory variables across the entire region or genomic element. This procedure is particularly problematic if the explanatory variable varies dramatically in the element under consideration. To take into account the fine scale of the explanatory variables, we model the probabilities of different types of mutations for each position in the genome by multinomial logistic regression. We analyse 505 cancer genomes from 14 different cancer types and compare the performance in predicting the mutation rate of both region-based and site-specific models. We show that for 1000 randomly selected genomic positions, the site-specific model predicts the mutation rate much better than region-based models. We use a forward selection procedure to identify the most important explanatory variables. The procedure identifies site-specific conservation (phyloP), replication timing, and expression level as the best predictors for the mutation rate. Finally, our model confirms and quantifies certain well-known mutational signatures. We find that our site-specific multinomial regression model outperforms the region-based models. The possibility of including genomic variables on different scales and patient-specific variables makes it a versatile framework for studying different mutational mechanisms. Our model can serve as the neutral null model for the somatic mutational process in cancer.
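The core of the site-specific model is multinomial logistic regression over per-position features. The following self-contained sketch fits such a model by gradient descent on synthetic data; the three features only echo the predictors named above (phyloP, replication timing, expression) and all numbers are illustrative:

```python
import numpy as np

# Minimal multinomial (softmax) logistic regression: each genomic position
# has a feature vector x and the model outputs a probability for each
# mutation class (e.g. 0 = no mutation, 1 = C>T, 2 = other SNV).

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit(X, y, n_classes, lr=0.5, steps=500):
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]              # one-hot targets
    for _ in range(steps):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / n       # gradient of cross-entropy loss
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))             # hypothetical site-level features
true_W = np.array([[2.0, -2.0, 0.0],
                   [0.0, 1.0, -1.0],
                   [-1.0, 0.0, 1.0]])
y = np.array([rng.choice(3, p=p) for p in softmax(X @ true_W)])
W = fit(X, y, n_classes=3)
acc = (softmax(X @ W).argmax(axis=1) == y).mean()
print(round(acc, 2))  # well above the 1/3 chance level
```

The real model additionally handles patient-level covariates and is fitted to hundreds of whole genomes, but the likelihood being optimized is the same.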

  20. A Consensus Genome-scale Reconstruction of Chinese Hamster Ovary Cell Metabolism

    KAUST Repository

    Hefzi, Hooman

    2016-11-23

Chinese hamster ovary (CHO) cells dominate biotherapeutic protein production and are widely used in mammalian cell line engineering research. To elucidate metabolic bottlenecks in protein production and to guide cell engineering and bioprocess optimization, we reconstructed the metabolic pathways in CHO and associated them with >1,700 genes in the Cricetulus griseus genome. The genome-scale metabolic model based on this reconstruction, iCHO1766, and cell-line-specific models for CHO-K1, CHO-S, and CHO-DG44 cells provide the biochemical basis of growth and recombinant protein production. The models accurately predict growth phenotypes and known auxotrophies in CHO cells. With the models, we quantify the protein synthesis capacity of CHO cells and demonstrate that common bioprocess treatments, such as histone deacetylase inhibitors, increase product yield only inefficiently. However, our simulations show that targeted engineering of the CHO secretory pathway could utilize the metabolic resources of CHO more than three times more efficiently for growth or recombinant protein synthesis. This model will further accelerate CHO cell engineering and help optimize bioprocesses.

  1. ELMs and constraints on the H-mode pedestal: A model based on peeling-ballooning modes

    International Nuclear Information System (INIS)

    Snyder, P.B.; Ferron, J.R.; Wilson, H.R.

    2003-01-01

We propose a model for Edge Localized Modes (ELMs) and pedestal constraints based upon theoretical analysis of instabilities which can limit the pedestal height and drive ELMs. The sharp pressure gradients, and resulting bootstrap current, in the pedestal region provide free energy to drive peeling and ballooning modes. The interaction of peeling-ballooning coupling, ballooning mode second stability, and finite-Larmor-radius effects results in coupled peeling-ballooning modes of intermediate wavelength generally being the limiting instability. A highly efficient new MHD code, ELITE, is used to calculate quantitative stability constraints on the pedestal, including constraints on the pedestal height. Because of the impact of collisionality on the bootstrap current, these pedestal constraints depend on the density and temperature separately, rather than simply on the pressure. A model of various ELM types is developed and quantitatively compared to data. A number of observations agree with predictions, including ELM onset times, ELM depth, and variation in pedestal height with collisionality and discharge shape. Stability analyses of series of model equilibria are used both to predict and interpret pedestal trends in existing experiments and to project pedestal constraints for future burning-plasma tokamak designs. (author)

  2. Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogeneous enterprise

    CERN Document Server

    Eisenberg, C

    2003-01-01

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis an organisational model of coordination and cooperation is examined using a real life example; the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource constraint project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction. A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem solving scheme with variable ordering. The scheme is implemented as cen...

  3. Analysis of Aspergillus nidulans metabolism at the genome-scale

    DEFF Research Database (Denmark)

    David, Helga; Ozcelik, İlknur Ş; Hofmann, Gerald

    2008-01-01

In this work, we have manually assigned functions to 472 orphan genes in the metabolism of A. nidulans, by using a pathway-driven approach and by employing comparative genomics tools based on sequence similarity. The central metabolism of A. nidulans, as well as biosynthetic pathways of relevant secondary metabolites, was reconstructed based on detailed metabolic reconstructions available for A. niger and Saccharomyces cerevisiae, and information on the genetics, biochemistry and physiology of A. nidulans. Thereby, it was possible to identify metabolic functions without a gene associated... a function, in an objective and systematic manner. The functional assignments served as a basis to develop a mathematical model, linking 666 genes (both previously and newly annotated) to metabolic roles. The model was used to simulate metabolic behavior and additionally to integrate, analyze and interpret large-scale gene...

  4. Non-universal gaugino mass GUT models in the light of dark matter and LHC constraints

    International Nuclear Information System (INIS)

    Chakrabortty, Joydeep; Mohanty, Subhendra; Rao, Soumya

    2014-01-01

We perform a comprehensive study of SU(5), SO(10) and E(6) supersymmetric GUT models where the gaugino masses are generated through the F-term breaking vacuum expectation values of the non-singlet scalar fields. In these models the gauginos are non-universal at the GUT scale, unlike in the mSUGRA scenario. We discuss the properties of the LSP, which is stable and a viable candidate for cold dark matter. We look for the GUT-scale parameter space that leads to the lightest SM-like Higgs mass in the range of 122–127 GeV compatible with the observations at ATLAS and CMS, the relic density in the allowed range of WMAP-PLANCK, and compatible with other constraints from colliders and direct detection experiments. We scan the universal scalar mass (m_0^G), trilinear coupling A_0 and SU(3)_C gaugino mass (M_3^G) as the independent free parameters for these models. Based on the gaugino mass ratios at the GUT scale, we classify 25 SUSY GUT models and find that of these only 13 models satisfy the dark matter and collider constraints. Out of these 13 models there is only one model where there is a sizeable SUSY contribution to the muon (g−2).

  5. Comparative BAC-based mapping in the white-throated sparrow, a novel behavioral genomics model, using interspecies overgo hybridization

    Directory of Open Access Journals (Sweden)

    Gonser Rusty A

    2011-06-01

Full Text Available Abstract Background The genomics era has produced an arsenal of resources from sequenced organisms, allowing researchers to target species that do not have comparable mapping and sequence information. These new "non-model" organisms offer unique opportunities to examine environmental effects on genomic patterns and processes. Here we use comparative mapping as a first step in characterizing the genome organization of a novel animal model, the white-throated sparrow (Zonotrichia albicollis), which occurs as white or tan morphs that exhibit alternative behaviors and physiology. Morph is determined by the presence or absence of a complex chromosomal rearrangement. This species is an ideal model for behavioral genomics because the association between genotype and phenotype is absolute, making it possible to identify the genomic bases of phenotypic variation. Findings We initiated a genomic study in this species by characterizing the white-throated sparrow BAC library via filter hybridization with overgo probes designed for the chicken, turkey, and zebra finch. Cross-species hybridization resulted in 640 positive sparrow BACs assigned to 77 chicken loci across almost all macro- and microchromosomes, with a focus on the chromosomes associated with morph. Out of 216 overgos, 36% of the probes hybridized successfully, with an average of 3.0 positive sparrow BACs per overgo. Conclusions These data will be utilized for determining chromosomal architecture and for fine-scale mapping of candidate genes associated with phenotypic differences. Our research confirms the utility of interspecies hybridization for developing comparative maps in other non-model organisms.

  6. Roles of Solvent Accessibility and Gene Expression in Modeling Protein Sequence Evolution

    OpenAIRE

    Kuangyu Wang; Shuhui Yu; Xiang Ji; Clemens Lakner; Alexander Griffing; Jeffrey L. Thorne

    2015-01-01

Models of protein evolution tend to ignore functional constraints, although structural constraints are sometimes incorporated. Here we propose a probabilistic framework for codon substitution that evaluates the joint effects of relative solvent accessibility (RSA), a structural constraint, and gene expression, a functional constraint. First, we explore the relationship between RSA and codon usage at the genomic scale as well as at the individual gene scale. Motivated by these results, we construct...

  7. GEM System: automatic prototyping of cell-wide metabolic pathway models from genomes

    Directory of Open Access Journals (Sweden)

    Nakayama Yoichi

    2006-03-01

Full Text Available Abstract Background Successful realization of a "systems biology" approach to analyzing cells is a grand challenge for our understanding of life. However, current modeling approaches to cell simulation are labor-intensive, manual affairs, and therefore constitute a major bottleneck in the evolution of computational cell biology. Results We developed the Genome-based Modeling (GEM) System for the purpose of automatically prototyping simulation models of cell-wide metabolic pathways from genome sequences and other public biological information. Models generated by the GEM System include an entire Escherichia coli metabolism model comprising 968 reactions and 1195 metabolites, achieving 100% coverage when compared with the KEGG database, 92.38% with the EcoCyc database, and 95.06% with the iJR904 genome-scale model. Conclusion The GEM System prototypes qualitative models to reduce the labor-intensive tasks required for systems biology research. Models of over 90 bacterial genomes are available at our web site.

  8. Impacts of Base-Case and Post-Contingency Constraint Relaxations on Static and Dynamic Operational Security

    Science.gov (United States)

    Salloum, Ahmed

Constraint relaxation by definition means that certain security, operational, or financial constraints are allowed to be violated in the energy market model for a predetermined penalty price. System operators utilize this mechanism in an effort to impose a price cap on shadow prices throughout the market. In addition, constraint relaxations can serve as corrective approximations that help in reducing the occurrence of infeasible or extreme solutions in the day-ahead markets. This work aims to capture the impact constraint relaxations have on system operational security. Moreover, this analysis also provides a better understanding of the correlation between DC market models and AC real-time systems, and analyzes how relaxations in market models propagate to real-time systems. This information can be used not only to assess the criticality of constraint relaxations, but also as a basis for determining penalty prices more accurately. The practice of constraint relaxation was replicated in this work using a test case and a real-life large-scale system, while capturing both energy market aspects and AC real-time system performance. The system performance investigation included static and dynamic security analysis for base-case and post-contingency operating conditions. PJM peak hour loads were dynamically modeled in order to capture delayed voltage recovery and sustained depressed voltage profiles as a result of reactive power deficiency caused by constraint relaxations. Moreover, impacts of constraint relaxations on operational system security were investigated when risk-based penalty prices are used. Transmission lines in the PJM system were categorized according to their risk index, and each category was assigned a different penalty price accordingly in order to avoid real-time overloads on high-risk lines. This work also extends the investigation of constraint relaxations to post-contingency relaxations, where emergency limits are allowed to be relaxed in energy market models.

  9. Genome-scale model guided design of Propionibacterium for enhanced propionic acid production.

    Science.gov (United States)

    Navone, Laura; McCubbin, Tim; Gonzalez-Garcia, Ricardo A; Nielsen, Lars K; Marcellin, Esteban

    2018-06-01

Production of propionic acid by fermentation of propionibacteria has gained increasing attention in the past few years. However, biomanufacturing of propionic acid cannot compete with the current oxo-petrochemical synthesis process due to its well-established infrastructure, low oil prices and the high downstream purification costs of microbial production. Strain improvement to increase propionic acid yield is the best alternative to reduce downstream purification costs. The recent generation of genome-scale models for a number of Propionibacterium species facilitates the rational design of metabolic engineering strategies and provides a new opportunity to explore the metabolic potential of the Wood-Werkman cycle. Previous strategies for strain improvement have individually targeted acid tolerance, rate of propionate production or minimisation of by-products. Here we used the P. freudenreichii subsp. shermanii and the pan-Propionibacterium genome-scale metabolic models (GEMs) to simultaneously target these combined issues. This was achieved by focussing on strategies which yield higher energies and directly suppress acetate formation. Using P. freudenreichii subsp. shermanii, two strategies were assessed. The first tested the ability to manipulate the redox balance to favour propionate production by over-expressing the first two enzymes of the pentose-phosphate pathway (PPP), Zwf (glucose-6-phosphate 1-dehydrogenase) and Pgl (6-phosphogluconolactonase). Results showed a 4-fold increase in the propionate-to-acetate ratio during the exponential growth phase. Secondly, the ability to enhance the energy yield from propionate production by over-expressing an ATP-dependent phosphoenolpyruvate carboxykinase (PEPCK) and sodium-pumping methylmalonyl-CoA decarboxylase (MMD) was tested, which extended the exponential growth phase. Together, these strategies demonstrate that in silico design strategies are predictive and can be used to reduce by-product formation in

  10. ELMs and constraints on the H-mode pedestal: A model based on peeling-ballooning modes

    International Nuclear Information System (INIS)

    Snyder, P.B.

    2002-01-01

    Maximizing the pedestal height while maintaining acceptable ELMs is a key issue for optimizing tokamak performance. We present a model for ELMs and pedestal constraints based upon theoretical analysis of edge instabilities which can limit the pedestal height and drive ELMs. Sharp pedestal pressure gradients drive large bootstrap currents which play a complex dual role in the stability physics. Consequently, the dominant modes are often intermediate-n coupled 'peeling-ballooning' modes, driven both by current and the pressure gradient. A highly efficient new MHD code, ELITE, is used to study these modes, and calculate quantitative stability constraints on the pedestal, including direct constraints on the pedestal height. A model of various ELM types is developed, and quantitatively compared to data from several tokamaks. A number of observations agree with predictions, including ELM onset times, ELM depth, and variation in pedestal height with discharge shape. Projections of pedestal stability constraints for Next Step designs, and nonlinear simulations of peeling-ballooning modes using the BOUT code are also presented. (author)

  11. Transactive-Market-Based Operation of Distributed Electrical Energy Storage with Grid Constraints

    Directory of Open Access Journals (Sweden)

    M. Nazif Faqiry

    2017-11-01

    Full Text Available In a transactive energy market, distributed energy resources (DERs such as dispatchable distributed generators (DGs, electrical energy storages (EESs, distribution-scale load aggregators (LAs, and renewable energy sources (RESs have to earn their share of supply or demand through a bidding process. In such a market, the distribution system operator (DSO may optimally schedule these resources, first in a forward market, i.e., day-ahead, and in a real-time market later on, while maintaining a reliable and economic distribution grid. In this paper, an efficient day-ahead scheduling of these resources, in the presence of interaction with wholesale market at the locational marginal price (LMP, is studied. Due to inclusion of EES units with integer constraints, a detailed mixed integer linear programming (MILP formulation that incorporates simplified DistFlow equations to account for grid constraints is proposed. Convex quadratic line and transformer apparent power flow constraints have been linearized using an outer approximation. The proposed model schedules DERs based on distribution locational marginal price (DLMP, which is obtained as the Lagrange multiplier of the real power balance constraint at each distribution bus while maintaining physical grid constraints such as line limits, transformer limits, and bus voltage magnitudes. Case studies are performed on a modified IEEE 13-bus system with high DER penetration. Simulation results show the validity and efficiency of the proposed model.
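The distribution locational marginal prices (DLMPs) described above are the Lagrange multipliers of the nodal real-power balance constraints. The toy two-bus linear program below (illustrative numbers only; no DistFlow, voltage, or integer EES constraints) shows how a congested line separates nodal prices; `res.eqlin.marginals` exposes those duals in SciPy's HiGHS-based solver:

```python
from scipy.optimize import linprog

# Toy two-bus dispatch: cheap generator g1 at bus 1, expensive g2 at bus 2,
# 100 MW of load at bus 2, and a 60 MW limit on the connecting line.
# Variables: x = [g1, g2, f], where f is the flow from bus 1 to bus 2.
c = [10.0, 50.0, 0.0]                     # $/MWh offer prices; flow is free
A_eq = [[1.0, 0.0, -1.0],                 # bus 1 balance: g1 - f = 0
        [0.0, 1.0,  1.0]]                 # bus 2 balance: g2 + f = 100
b_eq = [0.0, 100.0]
bounds = [(0, 200), (0, 200), (-60, 60)]  # the line limit binds (congestion)
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
g1, g2, f = res.x
lmp = res.eqlin.marginals                 # duals of the balance constraints
print(g1, g2, f)                          # 60.0 40.0 60.0
print([abs(p) for p in lmp])              # nodal prices separate: 10 vs 50
```

Because the line is congested, the marginal MW at bus 2 must come from the expensive local generator, so the two buses see different prices; uncongested, both duals would equal the cheap offer.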

  12. A Web-Based Comparative Genomics Tutorial for Investigating Microbial Genomes

    Directory of Open Access Journals (Sweden)

    Michael Strong

    2009-12-01

    Full Text Available As the number of completely sequenced microbial genomes continues to rise at an impressive rate, it is important to prepare students with the skills necessary to investigate microorganisms at the genomic level. As a part of the core curriculum for first-year graduate students in the biological sciences, we have implemented a web-based tutorial to introduce students to the fields of comparative and functional genomics. The tutorial focuses on recent computational methods for identifying functionally linked genes and proteins on a genome-wide scale and was used to introduce students to the Rosetta Stone, Phylogenetic Profile, conserved Gene Neighbor, and Operon computational methods. Students learned to use a number of publicly available web servers and databases to identify functionally linked genes in the Escherichia coli genome, with emphasis on genome organization and operon structure. The overall effectiveness of the tutorial was assessed based on student evaluations and homework assignments. The tutorial is available to other educators at http://www.doe-mbi.ucla.edu/~strong/m253.php.

  13. Construction of a Genome-Scale Metabolic Model of Arthrospira platensis NIES-39 and Metabolic Design for Cyanobacterial Bioproduction.

    Directory of Open Access Journals (Sweden)

    Katsunori Yoshikawa

Full Text Available Arthrospira (Spirulina) platensis is a promising feedstock and host strain for bioproduction because of its high accumulation of glycogen and superior characteristics for industrial production. Metabolic simulation using a genome-scale metabolic model and flux balance analysis is a powerful method that can be used to design metabolic engineering strategies for the improvement of target molecule production. In this study, we constructed a genome-scale metabolic model of A. platensis NIES-39 including 746 metabolic reactions and 673 metabolites, and developed novel strategies to improve the production of valuable metabolites, such as glycogen and ethanol. The simulation results obtained using the metabolic model showed high consistency with experimental results for growth rates under several trophic conditions and growth capabilities on various organic substrates. The metabolic model was further applied to design a metabolic network to improve the autotrophic production of glycogen and ethanol. Decreased flux of reactions related to the TCA cycle and the phosphoenolpyruvate reaction was found to improve glycogen production. Furthermore, in silico knockout simulation indicated that deletion of genes related to the respiratory chain, such as NAD(P)H dehydrogenase and cytochrome-c oxidase, could enhance ethanol production by using ammonium as a nitrogen source.
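The knockout simulations mentioned above can be illustrated with a tiny FBA toy model (a hypothetical four-reaction network, not the actual NIES-39 reconstruction): deleting the efficient respiratory reaction forces carbon through the fermentative branch, trading growth for ethanol flux, which is the kind of redistribution an in silico knockout screen looks for.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R0 uptake -> M (<= 10), R1 respiration M -> 3 ATP,
# R2 fermentation M -> 1 ATP + ethanol, R3 biomass 2 ATP -> growth.
S = np.array([
    [1, -1, -1,  0],   # metabolite M
    [0,  3,  1, -2],   # ATP
])

def solve(respiration_on=True):
    bounds = [(0, 10),
              (0, None) if respiration_on else (0, 0),  # knockout: flux = 0
              (0, None), (0, None)]
    res = linprog([0, 0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
    return res.x[3], res.x[2]   # (biomass flux, ethanol flux)

print(solve(True))    # wild type: (15.0, 0.0) - all carbon respired
print(solve(False))   # knockout:  (5.0, 10.0) - slower growth, ethanol made
```

Genome-scale knockout scans repeat exactly this loop over every gene-associated reaction, ranking deletions by the predicted product flux.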

  14. Large-scale parallel genome assembler over cloud computing environment.

    Science.gov (United States)

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
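The data structure at the heart of GiGA and similar assemblers is the de Bruijn graph. A minimal serial sketch (illustrative only; the real assembler distributes this graph over Hadoop/Giraph and must handle sequencing errors, branches, and repeats):

```python
from collections import defaultdict

def debruijn(reads, k):
    # Nodes are (k-1)-mers; each k-mer in a read contributes a directed
    # edge from its prefix to its suffix. A set dedupes edges that appear
    # in multiple overlapping reads.
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def walk(graph, start):
    # Greedy walk along unambiguous edges to emit one contig; real
    # assemblers additionally resolve branches, errors, and repeats.
    contig, node = start, start
    while len(graph[node]) == 1:
        node = next(iter(graph[node]))
        contig += node[-1]
    return contig

reads = ["ATGGCGT", "GGCGTGC", "GTGCAAT"]  # error-free overlapping reads
g = debruijn(reads, k=4)
print(walk(g, "ATG"))                      # ATGGCGTGCAAT
```

In the distributed setting, each vertex of this graph becomes a Giraph vertex and the walk becomes iterative message passing, which is why collocating computation with data pays off.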

  15. Constraint-based solver for the Military unit path finding problem

    CSIR Research Space (South Africa)

    Leenen, L

    2010-04-01

Full Text Available ...-based approach because it requires flexibility in modelling. The authors formulate the MUPFP as a constraint satisfaction problem and a constraint-based extension of the search algorithm. The concept demonstrator uses a provided map, for example taken from Google...

  16. Extreme-Scale De Novo Genome Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Georganas, Evangelos [Intel Corporation, Santa Clara, CA (United States); Hofmeyr, Steven [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.; Rokhsar, Daniel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Computational Research Division; Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Joint Genome Inst.

    2017-09-26

De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different components of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results of assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.

  17. Insertion Sequence-Caused Large Scale-Rearrangements in the Genome of Escherichia coli

    Science.gov (United States)

    2016-07-18

affordable approach to genome-wide characterization of genetic variation in bacterial and eukaryotic genomes (1–3). In addition to small-scale... Paired-End Reads), that uses a graph-based algorithm (27) capable of detecting most large-scale variation involving repetitive regions, including novel... Avila, P., Grinsted, J. and De La Cruz, F. (1988) Analysis of the variable endpoints generated by one-ended transposition of Tn21. J. Bacteriol., 170

  18. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    Science.gov (United States)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  19. Assessment of realizability constraints in v2-f turbulence models

    International Nuclear Information System (INIS)

    Sveningsson, A.; Davidson, L.

    2004-01-01

The use of the realizability constraint in v2-f turbulence models is assessed by computing a stator vane passage flow. In this flow the stagnation region is large and it is shown that the time-scale bound suggested by [Int. J. Heat Fluid Flow 17 (1995) 89] is well suited to prevent unphysical growth of turbulence kinetic energy. However, this constraint causes numerical instabilities when used in the equation for the relaxation parameter, f. It is also shown that the standard use of the realizability constraint in the v2-f model is inconsistent, and some modifications are suggested. These changes of the v2-f model are examined and shown to have negligible effect on the overall performance of the model. In this work two different versions of the v2-f model are investigated and the results obtained are compared with experimental data. The model on a form similar to that originally suggested by Durbin (e.g. [AIAA J. 33 (1995) 659]) produced the overall best agreement with stator vane heat transfer data.

  20. Black hole constraints on the running-mass inflation model

    OpenAIRE

    Leach, Samuel M; Grivell, Ian J; Liddle, Andrew R

    2000-01-01

    The running-mass inflation model, which has strong motivation from particle physics, predicts density perturbations whose spectral index is strongly scale-dependent. For a large part of parameter space the spectrum rises sharply to short scales. In this paper we compute the production of primordial black holes, using both analytic and numerical calculation of the density perturbation spectra. Observational constraints from black hole production are shown to exclude a large region of otherwise...

  1. Calculations of Inflaton Decays and Reheating: with Applications to No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, $w$, during the epoch of inflaton decay, the reheating temperature, $T_{\rm reh}$, and the number of inflationary e-folds, $N_*$, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index $n_s$ and the tensor-to-scalar perturbation ratio $r$, converting them into constraints on $N_*$, the inflaton decay rate, and other parameters of specific no-scale inflationary models.

  2. 1/f and the Earthquake Problem: Scaling constraints that facilitate operational earthquake forecasting

    Science.gov (United States)

    yoder, M. R.; Rundle, J. B.; Turcotte, D. L.

    2012-12-01

    The difficulty of forecasting earthquakes can fundamentally be attributed to the self-similar, or "1/f", nature of seismic sequences. Specifically, the rate of occurrence of earthquakes is inversely proportional to their magnitude m, or more accurately to their scalar moment M. With respect to this "1/f problem," it can be argued that catalog selection (or equivalently, determining catalog constraints) constitutes the most significant challenge to seismicity-based earthquake forecasting. Here, we address and introduce a potential solution to this most daunting problem. Specifically, we introduce a framework to constrain, or partition, an earthquake catalog (a study region) in order to resolve local seismicity. In particular, we combine Gutenberg-Richter (GR), rupture length, and Omori scaling with various empirical measurements to relate the size (spatial and temporal extents) of a study area (or bins within a study area) to the local earthquake magnitude potential - the magnitude of earthquake the region is expected to experience. From this, we introduce a new type of time-dependent hazard map for which the tuning-parameter space is nearly fully constrained. In a similar fashion, by combining various scaling relations and also by incorporating finite extents (rupture length, area, and duration) as constraints, we develop a method to estimate the Omori (temporal) and spatial aftershock decay parameters as a function of the parent earthquake's magnitude m. From this formulation, we develop an ETAS-type model that overcomes many point-source limitations of contemporary ETAS. These models demonstrate promise with respect to earthquake forecasting applications. Moreover, the methods employed suggest a general framework whereby earthquake and other complex-system, 1/f-type, problems can be constrained from scaling relations and finite extents. [Figure: record-breaking hazard map of southern California, 2012-08-06; "warm" colors indicate locally elevated hazard.]
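
    The Gutenberg-Richter relation mentioned above can be stated in a few lines. A minimal sketch, with invented a- and b-values (not taken from the study):

```python
# Gutenberg-Richter scaling: log10 N(>=m) = a - b*m, i.e. the expected
# number of events at or above magnitude m falls off exponentially in m.
# The productivity a and slope b below are invented for illustration.
a, b = 4.0, 1.0

def expected_count(m):
    """Expected number of events with magnitude >= m in the study window."""
    return 10 ** (a - b * m)

print(expected_count(3.0))  # -> 10.0 expected m>=3 events
print(expected_count(5.0))  # -> 0.1, i.e. one m>=5 event per ~10 such windows
```

    Each unit increase in magnitude therefore reduces the expected event count by a factor of 10^b, which is the "1/f" self-similarity the abstract refers to.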

  3. Novel insights into obesity and diabetes through genome-scale metabolic modeling

    Directory of Open Access Journals (Sweden)

    Leif eVäremo

    2013-04-01

    Full Text Available The growing prevalence of metabolic diseases, such as obesity and diabetes, is putting a high strain on global healthcare systems as well as increasing the demand for efficient treatment strategies. More than 360 million people worldwide are suffering from type 2 diabetes and, with the current trends, the projection is that 10% of the global adult population will be affected by 2030. In light of the systemic properties of metabolic diseases as well as the interconnected nature of metabolism, it is necessary to begin taking a holistic approach to studying these diseases. Human genome-scale metabolic models (GEMs) are topological and mathematical representations of cell metabolism and have proven to be valuable tools in the area of systems biology. Successful applications of GEMs include gaining further biological and mechanistic understanding of diseases, finding potential biomarkers and identifying new drug targets. This review will focus on the modeling of human metabolism in the field of obesity and diabetes, showing its vast range of applications of clinical importance as well as pointing out future challenges.
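
    Genome-scale metabolic models such as those reviewed above are typically interrogated with constraint-based methods like flux balance analysis (FBA), which reduces to a linear program: maximize a target flux subject to steady-state mass balance S·v = 0 and flux bounds. A minimal sketch on an invented three-reaction toy network (not one of the human GEMs discussed):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (invented for illustration):
#   R1: -> A (uptake), R2: A -> B, R3: B -> (biomass proxy)
# Rows of S are metabolites (A, B); columns are reactions (R1-R3).
S = np.array([[1.0, -1.0,  0.0],   # A: produced by R1, consumed by R2
              [0.0,  1.0, -1.0]])  # B: produced by R2, consumed by R3

c = [0.0, 0.0, -1.0]                      # linprog minimizes, so negate to maximize v3
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake R1 capped at 10 flux units

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
v = res.x
print(v)  # all steady-state flux is routed through the chain: [10, 10, 10]
```

    Real GEMs have thousands of reactions, but the optimization has exactly this shape; packages such as COBRApy wrap the same linear program behind a model API.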

  4. An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)

    2016-06-15

    Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce the computational cost in RBDO. The accuracy of the reliability depends on the accuracy of the surrogate model of the constraint boundaries in surrogate-model-based RBDO. In earlier research, constraint boundary sampling (CBS) was proposed to accurately approximate the boundaries of constraints by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.

  5. Satellite-based emission constraint for nitrogen oxides: Capability and uncertainty

    Science.gov (United States)

    Lin, J.; McElroy, M. B.; Boersma, F.; Nielsen, C.; Zhao, Y.; Lei, Y.; Liu, Y.; Zhang, Q.; Liu, Z.; Liu, H.; Mao, J.; Zhuang, G.; Roozendael, M.; Martin, R.; Wang, P.; Spurr, R. J.; Sneep, M.; Stammes, P.; Clemer, K.; Irie, H.

    2013-12-01

    Vertical column densities (VCDs) of tropospheric nitrogen dioxide (NO2) retrieved from satellite remote sensing have been employed widely to constrain emissions of nitrogen oxides (NOx). A major strength of satellite-based emission constraints is the analysis of emission trends and variability, while a crucial limitation is error in both the satellite NO2 data and the model simulations relating NOx emissions to NO2 columns. Through a series of studies, we have explored these aspects over China. We separate anthropogenic from natural sources of NOx by exploiting their different seasonality. We infer trends of NOx emissions in recent years and the effects of a variety of socioeconomic events at different spatiotemporal scales, including general economic growth, the global financial crisis, Chinese New Year, and the Beijing Olympics. We further investigate the impact of growing NOx emissions on particulate matter (PM) pollution in China. As part of recent developments, we identify and correct errors in both the satellite NO2 retrieval and the model simulation that ultimately affect the NOx emission constraint. We improve the treatment of aerosol optical effects, clouds and surface reflectance in the NO2 retrieval process, using ground-based MAX-DOAS measurements as a reference to evaluate the improved retrieval results. We analyze the sensitivity of simulated NO2 to errors in the model representation of major meteorological and chemical processes, with a subsequent correction of model bias. Future studies will implement these improvements to re-constrain NOx emissions.

  6. Simulating non-holonomic constraints within the LCP-based simulation framework

    DEFF Research Database (Denmark)

    Ellekilde, Lars-Peter; Petersen, Henrik Gordon

    2006-01-01

    In this paper, we will extend the linear complementarity problem-based rigid-body simulation framework with non-holonomic constraints. We consider three different types of such constraints, namely equality, inequality and contact constraints. We show how non-holonomic equality and inequality constraints can be incorporated directly, and derive a formalism for how the non-holonomic contact constraints can be modelled as a combination of non-holonomic equality constraints and ordinary contact constraints. For each of these three we are able to guarantee solvability when using Lemke's algorithm. A number of examples are included to demonstrate the non-holonomic constraints. Publication date: March.

  7. Constraining Genome-Scale Models to Represent the Bow Tie Structure of Metabolism for 13C Metabolic Flux Analysis

    Directory of Open Access Journals (Sweden)

    Tyler W. H. Backman

    2018-01-01

    Full Text Available Determination of internal metabolic fluxes is crucial for fundamental and applied biology because they map how carbon and electrons flow through metabolism to enable cell function. ¹³C Metabolic Flux Analysis (¹³C MFA) and Two-Scale ¹³C Metabolic Flux Analysis (2S-¹³C MFA) are two techniques used to determine such fluxes. Both operate on the simplifying approximation that metabolic flux from peripheral metabolism into central “core” carbon metabolism is minimal, and can be omitted when modeling isotopic labeling in core metabolism. The validity of this “two-scale” or “bow tie” approximation is supported both by the ability to accurately model experimental isotopic labeling data, and by experimentally verified metabolic engineering predictions using these methods. However, the boundaries of core metabolism that satisfy this approximation can vary across species, and across cell culture conditions. Here, we present a set of algorithms that (1) systematically calculate flux bounds for any specified “core” of a genome-scale model so as to satisfy the bow tie approximation and (2) automatically identify an updated set of core reactions that can satisfy this approximation more efficiently. First, we leverage linear programming to simultaneously identify the lowest fluxes from peripheral metabolism into core metabolism compatible with the observed growth rate and extracellular metabolite exchange fluxes. Second, we use Simulated Annealing to identify an updated set of core reactions that allow for a minimum of fluxes into core metabolism to satisfy these experimental constraints. Together, these methods accelerate and automate the identification of a biologically reasonable set of core reactions for use with ¹³C MFA or 2S-¹³C MFA, as well as provide for a substantially lower set of flux bounds for fluxes into the core as compared with previous methods.
We provide an open source Python implementation of these algorithms at https://github.com/JBEI/limitfluxtocore.
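
    The first step described above, finding the lowest fluxes from peripheral metabolism into the core compatible with measured exchange fluxes, is itself a linear program. A minimal sketch on an invented one-metabolite network (not the published model; for the real implementation see the limitfluxtocore repository linked above): pin the measured flux by giving it equal lower and upper bounds, then minimize the peripheral influx.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical miniature network (invented): core metabolite C is fed by a
# core uptake reaction (R_up) and a peripheral influx (R_per); the biomass
# drain (R_bio) consumes C.   R_up: -> C,  R_per: -> C,  R_bio: C ->
S = np.array([[1.0, 1.0, -1.0]])            # mass balance for C

v_bio_measured = 5.0                        # observed growth-associated flux
c = [0.0, 1.0, 0.0]                         # objective: minimize R_per
bounds = [(0.0, 3.0),                       # core uptake limited to 3 units
          (0.0, None),                      # peripheral influx unbounded above
          (v_bio_measured, v_bio_measured)] # pin biomass to the measurement

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
print(res.x[1])  # minimum peripheral flux required to sustain growth: 2.0
```

    The resulting minimum becomes an upper bound on influx into the core when checking whether a candidate core satisfies the bow tie approximation.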

  8. A Three-Box Model of Thermohaline Circulation under the Energy Constraint

    International Nuclear Information System (INIS)

    Shen Yang; Guan Yu-Ping; Liang Chu-Jin; Chen Da-Ke

    2011-01-01

    The driving mechanism of thermohaline circulation is still a controversial topic in physical oceanography. Classic theory is based on Stommel's two-box model under a buoyancy constraint. Recently, Guan and Huang proposed a new viewpoint in the framework of an energy constraint with a two-box model. We extend it to a three-box model, including the effect of wind-driven circulation. Using this simple model, we further study how ocean mixing impacts thermohaline circulation under the energy constraint. (geophysics, astronomy, and astrophysics)

  9. Constraint-Based Local Search for Constrained Optimum Paths Problems

    Science.gov (United States)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  10. Possible cosmogenic neutrino constraints on Planck-scale Lorentz violation

    Energy Technology Data Exchange (ETDEWEB)

    Mattingly, David M. [New Hamshire Univ., Durham, NH (United States); Maccione, Luca [DESY Hamburg (Germany). Theory Group; Galaverni, Matteo [INAF-IASF Bologna (Italy); Liberati, Stefano [INFN, Trieste (Italy); SISSA, Trieste (Italy); Sigl, Guenter [Hamburg Univ. (Germany). II. Inst. fuer Theoretische Physik

    2009-11-15

    We study, within an effective field theory framework, O(E²/M²_Pl) Planck-scale suppressed Lorentz invariance violation (LV) effects in the neutrino sector, whose size we parameterize by a dimensionless parameter η_ν. We find deviations from predictions of Lorentz invariant physics in the cosmogenic neutrino spectrum. For positive O(1) coefficients no neutrino will survive above 10¹⁹ eV. The existence of this cutoff generates a bump in the neutrino spectrum at energies of 10¹⁷ eV. Although at present no constraint can be cast, as current experiments do not have enough sensitivity to detect ultra-high-energy neutrinos, we show that experiments in construction or being planned have the potential to cast limits as strong as η_ν ∼ 10⁻⁴ on the neutrino LV parameter, depending on how LV is distributed among neutrino mass states. Constraints on η_ν < 0 can in principle be obtained with this strategy, but they require a more detailed modeling of how LV affects the neutrino sector. (orig.)

  11. Possible cosmogenic neutrino constraints on Planck-scale Lorentz violation

    International Nuclear Information System (INIS)

    Mattingly, David M.; Maccione, Luca; Galaverni, Matteo; Liberati, Stefano; Sigl, Günter

    2010-01-01

    We study, within an effective field theory framework, O(E²/M²_Pl) Planck-scale suppressed Lorentz invariance violation (LV) effects in the neutrino sector, whose size we parameterize by a dimensionless parameter η_ν. We find deviations from predictions of Lorentz invariant physics in the cosmogenic neutrino spectrum. For positive O(1) coefficients no neutrino will survive above 10¹⁹ eV. The existence of this cutoff generates a bump in the neutrino spectrum at energies of 10¹⁷ eV. Although at present no constraint can be cast, as current experiments do not have enough sensitivity to detect ultra-high-energy neutrinos, we show that experiments in construction or being planned have the potential to cast limits as strong as η_ν ∼ 10⁻⁴ on the neutrino LV parameter, depending on how LV is distributed among neutrino mass states. Constraints on η_ν < 0 can in principle be obtained with this strategy, but they require a more detailed modeling of how LV affects the neutrino sector

  12. Relaxation of selective constraints causes independent selenoprotein extinction in insect genomes.

    Directory of Open Access Journals (Sweden)

    Charles E Chapple

    Full Text Available BACKGROUND: Selenoproteins are a diverse family of proteins notable for the presence of the 21st amino acid, selenocysteine. Until very recently, all metazoan genomes investigated encoded selenoproteins, and these proteins had therefore been believed to be essential for animal life. Challenging this assumption, recent comparative analyses of insect genomes have revealed that some insect genomes appear to have lost selenoprotein genes. METHODOLOGY/PRINCIPAL FINDINGS: In this paper we investigate in detail the fate of selenoproteins, and that of selenoprotein factors, in all available arthropod genomes. We use a variety of in silico comparative genomics approaches to look for known selenoprotein genes and factors involved in selenoprotein biosynthesis. We have found that five insect species have completely lost the ability to encode selenoproteins and that selenoprotein loss in these species, although so far confined to the Endopterygota infraclass, cannot be attributed to a single evolutionary event, but rather to multiple, independent events. Loss of selenoproteins and selenoprotein factors is usually coupled to the deletion of the entire no-longer functional genomic region, rather than to sequence degradation and consequent pseudogenisation. Such dynamics of gene extinction are consistent with the high rate of genome rearrangements observed in Drosophila. We have also found that, while many selenoprotein factors are concomitantly lost with the selenoproteins, others are present and conserved in all investigated genomes, irrespective of whether they code for selenoproteins or not, suggesting that they are involved in additional, non-selenoprotein related functions. CONCLUSIONS/SIGNIFICANCE: Selenoproteins have been independently lost in several insect species, possibly as a consequence of the relaxation in insects of the selective constraints acting across metazoans to maintain selenoproteins. 
The dispensability of selenoproteins in insects may

  13. Score-based prediction of genomic islands in prokaryotic genomes using hidden Markov models

    Directory of Open Access Journals (Sweden)

    Surovcik Katharina

    2006-03-01

    Full Text Available Abstract Background Horizontal gene transfer (HGT) is considered a strong evolutionary force shaping the content of microbial genomes in a substantial manner. It is the difference in speed enabling the rapid adaptation to changing environmental demands that distinguishes HGT from gene genesis, duplications or mutations. For a precise characterization, algorithms are needed that identify transfer events with high reliability. Frequently, the transferred pieces of DNA have a considerable length, comprise several genes and are called genomic islands (GIs) or, more specifically, pathogenicity or symbiotic islands. Results We have implemented the program SIGI-HMM, which predicts GIs and the putative donor of each individual alien gene. It is based on the analysis of the codon usage (CU) of each individual gene of a genome under study. The CU of each gene is compared against a carefully selected set of CU tables representing microbial donors or highly expressed genes. Multiple tests are used to identify putatively alien genes, to predict putative donors and to mask putatively highly expressed genes. Thus, we determine the states and emission probabilities of an inhomogeneous hidden Markov model working at the gene level. For the transition probabilities, we draw upon classical test theory with the intention of integrating a sensitivity controller in a consistent manner. SIGI-HMM was written in JAVA and is publicly available. It accepts as input any file created according to the EMBL format. It generates output in the common GFF format readable by genome browsers. Benchmark tests showed that the output of SIGI-HMM is in agreement with known findings. Its predictions were both consistent with annotated GIs and with predictions generated by different methods. Conclusion SIGI-HMM is a sensitive tool for the identification of GIs in microbial genomes. It allows users to interactively analyze genomes in detail and to generate or test hypotheses about the origin of acquired
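
    The codon-usage comparison at the heart of the approach above can be illustrated with a simple per-gene log-odds score. The two-codon frequency tables and the gene below are invented for illustration (a real table covers all 61 sense codons), and SIGI-HMM embeds such probabilities in an inhomogeneous HMM rather than scoring genes independently:

```python
import math

# Toy codon-frequency tables (invented): probability of observing each
# codon in genes of the host genome vs. a candidate donor genome.
host = {"GAA": 0.7, "GAG": 0.3, "AAA": 0.8, "AAG": 0.2}
donor = {"GAA": 0.2, "GAG": 0.8, "AAA": 0.3, "AAG": 0.7}

def log_odds(codons, p_alt, p_host):
    """Sum of per-codon log-likelihood ratios; a positive total means the
    gene's codon usage fits the alternative (putative donor) table better."""
    return sum(math.log(p_alt[c] / p_host[c]) for c in codons)

gene = ["GAG", "AAG", "GAG", "GAA"]  # hypothetical gene, as a codon list
score = log_odds(gene, donor, host)
print(round(score, 3))  # positive here: codon usage looks alien to the host
```

    In SIGI-HMM these per-codon probabilities become emission probabilities of donor-specific states, so putatively alien genes are flagged by the most probable state path rather than a fixed score threshold.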

  14. Constraints on exotic dipole-dipole couplings between electrons at the micron scale

    Science.gov (United States)

    Kotler, Shlomi; Ozeri, Roee; Jackson Kimball, Derek

    2015-05-01

    Until recently, the magnetic dipole-dipole coupling between electrons had not been directly observed experimentally. This is because at the atomic scale dipole-dipole coupling is dominated by the exchange interaction and at larger distances the dipole-dipole coupling is overwhelmed by ambient magnetic field noise. In spite of these challenges, the magnetic dipole-dipole interaction between two electron spins separated by 2.4 microns was recently measured using the valence electrons of trapped Strontium ions [S. Kotler, N. Akerman, N. Navon, Y. Glickman, and R. Ozeri, Nature 510, 376 (2014)]. We have used this measurement to directly constrain exotic dipole-dipole interactions between electrons at the micron scale. For light bosons (mass 0.1 eV), we find that coupling constants describing pseudoscalar and axial-vector mediated interactions must be |g_P^e g_P^e/4πℏc| ≤ 1.5 × 10⁻³ and |g_A^e g_A^e/4πℏc| ≤ 1.2 × 10⁻¹⁷, respectively, at the 90% confidence level. These bounds significantly improve on previous constraints in this mass range: for example, the constraints on axial-vector interactions are six orders of magnitude stronger than electron-positron constraints based on positronium spectroscopy. Supported by the National Science Foundation, I-Core: the Israeli excellence center, and the European Research Council.

  15. Constraints on small-scale cosmological fluctuations from SNe lensing dispersion

    International Nuclear Information System (INIS)

    Ben-Dayan, Ido; Takahashi, Ryuichi

    2015-04-01

    We provide predictions on the small-scale cosmological density power spectrum from supernova lensing dispersion. Parameterizing the primordial power spectrum with running α and running of running β of the spectral index, we exclude large positive α and β parameters which induce lensing dispersions exceeding the current observational upper bound. We ran cosmological N-body simulations of collisionless dark matter particles to investigate the non-linear evolution of the primordial power spectrum with positive running parameters. The initial small-scale enhancement of the power spectrum is largely erased when entering into the non-linear regime. For example, even if the linear power spectrum at k > 10 h Mpc⁻¹ is enhanced by 1-2 orders of magnitude, the enhancement decreases to a factor of 2-3 at late times (z ≤ 1.5). Therefore, the lensing dispersion induced by the dark matter fluctuations weakly constrains the running parameters. When including baryon-cooling effects (which strongly enhance the small-scale clustering), the constraint is comparable to or tighter than the PLANCK constraint, depending on the UV cut-off. Further investigation of the non-linear matter spectrum with baryonic processes is needed to reach a firm constraint.

  16. Finding Deadlocks of Event-B Models by Constraint Solving

    DEFF Research Database (Denmark)

    Hallerstede, Stefan; Leuschel, Michael

    We propose a constraint-based approach to finding deadlocks, employing the ProB constraint solver to find values for the constants and variables of formal models that describe a deadlocking state. We discuss the principles of the technique implemented in ProB's Prolog kernel and present some results...
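
    The idea of deadlock search as constraint solving can be sketched by brute force: look for a state that satisfies the model's invariant but enables no event guard. The tiny Event-B-style machine below (variables, invariant, and guards) is invented for illustration; ProB solves such constraints over far larger state spaces with a real constraint solver rather than enumeration.

```python
from itertools import product

DOMAIN = range(0, 5)  # finite domain for both machine variables

def invariant(x, y):
    return x + y <= 6

def guard_inc(x, y):  # event "inc" is enabled when x < y
    return x < y

def guard_dec(x, y):  # event "dec" is enabled when y >= 2 and x > y
    return y >= 2 and x > y

# A deadlocking state satisfies the invariant but enables no event,
# i.e. it solves the constraint: invariant AND NOT (guard1 OR guard2).
deadlocks = [(x, y) for x, y in product(DOMAIN, DOMAIN)
             if invariant(x, y) and not (guard_inc(x, y) or guard_dec(x, y))]
print(deadlocks[:3])  # -> [(0, 0), (1, 0), (1, 1)]
```

    Formulating the search this way finds a deadlocking state directly, instead of exploring the reachable state space event by event.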

  17. Strongest experimental constraints on SU(5)xU(1) supergravity models

    International Nuclear Information System (INIS)

    Lopez, J.L.; Nanopoulos, D.V.; Park, G.T.; Zichichi, A.

    1994-01-01

    We consider a class of well-motivated string-inspired flipped SU(5) supergravity models which include four supersymmetry-breaking scenarios: no-scale, strict no-scale, dilaton, and special dilaton, such that only three parameters are needed to describe all new phenomena (m_t, tan β, m_g̃). We show that the CERN LEP precise measurements of the electroweak parameters in the form of the ε₁ variable and the CLEO II allowed range for B(b→sγ) are at present the most important experimental constraints on this class of models. For m_t ≳ 155 (165) GeV, the ε₁ constraint [at 90 (95)% C.L.] requires the presence of light charginos (m_{χ₁^±} ≲ 50-100 GeV depending on m_t). Since all sparticle masses are proportional to m_g̃, m_{χ₁^±} ≲ 100 GeV implies m_{χ₁^0} ≲ 55 GeV, m_{χ₂^0} ≲ 100 GeV, m_g̃ ≲ 360 GeV, m_q̃ ≲ 350 (365) GeV, m_{ẽ_R} ≲ 80 (125) GeV, m_{ẽ_L} ≲ 120 (155) GeV, and m_ν̃ ≲ 100 (140) GeV in the no-scale (dilaton) flipped SU(5) supergravity model. The B(b→sγ) constraint excludes a significant fraction of the otherwise allowed region in the (m_{χ₁^±}, tan β) plane (irrespective of the magnitude of the chargino mass), while future experimental improvements will result in decisive tests of these models

  18. Genome Modeling System: A Knowledge Management Platform for Genomics.

    Directory of Open Access Journals (Sweden)

    Malachi Griffith

    2015-07-01

    Full Text Available In this work, we present the Genome Modeling System (GMS), an analysis information management system capable of executing automated genome analysis pipelines at a massive scale. The GMS framework provides detailed tracking of samples and data coupled with reliable and repeatable analysis pipelines. The GMS also serves as a platform for bioinformatics development, allowing a large team to collaborate on data analysis, or an individual researcher to leverage the work of others effectively within its data management system. Rather than separating ad hoc analysis from rigorous, reproducible pipelines, the GMS promotes systematic integration between the two. As a demonstration of the GMS, we performed an integrated analysis of whole genome, exome and transcriptome sequencing data from a breast cancer cell line (HCC1395) and matched lymphoblastoid line (HCC1395BL). These data are available for users to test the software, complete tutorials and develop novel GMS pipeline configurations. The GMS is available at https://github.com/genome/gms.

  19. IMGMD: A platform for the integration and standardisation of In silico Microbial Genome-scale Metabolic Models.

    Science.gov (United States)

    Ye, Chao; Xu, Nan; Dong, Chuan; Ye, Yuannong; Zou, Xuan; Chen, Xiulai; Guo, Fengbiao; Liu, Liming

    2017-04-07

    Genome-scale metabolic models (GSMMs) constitute a platform that combines genome sequences and detailed biochemical information to quantify microbial physiology at the system level. To improve the unity, integrity, correctness, and format of data in published GSMMs, a consensus IMGMD database was built in the LAMP (Linux + Apache + MySQL + PHP) system by integrating and standardizing 328 GSMMs constructed for 139 microorganisms. The IMGMD database can help microbial researchers download manually curated GSMMs, rapidly reconstruct standard GSMMs, design pathways, and identify metabolic targets for strain-improvement strategies. Moreover, the IMGMD database facilitates the integration of wet-lab and in silico data to gain additional insight into microbial physiology. The IMGMD database is freely available, without any registration requirements, at http://imgmd.jiangnan.edu.cn/database.

  20. An object model for genome information at all levels of resolution

    Energy Technology Data Exchange (ETDEWEB)

    Honda, S.; Parrott, N.W.; Smith, R.; Lawrence, C.

    1993-12-31

    An object model for genome data at all levels of resolution is described. The model was derived by considering the requirements for representing genome related objects in three application domains: genome maps, large-scale DNA sequencing, and exploring functional information in gene and protein sequences. The methodology used for the object-oriented analysis is also described.

  1. A Novel Methodology to Estimate Metabolic Flux Distributions in Constraint-Based Models

    Directory of Open Access Journals (Sweden)

    Francesco Alessandro Massucci

    2013-09-01

    Full Text Available Quite generally, constraint-based metabolic flux analysis describes the space of viable flux configurations for a metabolic network as a high-dimensional polytope defined by the linear constraints that enforce the balancing of production and consumption fluxes for each chemical species in the system. In some cases, the complexity of the solution space can be reduced by performing an additional optimization, while in other cases, knowing the range of variability of fluxes over the polytope provides a sufficient characterization of the allowed configurations. There are cases, however, in which the thorough information encoded in the individual distributions of viable fluxes over the polytope is required. Obtaining such distributions is known to be a highly challenging computational task when the dimensionality of the polytope is sufficiently large, and the problem of developing cost-effective ad hoc algorithms has recently seen a major surge of interest. Here, we propose a method that allows us to perform the required computation heuristically in a time scaling linearly with the number of reactions in the network, overcoming some limitations of similar techniques employed in recent years. As a case study, we apply it to the analysis of the human red blood cell metabolic network, whose solution space can be sampled by different exact techniques, like Hit-and-Run Monte Carlo (scaling roughly like the third power of the system size). Remarkably accurate estimates for the true distributions of viable reaction fluxes are obtained, suggesting that, although further improvements are desirable, our method enhances our ability to analyze the space of allowed configurations for large biochemical reaction networks.
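
    The sampling problem described above can be illustrated with a basic Hit-and-Run sampler, the exact technique the authors benchmark against: draw directions in the null space of the stoichiometric matrix so mass balance is preserved, then step uniformly along the feasible chord. A minimal sketch on an invented three-reaction polytope (not the red blood cell network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy flux polytope (invented): one metabolite A, three reactions
#   R1: -> A,  R2: A ->,  R3: A ->   so  v1 = v2 + v3,  0 <= v_i <= 10
S = np.array([[1.0, -1.0, -1.0]])
lb, ub = np.zeros(3), np.full(3, 10.0)

# Random directions are drawn in the null space of S so S v = 0 is preserved.
_, _, Vt = np.linalg.svd(S)
null_basis = Vt[1:]  # orthonormal rows spanning the 2-D null space

def hit_and_run(v, n_steps=5000):
    samples = []
    for _ in range(n_steps):
        d = rng.normal(size=null_basis.shape[0]) @ null_basis
        d /= np.linalg.norm(d)
        # The line v + t*d stays inside the box for t in [t_lo, t_hi].
        with np.errstate(divide="ignore", invalid="ignore"):
            t1, t2 = (lb - v) / d, (ub - v) / d
        t_lo = np.max(np.minimum(t1, t2))
        t_hi = np.min(np.maximum(t1, t2))
        v = v + rng.uniform(t_lo, t_hi) * d  # uniform step along the chord
        samples.append(v)
    return np.array(samples)

v0 = np.array([2.0, 1.0, 1.0])   # a feasible interior starting point
samples = hit_and_run(v0)
print(samples.mean(axis=0))      # approximate marginal flux means
```

    For this triangle-shaped polytope the uniform means are (20/3, 10/3, 10/3); the cubic cost in system size quoted above comes from needing many such correlated steps per effective sample in high dimension.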

  2. Proposal of Constraints Analysis Method Based on Network Model for Task Planning

    Science.gov (United States)

    Tomiyama, Tomoe; Sato, Tatsuhiro; Morita, Toyohisa; Sasaki, Toshiro

    Deregulation has been accelerating activities aimed at reengineering business processes, such as railway through-service and modal shift in logistics. To make those activities successful, business entities have to formulate new business rules or know-how (we call them ‘constraints’). According to the new constraints, they need to manage business resources such as instruments, materials, workers and so on. In this paper, we propose a constraint analysis method to define constraints for task planning of the new business processes. To visualize each constraint's influence on planning, we propose a network model which represents allocation relations between tasks and resources. The network can also represent task-ordering relations and resource-grouping relations. The proposed method formalizes the manual definition of constraints as a process of repeatedly checking the network structure and finding conflicts between constraints. Application to crew scheduling problems shows that the method can adequately represent and define constraints of task planning problems with the following fundamental features: (1) assigning a work pattern to some resources, (2) restricting the number of resources for some tasks, (3) requiring multiple resources for some tasks, (4) prior allocation of some resources to some tasks, and (5) considering the workload balance between resources.
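
    The conflict-finding loop described above, checking constraints against a task-resource allocation network, can be sketched as follows. The tasks, resources, and the two constraint types checked (work pattern, required resource count) are invented for illustration:

```python
# Minimal sketch of a task-resource allocation network: nodes are tasks and
# resources, edges are candidate allocations, and constraints are checked
# against the network structure to surface conflicts. All data is invented.
tasks = {
    "inspect": {"requires": 2, "pattern": "day"},    # needs two day-shift crews
    "repair":  {"requires": 1, "pattern": "night"},  # needs one night-shift crew
}
resources = {
    "crew_a": {"pattern": "day"},
    "crew_b": {"pattern": "day"},
    "crew_c": {"pattern": "night"},
}
edges = [("inspect", "crew_a"), ("inspect", "crew_b"),  # candidate allocations
         ("repair", "crew_a")]

def find_conflicts(tasks, resources, edges):
    conflicts = []
    for t, spec in tasks.items():
        candidates = [r for (tt, r) in edges if tt == t]
        # feature (3): the task requires multiple resources
        if len(candidates) < spec["requires"]:
            conflicts.append(f"{t}: only {len(candidates)} candidate(s), "
                             f"needs {spec['requires']}")
        # feature (1): a resource's work pattern must match the task's
        for r in candidates:
            if resources[r]["pattern"] != spec["pattern"]:
                conflicts.append(f"{t}->{r}: work-pattern mismatch")
    return conflicts

print(find_conflicts(tasks, resources, edges))
# -> ['repair->crew_a: work-pattern mismatch']
```

    Repairing a flagged conflict (here, re-routing the "repair" edge to crew_c) and re-running the check mirrors the iterative structure-check-and-fix loop the method formalizes.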

  3. Cosmological constraints on radion evolution in the universal extra dimension model

    International Nuclear Information System (INIS)

    Chan, K. C.; Chu, M.-C.

    2008-01-01

    The constraints on the radion evolution in the universal extra dimension (UED) model from cosmic microwave background (CMB) and Type Ia supernovae (SNe Ia) data are studied. In the UED model, where both the gravity and standard model fields can propagate in the extra dimensions, the evolution of the extra-dimensional volume, the radion, induces variation of fundamental constants. We discuss the effects of variation of the relevant constants in the context of UED for the CMB power spectrum and SNe Ia data. We then use the three-year WMAP data to constrain the radion evolution at z ∼ 1100, and the 2σ constraint on ρ̇/ρ₀ (ρ is a function of the radion, to be defined in the text) is [−8.8, 6.6] × 10⁻¹³ yr⁻¹. The SNe Ia gold sample yields a constraint on ρ̇/ρ₀, for redshift between 0 and 1, of [−4.7, 14] × 10⁻¹³ yr⁻¹. Furthermore, the constraints from SNe Ia can be interpreted as bounds on the evolution of the QCD scale parameter, Λ̇_QCD/Λ_QCD,0 ∈ [−1.4, 2.8] × 10⁻¹¹ yr⁻¹, without reference to the UED model.

  4. ReacKnock: identifying reaction deletion strategies for microbial strain optimization based on genome-scale metabolic network.

    Directory of Open Access Journals (Sweden)

    Zixiang Xu

    Full Text Available Gene knockout has been used as a common strategy to improve microbial strains for producing chemicals. Several algorithms are available to predict the target reactions to be deleted. Most of them apply mixed integer bi-level linear programming (MIBLP) based on metabolic networks, and use duality theory to transform the bi-level optimization problem of a large-scale MIBLP into a single-level program. However, the validity of the transformation was not proved. The solution of a MIBLP depends on the structure of the inner problem. If the inner problem is continuous, the Karush-Kuhn-Tucker (KKT) method can be used to reformulate the MIBLP as a single-level problem. We adopt the KKT technique in our algorithm ReacKnock to attack the intractable problem of solving the MIBLP, demonstrated with the genome-scale metabolic network model of E. coli for producing various chemicals such as succinate, ethanol and threonine. Compared to the previous methods, our algorithm is fast, stable and reliable in finding the optimal solutions for all the chemical products tested, and able to provide all the alternative deletion strategies which lead to the same industrial objective.
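
    The bi-level structure described above (the engineer picks deletions; the cell then maximizes growth) can be illustrated by brute-force enumeration of single knockouts, each followed by an inner FBA solve. The toy network below is invented (not the E. coli genome-scale model), and enumeration stands in for the KKT-based single-level reformulation used by ReacKnock:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network: uptake feeds metabolite A, which reaches biomass
# precursor B either directly (R2) or via a slightly less efficient route
# (R3) that co-produces product P, secreted by R5.
#          R1     R2     R3     R4     R5
S = np.array([
    [1.0,  -1.0,  -1.0,   0.0,   0.0],   # A
    [0.0,   1.0,   0.9,  -1.0,   0.0],   # B
    [0.0,   0.0,   1.0,   0.0,  -1.0],   # P
])
base_bounds = [(0, 10), (0, 1000), (0, 1000), (0, 1000), (0, 1000)]
BIOMASS, PRODUCT = 3, 4

def fba(bounds):
    """Inner problem: the cell maximizes growth (flux through R4)."""
    c = np.zeros(5)
    c[BIOMASS] = -1.0  # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
    return res.x

# Outer problem by brute force: try deleting each internal reaction and
# record the product flux at the resulting growth optimum.
results = {}
for ko in (1, 2):  # candidate knockouts: R2, R3
    bounds = list(base_bounds)
    bounds[ko] = (0, 0)  # a knockout pins the reaction's flux to zero
    v = fba(bounds)
    results[f"R{ko + 1}"] = (v[BIOMASS], v[PRODUCT])
    print(f"knock out R{ko + 1}: growth={v[BIOMASS]:.1f}, product={v[PRODUCT]:.1f}")
```

    Deleting R2 forces all flux through the product-coupled route (growth 9, product 10), while deleting R3 keeps growth at 10 with no product; enumeration like this scales exponentially with the number of simultaneous deletions, which is why ReacKnock's single-level MILP reformulation matters.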

  5. Multiscale modeling of three-dimensional genome

    Science.gov (United States)

    Zhang, Bin; Wolynes, Peter

    The genome, the blueprint of life, contains nearly all the information needed to build and maintain an entire organism. A comprehensive understanding of the genome is of paramount interest to human health and will advance progress in many areas, including the life sciences, medicine, and biotechnology. The overarching goal of my research is to understand the structure-dynamics-function relationships of the human genome. In this talk, I will present our efforts toward that goal, with a particular emphasis on studying the three-dimensional organization of the genome with multi-scale approaches. Specifically, I will discuss the reconstruction of genome structures at both interphase and metaphase by making use of data from chromosome conformation capture experiments. Computational modeling of the chromatin fiber at the atomistic level from first principles will also be presented as our effort to study genome structure from the bottom up.

  6. Genome-scale metabolic analysis of Clostridium thermocellum for bioethanol production

    Directory of Open Access Journals (Sweden)

    Brooks J Paul

    2010-03-01

    Full Text Available Abstract Background Microorganisms possess diverse metabolic capabilities that can potentially be leveraged for efficient production of biofuels. Clostridium thermocellum (ATCC 27405) is a thermophilic anaerobe that is both cellulolytic and ethanologenic, meaning that it can directly use the plant polysaccharide cellulose and biochemically convert it to ethanol. A major challenge in using microorganisms for chemical production is the need to modify the organism to increase production efficiency. The process of properly engineering an organism is typically arduous. Results Here we present a genome-scale model of C. thermocellum metabolism, iSR432, for the purpose of establishing a computational tool to study the metabolic network of C. thermocellum and facilitate efforts to engineer it for biofuel production. The model consists of 577 reactions involving 525 intracellular metabolites, 432 genes, and a proteomic-based representation of a cellulosome. The process of constructing this metabolic model led to suggested annotation refinements for 27 genes and the identification of areas of metabolism requiring further study. The accuracy of the iSR432 model was tested using experimental growth and by-product secretion data for growth on cellobiose and fructose. Analysis using this model captures the relationship between the reduction-oxidation state of the cell and ethanol secretion, and allowed prediction of gene deletions and environmental conditions that would increase ethanol production. Conclusions By incorporating genomic sequence data, network topology, and experimental measurements of enzyme activities and metabolite fluxes, we have generated a model that is reasonably accurate at predicting the cellular phenotype of C. thermocellum and establishes a strong foundation for rational strain design. In addition, we are able to draw some important conclusions regarding the underlying metabolic mechanisms for observed behaviors of C. thermocellum.

  7. Genome scale engineering techniques for metabolic engineering.

    Science.gov (United States)

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  8. Genome-scale modeling enables metabolic engineering of Saccharomyces cerevisiae for succinic acid production.

    Science.gov (United States)

    Agren, Rasmus; Otero, José Manuel; Nielsen, Jens

    2013-07-01

    In this work, we describe the application of a genome-scale metabolic model and flux balance analysis for the prediction of succinic acid overproduction strategies in Saccharomyces cerevisiae. The top three single gene deletion strategies, Δmdh1, Δoac1, and Δdic1, were tested using knock-out strains cultivated anaerobically on glucose, coupled with physiological and DNA microarray characterization. While Δmdh1 and Δoac1 strains failed to produce succinate, Δdic1 produced 0.02 C-mol/C-mol glucose, in close agreement with model predictions (0.03 C-mol/C-mol glucose). Transcriptional profiling suggests that succinate formation is coupled to mitochondrial redox balancing, and more specifically, reductive TCA cycle activity. While far from industrial titers, this proof-of-concept suggests that in silico predictions coupled with experimental validation can be used to identify novel and non-intuitive metabolic engineering strategies.
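The growth-versus-product trade-off that underlies such deletion predictions is often summarized as a production envelope. The sketch below computes one on a minimal, invented network (one metabolite pool, three reactions): pin growth at fractions of its maximum and ask the LP for the highest achievable product flux.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical three-reaction network: substrate uptake (<= 10 units),
# growth, and succinate secretion, all draining one metabolite pool.
S = np.array([[1.0, -1.0, -1.0]])   # rows: metabolites, cols: reactions
max_growth = 10.0                    # uptake-limited ceiling

def max_product_at(growth):
    """Maximize product flux with growth pinned to a given value."""
    bounds = [(0, 10), (growth, growth), (0, None)]
    res = linprog([0, 0, -1], A_eq=S, b_eq=[0.0], bounds=bounds)
    return -res.fun

envelope = {f: max_product_at(f * max_growth) for f in (0.0, 0.5, 1.0)}
print(envelope)  # product trades off linearly against growth in this toy case
```

In a genome-scale model the same scan exposes whether a knockout makes product formation obligatory at high growth, which is the property the in silico strategies above search for.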

  9. Reliability based topology optimization for continuum structures with local failure constraints

    DEFF Research Database (Denmark)

    Luo, Yangjun; Zhou, Mingdong; Wang, Michael Yu

    2014-01-01

    This paper presents an effective method for stress-constrained topology optimization problems under load and material uncertainties. Based on the Performance Measure Approach (PMA), the optimization problem is formulated as minimizing the objective function under a large number of (stress-related) target performance constraints. In order to overcome the stress singularity phenomenon caused by the combined stress and reliability constraints, a reduction strategy on the target reliability index is proposed and utilized together with the ε-relaxation approach. Meanwhile, an enhanced aggregation method is employed to aggregate the selected active constraints using a general K–S function, which avoids the expensive computational cost arising from the large-scale nature of the local failure constraints. Several numerical examples are given to demonstrate the validity of the present method.
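A general K–S (Kreisselmeier–Steinhauser) aggregate of the kind used to lump active local constraints can be written in a few lines; the constraint values and ρ below are illustrative only.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate of constraint values g_i <= 0.
    A smooth, conservative (always >= max) stand-in for max(g)."""
    g = np.asarray(g, dtype=float)
    m = g.max()  # shift by the max for numerical stability of exp()
    return m + np.log(np.exp(rho * (g - m)).sum()) / rho

g = [-0.8, -0.1, -0.3]             # hypothetical local stress constraint values
print(ks_aggregate(g, rho=100.0))  # close to max(g) = -0.1, approached from above
```

Because the aggregate is differentiable and bounds max(g) from above, a single KS constraint can safely replace thousands of local ones; increasing ρ tightens the bound at the cost of a stiffer gradient.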

  10. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate-model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier studies have drawbacks: the optimization loop consists of three phases and relies on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of constrained problems without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not concentrated within extremely small regions, as they are in super-EGO. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
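A common infill criterion in this family is expected improvement, computed from the surrogate's predictive mean and standard deviation; the sketch below uses invented predictions and is not the paper's united criterion, only the standard building block it extends.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """Expected improvement of a surrogate prediction (minimization).
    mu, sigma: surrogate mean and standard deviation at candidate points."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero uncertainty
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical surrogate predictions at three candidate points,
# with incumbent best observed value y_best = 1.0.
mu = np.array([1.2, 0.9, 1.5])
sigma = np.array([0.05, 0.30, 0.60])
ei = expected_improvement(mu, sigma, y_best=1.0)
print(ei.argmax())  # the moderately uncertain point predicted below y_best wins
```

The criterion balances exploitation (low mu) against exploration (high sigma), which is why the certain-but-worse point scores near zero here.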

  11. Galaxy clustering and small-scale CBR anisotropy constraints on galaxy origin scenarios

    International Nuclear Information System (INIS)

    Lucchin, F.

    1986-01-01

    The problem of the origin of cosmic structures (galaxies, galaxy clusters, …) lies at a crossroads of modern cosmology: it is connected both with theoretical models of the very early universe and with most of the present observational data. In this context, galaxy origin scenarios are reviewed. The cosmological relevance of the observed clustering properties of the universe is outlined. The observational constraints, due to small-scale cosmic background radiation (CBR) anisotropies, on galaxy origin scenarios are discussed. (author)

  12. Progressive Amalgamation of Building Clusters for Map Generalization Based on Scaling Subgroups

    Directory of Open Access Journals (Sweden)

    Xianjin He

    2018-03-01

    Full Text Available Map generalization utilizes transformation operations to derive smaller-scale maps from larger-scale maps, and is a key procedure for the modelling and understanding of geographic space. Studies to date have largely applied a fixed tolerance to aggregate clustered buildings into a single object, resulting in the loss of details that meet cartographic constraints and may be of importance for users. This study aims to develop a method that amalgamates clustered buildings gradually without significant modification of geometry, while preserving the map details as much as possible under cartographic constraints. The amalgamation process consists of three key steps. First, individual buildings are grouped into distinct clusters by using the graph-based spatial clustering application with random forest (GSCARF) method. Second, building clusters are decomposed into scaling subgroups according to homogeneity with regard to the mean distance of subgroups. Thus, hierarchies of building clusters can be derived based on scaling subgroups. Finally, an amalgamation operation is progressively performed from the bottom-level subgroups to the top-level subgroups using the maximum distance of each subgroup as the amalgamating tolerance instead of using a fixed tolerance. As a consequence of this step, generalized intermediate scaling results are available, which can form the multi-scale representation of buildings. The experimental results show that the proposed method can generate amalgams with correct details, statistical area balance and orthogonal shape while satisfying cartographic constraints (e.g., minimum distance and minimum area).
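The bottom-up amalgamation by increasing distance tolerances can be mimicked with a single-linkage dendrogram: each merge height is a candidate tolerance, and cutting at increasing distances yields nested building subgroups. The coordinates and tolerances below are invented; real input would be building footprints, not centroids.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical building centroids (metres) along a street.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 0.0], [6.0, 0.0], [20.0, 0.0]])

# Single-linkage clustering: merges happen at increasing distances,
# so the dendrogram encodes the subgroup hierarchy directly.
Z = linkage(pts, method="single")

fine = fcluster(Z, t=1.5, criterion="distance")    # bottom-level subgroups
coarse = fcluster(Z, t=5.0, criterion="distance")  # amalgamated subgroups
print(len(set(fine)), len(set(coarse)))            # fewer groups at coarser tolerance
```

Cutting at every merge height, rather than two fixed tolerances, would produce the full sequence of intermediate scaling results the paper uses for multi-scale representation.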

  13. The human noncoding genome defined by genetic diversity.

    Science.gov (United States)

    di Iulio, Julia; Bartha, Istvan; Wong, Emily H M; Yu, Hung-Chun; Lavrenko, Victor; Yang, Dongchan; Jung, Inkyung; Hicks, Michael A; Shah, Naisha; Kirkness, Ewen F; Fabani, Martin M; Biggs, William H; Ren, Bing; Venter, J Craig; Telenti, Amalio

    2018-03-01

    Understanding the significance of genetic variants in the noncoding genome is emerging as the next challenge in human genomics. We used the power of 11,257 whole-genome sequences and 16,384 heptamers (7-nt motifs) to build a map of sequence constraint for the human species. This build differed substantially from traditional maps of interspecies conservation and identified regulatory elements among the most constrained regions of the genome. Using new Hi-C experimental data, we describe a strong pattern of coordination over 2 Mb where the most constrained regulatory elements associate with the most essential genes. Constrained regions of the noncoding genome are up to 52-fold enriched for known pathogenic variants as compared to unconstrained regions (21-fold when compared to the genome average). This map of sequence constraint across thousands of individuals is an asset to help interpret noncoding elements in the human genome, prioritize variants and reconsider gene units at a larger scale.
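The reported fold enrichments are per-base rate ratios between genome partitions. A sketch with invented counts (not the paper's data):

```python
def fold_enrichment(hits_a, bases_a, hits_b, bases_b):
    """Ratio of per-base variant rates between two genomic partitions."""
    return (hits_a / bases_a) / (hits_b / bases_b)

# Hypothetical counts: 260 pathogenic variants in 1 Mb of constrained
# sequence vs 5 per Mb of unconstrained sequence -> 52-fold enrichment.
print(fold_enrichment(260, 1_000_000, 5, 1_000_000))
```

Normalizing by partition length is what makes the comparison fair: constrained regions cover far less of the genome than unconstrained ones, so raw counts alone would understate the effect.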

  14. Constraint-Muse: A Soft-Constraint Based System for Music Therapy

    Science.gov (United States)

    Hölzl, Matthias; Denker, Grit; Meier, Max; Wirsing, Martin

    Monoidal soft constraints are a versatile formalism for specifying and solving multi-criteria optimization problems with dynamically changing user preferences. We have developed a prototype tool for interactive music creation, called Constraint Muse, that uses monoidal soft constraints to ensure that a dynamically generated melody harmonizes with input from other sources. Constraint Muse provides an easy to use interface based on Nintendo Wii controllers and is intended to be used in music therapy for people with Parkinson’s disease and for children with high-functioning autism or Asperger’s syndrome.

  15. Decoding Synteny Blocks and Large-Scale Duplications in Mammalian and Plant Genomes

    Science.gov (United States)

    Peng, Qian; Alekseyev, Max A.; Tesler, Glenn; Pevzner, Pavel A.

    The existing synteny block reconstruction algorithms use anchors (e.g., orthologous genes) shared over all genomes to construct the synteny blocks for multiple genomes. This approach, while efficient for a few genomes, cannot be scaled to address the need to construct synteny blocks in many mammalian genomes that are currently being sequenced. The problem is that the number of anchors shared among all genomes quickly decreases with the increase in the number of genomes. Another problem is that many genomes (plant genomes in particular) had extensive duplications, which makes decoding of genomic architecture and rearrangement analysis in plants difficult. The existing synteny block generation algorithms in plants do not address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and evolution history of duplications. We present a new algorithm based on the A-Bruijn graph framework that overcomes these difficulties and provides a unified approach to synteny block reconstruction for multiple genomes, and for genomes with large duplications.

  16. Computing the functional proteome

    DEFF Research Database (Denmark)

    O'Brien, Edward J.; Palsson, Bernhard

    2015-01-01

    Constraint-based models enable the computation of feasible, optimal, and realized biological phenotypes from reaction network reconstructions and constraints on their operation. To date, stoichiometric reconstructions have largely focused on metabolism, resulting in genome-scale metabolic models (M-models). …

  17. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, operate at a single scale, and depend on human experience. An SDG (Signed Directed Graph) and qualitative-trend-based multiple-scale validation method is therefore proposed. First, the SDG model is built and qualitative trends are added to the model. Then, complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness is proved by carrying out validation for a reactor model.

  18. Towards Accurate Modelling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    Science.gov (United States)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron K.; Scoccimarro, Roman; Piscionere, Jennifer A.; Wibking, Benjamin D.

    2018-04-01

    Interpreting the small-scale clustering of galaxies with halo models can elucidate the connection between galaxies and dark matter halos. Unfortunately, the modelling is typically not sufficiently accurate for ruling out models statistically. It is thus difficult to use the information encoded in small scales to test cosmological models or probe subtle features of the galaxy-halo connection. In this paper, we attempt to push halo modelling into the "accurate" regime with a fully numerical mock-based methodology and careful treatment of statistical and systematic errors. With our forward-modelling approach, we can incorporate clustering statistics beyond the traditional two-point statistics. We use this modelling methodology to test the standard ΛCDM + halo model against the clustering of SDSS DR7 galaxies. Specifically, we use the projected correlation function, group multiplicity function and galaxy number density as constraints. We find that while the model fits each statistic separately, it struggles to fit them simultaneously. Adding group statistics leads to a more stringent test of the model and significantly tighter constraints on model parameters. We explore the impact of varying the adopted halo definition and cosmological model and find that changing the cosmology makes a significant difference. The most successful model we tried (Planck cosmology with Mvir halos) matches the clustering of low luminosity galaxies, but exhibits a 2.3σ tension with the clustering of luminous galaxies, thus providing evidence that the "standard" halo model needs to be extended. This work opens the door to adding interesting freedom to the halo model and including additional clustering statistics as constraints.

  19. Expressing Model Constraints Visually with VMQL

    DEFF Research Database (Denmark)

    Störrle, Harald

    2011-01-01

    OCL is the de facto standard language for expressing constraints and queries on UML models. However, OCL expressions are very difficult to create, understand, and maintain, even with the sophisticated tool support now available. In this paper, we propose to use the Visual Model Query Language (VMQL) for specifying constraints on UML models. We examine VMQL's usability by controlled experiments and its expressiveness by a representative sample. We conclude that VMQL is less expressive than OCL, although expressive enough for most of the constraints in the sample. In terms of usability, however, VMQL …

  20. Constraint-based Word Segmentation for Chinese

    DEFF Research Database (Denmark)

    Christiansen, Henning; Bo, Li

    2014-01-01

    …ad-hoc and statistically based methods. In this paper, we show experiments implementing different approaches to CWSP in the framework of CHR Grammars [Christiansen, 2005], which provide a constraint-solving approach to language analysis. CHR Grammars are based upon Constraint Handling Rules, CHR [Frühwirth, 1998, 2009], which is a declarative, high-level programming language for the specification and implementation of constraint solvers.

  1. Structure-based Markov random field model for representing evolutionary constraints on functional sites.

    Science.gov (United States)

    Jeong, Chan-Seok; Kim, Dongsup

    2016-02-24

    Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements in coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on protein structure. In this study, we built an MRF model whose graphical topology is determined by residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weights of the MRF model. This structure-based MRF method was evaluated on three data sets, annotating catalytic sites, allosteric sites, and comprehensively determined functional sites, respectively. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weights can represent positional coevolution information more accurately than the edge weights. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adopting a structure-based architecture can be an acceptable approximation for coevolution modeling with efficient computational complexity.

  2. Public health and valorization of genome-based technologies: a new model.

    Science.gov (United States)

    Lal, Jonathan A; Schulte In den Bäumen, Tobias; Morré, Servaas A; Brand, Angela

    2011-12-05

    The success rate of timely translation of genome-based technologies into commercially feasible products and services with applicability in health care systems is significantly low. We identified that both industry and scientists neglect health policy aspects when commercializing their technology, more specifically Public Health Assessment Tools (PHAT) and the early involvement of the decision makers on whom market authorization and reimbursement depend. While Technology Transfer (TT) aims to facilitate the translation of ideas into products, Health Technology Assessment, one component of PHAT, facilitates the translation of products and processes into healthcare services and eventually produces recommendations for decision makers. We aim to propose a new model of valorization to optimize the integration of genome-based technologies into the healthcare system. The method used to develop our model is an adapted version of the Fish Trap Model and the Basic Design Cycle. We found that, although different, TT and PHAT share similarities, and realizing their potential to be mutually beneficial justified our proposal to initiate them in relative parallel. We observed that the Public Health Genomics Wheel should be included in this parallel activity to ensure all societal and policy aspects are dealt with preemptively by both stakeholders. On further analysis, we found that this whole process depends on the Value of Information. As a result, we present our LAL (Learning Adapting Leveling) model, which proposes that, based on market demand, TT and PHAT should advocate for relevant technologies through consultation and bilateral communication. This can be achieved by public-private partnerships (PPPs). These widely defined PPPs create the innovation network, a developing, consultative and collaborative networking platform between TT and PHAT. This network has iterations and requires learning, assimilating and using the knowledge developed, which is called absorption capacity.

  3. BBN constraints on MeV-scale dark sectors. Part I. Sterile decays

    Science.gov (United States)

    Hufnagel, Marco; Schmidt-Hoberg, Kai; Wild, Sebastian

    2018-02-01

    We study constraints from Big Bang Nucleosynthesis on inert particles in a dark sector which contribute to the Hubble rate and therefore change the predictions of the primordial nuclear abundances. We pay special attention to the case of MeV-scale particles decaying into dark radiation, which are neither fully relativistic nor non-relativistic during all temperatures relevant to Big Bang Nucleosynthesis. As an application we discuss the implications of our general results for models of self-interacting dark matter with light mediators.

  4. Genome-scale model-driven strain design for dicarboxylic acid production in Yarrowia lipolytica.

    Science.gov (United States)

    Mishra, Pranjul; Lee, Na-Rae; Lakshmanan, Meiyappan; Kim, Minsuk; Kim, Byung-Gee; Lee, Dong-Yup

    2018-03-19

    Recently, there have been several attempts to produce long-chain dicarboxylic acids (DCAs) in various microbial hosts. Of these, Yarrowia lipolytica has great potential due to its oleaginous characteristics and unique ability to utilize hydrophobic substrates. However, Y. lipolytica must be further engineered to make it more competitive: current approaches are mostly intuitive and cumbersome, limiting its industrial application. In this study, we propose model-guided metabolic engineering strategies for enhanced production of DCAs in Y. lipolytica. At the outset, we reconstructed a genome-scale metabolic model (GSMM) of Y. lipolytica (iYLI647) by substantially expanding the previous models. Subsequently, the model was validated using three sets of published culture experiment data. It was finally exploited to identify genetic engineering targets for overexpression, knockout, and cofactor modification by applying several in silico strain design methods, potentially giving rise to high-yield production of industrially relevant long-chain DCAs, e.g., dodecanedioic acid (DDDA). The resultant targets include (1) the malate dehydrogenase and malic enzyme genes and (2) the glutamate dehydrogenase gene; in silico overexpression of these generated the additional NADPH required for fatty acid synthesis, increasing DDDA fluxes by 48% and 22%, respectively, compared to wild-type. We further investigated the effect of supplying branched-chain amino acids on the turnover rate of acetyl-CoA, a key metabolite for fatty acid synthesis, suggesting their significance for production of DDDA in Y. lipolytica. In silico model-based strain design allowed us to identify several metabolic engineering targets for overproducing DCAs in the lipid-accumulating yeast Y. lipolytica. Thus, the current study provides a methodological framework applicable to other oleaginous yeasts for value-added biochemical production.

  5. More on cosmological constraints on spontaneous R-symmetry breaking models

    International Nuclear Information System (INIS)

    Hamada, Yuta; Kobayashi, Tatsuo; Kamada, Kohei; Ecole Polytechnique Federale de Lausanne; Ookouchi, Yutaka

    2013-10-01

    We study the spontaneous R-symmetry breaking model and investigate the cosmological constraints on this model due to the pseudo Nambu-Goldstone boson, the R-axion. We consider an R-axion with relatively heavy mass in order to complement our previous work. In this regime, the model parameters, the R-axion mass and the R-symmetry breaking scale, are constrained by Big Bang Nucleosynthesis and by overproduction of gravitinos produced from R-axion decay and from the thermal plasma. We find that the allowed parameter space is very small for high reheating temperature. For low reheating temperature, the U(1)_R breaking scale f_a is constrained as f_a ≲ 10^12-10^14 GeV, regardless of the value of the R-axion mass.

  6. Automated Generation of OCL Constraints: NL based Approach vs Pattern Based Approach

    Directory of Open Access Journals (Sweden)

    IMRAN SARWAR BAJWA

    2017-04-01

    Full Text Available This paper presents an approach for the automated generation of software constraints. In this model, an SBVR (Semantics of Business Vocabulary and Rules) based semi-formal representation is obtained from the syntactic and semantic analysis of an NL (Natural Language) sentence, such as one in English. An SBVR representation is easy to translate to other formal languages, as SBVR is based on higher-order logic like formal languages such as OCL (Object Constraint Language). The proposed model provides a systematic and powerful means of incorporating NL knowledge into formal languages. A prototype was constructed in Java (an Eclipse plug-in) as a proof of concept. Performance was tested on a few sample texts taken from existing research theses and books.

  7. iCN718, an Updated and Improved Genome-Scale Metabolic Network Reconstruction of Acinetobacter baumannii AYE.

    Science.gov (United States)

    Norsigian, Charles J; Kavvas, Erol; Seif, Yara; Palsson, Bernhard O; Monk, Jonathan M

    2018-01-01

    Acinetobacter baumannii has become an urgent clinical threat due to the recent emergence of multi-drug resistant strains. There is thus a significant need to discover new therapeutic targets in this organism. One means for doing so is through the use of high-quality genome-scale reconstructions. Well-curated and accurate genome-scale models (GEMs) of A. baumannii would be useful for improving treatment options. We present an updated and improved genome-scale reconstruction of A. baumannii AYE, named iCN718, that improves and standardizes previous A. baumannii AYE reconstructions. iCN718 has 80% accuracy for predicting gene essentiality data and additionally can predict large-scale phenotypic data with as much as 89% accuracy, a new capability for an A. baumannii reconstruction. We further demonstrate that iCN718 can be used to analyze conserved metabolic functions in the A. baumannii core genome and to build strain-specific GEMs of 74 other A. baumannii strains from genome sequence alone. iCN718 will serve as a resource to integrate and synthesize new experimental data being generated for this urgent threat pathogen.

  8. [Multispectral Radiation Algorithm Based on Emissivity Model Constraints for True Temperature Measurement].

    Science.gov (United States)

    Liang, Mei; Sun, Xiao-gang; Luan, Mei-sheng

    2015-10-01

    Temperature measurement is one of the important factors for ensuring product quality, reducing production cost and ensuring experimental safety in industrial manufacturing and scientific experiments. Radiation thermometry is the main method for non-contact temperature measurement. The second measurement (SM) method is one of the common methods in multispectral radiation thermometry; however, the SM method cannot be applied to on-line data processing. To solve this problem, a rapid inversion method for multispectral radiation true temperature measurement is proposed, in which constraint conditions on the emissivity model are introduced based on the multispectral brightness temperature model. For a non-blackbody, it follows from the relationship between brightness temperatures at different wavelengths that emissivity is an increasing function over an interval in which the brightness temperature is increasing or constant, and that emissivity satisfies an inequality relating emissivity and wavelength over an interval in which the brightness temperature is decreasing. With these emissivity model constraint conditions, the construction of assumed emissivity values is reduced from multiple classes to one class, avoiding unnecessary emissivity constructions, on the basis of brightness temperature information. Simulation experiments and comparisons for two different temperature points are carried out based on five measured targets with five representative variation trends of real emissivity: decreasing monotonically; increasing monotonically; first decreasing with wavelength and then increasing; first increasing and then decreasing; and fluctuating randomly with wavelength. The simulation results show that, compared with the SM method, for the same target under the same initial temperature and emissivity search range, the processing speed of the proposed algorithm is increased by 19.16%-43.45% with the same precision and the same calculation results.
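Under the Wien approximation, brightness temperature and true temperature are linked through the spectral emissivity, which is the relation such inversion methods exploit: 1/T_b = 1/T - (λ/c2)·ln ε. The target temperature, emissivity, and wavelength below are invented for illustration.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def brightness_temp(T, emissivity, lam):
    """Wien-approximation brightness temperature at wavelength lam (m)."""
    return 1.0 / (1.0 / T - (lam / C2) * np.log(emissivity))

def true_temp(Tb, emissivity, lam):
    """Invert the same relation, given an assumed spectral emissivity."""
    return 1.0 / (1.0 / Tb + (lam / C2) * np.log(emissivity))

lam = 0.65e-6                   # hypothetical 0.65 um channel
Tb = brightness_temp(1800.0, 0.4, lam)
print(Tb)                       # below the true 1800 K, since emissivity < 1
print(true_temp(Tb, 0.4, lam))  # recovers 1800 K exactly
```

The hard part, which the paper's constraint conditions address, is that the emissivity passed to the inversion is unknown; monotonicity of the measured brightness temperatures restricts which assumed emissivity trends are worth constructing.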

  9. Noise analysis of genome-scale protein synthesis using a discrete computational model of translation

    Energy Technology Data Exchange (ETDEWEB)

    Racle, Julien; Hatzimanikatis, Vassily, E-mail: vassily.hatzimanikatis@epfl.ch [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland); Swiss Institute of Bioinformatics (SIB), CH-1015 Lausanne (Switzerland); Stefaniuk, Adam Jan [Laboratory of Computational Systems Biotechnology, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne (Switzerland)

    2015-07-28

    Noise in genetic networks has been the subject of extensive experimental and computational studies. However, very few of these studies have considered noise properties using mechanistic models that account for the discrete movement of ribosomes and RNA polymerases along their corresponding templates (messenger RNA (mRNA) and DNA). The large size of these systems, which scales with the number of genes, mRNA copies, codons per mRNA, and ribosomes, is responsible for some of the challenges. Additionally, one should be able to describe the dynamics of ribosome exchange between the free ribosome pool and those bound to mRNAs, as well as how mRNA species compete for ribosomes. We developed an efficient algorithm for stochastic simulations that addresses these issues and used it to study the contribution and trade-offs of noise to translation properties (rates, time delays, and rate-limiting steps). The algorithm scales linearly with the number of mRNA copies, which allowed us to study the importance of genome-scale competition between mRNAs for the same ribosomes. We determined that noise is minimized under conditions maximizing the specific synthesis rate. Moreover, sensitivity analysis of the stochastic system revealed the importance of the elongation rate in the resultant noise, whereas the translation initiation rate constant was more closely related to the average protein synthesis rate. We observed significant differences between our results and the noise properties of the most commonly used translation models. Overall, our studies demonstrate that the use of full mechanistic models is essential for the study of noise in translation and transcription.
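The discrete ribosome movement described above can be captured by a Gillespie-type stochastic simulation in which initiation and codon-by-codon elongation with excluded volume are the elementary events. The sketch below simulates a single mRNA with illustrative rate constants; the authors' algorithm additionally tracks the free-ribosome pool and genome-scale competition among mRNA species, which this sketch omits:

```python
import random

def simulate_translation(n_codons, k_init, k_el, t_end, seed=1):
    """Gillespie simulation of ribosome traffic on one mRNA.
    State: set of codon positions occupied by ribosomes
    (ribosome footprint of one codon, for simplicity)."""
    rng = random.Random(seed)
    occupied, proteins, t = set(), 0, 0.0
    while True:
        # enumerate enabled events and their propensities
        events = []
        if 0 not in occupied:                      # initiation at codon 0
            events.append((("init", None), k_init))
        for pos in occupied:                       # elongation / termination
            if pos + 1 >= n_codons or pos + 1 not in occupied:
                events.append((("step", pos), k_el))
        total = sum(rate for _, rate in events)
        t += rng.expovariate(total)                # time to next event
        if t >= t_end:
            return proteins
        pick = rng.uniform(0.0, total)             # choose one event
        for (kind, pos), rate in events:
            pick -= rate
            if pick <= 0.0:
                break
        if kind == "init":
            occupied.add(0)
        elif pos + 1 < n_codons:
            occupied.remove(pos)
            occupied.add(pos + 1)
        else:                                      # ribosome leaves the stop codon
            occupied.remove(pos)
            proteins += 1
```

Repeating such runs and computing the variance-to-mean ratio of `proteins` gives the kind of noise statistics the study analyzes.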

  10. Observational constraints on tachyonic chameleon dark energy model

    Science.gov (United States)

    Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.

    2018-03-01

    It has recently been shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and Baryon Acoustic Oscillations to place constraints on the model parameters. In our analysis we consider general exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential, and show that the scenario is compatible with observations.

  11. Constraint theory: multidimensional mathematical model management

    CERN Document Server

    Friedman, George J

    2017-01-01

    Packed with new material and research, this second edition of George Friedman’s bestselling Constraint Theory remains an invaluable reference for all engineers, mathematicians, and managers concerned with modeling. As in the first edition, this text analyzes the way Constraint Theory employs bipartite graphs and presents the process of locating the “kernel of constraint” trillions of times faster than brute-force approaches, determining model consistency and computational allowability. Unique in its abundance of topological pictures of the material, this book balances left- and right-brain perceptions to provide a thorough explanation of multidimensional mathematical models. Much of the extended material in this new edition also comes from Phan Phan’s PhD dissertation in 2011, titled “Expanding Constraint Theory to Determine Well-Posedness of Large Mathematical Models.” Praise for the first edition: "Dr. George Friedman is indisputably the father of the very powerful methods of constraint theory...

  12. Identifying all moiety conservation laws in genome-scale metabolic networks.

    Science.gov (United States)

    De Martino, Andrea; De Martino, Daniele; Mulet, Roberto; Pagnani, Andrea

    2014-01-01

    The stoichiometry of a metabolic network gives rise to a set of conservation laws for the aggregate level of specific pools of metabolites, which, on one hand, pose dynamical constraints that cross-link the variations of metabolite concentrations and, on the other, provide key insight into a cell's metabolic production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all non-negative integer solutions of a linear system, a programming problem known to be NP-hard. We present an efficient strategy to compute the complete set of integer conservation laws of a genome-scale stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. Our method is deployed for the analysis of moiety conservation relationships in two large-scale reconstructions of the metabolism of the bacterium E. coli, in six tissue-specific human metabolic networks, and, finally, in the human reactome as a whole, revealing that bacterial metabolism could be evolutionarily designed to cover broader production spectra than human metabolism. Convergence to the full set of moiety conservation laws in each case is achieved in extremely reduced computing times. In addition, we uncover a scaling relation that links the size of the independent pool basis to the number of metabolites, for which we present an analytical explanation.
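On a toy network, the non-negative integer programming problem can be solved by exhaustive enumeration, which makes the combinatorial difficulty concrete (a pure illustration; the paper's contribution is precisely an efficient strategy that replaces this exponential search at genome scale):

```python
from itertools import product

def moiety_conservation_laws(S, max_coef=3):
    """Enumerate non-negative integer vectors y with y^T S = 0 by brute force.
    S is the stoichiometric matrix as a list of rows (one row per metabolite).
    Feasible only for tiny networks: the search space grows exponentially."""
    m = len(S)                     # number of metabolites
    laws = []
    for y in product(range(max_coef + 1), repeat=m):
        if any(y) and all(sum(y[i] * S[i][j] for i in range(m)) == 0
                          for j in range(len(S[0]))):
            laws.append(y)
    return laws

# Toy network with one reaction A -> B: the pool A + B is conserved.
S = [[-1],   # metabolite A
     [ 1]]   # metabolite B
```

Here `moiety_conservation_laws(S)` recovers (1, 1) and its integer multiples, i.e. the conserved moiety A + B.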

  13. Identifying all moiety conservation laws in genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Andrea De Martino

    Full Text Available The stoichiometry of a metabolic network gives rise to a set of conservation laws for the aggregate level of specific pools of metabolites, which, on one hand, pose dynamical constraints that cross-link the variations of metabolite concentrations and, on the other, provide key insight into a cell's metabolic production capabilities. When the conserved quantity identifies with a chemical moiety, extracting all such conservation laws from the stoichiometry amounts to finding all non-negative integer solutions of a linear system, a programming problem known to be NP-hard. We present an efficient strategy to compute the complete set of integer conservation laws of a genome-scale stoichiometric matrix, also providing a certificate for correctness and maximality of the solution. Our method is deployed for the analysis of moiety conservation relationships in two large-scale reconstructions of the metabolism of the bacterium E. coli, in six tissue-specific human metabolic networks, and, finally, in the human reactome as a whole, revealing that bacterial metabolism could be evolutionarily designed to cover broader production spectra than human metabolism. Convergence to the full set of moiety conservation laws in each case is achieved in extremely reduced computing times. In addition, we uncover a scaling relation that links the size of the independent pool basis to the number of metabolites, for which we present an analytical explanation.

  14. Large-Scale Sequencing: The Future of Genomic Sciences Colloquium

    Energy Technology Data Exchange (ETDEWEB)

    Margaret Riley; Merry Buckley

    2009-01-01

    Genetic sequencing and the various molecular techniques it has enabled have revolutionized the field of microbiology. Examining and comparing the genetic sequences borne by microbes - including bacteria, archaea, viruses, and microbial eukaryotes - provides researchers insights into the processes microbes carry out, their pathogenic traits, and new ways to use microorganisms in medicine and manufacturing. Until recently, sequencing entire microbial genomes has been laborious and expensive, and the decision to sequence the genome of an organism was made on a case-by-case basis by individual researchers and funding agencies. Now, thanks to new technologies, the cost and effort of sequencing is within reach for even the smallest facilities, and the ability to sequence the genomes of a significant fraction of microbial life may be possible. The availability of numerous microbial genomes will enable unprecedented insights into microbial evolution, function, and physiology. However, the current ad hoc approach to gathering sequence data has resulted in an unbalanced and highly biased sampling of microbial diversity. A well-coordinated, large-scale effort to target the breadth and depth of microbial diversity would result in the greatest impact. The American Academy of Microbiology convened a colloquium to discuss the scientific benefits of engaging in a large-scale, taxonomically-based sequencing project. A group of individuals with expertise in microbiology, genomics, informatics, ecology, and evolution deliberated on the issues inherent in such an effort and generated a set of specific recommendations for how best to proceed. The vast majority of microbes are presently uncultured and, thus, pose significant challenges to such a taxonomically-based approach to sampling genome diversity. However, we have yet to even scratch the surface of the genomic diversity among cultured microbes. 
A coordinated sequencing effort of cultured organisms is an appropriate place to begin.

  15. Coverage-based constraints for IMRT optimization

    Science.gov (United States)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires the incorporation of uncertainties in order to guarantee adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly, with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment, or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning, which considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d}. Using a constraint-based reformulation of coverage-based objectives, we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
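The coverage probability being constrained can be estimated empirically from sampled error scenarios: it is the fraction of scenarios in which at least a fraction v̂ of target voxels receives at least dose d̂. A minimal sketch of that estimator (names and numbers illustrative; the paper optimizes fluence subject to a constraint on this quantity rather than merely evaluating it):

```python
def coverage_probability(scenario_doses, d_hat, v_hat):
    """Fraction of error scenarios in which at least a fraction v_hat of the
    target voxels receives a dose of at least d_hat.
    scenario_doses: one list of per-voxel doses per error scenario."""
    covered = 0
    for doses in scenario_doses:
        frac = sum(d >= d_hat for d in doses) / len(doses)
        if frac >= v_hat:
            covered += 1
    return covered / len(scenario_doses)
```

For example, with three scenarios of three voxels each and the requirement "60 Gy to at least 60% of the volume", two of the three scenarios below are covered, giving q = 2/3.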

  16. A model of neutrino and Higgs physics at the electroweak scale

    International Nuclear Information System (INIS)

    Aranda, Alfredo; Blanno, Omar; Diaz-Cruz, J. Lorenzo

    2008-01-01

    We present and explore the Higgs physics of a model that, in addition to the Standard Model fields, includes a lepton-number-violating singlet scalar field. Based on the fact that the only experimental evidence we have so far for physics beyond the Standard Model comes from neutrino physics, we impose the constraint that any addition must not introduce new, higher scales. Accordingly, we introduce right-handed neutrinos with an electroweak-scale mass. We study the Higgs decay H→νν and show that it leads to different signatures compared to those in the Standard Model, making it possible to detect them and to probe the nature of their couplings.

  17. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes

    DEFF Research Database (Denmark)

    Siepel, Adam; Bejerano, Gill; Pedersen, Jakob Skou

    2005-01-01

    We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially...
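The core computation in a two-state phylo-HMM of this kind is standard Viterbi decoding over alignment columns, labeling each column "conserved" or "non-conserved". A self-contained sketch, with made-up per-column log-likelihoods standing in for the phylogenetic substitution models that phastCons actually fits:

```python
import math

def viterbi_two_state(emissions_logp, log_trans, log_init):
    """Viterbi decoding for a two-state HMM (state 0 = non-conserved,
    state 1 = conserved). emissions_logp[t][s] is the log-likelihood of
    alignment column t under state s."""
    n = len(emissions_logp)
    V = [[0.0, 0.0] for _ in range(n)]      # best log-prob ending in state s
    back = [[0, 0] for _ in range(n)]       # argmax predecessor states
    for s in (0, 1):
        V[0][s] = log_init[s] + emissions_logp[0][s]
    for t in range(1, n):
        for s in (0, 1):
            cands = [V[t - 1][p] + log_trans[p][s] for p in (0, 1)]
            best = 0 if cands[0] >= cands[1] else 1
            back[t][s] = best
            V[t][s] = cands[best] + emissions_logp[t][s]
    state = 0 if V[n - 1][0] >= V[n - 1][1] else 1
    path = [state]
    for t in range(n - 1, 0, -1):           # trace back the optimal path
        state = back[t][state]
        path.append(state)
    return path[::-1]
```

With sticky transitions and emissions favoring state 0 on the first columns and state 1 on the rest, the decoder returns a single contiguous "conserved element", mirroring how phastCons segments a genome.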

  18. Integrated systems optimization model for biofuel development: The influence of environmental constraints

    Science.gov (United States)

    Housh, M.; Ng, T.; Cai, X.

    2012-12-01

    The environmental impact is one of the major concerns of biofuel development. While many other studies have examined the impact of biofuel expansion on stream flow and water quality, this study examines the problem from the other side: whether, and how, a biofuel production target will be affected by given environmental constraints. For this purpose, an integrated model comprising sub-systems for biofuel refineries, transportation, agriculture, water resources, and crop/ethanol markets has been developed. The sub-systems are integrated into one large-scale model to guide the optimal development plan, considering the interdependencies between the sub-systems. The optimal development plan includes biofuel refinery locations and capacities, refinery operation, land allocation between biofuel and food crops, and the corresponding stream flow and nitrate load in the watershed. The watershed is modeled as a network flow, in which the nodes represent sub-watersheds and the arcs are defined as the linkages between the sub-watersheds. The runoff contribution of each sub-watershed is determined based on the land cover and the water uses in that sub-watershed. Thus, decisions in other sub-systems, such as the land allocation in the land-use sub-system and the water use in the refinery sub-system, define the sources and the sinks of the network. Environmental policies are addressed in the integrated model by imposing stream flow and nitrate load constraints. These constraints can be specified by location and time in the watershed to reflect the spatial and temporal variation of the regulations. Preliminary results show that imposing monthly water flow constraints and yearly nitrate load constraints changes the biofuel development plan dramatically. Sensitivity analysis is performed to examine how the environmental constraints and their spatial and temporal distribution influence the overall biofuel development plan and the performance of each of the sub-systems.

  19. Physical constraints on models of gamma-ray bursters

    International Nuclear Information System (INIS)

    Epstein, R.I.

    1985-01-01

    This report deals with the constraints that can be placed on models of gamma-ray burst sources based only on well-established observational facts and physical principles. The premise is developed that the very hard x-ray and gamma-ray continuum spectra are well-established aspects of gamma-ray bursts. Recent theoretical work on gamma-ray bursts is summarized, with emphasis on the geometrical properties of the models. Constraints on the source models implied by the x-ray and gamma-ray spectra are described. The allowed ranges of luminosity and characteristic dimension for gamma-ray burst sources are shown. Some of the deductions and inferences about the nature of the gamma-ray burst sources are summarized. 67 refs., 3 figs.

  20. Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.

    Science.gov (United States)

    Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen

    2017-02-01

    Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would, however, like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next, we show its superiority by comparing its performance against other comparable methods on a medium-sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but is also orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used to identify knockouts that will force optimal desired behaviors in large and genome-scale metabolic networks. It will become even more useful as larger metabolic models of industrially relevant organisms become available.
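For readers unfamiliar with the optimizer at the heart of PSOMCS, a minimal generic particle swarm optimization loop is sketched below on a continuous test function. This is only the bare PSO mechanism with conventional parameter choices; PSOMCS couples a swarm over candidate designs with the direct cMCS computation, which is not reproduced here:

```python
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    """Minimize `objective` over R^dim with a basic particle swarm:
    each particle tracks its personal best, and all are attracted to
    the global best found so far."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On the 2-D sphere function the swarm converges close to the optimum at the origin within the default budget.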

  1. Model predictive control-based scheduler for repetitive discrete event systems with capacity constraints

    Directory of Open Access Journals (Sweden)

    Hiroyuki Goto

    2013-07-01

    Full Text Available A model predictive control-based scheduler for a class of discrete event systems is designed and developed. We focus on repetitive, multiple-input, multiple-output, directed-acyclic-graph-structured systems on which capacity constraints can be imposed. The target system's behaviour is described by linear equations in max-plus algebra, referred to as the state-space representation. Assuming that the system's performance can be improved by paying additional cost, we adjust the system parameters and determine control inputs for which the reference output signals can be observed. The main contribution of this research is twofold: (1) for systems with capacity constraints, we derive an output prediction equation as a function of the adjustable variables in recursive form; (2) regarding the construct used for the system's representation, we improve the structure to accomplish general operations that are essential for adjusting the system parameters. The results of a numerical simulation in a later section demonstrate the effectiveness of the developed controller.
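In max-plus algebra, addition is replaced by max and multiplication by ordinary +, so a state recursion such as x(k) = A ⊗ x(k−1) is built from the primitive below. The two-machine example is illustrative only, not taken from the paper:

```python
NEG_INF = float("-inf")  # the max-plus "zero" element (often written epsilon)

def maxplus_mv(A, x):
    """Max-plus matrix-vector product: (A (x) x)_i = max_j (A[i][j] + x[j])."""
    return [max(a + xj for a, xj in zip(row, x)) for row in A]

# Two machines in series: machine 1 takes 2 time units, machine 2 takes 3,
# and machine 2 can only start once machine 1 has finished.
A = [[2.0, NEG_INF],
     [5.0, 3.0]]
x1 = maxplus_mv(A, [0.0, 0.0])   # event (completion) times after one cycle
```

Starting from x(0) = (0, 0), one cycle yields completion times (2, 5): machine 2 finishes at max(0 + 5, 0 + 3) = 5 because it must wait for machine 1.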

  2. Clinical Processes - The Killer Application for Constraint-Based Process Interactions

    DEFF Research Database (Denmark)

    Jiménez-Ramírez, Andrés; Barba, Irene; Reichert, Manfred

    2018-01-01

    For more than a decade, the interest in aligning information systems in a process-oriented way has been increasing. To enable operational support for business processes, the latter are usually specified in an imperative way. The resulting process models, however, tend to be too rigid to meet ... examples. However, to the best of our knowledge, they have not been used to model complex, real-world scenarios that comprise constraints going beyond control-flow. In this paper, we propose the use of a declarative language for modeling a sophisticated healthcare process scenario from the real world. The scenario is subject to complex temporal constraints and entails the need for coordinating the constraint-based interactions among the processes related to a patient treatment process. As demonstrated in this work, the selected real process scenario can be suitably modeled through a declarative approach.

  3. Constraint Differentiation

    DEFF Research Database (Denmark)

    Mödersheim, Sebastian Alexander; Basin, David; Viganò, Luca

    2010-01-01

    We introduce constraint differentiation, a powerful technique for reducing search when model-checking security protocols using constraint-based methods. Constraint differentiation works by eliminating certain kinds of redundancies that arise in the search space when using constraints to represent ... results show that constraint differentiation substantially reduces search and considerably improves the performance of OFMC, enabling its application to a wider class of problems.

  4. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    Science.gov (United States)

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes. For example, in plant populations, the outcome of breeding operations can be studied using simulations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each co-fitting two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available on http://researcher.watson.ibm.com/project/5669.

  5. Modeling external constraints: Applying expert systems to nuclear plants

    International Nuclear Information System (INIS)

    Beck, C.E.; Behera, A.K.

    1993-01-01

    Artificial Intelligence (AI) applications in nuclear plants have received much attention over the past decade. Specific applications that have been addressed include the development of models and knowledge bases, plant maintenance, operations, procedural guidance, risk assessment, and design tools. This paper examines the issue of external constraints, with a focus on the use of AI and expert systems as design tools. It also provides several suggested methods for addressing these constraints within the AI framework. These methods include a State Matrix scheme, a layered structure for the knowledge base, and application of the dynamic parameter concept.

  6. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

    Scale-up is traditionally done following specific ratios or rules of thumb, which do not lead to optimal results. We present a generic framework to assist in the scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and validation of the model at the new, larger scale. The framework is illustrated by the scale-up of a complete autotrophic nitrogen removal process. The model-based multiobjective scale-up offers a promising improvement compared to empirical rule-of-thumb scale-up rules.

  7. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    Science.gov (United States)

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and already existing and curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis in order to find strategies to reduce the growth of the pathogen and propose drug targets. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Scale hierarchy in Hořava-Lifshitz gravity: strong constraint from synchrotron radiation in the Crab Nebula.

    Science.gov (United States)

    Liberati, Stefano; Maccione, Luca; Sotiriou, Thomas P

    2012-10-12

    Hořava-Lifshitz gravity models contain higher-order operators suppressed by a characteristic scale, which is required to be parametrically smaller than the Planck scale. We show that recomputed synchrotron radiation constraints from the Crab Nebula suffice to exclude the possibility that this scale is of the same order of magnitude as the Lorentz breaking scale in the matter sector. This highlights the need for a mechanism that suppresses the percolation of Lorentz violation in the matter sector and is effective for higher-order operators as well.

  9. More on cosmological constraints on spontaneous R-symmetry breaking models

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Yuta; Kobayashi, Tatsuo [Kyoto Univ. (Japan). Dept. of Physics; Kamada, Kohei [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Ecole Polytechnique Federale de Lausanne (Switzerland). Inst. de Theorie des Phenomenes Physiques; Ookouchi, Yutaka [Kyushu Univ., Fukuoka (Japan). Faculty of Arts and Science

    2013-10-15

    We study a spontaneous R-symmetry breaking model and investigate the cosmological constraints on it due to the pseudo Nambu-Goldstone boson, the R-axion. We consider an R-axion with a relatively heavy mass in order to complement our previous work. In this regime, the model parameters, the R-axion mass and the R-symmetry breaking scale, are constrained by Big Bang Nucleosynthesis and by overproduction of gravitinos produced from R-axion decay and from the thermal plasma. We find that the allowed parameter space is very small for high reheating temperatures. For low reheating temperatures, the U(1)_R breaking scale f_a is constrained as f_a < 10^(12-14) GeV, regardless of the value of the R-axion mass.

  10. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

    Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves the use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...

  11. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
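The driving-force factor described above multiplies classical Monod kinetics by a thermodynamic term that vanishes at equilibrium. A sketch of the functional form (after Jin & Bethke's rate law; parameter names and values are illustrative, and the actual coupling to FBA via PHREEQC and the COBRA Toolbox is not reproduced here):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def revised_monod_rate(v_max, conc, K_s, dG_reaction, dG_conserved, chi, T):
    """Respiration rate = Monod kinetic factor x thermodynamic factor.
    dG_reaction: Gibbs free energy of the catabolic reaction (J/mol);
    dG_conserved: energy conserved, e.g. via ATP synthesis (J/mol);
    chi: average stoichiometric number of the rate-limiting step."""
    kinetic = conc / (K_s + conc)
    thermo = 1.0 - math.exp((dG_reaction + dG_conserved) / (chi * R * T))
    return v_max * kinetic * max(thermo, 0.0)   # no reaction past equilibrium
```

At thermodynamic equilibrium (net ΔG = 0) the rate is exactly zero, while far from equilibrium the expression reduces to the classical Monod form; this is the behaviour that lets the substrate-uptake constraint respect energetics.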

  12. Evaluating the role of behavioral factors and practical constraints in the performance of an agent-based model of farmer decision making

    DEFF Research Database (Denmark)

    Malawska, Anna Katarzyna; Topping, Christopher John

    2016-01-01

    Farmer decision-making models often focus on the behavioral assumptions in the representation of decision making, applying bounded rationality theory to shift away from the generally criticized profit-maximizer approach. Although complex on the behavioral side, such representations are usually simplistic with respect to the available choice options in farmer decision making and the practical constraints related to farming decisions. To ascertain the relevance of modeling different facets of farmer decision making, we developed an agent-based model of farmer decision making on crop choice, fertilizer and pesticide usage, using an existing economic farm optimization model. We then gradually modified the model to include practical agronomic constraints and assumptions reflecting bounded rationality, and assessed the explanatory power of the added model components. The assessments were based on comparisons...

  13. Multiparameter elastic full waveform inversion with facies-based constraints

    Science.gov (United States)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-06-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize FWI beyond improved acoustic imaging, like in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  14. Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints

    Science.gov (United States)

    Zhang, Zhen-dong; Alkhalifah, Tariq; Naeini, Ehsan Zabihi; Sun, Bingbing

    2018-03-01

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, like in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  15. Multiparameter Elastic Full Waveform Inversion with Facies-based Constraints

    KAUST Repository

    Zhang, Zhendong

    2018-03-20

    Full waveform inversion (FWI) incorporates all the data characteristics to estimate the parameters described by the assumed physics of the subsurface. However, current efforts to utilize full waveform inversion beyond improved acoustic imaging, like in reservoir delineation, face inherent challenges related to the limited resolution and the potential trade-off between the elastic model parameters. Some anisotropic parameters are insufficiently updated because of their minor contributions to the surface collected data. Adding rock physics constraints to the inversion helps mitigate such limited sensitivity, but current approaches to add such constraints are based on including them as a priori knowledge mostly valid around the well or as a global constraint for the whole area. Since similar rock formations inside the Earth admit consistent elastic properties and relative values of elasticity and anisotropy parameters (this enables us to define them as a seismic facies), utilizing such localized facies information in FWI can improve the resolution of inverted parameters. We propose a novel approach to use facies-based constraints in both isotropic and anisotropic elastic FWI. We invert for such facies using Bayesian theory and update them at each iteration of the inversion using both the inverted models and a priori information. We take the uncertainties of the estimated parameters (approximated by radiation patterns) into consideration and improve the quality of estimated facies maps. Four numerical examples corresponding to different acquisition, physical assumptions and model circumstances are used to verify the effectiveness of the proposed method.

  16. A biological-based model that links genomic instability, bystander effects, and adaptive response

    International Nuclear Information System (INIS)

    Scott, B.R.

    2004-01-01

    This paper links genomic instability, bystander effects, and adaptive response in mammalian cell communities via a novel biological-based, dose-response model called NEOTRANS3. The model is an extension of the NEOTRANS2 model that addressed stochastic effects (genomic instability, mutations, and neoplastic transformation) associated with brief exposure to low radiation doses. With both models, ionizing radiation produces DNA damage in cells that can be associated with varying degrees of genomic instability. Cells with persistent problematic instability (PPI) are mutants that arise via misrepair of DNA damage. Progeny of PPI cells also have PPI and can undergo spontaneous neoplastic transformation. Unlike NEOTRANS2, with NEOTRANS3 newly induced mutant PPI cells and their neoplastically transformed progeny can be suppressed via our previously introduced protective apoptosis-mediated (PAM) process, which can be activated by low linear energy transfer (LET) radiation. However, with NEOTRANS3 (which like NEOTRANS2 involves cross-talk between nongenomically compromised [e.g., nontransformed, nonmutants] and genomically compromised [e.g., mutants, transformants, etc.] cells), it is assumed that PAM is only activated over a relatively narrow, dose-rate-dependent interval (D_PAM, D_off), where D_PAM is a small stochastic activation threshold and D_off is the stochastic dose above which PAM does not occur. PAM cooperates with activated normal DNA repair and with activated normal apoptosis in guarding against genomic instability. Normal repair involves both error-free repair and misrepair components. Normal apoptosis and the error-free component of normal repair protect mammals by preventing the occurrence of mutant cells. PAM selectively removes mutant cells arising via the misrepair component of normal repair, selectively removes existing neoplastically transformed cells, and probably selectively removes other genomically compromised cells when it is activated.

  17. A protocol for generating a high-quality genome-scale metabolic reconstruction.

    Science.gov (United States)

    Thiele, Ines; Palsson, Bernhard Ø

    2010-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have been developed over the last 10 years. These reconstructions represent structured knowledge bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates a myriad of computational biological studies, including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published, and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage, which may limit their predictive potential and their use as knowledge bases. Here we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction, as well as the common trials and tribulations. This protocol therefore provides a helpful manual for all stages of the reconstruction process.

  18. Genome-Scale Reconstruction of the Human Astrocyte Metabolic Network

    OpenAIRE

    Martín-Jiménez, Cynthia A.; Salazar-Barreto, Diego; Barreto, George E.; González, Janneth

    2017-01-01

    Astrocytes are the most abundant cells of the central nervous system; they have a predominant role in maintaining brain metabolism. In this sense, abnormal metabolic states have been found in different neuropathological diseases. Determination of metabolic states of astrocytes is difficult to model using current experimental approaches given the high number of reactions and metabolites present. Thus, genome-scale metabolic networks derived from transcriptomic data can be used as a framework t...

  19. Elucidating the triplicated ancestral genome structure of radish based on chromosome-level comparison with the Brassica genomes.

    Science.gov (United States)

    Jeong, Young-Min; Kim, Namshin; Ahn, Byung Ohg; Oh, Mijin; Chung, Won-Hyong; Chung, Hee; Jeong, Seongmun; Lim, Ki-Byung; Hwang, Yoon-Jung; Kim, Goon-Bo; Baek, Seunghoon; Choi, Sang-Bong; Hyung, Dae-Jin; Lee, Seung-Won; Sohn, Seong-Han; Kwon, Soo-Jin; Jin, Mina; Seol, Young-Joo; Chae, Won Byoung; Choi, Keun Jin; Park, Beom-Seok; Yu, Hee-Ju; Mun, Jeong-Hwan

    2016-07-01

    This study presents a chromosome-scale draft genome sequence of radish that is assembled into nine chromosomal pseudomolecules. A comprehensive comparative genome analysis with the Brassica genomes provides genomic evidence on the evolution of the mesohexaploid radish genome. Radish (Raphanus sativus L.) is an agronomically important root vegetable crop, and its origin and phylogenetic position in the tribe Brassiceae are controversial. Here we present a comprehensive analysis of the radish genome based on the chromosome sequences of R. sativus cv. WK10039. The radish genome was sequenced and assembled into 426.2 Mb spanning >98 % of the gene space, of which 344.0 Mb were integrated into nine chromosome pseudomolecules. Approximately 36 % of the genome consisted of repetitive sequences, and 46,514 protein-coding genes were predicted and annotated. Comparative mapping of the tPCK-like ancestral genome revealed that the radish genome has intermediate characteristics between the Brassica A/C and B genomes in the triplicated segments, suggesting an internal origin from the genus Brassica. The evolutionary characteristics shared between radish and other Brassica species provided genomic evidence that the current form of nine chromosomes in radish was rearranged from the chromosomes of a hexaploid progenitor. Overall, this study provides a chromosome-scale draft genome sequence of radish as well as novel insight into the evolution of the mesohexaploid genomes in the tribe Brassiceae.

  20. The infinite sites model of genome evolution.

    Science.gov (United States)

    Ma, Jian; Ratan, Aakrosh; Raney, Brian J; Suh, Bernard B; Miller, Webb; Haussler, David

    2008-09-23

    We formalize the problem of recovering the evolutionary history of a set of genomes that are related to an unseen common ancestor genome by operations of speciation, deletion, insertion, duplication, and rearrangement of segments of bases. The problem is examined in the limit as the number of bases in each genome goes to infinity. In this limit, the chromosomes are represented by continuous circles or line segments. For such an infinite-sites model, we present a polynomial-time algorithm to find the most parsimonious evolutionary history of any set of related present-day genomes.

  1. Fully predictive simulation of real-scale cable tray fire based on small-scale laboratory experiments

    Energy Technology Data Exchange (ETDEWEB)

    Beji, Tarek; Merci, Bart [Ghent Univ. (Belgium). Dept. of Flow, Heat and Combustion Mechanics; Bonte, Frederick [Bel V, Brussels (Belgium)

    2015-12-15

    This paper presents a computational fluid dynamics (CFD)-based modelling strategy for real-scale cable tray fires. The challenge was to perform fully predictive simulations (that could be called 'blind' simulations) using solely information from laboratory-scale experiments, in addition to the geometrical arrangement of the cables. The results of the latter experiments were used (1) to construct the fuel molecule and the chemical reaction for combustion, and (2) to estimate the overall pyrolysis and burning behaviour. More particularly, the strategy regarding the second point consists of adopting a surface-based pyrolysis model. Since the burning behaviour of each cable could not be tracked individually (due to computational constraints), 'groups' of cables were modelled with an overall cable surface area equal to the actual value. The results obtained for one large-scale test (a stack of five horizontal trays) are quite encouraging, especially for the peak Heat Release Rate (HRR) that was predicted with a relative deviation of 3 %. The time to reach the peak is however overestimated by 4.7 min (i.e. 94 %). Also, the fire duration is overestimated by 5 min (i.e. 24 %). These discrepancies are mainly attributed to differences in the HRRPUA (heat release rate per unit area) profiles between the small-scale and large-scale. The latter was calculated by estimating the burning area of cables using video fire analysis (VFA).

  2. Constraint-based scheduling

    Science.gov (United States)

    Zweben, Monte

    1993-01-01

    The GERRY scheduling system developed by NASA Ames with assistance from the Lockheed Space Operations Company, and the Lockheed Artificial Intelligence Center, uses a method called constraint-based iterative repair. Using this technique, one encodes both hard rules and preference criteria into data structures called constraints. GERRY repeatedly attempts to improve schedules by seeking repairs for violated constraints. The system provides a general scheduling framework which is being tested on two NASA applications. The larger of the two is the Space Shuttle Ground Processing problem which entails the scheduling of all the inspection, repair, and maintenance tasks required to prepare the orbiter for flight. The other application involves power allocation for the NASA Ames wind tunnels. Here the system will be used to schedule wind tunnel tests with the goal of minimizing power costs. In this paper, we describe the GERRY system and its application to the Space Shuttle problem. We also speculate as to how the system would be used for manufacturing, transportation, and military problems.
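The constraint-based iterative repair technique described above can be sketched on a toy scheduling problem. The data structures and the two repair heuristics below (push a late successor, delay one clashing task) are simplified stand-ins for illustration only; GERRY's actual constraint encoding and preference criteria are not given in the record:

```python
def violations(starts, precedences, capacity, durations):
    """List violated constraints: precedence pairs scheduled out of order,
    and time points where resource usage exceeds the capacity."""
    v = []
    for a, b in precedences:  # task a must finish before task b starts
        if starts[a] + durations[a] > starts[b]:
            v.append(("prec", a, b))
    horizon = max(starts[t] + durations[t] for t in starts)
    for t in range(horizon):
        active = tuple(k for k in starts if starts[k] <= t < starts[k] + durations[k])
        if len(active) > capacity:
            v.append(("cap", t, active))
    return v

def iterative_repair(starts, precedences, capacity, durations, max_iters=100):
    """Iterative repair: repeatedly pick a violated constraint and apply a
    local repair move, until the schedule is conflict-free."""
    starts = dict(starts)
    for _ in range(max_iters):
        v = violations(starts, precedences, capacity, durations)
        if not v:
            break
        kind, *info = v[0]
        if kind == "prec":
            a, b = info
            starts[b] = starts[a] + durations[a]  # push the successor later
        else:
            t, active = info
            starts[active[-1]] = t + 1  # delay one of the clashing tasks
    return starts
```

Starting from an all-zero schedule of three unit-capacity tasks, the loop converges to a conflict-free serialization in a handful of repairs.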

  3. Spatial organization of the budding yeast genome in the cell nucleus and identification of specific chromatin interactions from multi-chromosome constrained chromatin model.

    Science.gov (United States)

    Gürsoy, Gamze; Xu, Yun; Liang, Jie

    2017-07-01

    Nuclear landmarks and biochemical factors play important roles in the organization of the yeast genome. The interaction patterns of budding yeast as measured from genome-wide 3C studies are largely recapitulated by model polymer genomes subject to landmark constraints. However, the origin of inter-chromosomal interactions, the specific roles of individual landmarks, and the roles of biochemical factors in yeast genome organization remain unclear. Here we describe a multi-chromosome constrained self-avoiding chromatin model (mC-SAC) to gain understanding of the budding yeast genome organization. With significantly improved sampling of genome structures, both intra- and inter-chromosomal interaction patterns from genome-wide 3C studies are accurately captured in our model at higher resolution than in previous studies. We show that nuclear confinement is a key determinant of the intra-chromosomal interactions, and that centromere tethering is responsible for the inter-chromosomal interactions. In addition, important genomic elements such as fragile sites and tRNA genes are found to be clustered spatially, largely due to centromere tethering. We uncovered previously unknown interactions that were not captured by genome-wide 3C studies, which are enriched with tRNA genes, RNAPIII and TFIIS binding. Moreover, we identified specific high-frequency genome-wide 3C interactions that are unaccounted for by polymer effects under landmark constraints. These interactions are enriched with important genes and likely play biological roles.

  4. Constraints, Trade-offs and the Currency of Fitness.

    Science.gov (United States)

    Acerenza, Luis

    2016-03-01

    Understanding evolutionary trajectories remains a difficult task because natural evolutionary processes are simultaneously affected by various types of constraints acting at the different levels of biological organization. Of particular importance are constraints where correlated changes occur in opposite directions, called trade-offs. Here we review and classify the main evolutionary constraints and trade-offs operating at all levels of the trait hierarchy. Special attention is given to life history trade-offs and the conflict between the survival and reproduction components of fitness. Cellular mechanisms underlying fitness trade-offs are described. At the metabolic level, a linear trade-off between growth and flux variability was found, employing bacterial genome-scale metabolic reconstructions. Its analysis indicates that flux variability can be considered the currency of fitness. This currency is used for fitness transfer between fitness components during adaptations. Finally, we discuss the constraints that limit the increase in the amount of fitness currency during evolution, suggesting that occupancy constraints are probably the main restrictions.

  5. A detailed model for simulation of catchment scale subsurface hydrologic processes

    Science.gov (United States)

    Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    A catchment scale numerical model is developed based on the three-dimensional transient Richards equation describing fluid flow in variably saturated porous media. The model is designed to take advantage of digital elevation data bases and of information extracted from these data bases by topographic analysis. The practical application of the model is demonstrated in simulations of a small subcatchment of the Konza Prairie reserve near Manhattan, Kansas. In a preliminary investigation of computational issues related to model resolution, we obtain satisfactory numerical results using large aspect ratios, suggesting that horizontal grid dimensions may not be unreasonably constrained by the typically much smaller vertical length scale of a catchment and by vertical discretization requirements. Additional tests are needed to examine the effects of numerical constraints and parameter heterogeneity in determining acceptable grid aspect ratios. In other simulations we attempt to match the observed streamflow response of the catchment, and we point out the small contribution of the streamflow component to the overall water balance of the catchment.

  6. Robust Model Predictive Control Using Linear Matrix Inequalities for the Treatment of Asymmetric Output Constraints

    Directory of Open Access Journals (Sweden)

    Mariana Santos Matos Cavalca

    2012-01-01

    Full Text Available One of the main advantages of predictive control approaches is the capability of dealing explicitly with constraints on the manipulated and output variables. However, if the predictive control formulation does not consider model uncertainties, then the constraint satisfaction may be compromised. A solution for this inconvenience is to use robust model predictive control (RMPC strategies based on linear matrix inequalities (LMIs. However, LMI-based RMPC formulations typically consider only symmetric constraints. This paper proposes a method based on pseudoreferences to treat asymmetric output constraints in integrating SISO systems. Such technique guarantees robust constraint satisfaction and convergence of the state to the desired equilibrium point. A case study using numerical simulation indicates that satisfactory results can be achieved.

  7. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-03-27

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time required to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
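The linkage between EMs and MCSs mentioned above has a well-known combinatorial form: a minimal cut set is a minimal hitting set of the elementary modes it is meant to disable. The brute-force sketch below illustrates that duality on a toy network; it is exponential in the number of reactions and only practical for tiny examples, unlike the AILP algorithm of the record:

```python
from itertools import combinations

def minimal_cut_sets(elementary_modes, reactions):
    """Minimal cut sets as minimal hitting sets of the elementary modes:
    a reaction set is a cut set if it intersects every EM, and minimal if
    no proper subset is also a cut set. Brute force over all subsets,
    smallest first, so minimality reduces to a subset check."""
    cut_sets = []
    for size in range(1, len(reactions) + 1):
        for combo in combinations(reactions, size):
            s = set(combo)
            if all(s & em for em in elementary_modes):   # hits every EM
                if not any(c <= s for c in cut_sets):    # keep only minimal sets
                    cut_sets.append(s)
    return cut_sets
```

For two EMs {r1, r2} and {r1, r3}, deleting r1 alone disables both, and {r2, r3} is the only other minimal choice.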

  8. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  9. Systematic construction of kinetic models from genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Natalie J Stanford

    Full Text Available The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, a stable dynamic, and realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments.

  10. Systematic Construction of Kinetic Models from Genome-Scale Metabolic Networks

    Science.gov (United States)

    Smallbone, Kieran; Klipp, Edda; Mendes, Pedro; Liebermeister, Wolfram

    2013-01-01

    The quantitative effects of environmental and genetic perturbations on metabolism can be studied in silico using kinetic models. We present a strategy for large-scale model construction based on a logical layering of data such as reaction fluxes, metabolite concentrations, and kinetic constants. The resulting models contain realistic standard rate laws and plausible parameters, adhere to the laws of thermodynamics, and reproduce a predefined steady state. These features have not been simultaneously achieved by previous workflows. We demonstrate the advantages and limitations of the workflow by translating the yeast consensus metabolic network into a kinetic model. Despite crudely selected data, the model shows realistic control behaviour, a stable dynamic, and realistic response to perturbations in extracellular glucose concentrations. The paper concludes by outlining how new data can continuously be fed into the workflow and how iterative model building can assist in directing experiments. PMID:24324546

  11. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
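The partitioning idea described above (split a heterogeneous repository into biologically coherent categories before inferring a network per category) can be sketched in a few lines. The data layout, function names, and the simple correlation-threshold network below are illustrative assumptions; the actual pipeline uses normalized Affymetrix data and more sophisticated construction methods:

```python
from collections import defaultdict
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partitioned_coexpression(samples, annotations, threshold=0.9):
    """Group samples by annotated tissue, then infer a simple
    co-expression network (thresholded |correlation|) per partition."""
    groups = defaultdict(list)
    for sample_id, tissue in annotations.items():
        groups[tissue].append(samples[sample_id])
    networks = {}
    for tissue, profiles in groups.items():
        genes = list(zip(*profiles))  # per-gene vectors across this partition
        edges = [(i, j)
                 for i in range(len(genes)) for j in range(i + 1, len(genes))
                 if abs(pearson(genes[i], genes[j])) >= threshold]
        networks[tissue] = edges
    return networks
```

Because correlations are computed only within a partition, genes whose co-regulation is tissue-specific are not washed out by unrelated experiments, which is the effect the abstract attributes to partitioning.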

  12. Comprehensive reconstruction and in silico analysis of Aspergillus niger genome-scale metabolic network model that accounts for 1210 ORFs.

    Science.gov (United States)

    Lu, Hongzhong; Cao, Weiqiang; Ouyang, Liming; Xia, Jianye; Huang, Mingzhi; Chu, Ju; Zhuang, Yingping; Zhang, Siliang; Noorman, Henk

    2017-03-01

    Aspergillus niger is one of the most important cell factories for industrial enzymes and organic acids production. A comprehensive genome-scale metabolic network model (GSMM) of high quality is crucial for efficient strain improvement and process optimization. The lack of accurate reaction equations and gene-protein-reaction associations (GPRs) in the current best model of A. niger, named GSMM iMA871, however, limits its application scope. To overcome these limitations, we updated the A. niger GSMM by combining the latest genome annotation and literature mining technology. Compared with iMA871, the number of reactions in iHL1210 was increased from 1,380 to 1,764, and the number of unique ORFs from 871 to 1,210. With the aid of our transcriptomics analysis, the existence of 63% of ORFs and 68% of reactions in iHL1210 can be verified when glucose is used as the only carbon source. Physiological data from chemostat cultivations, 13C-labeling experiments and molecular experiments from the published literature were further used to check the performance of iHL1210. The average correlation coefficients between the predicted fluxes and estimated fluxes from 13C-labeling data were sufficiently high (above 0.89), and the prediction of cell growth on most of the reported carbon and nitrogen sources was consistent. Using the updated genome-scale model, we evaluated gene essentiality on synthetic and yeast extract medium, as well as the effects of NADPH supply on glucoamylase production in A. niger. In summary, the new A. niger GSMM iHL1210 contains significant improvements with respect to metabolic coverage and prediction performance, which paves the way for systematic metabolic engineering of A. niger. Biotechnol. Bioeng. 2017;114: 685-695. © 2016 Wiley Periodicals, Inc.

  13. Constraint-based Attribute and Interval Planning

    Science.gov (United States)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

  14. Modeling and Simulation of Optimal Resource Management during the Diurnal Cycle in Emiliania huxleyi by Genome-Scale Reconstruction and an Extended Flux Balance Analysis Approach.

    Science.gov (United States)

    Knies, David; Wittmüß, Philipp; Appel, Sebastian; Sawodny, Oliver; Ederer, Michael; Feuer, Ronny

    2015-10-28

    The coccolithophorid unicellular alga Emiliania huxleyi is known to form large blooms, which have a strong effect on the marine carbon cycle. As a photosynthetic organism, it is subjected to a circadian rhythm due to the changing light conditions throughout the day. For a better understanding of the metabolic processes under these periodically-changing environmental conditions, a genome-scale model based on a genome reconstruction of the E. huxleyi strain CCMP 1516 was created. It comprises 410 reactions and 363 metabolites. Biomass composition is variable based on the differentiation into functional biomass components and storage metabolites. The model is analyzed with a flux balance analysis approach called diurnal flux balance analysis (diuFBA) that was designed for organisms with a circadian rhythm. It allows storage metabolites to accumulate or be consumed over the diurnal cycle, while keeping the structure of a classical FBA problem. A feature of this approach is that the production and consumption of storage metabolites is not defined externally via the biomass composition, but the result of optimal resource management adapted to the diurnally-changing environmental conditions. The model in combination with this approach is able to simulate the variable biomass composition during the diurnal cycle in proximity to literature data.
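The diuFBA structure described above, per-period flux balance with storage metabolites linking the periods, can be illustrated with a toy two-period linear program. The network, bounds, and numbers below are invented for illustration (this is not the E. huxleyi model), and SciPy's LP solver stands in for whatever solver the authors used:

```python
import numpy as np
from scipy.optimize import linprog

# Variables: [p_day, growth_day, stored, growth_night]
# Day balance:   p_day - growth_day - stored     = 0
# Night balance: stored - growth_night           = 0  (no photosynthesis at night)
A_eq = np.array([[1.0, -1.0, -1.0,  0.0],
                 [0.0,  0.0,  1.0, -1.0]])
b_eq = np.zeros(2)
bounds = [(0, 10),    # photosynthetic carbon fixation, limited by daylight
          (0, 6),     # growth capacity during the day
          (0, None),  # storage pool: free amount, only non-negative
          (0, 6)]     # growth capacity during the night
# Maximize total growth over the cycle (linprog minimizes, hence the signs)
res = linprog(c=[0.0, -1.0, 0.0, -1.0], A_eq=A_eq, b_eq=b_eq, bounds=bounds)
total_growth = res.x[1] + res.x[3]
```

The key diuFBA feature shows up in the optimum: how much carbon is stored for the night is not prescribed, it falls out of the optimization, and all day-fixed carbon (10 units here) ends up converted to growth across the two periods.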

  15. Modeling and Simulation of Optimal Resource Management during the Diurnal Cycle in Emiliania huxleyi by Genome-Scale Reconstruction and an Extended Flux Balance Analysis Approach

    Directory of Open Access Journals (Sweden)

    David Knies

    2015-10-01

    Full Text Available The coccolithophorid unicellular alga Emiliania huxleyi is known to form large blooms, which have a strong effect on the marine carbon cycle. As a photosynthetic organism, it is subject to a circadian rhythm due to the changing light conditions throughout the day. For a better understanding of the metabolic processes under these periodically changing environmental conditions, a genome-scale model based on a genome reconstruction of the E. huxleyi strain CCMP 1516 was created. It comprises 410 reactions and 363 metabolites. Biomass composition is variable, based on the differentiation into functional biomass components and storage metabolites. The model is analyzed with a flux balance analysis approach called diurnal flux balance analysis (diuFBA) that was designed for organisms with a circadian rhythm. It allows storage metabolites to accumulate or be consumed over the diurnal cycle, while keeping the structure of a classical FBA problem. A feature of this approach is that the production and consumption of storage metabolites is not defined externally via the biomass composition, but is the result of optimal resource management adapted to the diurnally changing environmental conditions. The model in combination with this approach is able to simulate the variable biomass composition during the diurnal cycle in close agreement with literature data.

  16. q-Virasoro constraints in matrix models

    Energy Technology Data Exchange (ETDEWEB)

    Nedelin, Anton [Dipartimento di Fisica, Università di Milano-Bicocca and INFN, sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden); Zabzine, Maxim [Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden)

    2017-03-20

    The Virasoro constraints play an important role in the study of matrix models and in understanding the relation between matrix models and CFTs. Recently, localization calculations in supersymmetric gauge theories have produced new families of matrix models about which we have very limited knowledge. We concentrate on the elliptic generalization of the hermitian matrix model, which corresponds to the calculation of the partition function on S{sup 3}×S{sup 1} for the vector multiplet. We derive the q-Virasoro constraints for this matrix model. We also observe some interesting algebraic properties of the q-Virasoro algebra.

  17. Building a semantic web-based metadata repository for facilitating detailed clinical modeling in cancer genome studies.

    Science.gov (United States)

    Sharma, Deepak K; Solbrig, Harold R; Tao, Cui; Weng, Chunhua; Chute, Christopher G; Jiang, Guoqian

    2017-06-05

    Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of the study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are useful for building detailed clinical models. Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.

  18. Model quality assessment using distance constraints from alignments

    DEFF Research Database (Denmark)

    Paluszewski, Martin; Karplus, Kevin

    2008-01-01

    …that model which is closest to the true structure. In this article, we present a new approach for addressing the MQA problem. It is based on distance constraints extracted from alignments to templates of known structure, and is implemented in the Undertaker program for protein structure prediction. One novel feature is that we extract noncontact constraints as well as contact constraints. We describe how the distance constraint extraction is done and we show how they can be used to address the MQA problem. We have compared our method on CASP7 targets and the results show that our method is at least comparable with the best MQA methods that were assessed at CASP7. We also propose a new evaluation measure, Kendall's tau, that is more interpretable than conventional measures used for evaluating MQA methods (Pearson's r and Spearman's rho). We show clear examples where Kendall's tau agrees much more with our intuition…
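
    Kendall's tau, the evaluation measure proposed above, is simple to compute directly: it counts concordant versus discordant pairs between two rankings. A minimal pure-Python sketch (the tau-a variant, with no tie correction; real analyses would typically call scipy.stats.kendalltau):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Tau-a: (concordant - discordant) pairs over all pairs, no tie handling."""
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])  # same sign in both rankings?
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

print(kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]))   # identical rankings -> 1.0
print(kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]))   # reversed rankings  -> -1.0
```

    The resulting value has a direct interpretation as the excess probability that a randomly chosen pair is ordered the same way by both rankings, which is part of what makes it easier to interpret than Pearson's r.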

  19. Genomic prediction based on data from three layer lines using non-linear regression models.

    Science.gov (United States)

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional
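
    The non-linear RBF kernel models mentioned above can be sketched as kernel ridge regression: fit coefficients alpha by solving (K + lam*I) alpha = y for a Gaussian kernel matrix K, then predict as a kernel-weighted sum over training points. The toy 1-D "genotypes", gamma and lam below are invented; genomic applications build K from thousands of SNP markers.

```python
import math

def rbf_kernel(a, b, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def solve(A, y):
    """Gaussian elimination with partial pivoting for a small system A x = y."""
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X = [[0.0], [1.0], [2.0], [3.0]]   # hypothetical 1-D "genotypes"
y = [0.0, 1.0, 0.5, 2.0]           # hypothetical phenotypes
lam = 1e-3                         # ridge penalty
K = [[rbf_kernel(a, b) for b in X] for a in X]
alpha = solve([[K[i][j] + (lam if i == j else 0.0) for j in range(4)]
               for i in range(4)], y)

def predict(x):
    return sum(a * rbf_kernel(x, xi) for a, xi in zip(alpha, X))
```

    The ridge penalty lam plays the same regularizing role as the genetic-variance ratio in GBLUP; with a linear kernel instead of the RBF kernel, this construction reduces to a GBLUP-like linear model.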

  20. A Genomics-Based Model for Prediction of Severe Bioprosthetic Mitral Valve Calcification.

    Science.gov (United States)

    Ponasenko, Anastasia V; Khutornaya, Maria V; Kutikhin, Anton G; Rutkovskaya, Natalia V; Tsepokina, Anna V; Kondyukova, Natalia V; Yuzhalin, Arseniy E; Barbarash, Leonid S

    2016-08-31

    Severe bioprosthetic mitral valve calcification is a significant problem in cardiovascular surgery. Unfortunately, clinical markers did not demonstrate efficacy in prediction of severe bioprosthetic mitral valve calcification. Here, we examined whether a genomics-based approach is efficient in predicting the risk of severe bioprosthetic mitral valve calcification. A total of 124 consecutive Russian patients who underwent mitral valve replacement surgery were recruited. We investigated the associations of the inherited variation in innate immunity, lipid metabolism and calcium metabolism genes with severe bioprosthetic mitral valve calcification. Genotyping was conducted utilizing the TaqMan assay. Eight gene polymorphisms were significantly associated with severe bioprosthetic mitral valve calcification and were therefore included into stepwise logistic regression which identified male gender, the T/T genotype of the rs3775073 polymorphism within the TLR6 gene, the C/T genotype of the rs2229238 polymorphism within the IL6R gene, and the A/A genotype of the rs10455872 polymorphism within the LPA gene as independent predictors of severe bioprosthetic mitral valve calcification. The developed genomics-based model had fair predictive value with area under the receiver operating characteristic (ROC) curve of 0.73. In conclusion, our genomics-based approach is efficient for the prediction of severe bioprosthetic mitral valve calcification.
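
    The reported predictive value (area under the ROC curve of 0.73) can be computed without plotting a curve: the AUC equals the probability that a randomly chosen case is scored above a randomly chosen control (the Mann-Whitney statistic). A sketch with made-up risk scores, not the study's data:

```python
# Rank-comparison AUC: ties between a case and a control count half.

def roc_auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]   # hypothetical predicted risks
print(roc_auc(labels, scores))             # 8/9, roughly 0.89
```

    An AUC of 0.5 corresponds to random scoring and 1.0 to perfect separation, which is why 0.73 is characterized above as "fair" predictive value.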

  1. Genome-scale reconstruction of the metabolic network in Yersinia pestis, strain 91001

    Energy Technology Data Exchange (ETDEWEB)

    Navid, A; Almaas, E

    2009-01-13

    The gram-negative bacterium Yersinia pestis, the aetiological agent of bubonic plague, is one of the deadliest pathogens known to man. Despite its historical reputation, plague is a modern disease which annually afflicts thousands of people. Public safety considerations greatly limit clinical experimentation on this organism and thus development of theoretical tools to analyze the capabilities of this pathogen is of utmost importance. Here, we report the first genome-scale metabolic model of Yersinia pestis biovar Mediaevalis, based both on its recently annotated genome and on physiological and biochemical data from the literature. Our model demonstrates excellent agreement with Y. pestis' known metabolic needs and capabilities. Since Y. pestis is a meiotrophic organism, we have developed CryptFind, a systematic approach to identify all candidate cryptic genes responsible for known and theoretical meiotrophic phenomena. In addition to uncovering every known cryptic gene for Y. pestis, our analysis of the rhamnose fermentation pathway suggests that betB is the responsible cryptic gene. Despite all of our medical advances, we still do not have a vaccine for bubonic plague. Recent discoveries of antibiotic resistant strains of Yersinia pestis coupled with the threat of plague being used as a bioterrorism weapon compel us to develop new tools for studying the physiology of this deadly pathogen. Using our theoretical model, we can study the cell's phenotypic behavior under different circumstances and identify metabolic weaknesses which may be harnessed for the development of therapeutics. Additionally, the automatic identification of cryptic genes expands the usage of genomic data for pharmaceutical purposes.

  2. New Constraints on Dark Matter Effective Theories from Standard Model Loops

    CERN Document Server

    Crivellin, Andreas; Procura, Massimiliano

    2014-01-01

    We consider an effective field theory for a gauge singlet Dirac dark matter (DM) particle interacting with the Standard Model (SM) fields via effective operators suppressed by the scale $\Lambda \gtrsim 1$ TeV. We perform a systematic analysis of the leading loop contributions to spin-independent (SI) DM--nucleon scattering using renormalization group evolution between $\Lambda$ and the low-energy scale probed by direct detection experiments. We find that electroweak interactions induce operator mixings such that operators that are naively velocity-suppressed and spin-dependent can actually contribute to SI scattering. This allows us to put novel constraints on Wilson coefficients that were so far poorly bounded by direct detection. Constraints from current searches are comparable to LHC bounds, and will significantly improve in the near future. Interestingly, the loop contribution we find is maximally isospin violating even if the underlying theory is isospin conserving.

  3. Birth of scale-free molecular networks and the number of distinct DNA and protein domains per genome.

    Science.gov (United States)

    Rzhetsky, A; Gomez, S M

    2001-10-01

    Current growth in the field of genomics has provided a number of exciting approaches to the modeling of evolutionary mechanisms within the genome. Separately, dynamical and statistical analyses of networks such as the World Wide Web and the social interactions existing between humans have shown that these networks can exhibit common fractal properties, including the property of being scale-free. This work attempts to bridge these two fields and demonstrate that the fractal properties of molecular networks are linked to the fractal properties of their underlying genomes. We suggest a stochastic model capable of describing the evolutionary growth of metabolic or signal-transduction networks. This model generates networks that share important statistical properties (so-called scale-free behavior) with real molecular networks. In particular, the frequency of vertices connected to exactly k other vertices follows a power-law distribution. The shape of this distribution remains invariant to changes in network scale: a small subgraph has the same distribution as the complete graph from which it is derived. Furthermore, the model correctly predicts that the frequencies of distinct DNA and protein domains also follow a power-law distribution. Finally, the model leads to a simple equation linking the total number of different DNA and protein domains in a genome with both the total number of genes and the overall network topology. MATLAB (MathWorks, Inc.) programs described in this manuscript are available on request from the authors. ar345@columbia.edu.
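
    A generic preferential-attachment growth rule of the kind the abstract invokes is enough to produce heavy-tailed, power-law-like degree distributions. The sketch below is a Barabási-Albert-style toy (each new node attaches one edge), not the authors' specific stochastic model of genome evolution:

```python
import random

def grow_network(n_nodes, seed=42):
    """Grow a network where each new node attaches to an existing node
    chosen with probability proportional to its degree."""
    random.seed(seed)
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        # A uniform endpoint of a uniform edge is sampled with
        # probability deg(node) / (2 * |E|), i.e. proportional to degree.
        u, v = random.choice(edges)
        edges.append((new, random.choice([u, v])))
    return edges

edges = grow_network(2000)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
```

    Plotting the frequency of nodes with degree k against k on log-log axes for such a network yields an approximately straight line, the signature of the power-law behavior discussed above.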

  4. Restricted DCJ-indel model: sorting linear genomes with DCJ and indels

    Science.gov (United States)

    2012-01-01

    Background The double-cut-and-join (DCJ) is a model that is able to efficiently sort a genome into another, generalizing the typical mutations (inversions, fusions, fissions, translocations) to which genomes are subject, but allowing the existence of circular chromosomes at the intermediate steps. In the general model many circular chromosomes can coexist in some intermediate step. However, when the compared genomes are linear, it is more plausible to use the so-called restricted DCJ model, in which a circular chromosome is reincorporated immediately after its creation. These two consecutive DCJ operations, which create and reincorporate a circular chromosome, mimic a transposition or a block-interchange. When the compared genomes have the same content, it is known that the genomic distance for the restricted DCJ model is the same as the distance for the general model. If the genomes have unequal contents, in addition to DCJ it is necessary to consider indels, which are insertions and deletions of DNA segments. Linear-time algorithms were proposed to compute the distance and to find a sorting scenario in a general, unrestricted DCJ-indel model that considers DCJ and indels. Results In the present work we consider the restricted DCJ-indel model for sorting linear genomes with unequal contents. We allow DCJ operations and indels with the following constraint: if a circular chromosome is created by a DCJ, it has to be reincorporated in the next step (no other DCJ or indel can be applied between the creation and the reincorporation of a circular chromosome). We then develop a sorting algorithm and give a tight upper bound for the restricted DCJ-indel distance. Conclusions We have given a tight upper bound for the restricted DCJ-indel distance. The question whether this bound can be reduced so that both the general and the restricted DCJ-indel distances are equal remains open. PMID:23281630
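
    A simpler relative of the DCJ distance makes the notion of sorting concrete: count the adjacencies of one gene order that are absent from the other (breakpoints), and observe that a single inversion, one restricted DCJ on a linear chromosome, can remove two of them. The unsigned genes and example orders below are purely illustrative:

```python
# Breakpoint counting between two single-chromosome linear genomes.
# Genes are unsigned integers; 0 marks the chromosome ends (caps).

def adjacencies(order):
    capped = [0] + list(order) + [0]
    return {frozenset(p) for p in zip(capped, capped[1:])}

def breakpoints(a, b):
    """Adjacencies of `a` that are not adjacencies of `b`."""
    return len(adjacencies(a) - adjacencies(b))

target = [1, 2, 3, 4, 5]
genome = [1, 4, 3, 2, 5]            # segment 2..4 inverted
print(breakpoints(genome, target))  # -> 2

# One inversion (a restricted DCJ on a linear chromosome) removes both:
sorted_genome = genome[:1] + genome[1:4][::-1] + genome[4:]
print(breakpoints(sorted_genome, target))  # -> 0
```

    Full DCJ-indel sorting additionally tracks signed gene extremities, circular intermediates and content differences, which is where the upper-bound analysis in the article comes in.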

  5. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. Thompson III; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  6. How robust are inflation model and dark matter constraints from cosmological data?

    International Nuclear Information System (INIS)

    Hamann, J.; Hannestad, S.; Sloth, M.S.; Wong, Y.Y.Y.

    2006-11-01

    High-precision data from observation of the cosmic microwave background and the large scale structure of the universe provide very tight constraints on the effective parameters that describe cosmological inflation. Indeed, within a constrained class of ΛCDM models, the simple λφ{sup 4} chaotic inflation model already appears to be ruled out by cosmological data. In this paper, we compute constraints on inflationary parameters within a more general framework that includes other physically motivated parameters such as a nonzero neutrino mass. We find that a strong degeneracy between the tensor-to-scalar ratio r and the neutrino mass prevents λφ{sup 4} from being excluded by present data. Reversing the argument, if λφ{sup 4} is the correct model of inflation, it predicts a sum of neutrino masses of 0.3-0.5 eV, a range compatible with present experimental limits and within the reach of the next generation of neutrino mass measurements. We also discuss the associated constraints on the dark matter density, the dark energy equation of state, and spatial curvature, and show that the allowed regions are significantly altered. Importantly, we find an allowed range of 0.094 < Ω{sub c}h{sup 2} < 0.136 for the dark matter density, a factor of two larger than that reported in previous studies. This expanded parameter space may have implications for constraints on SUSY dark matter models. (orig.)

  7. New Genome Similarity Measures based on Conserved Gene Adjacencies.

    Science.gov (United States)

    Doerr, Daniel; Kowada, Luis Antonio B; Araujo, Eloi; Deshpande, Shachi; Dantas, Simone; Moret, Bernard M E; Stoye, Jens

    2017-06-01

    Many important questions in molecular biology, evolution, and biomedicine can be addressed by comparative genomic approaches. One of the basic tasks when comparing genomes is the definition of measures of similarity (or dissimilarity) between two genomes, for example, to elucidate the phylogenetic relationships between species. The power of different genome comparison methods varies with the underlying formal model of a genome. The simplest models impose the strong restriction that each genome under study must contain the same genes, each in exactly one copy. More realistic models allow several copies of a gene in a genome. One speaks of gene families, and comparative genomic methods that allow this kind of input are called gene family-based. The most powerful, but also most complex, models avoid this preprocessing of the input data and instead integrate the family assignment within the comparative analysis. Such methods are called gene family-free. In this article, we study an intermediate approach between family-based and family-free genomic similarity measures. Introducing this simpler model, called gene connections, we focus on the combinatorial aspects of gene family-free genome comparison. While in most cases the computational costs are the same as in the general family-free case, we also find an instance where the gene connections model has lower complexity. Within the gene connections model, we define three variants of genomic similarity measures that have different expression powers. We give polynomial-time algorithms for two of them, while we show NP-hardness for the third, most powerful one. We also generalize the measures and algorithms to make them more robust against recent local disruptions in gene order. Our theoretical findings are supported by experimental results, demonstrating the applicability and performance of our newly defined similarity measures.
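
    In the simplest single-copy setting, a conserved-adjacency similarity reduces to the fraction of gene adjacencies shared by two genomes. The gene-connections measures in the article generalize this to multi-copy families and signed genes; the sketch below, with invented gene orders, covers only the simplest case:

```python
# Fraction of gene adjacencies shared by two (possibly multi-chromosome)
# genomes with single-copy, unsigned genes.

def adjacency_set(chromosomes):
    adj = set()
    for chrom in chromosomes:
        adj.update(frozenset(p) for p in zip(chrom, chrom[1:]))
    return adj

def adjacency_similarity(g1, g2):
    a1, a2 = adjacency_set(g1), adjacency_set(g2)
    return len(a1 & a2) / max(len(a1), len(a2))

genome_a = [["a", "b", "c", "d", "e"]]          # one chromosome
genome_b = [["a", "b"], ["c", "d", "e"]]        # a fission of the same order
print(adjacency_similarity(genome_a, genome_b))  # -> 0.75
```

    The fission breaks one of the four adjacencies of genome_a, so three of four are conserved. Making such a measure robust to recent local disruptions, as the article proposes, would relax exact adjacency to near-adjacency within a small window.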

  8. Dark matter constraints in the minimal and nonminimal supersymmetric standard model

    International Nuclear Information System (INIS)

    Stephan, A.

    1998-01-01

    We determine the allowed parameter space and the particle spectra of the minimal SUSY standard model (MSSM) and nonminimal SUSY standard model (NMSSM), imposing correct electroweak gauge symmetry breaking and recent experimental constraints. The parameters of the models are evolved with the SUSY renormalization group equations assuming universality at the grand unified scale. Applying the new unbounded-from-below constraints, we can exclude singlinos as the lightest SUSY particle and light scalar and pseudoscalar Higgs singlets of the NMSSM. This exclusion removes the experimental possibility of distinguishing between the MSSM and NMSSM via the recently proposed search for an additional cascade produced in the decay of the B-ino into the LSP singlino. Furthermore, the effects of the dark matter condition for the MSSM and NMSSM are investigated and the differences concerning the parameter space, the SUSY particle, and Higgs sector are discussed. © 1998 The American Physical Society

  9. CBDS: Constraint-based diagnostic system for malfunction identification in the nuclear power plant

    International Nuclear Information System (INIS)

    Ha, J.

    1992-01-01

    Traditional rule-based diagnostic expert systems use the experience of experts in the form of rules that associate symptoms with underlying faults. A commonly recognized failing of such systems is their narrow range of expertise and their inability to recognize problems outside this range. A model-based diagnostic system for isolating malfunctioning components, CBDS (the Constraint-Based Diagnostic System), has been developed. Since the intended behavior of a device is more predictable than unintended behaviors (faults), a model-based system using the intended behavior has the potential to diagnose unexpected malfunctions by treating a fault as "anything other than the intended behavior." As its knowledge base, the CBDS generates and decomposes a constraint network based on structure and behavior models, which are represented symbolically as algebraic equations. Behaviors of generic components are organized in a component model library. Once the library is available, actual domain knowledge can be represented by declaring component types and their connections. To capture various kinds of plant knowledge, a mixed model was developed which allows the use of different parameter types in one equation by defining various operators. The CBDS uses the general idea of model-based diagnosis. It detects a discrepancy between observation and prediction using constraint propagation, which carries and accumulates the assumptions made when parameter values are deduced. When measured plant parameters are asserted into the constraint network and propagated through it, a discrepancy will be detected if any component is malfunctioning. The CBDS was tested on the Recirculation Flow Control System of a BWR and has been shown to be able to diagnose unexpected events.
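
    The discrepancy-detection idea can be sketched in miniature: propagate measured values through constraints describing intended component behavior and flag any variable whose prediction conflicts with an observation. The components, behaviors and tolerance below are invented; the real CBDS works on symbolic algebraic constraint networks with mixed parameter types.

```python
# Minimal constraint propagation with discrepancy detection.

def propagate(constraints, values, tol=1e-6):
    """Each constraint computes one variable from others. Returns the set of
    variables whose predicted value conflicts with a measured value."""
    conflicts = set()
    changed = True
    while changed:
        changed = False
        for out_var, inputs, fn in constraints:
            if all(v in values for v in inputs):
                predicted = fn(*(values[v] for v in inputs))
                if out_var in values:
                    if abs(values[out_var] - predicted) > tol:
                        conflicts.add(out_var)
                else:
                    values[out_var] = predicted
                    changed = True
    return conflicts

# Intended behavior: a pump doubles inlet flow; a splitter halves pump output.
constraints = [
    ("pump_out", ["inlet"], lambda x: 2 * x),
    ("branch", ["pump_out"], lambda x: x / 2),
]
healthy = propagate(constraints, {"inlet": 3.0, "branch": 3.0})  # no conflict
faulty = propagate(constraints, {"inlet": 3.0, "branch": 2.0})   # discrepancy
```

    A full diagnosis engine would additionally record which component models each deduction depends on, so a detected discrepancy can be traced back to candidate malfunctioning components rather than just to a conflicting variable.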

  10. The general behavior of NLO unintegrated parton distributions based on the single-scale evolution and the angular ordering constraint

    International Nuclear Information System (INIS)

    Hosseinkhani, H.; Modarres, M.

    2011-01-01

    To overcome the complexity of the generalized two-hard-scale (k{sub t}, μ) evolution equations, known as the Ciafaloni, Catani, Fiorani and Marchesini (CCFM) evolution equations, and to calculate the unintegrated parton distribution functions (UPDF), Kimber, Martin and Ryskin (KMR) proposed a procedure based on (i) the inclusion of the single scale (μ) only at the last step of the evolution and (ii) the angular ordering constraint (AOC) on the DGLAP terms (the DGLAP collinear approximation), to bring the second scale, k{sub t}, into the UPDF evolution equations. In this work we use the MSTW2008 (Martin et al.) parton distribution functions (PDF) and calculate the UPDF for various values of x (the longitudinal fraction of the parton momentum), μ (the probe scale) and k{sub t} (the parton transverse momentum) to see the general behavior of the three-dimensional UPDF at the NLO level up to the LHC working energy scales (μ{sup 2}). It is shown that there exist pronounced peaks in the three-dimensional UPDF (f{sub a}(x,k{sub t})) with respect to the two variables x and k{sub t} at various energies (μ). These peaks get larger and move to larger values of k{sub t} as the energy (μ) is increased. We hope these peaks can be detected in the LHC experiments at CERN and other laboratories in the less exclusive processes.

  11. Application of fracture toughness scaling models to the ductile-to-brittle transition

    International Nuclear Information System (INIS)

    Link, R.E.; Joyce, J.A.

    1996-01-01

    An experimental investigation of fracture toughness in the ductile-brittle transition range was conducted. A large number of ASTM A533, Grade B steel, bend and tension specimens with varying crack lengths were tested throughout the transition region. Cleavage fracture toughness scaling models were utilized to correct the data for the loss of constraint in short-crack specimens and tension geometries. The toughness scaling models were effective in reducing the scatter in the data, but tended to overcorrect the results for the short-crack bend specimens. A proposed ASTM Test Practice for Fracture Toughness in the Transition Range, which employs a master curve concept, was applied to the results. The proposed master curve overpredicted the fracture toughness in the mid-transition, and a modified master curve was developed that more accurately modeled the transition behavior of the material. Finally, the modified master curve and the fracture toughness scaling models were combined to predict the as-measured fracture toughness of the short-crack bend and the tension specimens. It was shown that when the scaling models overcorrect the data for loss of constraint, they can also lead to non-conservative estimates of the increase in toughness for low-constraint geometries.
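
    The master curve referenced above is usually written, per ASTM E1921, as K_Jc(med) = 30 + 70*exp[0.019*(T - T0)] in MPa*sqrt(m) with temperatures in deg C, together with a (B/B_1T)^(1/4) thickness adjustment toward the 1T reference size. A sketch with a hypothetical T0 (the formulas are the standard ones, but the numbers are not the article's data):

```python
import math

def master_curve_median(T, T0):
    """ASTM E1921 median 1T fracture toughness (MPa*sqrt(m)) at temperature T
    (deg C) for reference temperature T0."""
    return 30.0 + 70.0 * math.exp(0.019 * (T - T0))

def size_adjust_to_1T(K, B_mm, B_1T_mm=25.4):
    """Adjust a toughness measured on thickness B to the 1T equivalent."""
    return 20.0 + (K - 20.0) * (B_mm / B_1T_mm) ** 0.25

T0 = -60.0                               # hypothetical reference temperature
print(master_curve_median(T0, T0))       # -> 100.0 by construction
print(master_curve_median(T0 + 50, T0))  # toughness rises through transition
```

    The size adjustment shrinks toughness values measured on thin specimens toward the 1T reference, which is exactly the correction that breaks down when constraint loss, rather than statistical crack-front sampling, inflates the measured toughness.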

  12. Analysing human genomes at different scales

    DEFF Research Database (Denmark)

    Liu, Siyang

    The thriving of the Next-Generation Sequencing (NGS) technologies in the past decade has dramatically revolutionized the field of human genetics. We are experiencing a wave of several large-scale whole genome sequencing studies of humans in the world. Those studies vary greatly regarding cohort… will be reflected by the analysis of real data. This thesis covers studies in two human genome sequencing projects that distinctly differ in terms of studied population, sample size and sequencing depth. In the first project, we sequenced 150 Danish individuals from 50 trio families to 78x coverage. … The sophisticated experimental design enables high-quality de novo assembly of the genomes and provides a good opportunity for mapping the structural variations in the human population. We developed the AsmVar approach to discover, genotype and characterize the structural variations from the assemblies. Our…

  13. Genome-scale reconstruction of metabolic networks of Lactobacillus casei ATCC 334 and 12A.

    Directory of Open Access Journals (Sweden)

    Elena Vinay-Lara

    Full Text Available Lactobacillus casei strains are widely used in industry and the utility of this organism in these industrial applications is strain dependent. Hence, tools capable of predicting strain-specific phenotypes would have utility in the selection of strains for specific industrial processes. Genome-scale metabolic models can be utilized to better understand genotype-phenotype relationships and to compare different organisms. To assist in the selection and development of strains with enhanced industrial utility, genome-scale models for L. casei ATCC 334, a well-characterized strain, and strain 12A, a corn silage isolate, were constructed. Draft models were generated from RAST genome annotations using the Model SEED database and refined by evaluating ATP-generating cycles, mass and charge balances of reactions, and growth phenotypes. After the validation process was finished, we compared the metabolic networks of these two strains to identify metabolic, genetic and ortholog differences that may lead to different phenotypic behaviors. We conclude that the metabolic capabilities of the two networks are highly similar. The L. casei ATCC 334 model accounts for 1,040 reactions, 959 metabolites and 548 genes, while the L. casei 12A model accounts for 1,076 reactions, 979 metabolites and 640 genes. The developed L. casei ATCC 334 and 12A metabolic models will enable better understanding of the physiology of these organisms and be valuable tools in the development and selection of strains with enhanced utility in a variety of industrial applications.

  14. Consistent constraints on the Standard Model Effective Field Theory

    International Nuclear Information System (INIS)

    Berthier, Laure; Trott, Michael

    2016-01-01

    We develop the global constraint picture in the (linear) effective field theory generalisation of the Standard Model, incorporating data from detectors that operated at PEP, PETRA, TRISTAN, SpS, Tevatron, SLAC, LEP I and LEP II, as well as low-energy precision data. We fit one hundred and three observables. We develop a theory error metric for this effective field theory, which is required when constraints on parameters at leading order in the power counting are to be pushed to the percent level or beyond, unless the cut-off scale is assumed to be large, Λ ≳ 3 TeV. We more consistently incorporate theoretical errors in this work, avoiding this assumption, and as a direct consequence bounds on some leading parameters are relaxed. We show how an S,T analysis is modified by the theory errors we include, as an illustrative example.

  15. Small Scale Yielding Correction of Constraint Loss in Small Sized Fracture Toughness Test Specimens

    International Nuclear Information System (INIS)

    Kim, Maan Won; Kim, Min Chul; Lee, Bong Sang; Hong, Jun Hwa

    2005-01-01

    Fracture toughness data in the ductile-brittle transition region of ferritic steels show scatter produced by local sampling effects and a specimen geometry dependence which results from relaxation in crack-tip constraint. ASTM E1921 provides a standard test method to define the median toughness temperature curve, the so-called Master Curve, for the material corresponding to a 1T crack front length, and also defines a reference temperature, T{sub 0}, at which the median toughness value is 100 MPa√m for a 1T specimen. The ASTM E1921 procedures assume that high-constraint, small-scale yielding (SSY) conditions prevail at fracture along the crack front. Violation of the SSY assumption occurs most often during tests of smaller specimens. Constraint loss in such cases leads to higher toughness values and thus lower T{sub 0} values. When applied to a structure with a low-constraint geometry, the standard fracture toughness estimates may be strongly over-conservative. Many efforts have been made to adjust for the constraint effect. In this work, we applied a small-scale yielding correction (SSYC) to adjust for the constraint loss of 1/3PCVN and PCVN specimens, which are smaller than the 1T specimen, in fracture toughness Master Curve testing.

  16. Marker-based estimation of genetic parameters in genomics.

    Directory of Open Access Journals (Sweden)

    Zhiqiu Hu

    Full Text Available Linear mixed model (LMM) analysis has recently been used extensively for estimating additive genetic variances and narrow-sense heritability in many genomic studies. While the LMM analysis is computationally less intensive than the Bayesian algorithms, it remains infeasible for large-scale genomic data sets. In this paper, we advocate the use of a statistical procedure known as symmetric differences squared (SDS), as it may serve as a viable alternative when the LMM methods have difficulty or fail to work with large datasets. The SDS procedure is a general and computationally simple method based only on least squares regression analysis. We carry out computer simulations and empirical analyses to compare the SDS procedure with two commonly used LMM-based procedures. Our results show that the SDS method is not as good as the LMM methods for small data sets, but it becomes progressively better and can match the precision of the LMM methods for data sets with large sample sizes. Its major advantage is that it continues to work, with increasing precision, as samples grow larger, whereas the commonly used LMM methods become computationally intractable under typical current computing capacity. Thus, these results suggest that the SDS method can serve as a viable alternative, particularly when analyzing 'big' genomic data sets.

  17. A probabilistic model to predict clinical phenotypic traits from genome sequencing.

    Science.gov (United States)

    Chen, Yun-Ching; Douville, Christopher; Wang, Cheng; Niknafs, Noushin; Yeo, Grace; Beleva-Guthrie, Violeta; Carter, Hannah; Stenson, Peter D; Cooper, David N; Li, Biao; Mooney, Sean; Karchin, Rachel

    2014-09-01

    Genetic screening is becoming possible on an unprecedented scale. However, its utility remains controversial. Although most variant genotypes cannot be easily interpreted, many individuals nevertheless attempt to interpret their genetic information. Initiatives such as the Personal Genome Project (PGP) and Illumina's Understand Your Genome are sequencing thousands of adults, collecting phenotypic information and developing computational pipelines to identify the most important variant genotypes harbored by each individual. These pipelines consider database and allele frequency annotations and bioinformatics classifications. We propose that the next step will be to integrate these different sources of information to estimate the probability that a given individual has specific phenotypes of clinical interest. To this end, we have designed a Bayesian probabilistic model to predict the probability of dichotomous phenotypes. When applied to a cohort from PGP, predictions of Gilbert syndrome, Graves' disease, non-Hodgkin lymphoma, and various blood groups were accurate, as individuals manifesting the phenotype in question exhibited the highest, or among the highest, predicted probabilities. Thirty-eight PGP phenotypes (26%) were predicted with area under the ROC curve (AUC) > 0.7, and 23 (15.8%) of these were statistically significant, based on permutation tests. Moreover, in a Critical Assessment of Genome Interpretation (CAGI) blinded prediction experiment, the models were used to match 77 PGP genomes to phenotypic profiles, generating the most accurate prediction among 16 submissions, according to an independent assessor. Although the models are currently insufficiently accurate for diagnostic utility, we expect their performance to improve with growth of publicly available genomics data and model refinement by domain experts.

  18. The Genome-Based Metabolic Systems Engineering to Boost Levan Production in a Halophilic Bacterial Model.

    Science.gov (United States)

    Aydin, Busra; Ozer, Tugba; Oner, Ebru Toksoy; Arga, Kazim Yalcin

    2018-03-01

    Metabolic systems engineering is being used to redirect microbial metabolism for the overproduction of chemicals of interest with the aim of transforming microbial hosts into cellular factories. In this study, a genome-based metabolic systems engineering approach was designed and performed to improve biopolymer biosynthesis capability of a moderately halophilic bacterium Halomonas smyrnensis AAD6T producing levan, which is a fructose homopolymer with many potential uses in various industries and medicine. For this purpose, the genome-scale metabolic model for AAD6T was used to characterize the metabolic resource allocation, specifically to design metabolic engineering strategies for engineered bacteria with enhanced levan production capability. Simulations were performed in silico to determine optimal gene knockout strategies to develop new strains with enhanced levan production capability. The majority of the gene knockout strategies emphasized the vital role of the fructose uptake mechanism, and pointed out the fructose-specific phosphotransferase system (PTSfru) as the most promising target for further metabolic engineering studies. Therefore, the PTSfru of AAD6T was restructured with insertional mutagenesis and triparental mating techniques to construct a novel, engineered H. smyrnensis strain, BMA14. Fermentation experiments were carried out to demonstrate the high efficiency of the mutant strain BMA14 in terms of final levan concentration, sucrose consumption rate, and sucrose conversion efficiency, when compared to AAD6T. The genome-based metabolic systems engineering approach presented in this study might be considered an efficient framework to redirect microbial metabolism for the overproduction of chemicals of interest, and the novel strain BMA14 might be considered a potential microbial cell factory for further studies aimed to design levan production processes with lower production costs.
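    The in silico knockout screening described above rests on genome-scale flux balance analysis; the toy sketch below captures only its logic, on a network so small that the LP optimum collapses to a bottleneck capacity. The reaction names and capacities are invented for illustration and are not taken from the AAD6T model.

```python
# Toy "flux balance": substrate is taken up (bounded), converted to a
# precursor by two parallel transport reactions, and the precursor is
# drained into product.  For this simple topology the optimum flux is
# the bottleneck capacity, so no LP solver is needed.

def max_production(uptake_cap, transporter_caps, knockouts=frozenset()):
    """Maximum product flux after knocking out the named transporters."""
    usable = [cap for name, cap in transporter_caps.items()
              if name not in knockouts]
    return min(uptake_cap, sum(usable))

caps = {"pts_like": 8.0, "permease_like": 4.0}   # hypothetical reactions
wild_type = max_production(10.0, caps)
mutant    = max_production(10.0, caps, knockouts={"pts_like"})
print(wild_type, mutant)   # 10.0 4.0
```

    A real knockout screen does exactly this loop, but solves a genome-scale linear program per candidate deletion instead of a min over two numbers.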

  19. Genome-Wide Fine-Scale Recombination Rate Variation in Drosophila melanogaster

    Science.gov (United States)

    Song, Yun S.

    2012-01-01

    Estimating fine-scale recombination maps of Drosophila from population genomic data is a challenging problem, in particular because of the high background recombination rate. In this paper, a new computational method is developed to address this challenge. Through an extensive simulation study, it is demonstrated that the method allows more accurate inference, and exhibits greater robustness to the effects of natural selection and noise, compared to a well-used previous method developed for studying fine-scale recombination rate variation in the human genome. As an application, a genome-wide analysis of genetic variation data is performed for two Drosophila melanogaster populations, one from North America (Raleigh, USA) and the other from Africa (Gikongoro, Rwanda). It is shown that fine-scale recombination rate variation is widespread throughout the D. melanogaster genome, across all chromosomes and in both populations. At the fine-scale, a conservative, systematic search for evidence of recombination hotspots suggests the existence of a handful of putative hotspots each with at least a tenfold increase in intensity over the background rate. A wavelet analysis is carried out to compare the estimated recombination maps in the two populations and to quantify the extent to which recombination rates are conserved. In general, similarity is observed at very broad scales, but substantial differences are seen at fine scales. The average recombination rate of the X chromosome appears to be higher than that of the autosomes in both populations, and this pattern is much more pronounced in the African population than the North American population. The correlation between various genomic features—including recombination rates, diversity, divergence, GC content, gene content, and sequence quality—is examined using the wavelet analysis, and it is shown that the most notable difference between D. melanogaster and humans is in the correlation between recombination and

  20. Modelling and optimal operation of a small-scale integrated energy based district heating and cooling system

    International Nuclear Information System (INIS)

    Jing, Z.X.; Jiang, X.S.; Wu, Q.H.; Tang, W.H.; Hua, B.

    2014-01-01

    This paper presents a comprehensive model of a small-scale integrated energy based district heating and cooling (DHC) system located in a residential area of hot-summer and cold-winter zone, which makes joint use of wind energy, solar energy, natural gas and electric energy. The model includes an off-grid wind turbine generator, heat producers, chillers, a water supply network and terminal loads. This research also investigates an optimal operating strategy based on Group Search Optimizer (GSO), through which the daily running cost of the system is optimized in both the heating and cooling modes. The strategy can be used to find the optimal number of operating chillers, optimal outlet water temperature set points of boilers and optimal water flow set points of pumps, taking into account cost functions and various operating constraints. In order to verify the model and the optimal operating strategy, performance tests have been undertaken using MATLAB. The simulation results prove the validity of the model and show that the strategy is able to minimize the system operation cost. The proposed system is evaluated in comparison with a conventional separation production (SP) system. The feasibility of investment for the DHC system is also discussed. The comparative results demonstrate the investment feasibility, the significant energy saving and the cost reduction achieved in daily operation in an environment where heating loads, cooling loads, wind speeds, solar radiation and electricity prices vary. - Highlights: • A model of a small-scale integrated energy based DHC system is presented. • An off-grid wind generator used for water heating is embedded in the model. • An optimal control strategy is studied to optimize the running cost of the system. • The designed system is proved to be energy efficient and cost effective in operation

  1. Using Maximum Entropy to Find Patterns in Genomes

    Science.gov (United States)

    Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
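    The maximum-entropy construction described above has a simple concrete form: under constraints on amino acid identity and mean GC content, the least-biased codon distribution is exponential in each codon's GC count, with a Lagrange multiplier (here `lam`) tuned to hit the target GC. The sketch below uses a small subset of the codon table, and the multiplier value is arbitrary.

```python
import math, random

# Synonymous codons for a few amino acids (subset of the standard table)
CODONS = {
    "K": ["AAA", "AAG"],
    "F": ["TTT", "TTC"],
    "G": ["GGT", "GGC", "GGA", "GGG"],
}

def gc(codon):
    """Number of G or C bases in a codon."""
    return sum(base in "GC" for base in codon)

def sample_sequence(protein, lam, rng):
    """Sample a coding sequence for `protein`, picking each synonymous
    codon with probability proportional to exp(lam * GC-count): the
    maximum-entropy distribution under a mean-GC constraint."""
    seq = []
    for aa in protein:
        codons = CODONS[aa]
        weights = [math.exp(lam * gc(c)) for c in codons]
        r = rng.random() * sum(weights)
        for c, w in zip(codons, weights):
            r -= w
            if r <= 0:
                seq.append(c)
                break
    return "".join(seq)

print(sample_sequence("KGF", 2.0, random.Random(0)))
```

    With `lam = 0` the sampler is uniform over synonymous codons; increasing `lam` tilts toward GC-rich codons without ever changing the encoded protein.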

  2. MtDNA genomes reveal a relaxation of selective constraints in low-BMI individuals in a Uyghur population.

    Science.gov (United States)

    Zheng, Hong-Xiang; Li, Lei; Jiang, Xiao-Yan; Yan, Shi; Qin, Zhendong; Wang, Xiaofeng; Jin, Li

    2017-10-01

    Considerable attention has been focused on the effect of deleterious mutations caused by the recent relaxation of selective constraints on human health, including the prevalence of obesity, which might represent an adaptive response of energy-conserving metabolism under the conditions of modern society. Mitochondrial DNA (mtDNA) encoding 13 core subunits of oxidative phosphorylation plays an important role in metabolism. Therefore, we hypothesized that a relaxation of selection constraints on mtDNA and an increase in the proportion of deleterious mutations have played a role in obesity prevalence. In this study, we collected and sequenced the mtDNA genomes of 722 Uyghurs, a typical population with a high prevalence of obesity. We identified the variants that occurred in the Uyghur population for each sample and found that the number of nonsynonymous mutations carried by Uyghur individuals declined with elevation of their BMI (P = 0.015). We further calculated the nonsynonymous-to-synonymous ratio (N/S) of the high-BMI and low-BMI haplogroups, and the results showed that a significantly higher N/S occurred in the whole mtDNA genomes of the low-BMI haplogroups (0.64) than in those of the high-BMI haplogroups (0.35, P = 0.030) and ancestor haplotypes (0.41, P = 0.032); these findings indicated that low-BMI individuals showed a recent relaxation of selective constraints. In addition, we investigated six clinical characteristics and found that fasting plasma glucose might be correlated with the N/S and selective pressures. We hypothesized that a higher proportion of deleterious mutations led to mild mitochondrial dysfunction, which helps to drive glucose consumption and thereby prevents obesity. Our results provide new insights into the relationship between obesity predisposition and mitochondrial genome evolution.
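    The N/S statistic used in the study is, at its core, a count of nonsynonymous versus synonymous substitutions. A minimal version, ignoring the haplogroup-tree accounting the paper performs, is sketched below with a hand-picked fragment of the genetic code.

```python
# Minimal genetic-code fragment: just enough to classify the examples
CODE = {"CTT": "L", "CTC": "L", "ATG": "M", "ATA": "I",
        "GAA": "E", "GAG": "E", "AAA": "K"}

def classify(ref_codon, alt_codon):
    """'S' if the codon change leaves the amino acid unchanged
    (synonymous), 'N' otherwise (nonsynonymous)."""
    return "S" if CODE[ref_codon] == CODE[alt_codon] else "N"

def ns_ratio(codon_pairs):
    """Raw nonsynonymous-to-synonymous count ratio over observed changes."""
    labels = [classify(r, a) for r, a in codon_pairs]
    n, s = labels.count("N"), labels.count("S")
    return n / s if s else float("inf")

pairs = [("CTT", "CTC"),  # Leu -> Leu: synonymous
         ("ATG", "ATA"),  # Met -> Ile: nonsynonymous
         ("GAA", "GAG"),  # Glu -> Glu: synonymous
         ("GAA", "AAA")]  # Glu -> Lys: nonsynonymous
print(ns_ratio(pairs))    # 1.0
```

    Under relaxed selective constraint, more nonsynonymous changes survive, which pushes this ratio up, the signal the study reads in low-BMI haplogroups.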

  3. Constraint-based component-modeling for knowledge-based design

    Science.gov (United States)

    Kolb, Mark A.

    1992-01-01

    The paper describes the application of various advanced programming techniques derived from artificial intelligence research to the development of flexible design tools for conceptual design. Special attention is given to two techniques which appear to be readily applicable to such design tools: constraint propagation and object-oriented programming. The implementation of these techniques in a prototype computer tool, Rubber Airplane, is described.
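    Constraint propagation of the kind used in design tools like Rubber Airplane can be illustrated in a few lines: values flow through a network of cells, and each constraint fires as soon as all of its inputs are known. This one-directional sketch is a generic illustration, not Rubber Airplane's actual implementation, and the wing-loading example is invented.

```python
class Cell:
    """A value holder that pushes updates through registered constraints."""
    def __init__(self, name, value=None):
        self.name, self.value, self.listeners = name, value, []
    def set(self, value):
        if value != self.value:
            self.value = value
            for fire in self.listeners:
                fire()

def constrain(output, inputs, fn):
    """When every input cell has a value, set output = fn(*inputs)."""
    def fire():
        if all(c.value is not None for c in inputs):
            output.set(fn(*[c.value for c in inputs]))
    for c in inputs:
        c.listeners.append(fire)
    fire()   # propagate immediately if inputs are already known

# Wing loading W/S propagates as soon as weight and area are both set
weight, area, loading = Cell("W"), Cell("S"), Cell("W/S")
constrain(loading, [weight, area], lambda w, s: w / s)
weight.set(12000.0)
area.set(30.0)
print(loading.value)   # 400.0
```

    Because constraints re-fire on every change, later revising `weight` automatically updates `loading`, which is the behavior that makes such tools "flexible" during conceptual design.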

  4. Query by Constraint Propagation in the Concept-Oriented Data Model

    Directory of Open Access Journals (Sweden)

    Alexandr Savinov

    2006-09-01

    Full Text Available The paper describes an approach to query processing in the concept-oriented data model. This approach is based on imposing constraints and specifying the result type. The constraints are then automatically propagated over the model and the result contains all related data items. The simplest constraint propagation strategy consists of two steps: propagating down to the most specific level using de-projection and propagating up to the target concept using projection. A more complex strategy described in the paper may consist of many de-projection/projection steps passing through some intermediate concepts. An advantage of the described query mechanism is that it does not need any join conditions because it uses the structure of the model for propagation. Moreover, this mechanism does not require specifying an access path using dimension names. Thus even rather complex queries can be formulated in a simple and natural form, simply by specifying what information is available and what related data we want to get.

  5. A joint-constraint model for human joints using signed distance-fields

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten Pol; Abel, Sarah Maria Niebe; Erleben, Kenny

    2012-01-01

    We present a local joint-constraint model for a single joint which is based on distance fields. Our model is fast, general, and well suited for modeling human joints. In this work, we take a geometric approach and model the geometry of the boundary of the feasible region, i.e., the boundary of all allowed poses. A region of feasible poses can be built by embedding motion-captured data points in a signed distance field. The only assumption is that the feasible poses form a single connected set of angular values. We show how signed distance fields can be used to generate fast and general joint-constraints … -joint dependencies, or joints with more than three degrees of freedom. The resolution of the joint-constraints can be tweaked individually for each degree of freedom, which can be used to optimize memory usage. We perform a comparative study of the key properties of various joint-constraint models, as well…
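    The signed-distance idea can be sketched compactly: approximate the feasible set as a union of balls around captured joint-angle samples, and project any pose with positive signed distance back onto the boundary. A real implementation would precompute the field on a grid; the continuous stand-in below, with invented sample poses, is only for illustration.

```python
import math

def signed_distance(pose, feasible_samples, radius):
    """Approximate signed distance to the feasible set (negative inside).
    The feasible region is the union of balls of `radius` around
    motion-captured joint-angle samples: a crude stand-in for a real
    precomputed SDF grid."""
    return min(math.dist(pose, s) for s in feasible_samples) - radius

def project(pose, feasible_samples, radius):
    """Project an infeasible pose back onto the feasible-set boundary."""
    if signed_distance(pose, feasible_samples, radius) <= 0:
        return pose   # already feasible
    nearest = min(feasible_samples, key=lambda s: math.dist(pose, s))
    t = radius / math.dist(pose, nearest)
    return tuple(n + t * (p - n) for p, n in zip(pose, nearest))

samples = [(0.0, 0.0), (0.4, 0.1)]   # two captured (swing, twist) poses
print(signed_distance((0.1, 0.0), samples, 0.3) <= 0)   # True: feasible
print(project((1.0, 0.0), samples, 0.3))
```

    Checking the sign of the field is a single lookup per joint in the grid-based version, which is what makes this representation fast enough for per-frame constraint enforcement.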

  6. Design constraints for electron-positron linear colliders

    International Nuclear Information System (INIS)

    Mondelli, A.; Chernin, D.

    1991-01-01

    A prescription for examining the design constraints in the e+e− linear collider is presented. By specifying limits on certain key quantities, an allowed region of parameter space can be presented, hopefully clarifying some of the design options. The model starts with the parameters at the interaction point (IP), where the expressions for the luminosity, the disruption parameter, beamstrahlung, and average beam power constitute four relations among eleven IP parameters. By specifying the values of five of these quantities, and using these relationships, the unknown parameter space can be reduced to a two-dimensional space. Curves of constraint can be plotted in this space to define an allowed operating region. An accelerator model, based on a modified, scaled SLAC structure, can then be used to derive the corresponding parameter space including the constraints derived from power consumption and wake field effects. The results show that longer, lower gradient accelerators are advantageous.
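    Two of the IP relations mentioned above, the geometric luminosity and the vertical disruption parameter for flat Gaussian beams, can be evaluated directly from their standard textbook forms. The machine numbers below are round, SLC-like values chosen for illustration, not a design point from the paper.

```python
import math

R_E = 2.8179403e-15   # classical electron radius, m

def luminosity(f_rep, n_bunch, N, sigma_x, sigma_y, H_D=1.0):
    """Geometric luminosity of a linear collider, m^-2 s^-1, with an
    optional pinch-enhancement factor H_D."""
    return f_rep * n_bunch * N**2 * H_D / (4.0 * math.pi * sigma_x * sigma_y)

def disruption_y(N, sigma_z, gamma, sigma_x, sigma_y):
    """Vertical disruption parameter for flat Gaussian beams."""
    return 2.0 * R_E * N * sigma_z / (gamma * sigma_y * (sigma_x + sigma_y))

# Illustrative, SLC-like round numbers (not an actual machine design)
L = luminosity(f_rep=120.0, n_bunch=1, N=4e10, sigma_x=1.5e-6, sigma_y=0.5e-6)
D = disruption_y(N=4e10, sigma_z=1e-3, gamma=1e5, sigma_x=1.5e-6, sigma_y=0.5e-6)
print(f"L = {L:.2e} m^-2 s^-1, D_y = {D:.2f}")
```

    Fixing a handful of such relations and sweeping the remaining free parameters is exactly how the paper's two-dimensional allowed regions are traced out.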

  7. Multi-scale exploration of the technical, economic, and environmental dimensions of bio-based chemical production.

    Science.gov (United States)

    Zhuang, Kai H; Herrgård, Markus J

    2015-09-01

    In recent years, bio-based chemicals have gained traction as a sustainable alternative to petrochemicals. However, despite rapid advances in metabolic engineering and synthetic biology, there remain significant economic and environmental challenges. In order to maximize the impact of research investment in a new bio-based chemical industry, there is a need for assessing the technological, economic, and environmental potentials of combinations of biomass feedstocks, biochemical products, bioprocess technologies, and metabolic engineering approaches in the early phase of development of cell factories. To address this issue, we have developed a comprehensive Multi-scale framework for modeling Sustainable Industrial Chemicals production (MuSIC), which integrates modeling approaches for cellular metabolism, bioreactor design, upstream/downstream processes and economic impact assessment. We demonstrate the use of the MuSIC framework in a case study where two major polymer precursors (1,3-propanediol and 3-hydroxypropionic acid) are produced from two biomass feedstocks (corn-based glucose and soy-based glycerol) through 66 proposed biosynthetic pathways in two host organisms (Escherichia coli and Saccharomyces cerevisiae). The MuSIC framework allows exploration of tradeoffs and interactions between economy-scale objectives (e.g. profit maximization, emission minimization), constraints (e.g. land-use constraints) and process- and cell-scale technology choices (e.g. strain design or oxygenation conditions). We demonstrate that economy-scale assessment can be used to guide specific strain design decisions in metabolic engineering, and that these design decisions can be affected by non-intuitive dependencies across multiple scales. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  8. Fungal Genomics Program

    Energy Technology Data Exchange (ETDEWEB)

    Grigoriev, Igor

    2012-03-12

    The JGI Fungal Genomics Program aims to scale up sequencing and analysis of fungal genomes to explore the diversity of fungi important for energy and the environment, and to promote functional studies on a system level. Combining new sequencing technologies and comparative genomics tools, JGI is now leading the world in fungal genome sequencing and analysis. Over 120 sequenced fungal genomes with analytical tools are available via MycoCosm (www.jgi.doe.gov/fungi), a web-portal for fungal biologists. Our model of interacting with user communities, unique among other sequencing centers, helps organize these communities, improves genome annotation and analysis work, and facilitates new larger-scale genomic projects. This resulted in 20 high-profile papers published in 2011 alone, contributing to the Genomics Encyclopedia of Fungi, which targets fungi related to plant health (symbionts, pathogens, and biocontrol agents) and biorefinery processes (cellulose degradation, sugar fermentation, industrial hosts). Our next grand challenges include larger scale exploration of fungal diversity (1000 fungal genomes), developing molecular tools for DOE-relevant model organisms, and analysis of complex systems and metagenomes.

  9. Transport simulations TFTR: Theoretically-based transport models and current scaling

    International Nuclear Information System (INIS)

    Redi, M.H.; Cummings, J.C.; Bush, C.E.; Fredrickson, E.; Grek, B.; Hahm, T.S.; Hill, K.W.; Johnson, D.W.; Mansfield, D.K.; Park, H.; Scott, S.D.; Stratton, B.C.; Synakowski, E.J.; Tang, W.M.; Taylor, G.

    1991-12-01

    In order to study the microscopic physics underlying observed L-mode current scaling, 1-1/2-d BALDUR has been used to simulate density and temperature profiles for high and low current, neutral beam heated discharges on TFTR with several semi-empirical, theoretically-based models previously compared for TFTR, including several versions of trapped electron drift wave driven transport. Experiments at TFTR, JET and DIII-D show that I_p scaling of τ_E does not arise from edge modes as previously thought, and is most likely to arise from nonlocal processes or from the I_p-dependence of local plasma core transport. Consistent with this, it is found that strong current scaling does not arise from any of several edge models of resistive ballooning. Simulations with the profile consistent drift wave model and with a new model for toroidal collisionless trapped electron mode core transport in a multimode formalism, lead to strong current scaling of τ_E for the L-mode cases on TFTR. None of the theoretically-based models succeeded in simulating the measured temperature and density profiles for both high and low current experiments.

  10. Determining the control circuitry of redox metabolism at the genome-scale.

    Directory of Open Access Journals (Sweden)

    Stephen Federowicz

    2014-04-01

    Full Text Available Determining how facultative anaerobic organisms sense and direct cellular responses to electron acceptor availability has been a subject of intense study. However, even in the model organism Escherichia coli, established mechanisms only explain a small fraction of the hundreds of genes that are regulated during electron acceptor shifts. Here we propose a qualitative model that accounts for the full breadth of regulated genes by detailing how two global transcription factors (TFs, ArcA and Fnr of E. coli, sense key metabolic redox ratios and act on a genome-wide basis to regulate anabolic, catabolic, and energy generation pathways. We first fill gaps in our knowledge of this transcriptional regulatory network by carrying out ChIP-chip and gene expression experiments to identify 463 regulatory events. We then interfaced this reconstructed regulatory network with a highly curated genome-scale metabolic model to show that ArcA and Fnr regulate >80% of total metabolic flux and 96% of differential gene expression across fermentative and nitrate respiratory conditions. Based on the data, we propose a feedforward with feedback trim regulatory scheme, given the extensive repression of catabolic genes by ArcA and extensive activation of chemiosmotic genes by Fnr. We further corroborated this regulatory scheme by showing an r² = 0.71 (p < 1e-6) correlation between changes in metabolic flux and changes in regulatory activity across fermentative and nitrate respiratory conditions. Finally, we are able to relate the proposed model to a wealth of previously generated data by contextualizing the existing transcriptional regulatory network.

  11. Constraint based modeling of metabolism allows finding metabolic cancer hallmarks and identifying personalized therapeutic windows.

    Science.gov (United States)

    Bordel, Sergio

    2018-04-13

    In order to choose optimal personalized anticancer treatments, transcriptomic data should be analyzed within the frame of biological networks. The best known human biological network (in terms of the interactions between its different components) is metabolism. Cancer cells have been known to have specific metabolic features for a long time and currently there is a growing interest in characterizing new cancer specific metabolic hallmarks. This article presents a method for finding personalized therapeutic windows using RNA-seq data and genome-scale metabolic models; the method is implemented in the Python library pyTARG. Our predictions showed that the most anticancer-selective (affecting 27 out of 34 considered cancer cell lines and only 1 out of 6 healthy mesenchymal stem cell lines) single metabolic reactions are those involved in cholesterol biosynthesis. Excluding cholesterol biosynthesis, all the considered cell lines can be selectively affected by targeting different combinations (from 1 to 5 reactions) of only 18 metabolic reactions, which suggests that a small subset of drugs or siRNAs combined in patient specific manners could be at the core of metabolism based personalized treatments.

  12. Unconventional Constraints on Nitrogen Chemistry using DC3 Observations and Trajectory-based Chemical Modeling

    Science.gov (United States)

    Shu, Q.; Henderson, B. H.

    2017-12-01

    Chemical transport models underestimate nitrogen dioxide observations in the upper troposphere (UT). Previous research in the UT succeeded in combining model predictions with field campaign measurements to demonstrate that the nitric acid formation rate (OH + NO2 → HNO3 (R1)) is overestimated by 22% (Henderson et al., 2012). A subsequent publication (Seltzer et al., 2015) demonstrated that this single chemical constraint alters ozone and aerosol formation/composition. This work attempts to replicate previous chemical constraints with newer observations and a different modeling framework. We apply the previously successful constraint framework to the Deep Convective Clouds and Chemistry (DC3) campaign, a more recent field campaign where simulated nitrogen imbalances still exist. Freshly convected air parcels identified in the DC3 dataset serve as initial coordinates for Lagrangian trajectories. Along each trajectory, we simulate the air parcel's chemical state. Samples along the trajectories form ensembles that represent possible realizations of UT air parcels. We then apply Bayesian inference to constrain nitrogen chemistry and compare results to the existing literature. We anticipate that the results will confirm the overestimation of the HNO3 formation rate found in previous work and provide further constraints on other nitrogen reaction rate coefficients that affect the terminal products of NOx. We will particularly focus on organic nitrate chemistry, which the laboratory literature has yet to fully address. The results will provide useful insights into nitrogen chemistry that affects climate and human health.
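    The Bayesian constraint step can be sketched as a grid posterior over a scaling factor on the OH + NO2 rate coefficient. The "observations" below are synthetic stand-ins with invented uncertainties; only the inference pattern mirrors the framework described above.

```python
import math

def gaussian_loglik(scale, observations, k_base=1.0):
    """Log-likelihood of pseudo-observed effective rates given a scaling
    factor on the base rate coefficient (purely synthetic data)."""
    return sum(-0.5 * ((obs - scale * k_base) / sigma) ** 2
               for obs, sigma in observations)

# Synthetic "observed" effective rates, each with its own uncertainty
obs = [(0.80, 0.10), (0.75, 0.12), (0.82, 0.08)]

grid = [i / 100.0 for i in range(40, 121)]                 # scalings 0.40..1.20
post = [math.exp(gaussian_loglik(s, obs)) for s in grid]   # flat prior
z = sum(post)
post = [p / z for p in post]
best = grid[post.index(max(post))]
print(best)   # MAP scaling: 0.8, i.e. the rate looks ~20% overestimated
```

    A MAP scaling below 1 is the grid-posterior analogue of the "overestimated by 22%" conclusion; the real framework works with full trajectory ensembles rather than three scalar pseudo-observations.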

  13. A Hybrid Programming Framework for Modeling and Solving Constraint Satisfaction and Optimization Problems

    Directory of Open Access Journals (Sweden)

    Paweł Sitek

    2016-01-01

    Full Text Available This paper proposes a hybrid programming framework for modeling and solving constraint satisfaction problems (CSPs) and constraint optimization problems (COPs). Two paradigms, CLP (constraint logic programming) and MP (mathematical programming), are integrated in the framework. The integration is supplemented with an original method of problem transformation, used in the framework as a presolving method; the transformation substantially reduces the feasible solution space. The framework automatically generates CSP and COP models based on the current values of data instances, the questions asked by a user, and the set of predicates and facts of the problem being modeled, which together constitute a knowledge database for the given problem. This dynamic generation of dedicated models based on the knowledge base, together with externally changing parameters such as the user's questions, implements the concept of autonomous search. The models are solved using the internal or external solvers integrated with the framework. The architecture of the framework and an outline of its implementation are also included in the paper. The effectiveness of the framework in modeling and solution search is assessed through illustrative examples relating to scheduling problems with additional constrained resources.
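    The CSP side of such a framework can be illustrated with backtracking search plus forward checking, a minimal stand-in for CLP-style propagation. The three-task scheduling instance below is invented for illustration.

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search with forward checking: after each tentative
    assignment, prune values from the remaining domains that cannot
    satisfy every constraint (a minimal form of propagation)."""
    assignment = assignment or {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        trial = dict(assignment, **{var: value})
        # forward check: every unassigned variable must keep some value
        pruned = {v: [x for x in domains[v]
                      if all(c(dict(trial, **{v: x})) for c in constraints)]
                  for v in variables if v not in trial}
        if all(c(trial) for c in constraints) and all(pruned.values()):
            result = solve(variables, {**domains, **pruned}, constraints, trial)
            if result:
                return result
    return None

# Toy scheduling: three unit tasks in slots 0-2, B after A, B != C.
# Constraints must tolerate partial assignments (unassigned -> satisfied).
tasks = ["A", "B", "C"]
doms = {t: [0, 1, 2] for t in tasks}
cons = [lambda a: a.get("A", -1) < a.get("B", 99),
        lambda a: a.get("B") is None or a.get("C") is None or a["B"] != a["C"]]
print(solve(tasks, doms, cons))
```

    The domain pruning here plays the role the paper's presolving transformation plays at a much larger scale: shrinking the feasible space before search ever branches.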

  14. Calibration and analysis of genome-based models for microbial ecology.

    Science.gov (United States)

    Louca, Stilianos; Doebeli, Michael

    2015-10-16

    Microbial ecosystem modeling is complicated by the large number of unknown parameters and the lack of appropriate calibration tools. Here we present a novel computational framework for modeling microbial ecosystems, which combines genome-based model construction with statistical analysis and calibration to experimental data. Using this framework, we examined the dynamics of a community of Escherichia coli strains that emerged in laboratory evolution experiments, during which an ancestral strain diversified into two coexisting ecotypes. We constructed a microbial community model comprising the ancestral and the evolved strains, which we calibrated using separate monoculture experiments. Simulations reproduced the successional dynamics in the evolution experiments, and pathway activation patterns observed in microarray transcript profiles. Our approach yielded detailed insights into the metabolic processes that drove bacterial diversification, involving acetate cross-feeding and competition for organic carbon and oxygen. Our framework provides a missing link towards a data-driven mechanistic microbial ecology.

  15. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the important scientific problems in remote sensing. The scale effect in quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals over an entire series of scales; moreover, they face serious parameter-correction issues because imaging parameters vary between sensors (geometric correction, spectral correction, etc.). Using a single-sensor image, a fractal methodology was applied to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. These results show that fractal analysis is an effective methodology for studying the scaling of quantitative remote sensing.
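    Continuous-scaling behavior of the kind modeled in the paper can be probed by mean-aggregating a transect to coarser resolutions and fitting a power law of a statistic against scale on log-log axes. The sketch below does this for the variance of a synthetic transect (for white noise the slope sits near −1); it illustrates the fitting pattern only, not the paper's actual fractal model of NDVI.

```python
import math, random

def aggregate(values, factor):
    """Mean-aggregate a 1-D transect to a coarser resolution."""
    return [sum(values[i:i + factor]) / factor
            for i in range(0, len(values) - factor + 1, factor)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def scaling_exponent(values, factors):
    """Slope of log(variance) vs log(scale): a power-law descriptor of
    how the statistic decays under up-scaling."""
    pts = [(math.log(f), math.log(variance(aggregate(values, f))))
           for f in factors]
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)   # least-squares slope

rng = random.Random(42)
ndvi = [0.5 + 0.2 * rng.uniform(-1, 1) for _ in range(4096)]  # synthetic
print(scaling_exponent(ndvi, [1, 2, 4, 8, 16]))
```

    Spatially correlated fields decay more slowly than white noise under aggregation, so the fitted exponent itself carries the scaling information a continuous-scaling model exploits.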

  16. Multi-scale coding of genomic information: From DNA sequence to genome structure and function

    International Nuclear Information System (INIS)

    Arneodo, Alain; Vaillant, Cedric; Audit, Benjamin; Argoul, Francoise; D'Aubenton-Carafa, Yves; Thermes, Claude

    2011-01-01

    Understanding how chromatin is spatially and dynamically organized in the nucleus of eukaryotic cells and how this affects genome functions is one of the main challenges of cell biology. Since the different orders of packaging in the hierarchical organization of DNA condition the accessibility of DNA sequence elements to trans-acting factors that control the transcription and replication processes, there is actually a wealth of structural and dynamical information to learn in the primary DNA sequence. In this review, we show that when using concepts, methodologies, numerical and experimental techniques coming from statistical mechanics and nonlinear physics combined with wavelet-based multi-scale signal processing, we are able to decipher the multi-scale sequence encoding of chromatin condensation-decondensation mechanisms that play a fundamental role in regulating many molecular processes involved in nuclear functions.

  17. How robust are inflation model and dark matter constraints from cosmological data?

    DEFF Research Database (Denmark)

    Hamann, Jan; Hannestad, Steen; Sloth, Martin Snoager

    2006-01-01

    High-precision data from observations of the cosmic microwave background and the large scale structure of the universe provide very tight constraints on the effective parameters that describe cosmological inflation. Indeed, within a constrained class of LambdaCDM models, the simple lambda phi^4 chaotic inflation model already appears to be ruled out by cosmological data. In this paper, we compute constraints on inflationary parameters within a more general framework that includes other physically motivated parameters such as a nonzero neutrino mass. We find that a strong degeneracy between the tensor-to-scalar ratio r and the neutrino mass prevents lambda phi^4 from being excluded by present data. Reversing the argument, if lambda phi^4 is the correct model of inflation, it predicts a sum of neutrino masses of 0.3-0.5 eV, a range compatible with present experimental limits and within the reach of future experiments.

  18. Metabolite coupling in genome-scale metabolic networks

    Directory of Open Access Journals (Sweden)

    Palsson Bernhard Ø

    2006-03-01

    Background Biochemically detailed stoichiometric matrices have now been reconstructed for various bacteria, yeast, and for the human cardiac mitochondrion based on genomic and proteomic data. These networks have been manually curated based on legacy data and elementally and charge balanced. Comparative analysis of these well-curated networks is now possible. Pairs of metabolites often appear together in several network reactions, linking them topologically. This co-occurrence of pairs of metabolites in metabolic reactions is termed herein "metabolite coupling." These metabolite pairs can be directly computed from the stoichiometric matrix, S. Metabolite coupling is derived from the matrix ŜŜ^T, whose off-diagonal elements indicate the number of reactions in which any two metabolites participate together, where Ŝ is the binary form of S. Results Metabolite coupling in the studied networks was found to be dominated by a relatively small group of highly interacting pairs of metabolites. As would be expected, metabolites with high individual connectivity also tended to be those with the highest metabolite coupling, as the most connected metabolites couple more often. For metabolite pairs that are not highly coupled, we show that the number of reactions a pair of metabolites shares across a metabolic network closely approximates a line on a log-log scale. We also show that the preferential coupling of two metabolites with each other is spread across the spectrum of metabolites and is not unique to the most connected metabolites. We provide a measure for determining which metabolite pairs couple more often than would be expected based on their individual connectivity in the network, and show that these metabolites often derive their principal biological functions from existing in pairs. Thus, analysis of metabolite coupling provides information beyond that found by studying the connectivity of individual metabolites alone.
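
The ŜŜ^T construction in the abstract is easy to reproduce on a toy network. The three-metabolite matrix below is hypothetical, chosen only to show how off-diagonal entries count shared reactions and diagonal entries recover individual metabolite connectivity.

```python
import numpy as np

# Binary stoichiometric matrix S-hat: rows = metabolites, cols = reactions;
# entry is 1 if the metabolite participates in the reaction.
#                 r1  r2  r3
S_hat = np.array([[1,  1,  0],    # metabolite A (e.g. a cofactor)
                  [1,  1,  1],    # metabolite B (e.g. water)
                  [0,  0,  1]])   # metabolite C

# Metabolite coupling matrix: entry (i, j), i != j, is the number of
# reactions in which metabolites i and j participate together; the
# diagonal gives each metabolite's individual connectivity.
coupling = S_hat @ S_hat.T

print(coupling[0, 1])  # A and B co-occur in 2 reactions
print(coupling[1, 1])  # B participates in 3 reactions
```

On genome-scale matrices the same one-line product reveals the small set of highly coupled pairs (typically cofactor pairs) the abstract reports.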

  19. Block Pickard Models for Two-Dimensional Constraints

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Justesen, Jørn

    2009-01-01

    In Pickard random fields (PRF), the probabilities of finite configurations and the entropy of the field can be calculated explicitly, but only very simple structures can be incorporated into such a field. Given two Markov chains describing a boundary, an algorithm is presented which determines whether a consistent PRF exists; the construction is illustrated for the domino tiling constraint represented by a quaternary alphabet. PRF models are also presented for higher order constraints, including the no isolated bits (n.i.b.) constraint, and for a minimum distance 3 constraint by defining super symbols on blocks of binary symbols.

  20. CRISPR/Cas9 based genome editing of Penicillium chrysogenum

    NARCIS (Netherlands)

    Pohl, Carsten; Kiel, Jan A K W; Driessen, Arnold J M; Bovenberg, Roel A L; Nygård, Yvonne

    2016-01-01

    CRISPR/Cas9 based systems have emerged as versatile platforms for precision genome editing in a wide range of organisms. Here we have developed powerful CRISPR/Cas9 tools for marker-based and marker-free genome modifications in Penicillium chrysogenum, a model filamentous fungus and industrially relevant cell factory.

  1. Optimizing Global Coronal Magnetic Field Models Using Image-Based Constraints

    Science.gov (United States)

    Jones-Mecholsky, Shaela I.; Davila, Joseph M.; Uritskiy, Vadim

    2016-01-01

    The coronal magnetic field directly or indirectly affects a majority of the phenomena studied in the heliosphere. It provides energy for coronal heating, controls the release of coronal mass ejections, and drives heliospheric and magnetospheric activity, yet the coronal magnetic field itself has proven difficult to measure. This difficulty has prompted a decades-long effort to develop accurate, timely, models of the field, an effort that continues today. We have developed a method for improving global coronal magnetic field models by incorporating the type of morphological constraints that could be derived from coronal images. Here we report promising initial tests of this approach on two theoretical problems, and discuss opportunities for application.

  2. Sexagesimal scale for mapping human genome Escala sexagesimal para mapear el genoma humano

    Directory of Open Access Journals (Sweden)

    RICARDO CRUZ-COKE

    2001-03-01

    In a previous work I designed a diagram of the human genome based on a circular ideogram of the haploid set of chromosomes, using a low-resolution scale in Megabase units. The purpose of this work is to draft a new scale for measuring the physical map of the human genome at the highest resolution level. The entire length of the male haploid genome is deployed on a circumference, marked with a sexagesimal scale of 360 degrees, or 1,296,000 arc seconds. The radius of this circumference displays a semilogarithmic metric scale from 1 m down to the nanometer level. The base-pair level of DNA sequences, 10^-9 of this circumference, is measured in milliarcsecond units (mas), equivalent to a thousandth of an arc second. The "mas" unit corresponds to 1.27 nanometers (nm) or 0.427 base pairs (bp) and is the framework for measuring DNA sequences. Thus the three billion base pairs of the human genome may be identified by 1,296,000,000 "mas" units in continuous correlation from number 1 to number 1,296,000,000. This sexagesimal scale covers all levels of the nuclear genetic material, from nucleotides to chromosomes. The location of every codon and every gene may be numbered in the physical map of chromosome regions according to this new scale, instead of the partial kilobase and Megabase scales used today. The advantage of the new scale is the unification of the set of chromosomes under a continuous scale of measurement at the DNA level, facilitating correlation with the phenotypes of man and other species.

  3. Zea mays iRS1563: A Comprehensive Genome-Scale Metabolic Reconstruction of Maize Metabolism

    Science.gov (United States)

    Saha, Rajib; Suthers, Patrick F.; Maranas, Costas D.

    2011-01-01

    The scope and breadth of genome-scale metabolic reconstructions have continued to expand over the last decade. Herein, we introduce a genome-scale model for a plant with direct applications to food and bioenergy production (i.e., maize). Maize annotation is still underway, which introduces significant challenges in the association of metabolic functions to genes. The developed model is designed to meet rigorous standards on gene-protein-reaction (GPR) associations, elementally and charge balanced reactions and a biomass reaction abstracting the relative contribution of all biomass constituents. The metabolic network contains 1,563 genes and 1,825 metabolites involved in 1,985 reactions from primary and secondary maize metabolism. For approximately 42% of the reactions, direct literature evidence for the participation of the reaction in maize was found. As many as 445 reactions and 369 metabolites are unique to the maize model compared to the AraGEM model for A. thaliana. 674 metabolites and 893 reactions are present in Zea mays iRS1563 that are not accounted for in maize C4GEM. All reactions are elementally and charged balanced and localized into six different compartments (i.e., cytoplasm, mitochondrion, plastid, peroxisome, vacuole and extracellular). GPR associations are also established based on the functional annotation information and homology prediction, accounting for monofunctional, multifunctional and multimeric proteins, isozymes and protein complexes. We describe results from performing flux balance analysis under different physiological conditions of a C4 plant (i.e., photosynthesis, photorespiration and respiration), and also explore model predictions against experimental observations for two naturally occurring mutants (i.e., bm1 and bm3). The developed model is the largest and most complete effort to date at cataloguing metabolism for a plant species. PMID:21755001
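
Flux balance analysis, mentioned in the abstract, reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below runs FBA on a hypothetical two-reaction toy network, not on iRS1563, using `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical, not iRS1563): reaction v1 imports metabolite A,
# reaction v2 converts A into biomass. Steady state requires S v = 0.
S = np.array([[1.0, -1.0]])       # one metabolite (A), two reactions
bounds = [(0, 10), (0, None)]     # uptake v1 capped at 10; v2 unbounded
c = [0.0, -1.0]                   # linprog minimizes, so negate to maximize v2

res = linprog(c, A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
print(res.x)  # optimal flux distribution: v1 = v2 = 10
```

Genome-scale models like iRS1563 are exactly this structure scaled up to thousands of reactions, with the biomass reaction as the objective.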

  4. Power capability evaluation for lithium iron phosphate batteries based on multi-parameter constraints estimation

    Science.gov (United States)

    Wang, Yujie; Pan, Rui; Liu, Chang; Chen, Zonghai; Ling, Qiang

    2018-01-01

    The battery power capability is intimately correlated with the climbing, braking and accelerating performance of electric vehicles. Accurate power capability prediction can not only guarantee safety but also regulate driving behavior and optimize battery energy usage. However, the battery model is highly nonlinear, especially for lithium iron phosphate batteries. Moreover, the hysteresis loop in the open-circuit voltage curve can easily cause large errors in model prediction. In this work, a multi-parameter constraints dynamic estimation method is proposed to predict the battery continuous period power capability. A high-fidelity battery model which considers the battery polarization and hysteresis phenomenon is presented to approximate the high nonlinearity of the lithium iron phosphate battery. Explicit analyses of power capability with multiple constraints are elaborated; specifically, the state-of-energy is considered in power capability assessment. Furthermore, to solve the problem of nonlinear system state estimation and suppress noise interference, a UKF-based state observer is employed for power capability prediction. The performance of the proposed methodology is demonstrated by experiments under different dynamic characterization schedules. The charge and discharge power capabilities of the lithium iron phosphate batteries are quantitatively assessed under different time scales and temperatures.
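
The multi-constraint idea can be illustrated with a much simpler internal-resistance (Rint) model than the polarization-plus-hysteresis model of the paper: discharge power is limited by whichever constraint binds first, the terminal-voltage floor or the datasheet current limit. All parameter values below are hypothetical.

```python
# Hypothetical cell parameters for a simple Rint model (not the paper's model).
ocv = 3.3       # open-circuit voltage [V]
r = 0.01        # internal resistance [ohm]
v_min = 2.5     # terminal-voltage floor [V]
i_max = 100.0   # datasheet current limit [A]

# Constraint 1: current at which the terminal voltage sags to v_min.
i_volt = (ocv - v_min) / r
# Constraint 2: hard current limit. The binding constraint is the smaller one.
i_lim = min(i_volt, i_max)

# Discharge power capability at the binding constraint.
if i_lim == i_volt:
    p_max = v_min * i_lim             # voltage-limited operating point
else:
    p_max = (ocv - r * i_lim) * i_lim # current-limited operating point
print(p_max)  # 200.0 W: voltage constraint binds (80 A < 100 A)
```

The paper's contribution is essentially to evaluate this minimum over richer dynamics (polarization, hysteresis, state-of-energy) and over a prediction horizon, with a UKF tracking the states.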

  5. Integration of genome-scale metabolic networks into whole-body PBPK models shows phenotype-specific cases of drug-induced metabolic perturbation.

    Science.gov (United States)

    Cordes, Henrik; Thiel, Christoph; Baier, Vanessa; Blank, Lars M; Kuepfer, Lars

    2018-01-01

    Drug-induced perturbations of the endogenous metabolic network are a potential root cause of cellular toxicity. A mechanistic understanding of such unwanted side effects during drug therapy is therefore vital for patient safety. The comprehensive assessment of such drug-induced injuries requires the simultaneous consideration of both drug exposure at the whole-body and resulting biochemical responses at the cellular level. We here present a computational multi-scale workflow that combines whole-body physiologically based pharmacokinetic (PBPK) models and organ-specific genome-scale metabolic network (GSMN) models through shared reactions of the xenobiotic metabolism. The applicability of the proposed workflow is illustrated for isoniazid, a first-line antibacterial agent against Mycobacterium tuberculosis, which is known to cause idiosyncratic drug-induced liver injuries (DILI). We combined GSMN models of a human liver with N-acetyl transferase 2 (NAT2)-phenotype-specific PBPK models of isoniazid. The combined PBPK-GSMN models quantitatively describe isoniazid pharmacokinetics, as well as intracellular responses, and changes in the exometabolome in a human liver following isoniazid administration. Notably, intracellular and extracellular responses identified with the PBPK-GSMN models are in line with experimental and clinical findings. Moreover, the drug-induced metabolic perturbations are distributed and attenuated in the metabolic network in a phenotype-dependent manner. Our simulation results show that a simultaneous consideration of both drug pharmacokinetics at the whole-body and metabolism at the cellular level is mandatory to explain drug-induced injuries at the patient level. The proposed workflow extends our mechanistic understanding of the biochemistry underlying adverse events and may be used to prevent drug-induced injuries in the future.

  6. Fan-out Estimation in Spin-based Quantum Computer Scale-up.

    Science.gov (United States)

    Nguyen, Thien; Hill, Charles D; Hollenberg, Lloyd C L; James, Matthew R

    2017-10-17

    Solid-state spin-based qubits offer good prospects for scaling based on their long coherence times and nexus to large-scale electronic scale-up technologies. However, high-threshold quantum error correction requires a two-dimensional qubit array operating in parallel, posing significant challenges in fabrication and control. While architectures incorporating distributed quantum control meet this challenge head-on, most designs rely on individual control and readout of all qubits with high gate densities. We analysed the fan-out routing overhead of a dedicated control line architecture, basing the analysis on a generalised solid-state spin qubit platform parameterised to encompass Coulomb confined (e.g. donor based spin qubits) or electrostatically confined (e.g. quantum dot based spin qubits) implementations. The spatial scalability under this model is estimated using standard electronic routing methods and present-day fabrication constraints. Based on reasonable assumptions for qubit control and readout, we estimate that 10^2 to 10^5 physical qubits, depending on the quantum interconnect implementation, can be integrated and fanned-out independently. Assuming relatively long control-free interconnects, the scalability can be extended. Ultimately, universal quantum computation may necessitate a much higher number of integrated qubits, indicating that higher dimensional electronics fabrication and/or multiplexed distributed control and readout schemes may be the preferred strategy for large-scale implementation.

  7. Survey of protein–DNA interactions in Aspergillus oryzae on a genomic scale

    Science.gov (United States)

    Wang, Chao; Lv, Yangyong; Wang, Bin; Yin, Chao; Lin, Ying; Pan, Li

    2015-01-01

    The genome-scale delineation of in vivo protein–DNA interactions is key to understanding genome function. Only ∼5% of transcription factors (TFs) in the Aspergillus genus have been identified using traditional methods. Although the Aspergillus oryzae genome contains >600 TFs, knowledge of the in vivo genome-wide TF-binding sites (TFBSs) in aspergilli remains limited because of the lack of high-quality antibodies. We investigated the landscape of in vivo protein–DNA interactions across the A. oryzae genome through coupling the DNase I digestion of intact nuclei with massively parallel sequencing and the analysis of cleavage patterns in protein–DNA interactions at single-nucleotide resolution. The resulting map identified overrepresented de novo TF-binding motifs from genomic footprints, and provided the detailed chromatin remodeling patterns and the distribution of digital footprints near transcription start sites. The TFBSs of 19 known Aspergillus TFs were also identified based on DNase I digestion data surrounding potential binding sites in conjunction with TF binding specificity information. We observed that the cleavage patterns of TFBSs were dependent on the orientation of TF motifs and independent of strand orientation, consistent with the DNA shape features of binding motifs with flanking sequences. PMID:25883143

  8. GIGGLE: a search engine for large-scale integrated genome analysis.

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-02-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.
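
GIGGLE's core operation is counting overlaps between query intervals and indexed interval files. Its actual index is more sophisticated, but the classic sorted-endpoints trick below (a simplification, not GIGGLE's implementation) shows how overlap counts can be answered with two binary searches instead of a linear scan.

```python
from bisect import bisect_left, bisect_right

def count_overlaps(starts, ends, q_start, q_end):
    """Count half-open intervals [s, e) overlapping the query [q_start, q_end).

    `starts` and `ends` are the interval endpoints, each sorted independently.
    An interval fails to overlap only if it ends at or before q_start, or
    starts at or after q_end; subtract both cases from the total.
    """
    total = len(starts)
    ending_before = bisect_right(ends, q_start)          # e <= q_start
    starting_after = total - bisect_left(starts, q_end)  # s >= q_end
    return total - ending_before - starting_after

# Toy genome interval file (hypothetical coordinates).
intervals = [(1, 5), (3, 8), (10, 12)]
starts = sorted(s for s, _ in intervals)
ends = sorted(e for _, e in intervals)
print(count_overlaps(starts, ends, 4, 11))  # all 3 intervals overlap [4, 11)
```

Each query costs O(log n) regardless of how many of the billions of indexed intervals overlap nothing, which is the kind of scaling behavior the abstract highlights.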

  9. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    International Nuclear Information System (INIS)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-01-01

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem.

  10. Mechanistically-Based Field-Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Tim Scheibe; Alexandre Tartakovsky; Brian Wood; Joe Seymour

    2007-04-19

    Effective environmental management of DOE sites requires reliable prediction of reactive transport phenomena. A central issue in prediction of subsurface reactive transport is the impact of multiscale physical, chemical, and biological heterogeneity. Heterogeneity manifests itself through incomplete mixing of reactants at scales below those at which concentrations are explicitly defined (i.e., the numerical grid scale). This results in a mismatch between simulated reaction processes (formulated in terms of average concentrations) and actual processes (controlled by local concentrations). At the field scale, this results in apparent scale-dependence of model parameters and inability to utilize laboratory parameters in field models. Accordingly, most field modeling efforts are restricted to empirical estimation of model parameters by fitting to field observations, which renders extrapolation of model predictions beyond fitted conditions unreliable. The objective of this project is to develop a theoretical and computational framework for (1) connecting models of coupled reactive transport from pore-scale processes to field-scale bioremediation through a hierarchy of models that maintain crucial information from the smaller scales at the larger scales; and (2) quantifying the uncertainty that is introduced by both the upscaling process and uncertainty in physical parameters. One of the challenges of addressing scale-dependent effects of coupled processes in heterogeneous porous media is the problem-specificity of solutions. Much effort has been aimed at developing generalized scaling laws or theories, but these require restrictive assumptions that render them ineffective in many real problems. We propose instead an approach that applies physical and numerical experiments at small scales (specifically the pore scale) to a selected model system in order to identify the scaling approach appropriate to that type of problem.

  11. Nonlinear Model Predictive Control with Constraint Satisfactions for a Quadcopter

    Science.gov (United States)

    Wang, Ye; Ramirez-Jaime, Andres; Xu, Feng; Puig, Vicenç

    2017-01-01

    This paper presents a nonlinear model predictive control (NMPC) strategy combined with constraint satisfactions for a quadcopter. The full dynamics of the quadcopter describing the attitude and position are nonlinear, which are quite sensitive to changes of inputs and disturbances. By means of constraint satisfactions, partial nonlinearities and modeling errors of the control-oriented model of full dynamics can be transformed into the inequality constraints. Subsequently, the quadcopter can be controlled by an NMPC controller with the updated constraints generated by constraint satisfactions. Finally, the simulation results applied to a quadcopter simulator are provided to show the effectiveness of the proposed strategy.

  12. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    Science.gov (United States)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
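
The economics behind dynamic voltage scaling can be sketched with the standard CMOS approximation that dynamic energy per task grows roughly with f^2 (since frequency tracks voltage and power tracks V^2 f). The constants and task parameters below are hypothetical; the optimizers in the paper refine this idea for stochastic arrivals and soft constraints.

```python
# Dynamic voltage scaling sketch with hypothetical constants.
def task_energy(cycles, freq, k=1e-9):
    """Approximate task energy, E ~ k * cycles * f^2 (CMOS rule of thumb)."""
    return k * cycles * freq ** 2

def min_feasible_freq(cycles, deadline):
    """Slowest clock that still finishes `cycles` within `deadline` seconds."""
    return cycles / deadline

cycles, deadline, f_max = 1e8, 0.5, 1e9
f_star = min_feasible_freq(cycles, deadline)  # run as slowly as the deadline allows
saving = 1 - task_energy(cycles, f_star) / task_energy(cycles, f_max)
print(f_star, saving)  # finishing exactly at the deadline saves 1 - (f*/f_max)^2
```

Running at 20% of maximum frequency here cuts energy by 96%, which is why slowing down just enough to meet each hard deadline is the optimal policy the paper builds on.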

  13. A scale-free structure prior for graphical models with applications in functional genomics.

    Directory of Open Access Journals (Sweden)

    Paul Sheridan

    The problem of reconstructing large-scale gene regulatory networks from gene expression data has garnered considerable attention in bioinformatics over the past decade, with the graphical modeling paradigm having emerged as a popular framework for inference. Analysis in a full Bayesian setting is contingent upon the assignment of a so-called structure prior, a probability distribution on networks, encoding a priori biological knowledge either in the form of supplemental data or high-level topological features. A key topological consideration is that a wide range of cellular networks are approximately scale-free, meaning that the fraction, P(k), of nodes in a network with degree k is roughly described by a power-law with exponent between 2 and 3. The standard practice, however, is to utilize a random structure prior, which favors networks with binomially distributed degree distributions. In this paper, we introduce a scale-free structure prior for graphical models based on the formula for the probability of a network under a simple scale-free network model. Unlike the random structure prior, its scale-free counterpart requires a node labeling as a parameter. In order to use this prior for large-scale network inference, we design a novel Metropolis-Hastings sampler for graphical models that includes a node labeling as a state space variable. In a simulation study, we demonstrate that the scale-free structure prior outperforms the random structure prior at recovering scale-free networks while at the same time retaining the ability to recover random networks. We then estimate a gene association network from gene expression data taken from a breast cancer tumor study, showing that the scale-free structure prior recovers hubs, including the previously unknown hub SLC39A6, a zinc transporter that has been implicated in the spread of breast cancer to the lymph nodes. Our analysis of the breast cancer expression data underscores the value of the scale-free structure prior.
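
A minimal way to see why a power-law prior favors hubs is to score degree sequences under P(k) ~ k^(-gamma). The function below is an illustrative stand-in for the paper's prior, not its actual formula, with gamma = 2.5 chosen from the scale-free range of 2 to 3 and two hypothetical ten-node degree sequences with the same edge count.

```python
import math

def log_powerlaw_prior(degrees, gamma=2.5):
    """Unnormalized log-probability of a degree sequence under P(k) ~ k^(-gamma).

    Illustrative stand-in for a scale-free structure prior; isolated nodes
    (degree 0) are skipped since the power-law is undefined there.
    """
    return sum(-gamma * math.log(k) for k in degrees if k > 0)

hub_network = [9, 1, 1, 1, 1, 1, 1, 1, 1, 1]  # one hub, many leaves (9 edges)
even_network = [2, 2, 2, 2, 2, 2, 2, 2, 2]    # homogeneous degrees (9 edges)

print(log_powerlaw_prior(hub_network) > log_powerlaw_prior(even_network))
```

With equal edge budgets, the hub-and-leaves sequence scores higher because degree-1 nodes incur zero penalty, so inside a Metropolis-Hastings sampler such a prior tilts acceptance toward hub-dominated structures.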

  14. Bayesian Model Selection under Time Constraints

    Science.gov (United States)

    Hoege, M.; Nowak, W.; Illman, W. A.

    2017-12-01

    Bayesian model selection (BMS) provides a consistent framework for rating and comparing models in multi-model inference. In cases where models of vastly different complexity compete with each other, we also face vastly different computational runtimes of such models. For instance, time series of a quantity of interest can be simulated by an autoregressive process model that takes less than a second for one run, or by a partial differential equations-based model with runtimes up to several hours or even days. Classical BMS is based on a quantity called Bayesian model evidence (BME). It determines the model weights in the selection process and resembles a trade-off between the bias of a model and its complexity. In practice, however, the runtime of a model is another factor relevant to its weight. Hence, we believe that it should be included, leading to an overall trade-off problem between bias, variance and computing effort. We approach this triple trade-off from the viewpoint of our ability to generate realizations of the models under a given computational budget. One way to obtain BME values is through sampling-based integration techniques. We start from the fact that, under time constraints, more expensive models can be sampled much less than faster models (in inverse proportion to their runtime). The computed evidence in favor of a more expensive model is statistically less significant than the evidence computed in favor of a faster model, since sampling-based strategies are always subject to statistical sampling error. We present a straightforward way to include this imbalance in the model weights that are the basis for model selection. Our approach follows directly from the idea of insufficient significance. It is based on a computationally cheap bootstrapping error estimate of model evidence and is easy to implement. The approach is illustrated in a small synthetic modeling study.
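
The runtime imbalance the abstract describes follows from two facts: under a fixed budget, the affordable sample count is inversely proportional to runtime, and the Monte Carlo standard error of a sampling-based BME estimate shrinks as 1/sqrt(N). The numbers below are illustrative, not from the study.

```python
# Hypothetical budget and per-run model costs.
budget_s = 3600.0                                 # one hour of compute
runtimes = {"autoregressive": 0.5, "pde_model": 120.0}  # seconds per run

# Affordable Monte Carlo samples under the budget, one model at a time.
samples = {m: int(budget_s / t) for m, t in runtimes.items()}

# Relative Monte Carlo error of a sampling-based BME estimate ~ 1/sqrt(N).
rel_error = {m: 1.0 / n ** 0.5 for m, n in samples.items()}
print(samples, rel_error)  # the PDE model's evidence is far less certain
```

The cheap model gets 7,200 runs and the expensive one only 30, so even an identical BME value carries roughly 15 times more sampling error for the PDE model, which is the imbalance the proposed weighting corrects for.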

  15. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n^2 log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  16. Evaluating Direct Manipulation Operations for Constraint-Based Layout

    OpenAIRE

    Zeidler, Clemens; Lutteroth, Christof; Stuerzlinger, Wolfgang; Weber, Gerald

    2013-01-01

    Part 11: Interface Layout and Data Entry; International audience; Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are more powerful than other layout managers. However, they are also more complex and their layouts are prone to problems that usually require direct editing of constraints. Today, designers commonly use GUI builders to specify GUIs. The complexities of traditional approaches to constraint-based layouts pose c...

  17. Sequential computation of elementary modes and minimal cut sets in genome-scale metabolic networks using alternate integer linear programming.

    Science.gov (United States)

    Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami

    2017-08-01

    Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
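The LP building block of such methods rests on the fact that steady-state flux vectors lie in the null space of the stoichiometric matrix. The toy sketch below is not the AILP implementation (which alternates an IP over reaction-deletion sets with such LPs); it only recovers the single elementary mode of a three-reaction linear pathway via the null space:

```python
import numpy as np

# Toy linear pathway:  -> A -> B ->  with reactions R1, R2, R3.
# Steady state requires S v = 0 for the internal metabolites A and B.
S = np.array([[1.0, -1.0,  0.0],   # A: produced by R1, consumed by R2
              [0.0,  1.0, -1.0]])  # B: produced by R2, consumed by R3

# The right null space of S holds all steady-state flux vectors.
_, _, Vt = np.linalg.svd(S)
mode = Vt[-1]            # 1-dimensional null space for this toy network
mode = mode / mode[0]    # normalize so the uptake flux equals 1
print(mode)              # -> [1. 1. 1.], the network's single elementary mode
```

In a genome-scale network the null space is high-dimensional, which is exactly why enumerating its elementary (support-minimal, sign-constrained) generators requires combinatorial machinery such as AILP.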

  18. Finding Nemo's Genes: A chromosome-scale reference assembly of the genome of the orange clownfish Amphiprion percula

    KAUST Repository

    Lehmann, Robert; Lightfoot, Damien J; Schunter, Celia Marei; Michell, Craig T; Ohyanagi, Hajime; Mineta, Katsuhiko; Foret, Sylvain; Berumen, Michael L.; Miller, David J; Aranda, Manuel; Gojobori, Takashi; Munday, Philip L; Ravasi, Timothy

    2018-01-01

    The iconic orange clownfish, Amphiprion percula, is a model organism for studying the ecology and evolution of reef fishes, including patterns of population connectivity, sex change, social organization, habitat selection and adaptation to climate change. Notably, the orange clownfish is the only reef fish for which a complete larval dispersal kernel has been established and was the first fish species for which it was demonstrated that anti-predator responses of reef fishes could be impaired by ocean acidification. Despite its importance, molecular resources for this species remain scarce and until now it lacked a reference genome assembly. Here we present a de novo chromosome-scale assembly of the genome of the orange clownfish Amphiprion percula. We utilized single-molecule real-time sequencing technology from Pacific Biosciences to produce an initial polished assembly comprising 1,414 contigs, with a contig N50 length of 1.86 Mb. Using Hi-C based chromatin contact maps, 98% of the genome assembly was placed into 24 chromosomes, resulting in a final assembly of 908.8 Mb in length with contig and scaffold N50s of 3.12 and 38.4 Mb, respectively. This makes it one of the most contiguous and complete fish genome assemblies currently available. The genome was annotated with 26,597 protein-coding genes and contains 96% of the core set of conserved actinopterygian orthologs. The availability of this reference genome assembly as a community resource will further strengthen the role of the orange clownfish as a model species for research on the ecology and evolution of reef fishes.

  19. Finding Nemo's Genes: A chromosome-scale reference assembly of the genome of the orange clownfish Amphiprion percula

    KAUST Repository

    Lehmann, Robert

    2018-03-08

    The iconic orange clownfish, Amphiprion percula, is a model organism for studying the ecology and evolution of reef fishes, including patterns of population connectivity, sex change, social organization, habitat selection and adaptation to climate change. Notably, the orange clownfish is the only reef fish for which a complete larval dispersal kernel has been established and was the first fish species for which it was demonstrated that anti-predator responses of reef fishes could be impaired by ocean acidification. Despite its importance, molecular resources for this species remain scarce and until now it lacked a reference genome assembly. Here we present a de novo chromosome-scale assembly of the genome of the orange clownfish Amphiprion percula. We utilized single-molecule real-time sequencing technology from Pacific Biosciences to produce an initial polished assembly comprising 1,414 contigs, with a contig N50 length of 1.86 Mb. Using Hi-C based chromatin contact maps, 98% of the genome assembly was placed into 24 chromosomes, resulting in a final assembly of 908.8 Mb in length with contig and scaffold N50s of 3.12 and 38.4 Mb, respectively. This makes it one of the most contiguous and complete fish genome assemblies currently available. The genome was annotated with 26,597 protein-coding genes and contains 96% of the core set of conserved actinopterygian orthologs. The availability of this reference genome assembly as a community resource will further strengthen the role of the orange clownfish as a model species for research on the ecology and evolution of reef fishes.

  20. Judgement of Design Scheme Based on Flexible Constraint in ICAD

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The concept of a flexible constraint is proposed in this paper. The solution of a flexible constraint lies within a specified range and may differ between instances of the same design scheme. The paper emphasizes how to evaluate and optimize a design scheme with flexible constraints, based on a satisfaction degree function defined on those constraints. The concept of a flexible constraint is used to resolve constraint conflicts and to optimize designs in complicated constraint-based assembly design within the PFM parametric assembly design system. A gear-box design example is used to verify the optimization method.

  1. GIGGLE: a search engine for large-scale integrated genome analysis

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-01-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061
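Interval-overlap counting, the core query behind such a search engine, can be sketched with two sorted coordinate arrays. This is only an illustrative stand-in: GIGGLE itself builds a unified index over thousands of interval files and ranks shared loci statistically, rather than using this small in-memory helper (names here are invented).

```python
import bisect

def count_overlaps(index_intervals, query):
    """Count indexed intervals overlapping a half-open query interval.

    Illustrative sketch of interval-overlap scoring; GIGGLE itself
    indexes thousands of sorted interval files, not an in-memory list.
    """
    starts = sorted(s for s, _ in index_intervals)
    ends = sorted(e for _, e in index_intervals)
    qs, qe = query
    started = bisect.bisect_left(starts, qe)    # start before the query ends
    finished = bisect.bisect_right(ends, qs)    # already ended by query start
    return started - finished

peaks = [(100, 200), (150, 300), (400, 500)]
print(count_overlaps(peaks, (180, 350)))  # -> 2
```

Because both lookups are binary searches, each query costs O(log n) after an O(n log n) sort, which hints at why index-based approaches scale to billions of intervals.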

  2. Flavour violation in gauge-mediated supersymmetry breaking models: Experimental constraints and phenomenology at the LHC

    International Nuclear Information System (INIS)

    Fuks, Benjamin; Herrmann, Bjoern; Klasen, Michael

    2009-01-01

    We present an extensive analysis of gauge-mediated supersymmetry breaking models with minimal and non-minimal flavour violation. We first demonstrate that low-energy, precision electroweak, and cosmological constraints exclude large 'collider-friendly' regions of the minimal parameter space. We then discuss various ways in which flavour violation, although naturally suppressed, may still occur in gauge-mediation models. The introduction of non-minimal flavour violation at the electroweak scale is shown to relax the stringent experimental constraints, so that cosmologically viable benchmark points can be defined and their phenomenology, i.e. squark and gaugino production cross sections with flavour violation, can be studied at the LHC

  3. Iterated non-linear model predictive control based on tubes and contractive constraints.

    Science.gov (United States)

    Murillo, M; Sánchez, G; Giovanini, L

    2016-05-01

    This paper presents a predictive control algorithm for non-linear systems based on successive linearizations of the non-linear dynamics around a given trajectory. A linear time-varying model is obtained and the non-convex constrained optimization problem is transformed into a sequence of locally convex ones. The robustness of the proposed algorithm is addressed by adding a convex contractive constraint. To account for linearization errors and to obtain more accurate results, an inner iteration loop is added to the algorithm. A simple methodology to obtain an outer bounding tube for state trajectories is also presented. The convergence of the iterative process and the stability of the closed-loop system are analyzed. The simulation results show the effectiveness of the proposed algorithm in controlling a quadcopter-type unmanned aerial vehicle. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    Science.gov (United States)

    Luo, L.; Wang, Z.

    2010-12-01

    The Soil Conservation Service Curve Number (SCS-CN) based hydrologic model has been widely used for agricultural watersheds in recent years. However, relative errors arise when it is applied across differing geographical and climatological conditions. This paper introduces a more adaptable and transferable model based on the modified SCS-CN method, specialized for research regions at two different scales. SCS-CN based models were established for the typical hydrometeorological and surface conditions of the Zhanghe irrigation district in southern China. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the basin-scale and field-scale study areas in the Zhanghe irrigation district. Applications were thus extended from an ordinary meso-scale watershed to the field scale in this paddy-field-dominated irrigated district. Based on measured data on land use, soil classification, hydrology and meteorology, quantitative evaluations and modifications were proposed for two coefficients, i.e. the antecedent loss and the runoff curve number, together with a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitted to the study cases. Simulation precision was increased by introducing a simplified 12-h unit hydrograph for the field area. Comparison between the two scales shows that the SCS-CN model is applied more effectively at the field scale after its parameters have been calibrated at the basin scale. These results can help reveal the rainfall-runoff behavior of the district. Differences between the model parameters established for the two study regions are also considered: varied forms of land use and the impacts of human activities were important factors affecting the rainfall-runoff relations in the Zhanghe irrigation district.
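The core SCS-CN runoff relation that such studies recalibrate can be written as a small function. The standard form below uses the SI retention formula and the classical initial-abstraction ratio of 0.2; the paper's contribution is precisely the regional recalibration of such coefficients, so treat these defaults as placeholders:

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth (mm) from the SCS-CN method.

    p_mm: storm rainfall (mm); cn: curve number in (0, 100].
    S is the potential maximum retention and Ia the initial
    abstraction; the 0.2 ratio is the classical default that
    regional studies recalibrate, so it is a placeholder here.
    """
    s = 25400.0 / cn - 254.0     # retention, SI (millimetre) form
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0               # all rainfall lost to initial abstraction
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(round(scs_cn_runoff(100.0, 75.0), 1))  # -> 41.1
```

A higher curve number (less pervious surface) yields more runoff for the same storm, which is what the land-use-specific CN tables in such studies encode.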

  5. Genomics, evolution and development of amphioxus and tunicates: The Goldilocks principle.

    Science.gov (United States)

    Holland, Linda Z

    2015-06-01

    Morphological comparisons among extant animals have long been used to infer their long-extinct ancestors for which the fossil record is poor or non-existent. For evolution of the vertebrates, the comparison has typically involved amphioxus and vertebrates. Both groups are evolving relatively slowly, and their genomes share a high level of synteny. Both vertebrates and amphioxus have regulative development in which cell fates become fixed only gradually during embryogenesis. Thus, their development fits a modified hourglass model in which constraints are greatest at the phylotypic stage (i.e., the late neurula/early larva), but are somewhat greater on earlier development than on later development. In contrast, the third group of chordates, the tunicates, which are the sister group of vertebrates, are evolving rapidly. Constraints on evolution of tunicate genomes are relaxed, and they have discarded key developmental genes and organized much of their coding sequences into operons, which are transcribed as a single mRNA that undergoes trans-splicing. This contrasts with vertebrates and amphioxus, whose genomes are not organized into operons. Concomitantly, tunicates have switched to determinate development with very early fixation of cell fates. Thus, tunicate development more closely fits a progressive divergence model (shaped more like a wine glass than an hourglass) in which the constraints on the zygote and very early development are greatest. This model can help explain why tunicate body plans are so very diverse. The relaxed constraints on development after early cleavage stages are correlated with relaxed constraints on genome evolution. The question remains: which came first? © 2014 Wiley Periodicals, Inc.

  6. A Novel Spatial-Temporal Voronoi Diagram-Based Heuristic Approach for Large-Scale Vehicle Routing Optimization with Time Constraints

    Directory of Open Access Journals (Sweden)

    Wei Tu

    2015-10-01

    Full Text Available Vehicle routing optimization (VRO designs the best routes to reduce travel cost, energy consumption, and carbon emission. Due to non-deterministic polynomial-time hard (NP-hard complexity, many VROs involved in real-world applications require too much computing effort. Shortening computing time for VRO is a great challenge for state-of-the-art spatial optimization algorithms. From a spatial-temporal perspective, this paper presents a spatial-temporal Voronoi diagram-based heuristic approach for large-scale vehicle routing problems with time windows (VRPTW. Considering time constraints, a spatial-temporal Voronoi distance is derived from the spatial-temporal Voronoi diagram to find near neighbors in the space-time searching context. A Voronoi distance decay strategy that integrates a time warp operation is proposed to accelerate local search procedures. A spatial-temporal feature-guided search is developed to improve unpromising micro route structures. Experiments on VRPTW benchmarks and real-world instances are conducted to verify performance. The results demonstrate that the proposed approach is competitive with state-of-the-art heuristics and achieves high-quality solutions for large-scale instances of VRPTWs in a short time. This novel approach will contribute to spatial decision support community by developing an effective vehicle routing optimization method for large transportation applications in both public and private sectors.
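A spatial-temporal distance of the kind the approach relies on can be sketched by blending travel distance with time-window incompatibility. The weighting and unit travel speed below are assumptions made for illustration; the paper derives its distance from a spatial-temporal Voronoi diagram rather than this simple closed form:

```python
import math

def st_distance(ci, cj, alpha=0.5):
    """Spatial-temporal distance between two customers (sketch).

    Each customer is (x, y, ready_time, due_time). Euclidean travel
    distance is blended with a time-window incompatibility penalty;
    this weighted form and the unit travel speed are illustrative
    assumptions, not the paper's Voronoi-derived distance.
    """
    dx, dy = ci[0] - cj[0], ci[1] - cj[1]
    spatial = math.hypot(dx, dy)
    arrival = ci[2] + spatial            # leave i at its ready time, unit speed
    wait = max(0.0, cj[2] - arrival)     # too early: wait until j is ready
    late = max(0.0, arrival - cj[3])     # too late: due-time violation
    return alpha * spatial + (1.0 - alpha) * (wait + late)

a = (0.0, 0.0, 0.0, 10.0)
b = (3.0, 4.0, 8.0, 20.0)
print(st_distance(a, b))  # spatial 5.0, wait 3.0 -> 4.0
```

Such a distance makes "near neighbors" in the local search mean customers that are close in space and compatible in time, which is what prunes unpromising moves in large VRPTW instances.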

  7. Unique attributes of cyanobacterial metabolism revealed by improved genome-scale metabolic modeling and essential gene analysis

    Science.gov (United States)

    Broddrick, Jared T.; Rubin, Benjamin E.; Welkie, David G.; Du, Niu; Mih, Nathan; Diamond, Spencer; Lee, Jenny J.; Golden, Susan S.; Palsson, Bernhard O.

    2016-01-01

    The model cyanobacterium, Synechococcus elongatus PCC 7942, is a genetically tractable obligate phototroph that is being developed for the bioproduction of high-value chemicals. Genome-scale models (GEMs) have been successfully used to assess and engineer cellular metabolism; however, GEMs of phototrophic metabolism have been limited by the lack of experimental datasets for model validation and the challenges of incorporating photon uptake. Here, we develop a GEM of metabolism in S. elongatus using random barcode transposon site sequencing (RB-TnSeq) essential gene and physiological data specific to photoautotrophic metabolism. The model explicitly describes photon absorption and accounts for shading, resulting in the characteristic linear growth curve of photoautotrophs. GEM predictions of gene essentiality were compared with data obtained from recent dense-transposon mutagenesis experiments. This dataset allowed major improvements to the accuracy of the model. Furthermore, discrepancies between GEM predictions and the in vivo dataset revealed biological characteristics, such as the importance of a truncated, linear TCA pathway, low flux toward amino acid synthesis from photorespiration, and knowledge gaps within nucleotide metabolism. Coupling of strong experimental support and photoautotrophic modeling methods thus resulted in a highly accurate model of S. elongatus metabolism that highlights previously unknown areas of S. elongatus biology. PMID:27911809

  8. A Hybrid Autonomic Computing-Based Approach to Distributed Constraint Satisfaction Problems

    Directory of Open Access Journals (Sweden)

    Abhishek Bhatia

    2015-03-01

    Full Text Available Distributed constraint satisfaction problems (DisCSPs) are among the widely endeavored problems using agent-based simulation. Fernandez et al. formulated the sensor and mobile tracking problem as a DisCSP, known as SensorDCSP. In this paper, we adopt a customized ERE (environment, reactive rules and entities) algorithm for the SensorDCSP, which is otherwise proven to be a computationally intractable problem. An amalgamation of the autonomy-oriented computing (AOC) based algorithm (ERE) and a genetic algorithm (GA) provides an early solution of the modeled DisCSP. Incorporation of the GA into ERE facilitates auto-tuning of the simulation parameters, thereby leading to an early solution of constraint satisfaction. This study further contributes a model, built in the NetLogo simulation environment, to infer the efficacy of the proposed approach.

  9. An agent-based analysis of the German electricity market with transmission capacity constraints

    International Nuclear Information System (INIS)

    Veit, Daniel J.; Weidlich, Anke; Krafft, Jacob A.

    2009-01-01

    While some agent-based models have been developed for analyzing the German electricity market, there has been little research done on the emerging issue of intra-German congestion and its effects on the bidding behavior of generator agents. Yet, studies of other markets have shown that transmission grid constraints considerably affect strategic behavior in electricity markets. In this paper, the implications of transmission constraints on power markets are analyzed for the case of Germany. Market splitting is applied in the case of congestion in the grid. For this purpose, the agent-based modeling of electricity systems (AMES) market package developed by Sun and Tesfatsion is modified to fit the German context, including a detailed representation of the German high-voltage grid and its interconnections. Implications of transmission constraints on prices and social welfare are analyzed for scenarios that include strategic behavior of market participants and high wind power generation. It can be shown that strategic behavior and transmission constraints are inter-related and may pose severe problems in the future German electricity market.

  10. An agent-based analysis of the German electricity market with transmission capacity constraints

    Energy Technology Data Exchange (ETDEWEB)

    Veit, Daniel J.; Weidlich, Anke; Krafft, Jacob A. [University of Mannheim, Dieter Schwarz Chair of Business Administration, E-Business and E-Government, 68131 Mannheim (Germany)

    2009-10-15

    While some agent-based models have been developed for analyzing the German electricity market, there has been little research done on the emerging issue of intra-German congestion and its effects on the bidding behavior of generator agents. Yet, studies of other markets have shown that transmission grid constraints considerably affect strategic behavior in electricity markets. In this paper, the implications of transmission constraints on power markets are analyzed for the case of Germany. Market splitting is applied in the case of congestion in the grid. For this purpose, the agent-based modeling of electricity systems (AMES) market package developed by Sun and Tesfatsion is modified to fit the German context, including a detailed representation of the German high-voltage grid and its interconnections. Implications of transmission constraints on prices and social welfare are analyzed for scenarios that include strategic behavior of market participants and high wind power generation. It can be shown that strategic behavior and transmission constraints are inter-related and may pose severe problems in the future German electricity market. (author)

  11. GenomeVx: simple web-based creation of editable circular chromosome maps.

    Science.gov (United States)

    Conant, Gavin C; Wolfe, Kenneth H

    2008-03-15

    We describe GenomeVx, a web-based tool for making editable, publication-quality, maps of mitochondrial and chloroplast genomes and of large plasmids. These maps show the location of genes and chromosomal features as well as a position scale. The program takes as input either raw feature positions or GenBank records. In the latter case, features are automatically extracted and colored, an example of which is given. Output is in the Adobe Portable Document Format (PDF) and can be edited by programs such as Adobe Illustrator. GenomeVx is available at http://wolfe.gen.tcd.ie/GenomeVx

  12. Implementing network constraints in the EMPS model

    Energy Technology Data Exchange (ETDEWEB)

    Helseth, Arild; Warland, Geir; Mo, Birger; Fosso, Olav B.

    2010-02-15

    This report concerns the coupling of detailed market and network models for long-term hydro-thermal scheduling. Currently, the EPF model (Samlast) is the only tool available for this task for actors in the Nordic market. A new prototype for solving the coupled market and network problem has been developed. The prototype is based on the EMPS model (Samkjoeringsmodellen). Results from the market model are distributed to a detailed network model, where a DC load flow detects if there are overloads on monitored lines or intersections. In case of overloads, network constraints are generated and added to the market problem. Theoretical and implementation details for the new prototype are elaborated in this report. The performance of the prototype is tested against the EPF model on a 20-area Nordic dataset. (Author)
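The DC load flow used to detect overloads can be sketched on a three-bus example: solve the reduced susceptance system for bus angles, then read line flows off angle differences. This is a generic textbook DC power flow with invented numbers, not the EMPS/Samlast implementation or grid data:

```python
import numpy as np

# DC load flow on a 3-bus test system; bus 0 is the slack bus.
# Lines (i, j, susceptance in p.u.); flows follow f = b * (theta_i - theta_j).
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 10.0)]
n = 3
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b
    B[j, j] += b
    B[i, j] -= b
    B[j, i] -= b

p = np.array([0.9, -0.3, -0.6])  # net injections; the slack bus balances
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], p[1:])  # reduced system, theta_0 = 0

flows = [b * (theta[i] - theta[j]) for i, j, b in lines]
print(np.round(flows, 3))                    # -> [0.4 0.5 0.1]
overloaded = [abs(f) > 0.45 for f in flows]  # assumed 0.45 p.u. line limit
print(overloaded)                            # -> [False, True, False]
```

In the coupled scheme described above, any `True` entry would trigger generation of a network constraint that is fed back into the market problem.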

  13. SWPhylo - A Novel Tool for Phylogenomic Inferences by Comparison of Oligonucleotide Patterns and Integration of Genome-Based and Gene-Based Phylogenetic Trees.

    Science.gov (United States)

    Yu, Xiaoyu; Reva, Oleg N

    2018-01-01

    Modern phylogenetic studies may benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than the gene-based alternative. However, the computational complexity of current phylogenomic procedures, the inappropriateness of standard phylogenetic tools for processing genome-wide data, and the lack of reliable substitution models for alignment-free phylogenomic approaches deter microbiologists from using these opportunities. For example, the super-matrix and super-tree approaches of phylogenomics use multiple integrated genomic loci or individual gene-based trees to infer an overall consensus tree. However, these approaches potentially multiply errors of gene annotation and sequence alignment, not to mention the computational complexity and laboriousness of the methods. In this article, we demonstrate that the annotation- and alignment-free comparison of genome-wide tetranucleotide frequencies, termed oligonucleotide usage patterns (OUPs), allowed a fast and reliable inference of phylogenetic trees. These were congruent with the corresponding whole-genome super-matrix trees in terms of tree topology when compared with other known approaches, including 16S ribosomal RNA and GyrA protein sequence comparison, complete genome-based MAUVE, and CVTree methods. A Web-based program to perform the alignment-free OUP-based phylogenomic inferences was implemented at http://swphylo.bi.up.ac.za/. Applicability of the tool was tested on different taxa from subspecies to intergeneric levels. Distinguishing between closely related taxonomic units may be reinforced by providing the program with alignments of marker protein sequences, e.g., GyrA.
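Computing and comparing oligonucleotide usage patterns is straightforward, which is part of the method's appeal. The sketch below counts tetranucleotide frequencies and takes a Euclidean distance between patterns; the published tool infers whole trees from such pattern distances, and both the function names and the Euclidean choice here are illustrative:

```python
from collections import Counter
from itertools import product
import math

def oup(seq, k=4):
    """Normalized k-mer (tetranucleotide by default) frequency vector."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return [counts[''.join(p)] / total for p in product('ACGT', repeat=k)]

def oup_distance(seq_a, seq_b):
    """Euclidean distance between two oligonucleotide usage patterns.

    Alignment- and annotation-free: no gene calls or sequence
    alignment are needed, only raw genome sequences.
    """
    fa, fb = oup(seq_a), oup(seq_b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)))

print(oup_distance("ACGT" * 10, "ACGT" * 10))  # -> 0.0 for identical genomes
```

A pairwise matrix of such distances can be fed directly to a standard distance-based tree builder (e.g., neighbor joining), which is what makes the approach fast at genome scale.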

  14. Linking genes to ecosystem trace gas fluxes in a large-scale model system

    Science.gov (United States)

    Meredith, L. K.; Cueva, A.; Volkmann, T. H. M.; Sengupta, A.; Troch, P. A.

    2017-12-01

    Soil microorganisms mediate biogeochemical cycles through biosphere-atmosphere gas exchange with significant impact on atmospheric trace gas composition. Improving process-based understanding of these microbial populations and linking their genomic potential to the ecosystem-scale is a challenge, particularly in soil systems, which are heterogeneous in biodiversity, chemistry, and structure. In oligotrophic systems, such as the Landscape Evolution Observatory (LEO) at Biosphere 2, atmospheric trace gas scavenging may supply critical metabolic needs to microbial communities, thereby promoting tight linkages between microbial genomics and trace gas utilization. This large-scale model system of three initially homogenous and highly instrumented hillslopes facilitates high temporal resolution characterization of subsurface trace gas fluxes at hundreds of sampling points, making LEO an ideal location to study microbe-mediated trace gas fluxes from the gene to ecosystem scales. Specifically, we focus on the metabolism of ubiquitous atmospheric reduced trace gases hydrogen (H2), carbon monoxide (CO), and methane (CH4), which may have wide-reaching impacts on microbial community establishment, survival, and function. Additionally, microbial activity on LEO may facilitate weathering of the basalt matrix, which can be studied with trace gas measurements of carbonyl sulfide (COS/OCS) and carbon dioxide (O-isotopes in CO2), and presents an additional opportunity for gene to ecosystem study. This work will present initial measurements of this suite of trace gases to characterize soil microbial metabolic activity, as well as links between spatial and temporal variability of microbe-mediated trace gas fluxes in LEO and their relation to genomic-based characterization of microbial community structure (phylogenetic amplicons) and genetic potential (metagenomics). Results from the LEO model system will help build understanding of the importance of atmospheric inputs to

  15. Technical note: Equivalent genomic models with a residual polygenic effect.

    Science.gov (United States)

    Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R

    2016-03-01

    Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or a single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was verified also for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers with the genes or causal mutations responsible for genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we first prove that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals when they include a residual polygenic effect. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we prove that the equivalence of these 2 genomic models with a residual polygenic effect also holds for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal predictions for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
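The basic GBLUP/SNP BLUP equivalence can be demonstrated numerically via the ridge-regression push-through identity Z'(ZZ' + lambda*I)^(-1) = (Z'Z + lambda*I)^(-1) Z'. The toy example below deliberately omits the residual polygenic effect and fixed effects that the note actually handles, and all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, lam = 8, 50, 2.0            # animals, SNP markers, ridge parameter
Z = rng.standard_normal((n, m))   # centered genotype matrix (toy data)
y = rng.standard_normal(n)        # phenotypes (toy data)

# SNP BLUP: ridge estimates of marker effects, then genomic values.
snp_effects = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
gebv_snp = Z @ snp_effects

# GBLUP: identical predictions via the genomic relationship matrix ZZ'.
gebv_g = Z @ Z.T @ np.linalg.solve(Z @ Z.T + lam * np.eye(n), y)

print(np.allclose(gebv_snp, gebv_g))  # -> True
```

The GBLUP route solves an n-by-n system while the SNP BLUP route solves an m-by-m one, so the equivalence lets practitioners pick whichever dimension is smaller.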

  16. Experience from large scale use of the EuroGenomics custom SNP chip in cattle

    DEFF Research Database (Denmark)

    Boichard, Didier A; Boussaha, Mekki; Capitan, Aurélien

    2018-01-01

    This article presents the strategy to evaluate candidate mutations underlying QTL or responsible for genetic defects, based upon the design and large-scale use of the Eurogenomics custom SNP chip set up for bovine genomic selection. Some variants under study originated from mapping genetic defect...

  17. Multiple Constraints Based Robust Matching of Poor-Texture Close-Range Images for Monitoring a Simulated Landslide

    Directory of Open Access Journals (Sweden)

    Gang Qiao

    2016-05-01

    Full Text Available Landslides are one of the most destructive geo-hazards and can bring about great threats to both human lives and infrastructure. Landslide monitoring has always been a research hotspot. In particular, landslide simulation experimentation is an effective tool in landslide research for obtaining critical parameters that help understand the mechanism and evaluate the triggering and controlling factors of slope failure. Compared with other traditional geotechnical monitoring approaches, the close-range photogrammetry technique shows potential in tracking and recording the 3D surface deformation and failure processes. In such cases, image matching usually plays a critical role in stereo image processing for 3D geometric reconstruction. However, complex imaging conditions such as rainfall, mass movement, illumination, and ponding will reduce the texture quality of the stereo images, bringing about difficulties in the image matching process and resulting in very sparse matches. To address this problem, this paper presents a multiple-constraint based robust image matching approach for poor-texture close-range images, particularly useful in monitoring a simulated landslide. The Scale Invariant Feature Transform (SIFT) algorithm was first applied to the stereo images to generate scale-invariant feature points, followed by a two-step matching process: feature-based image matching and area-based image matching. In the first, feature-based matching step, the triangulation process was performed based on the SIFT matches filtered by the Fundamental Matrix (FM) and a robust checking procedure, to serve as the basic constraints for feature-based iterated matching of all the non-matched SIFT-derived feature points inside each triangle. In the following area-based image-matching step, the corresponding points of the non-matched features in each triangle of the master image were predicted in the homologous triangle of the searching image by using geometric

  18. Rainbow: a tool for large-scale whole-genome sequencing data analysis using cloud computing.

    Science.gov (United States)

    Zhao, Shanrong; Prenger, Kurt; Smith, Lance; Messina, Thomas; Fan, Hongtao; Jaeger, Edward; Stephens, Susan

    2013-06-27

    Technical improvements have decreased sequencing costs and, as a result, the size and number of genomic datasets have increased rapidly. Because of the lower cost, large amounts of sequence data are now being produced by small to midsize research groups. Crossbow is a software tool that can detect single nucleotide polymorphisms (SNPs) in whole-genome sequencing (WGS) data from a single subject; however, Crossbow has a number of limitations when applied to multiple subjects from large-scale WGS projects. The data storage and CPU resources required for large-scale whole-genome sequencing data analyses are too large for many core facilities and individual laboratories to provide. To help meet these challenges, we have developed Rainbow, a cloud-based software package that can assist in the automation of large-scale WGS data analyses. Here, we evaluated the performance of Rainbow by analyzing 44 different whole-genome-sequenced subjects. Rainbow has the capacity to process genomic data from more than 500 subjects in two weeks using cloud computing provided by the Amazon Web Service. This time includes the import and export of the data using the Amazon Import/Export service. The average cost of processing a single sample in the cloud was less than 120 US dollars. Compared with Crossbow, the main improvements incorporated into Rainbow include the ability: (1) to handle BAM as well as FASTQ input files; (2) to split large sequence files for better load balance downstream; (3) to log running metrics during data processing and to monitor multiple Amazon Elastic Compute Cloud (EC2) instances; and (4) to merge SOAPsnp outputs for multiple individuals into a single file to facilitate downstream genome-wide association studies. Rainbow is a scalable, cost-effective, and open-source tool for large-scale WGS data analysis. For human WGS data sequenced on either the Illumina HiSeq 2000 or HiSeq 2500 platform, Rainbow can be used straight out of the box. 
Rainbow is available

  19. Genomic Footprints of Selective Sweeps from Metabolic Resistance to Pyrethroids in African Malaria Vectors Are Driven by Scale up of Insecticide-Based Vector Control.

    Science.gov (United States)

    Barnes, Kayla G; Weedall, Gareth D; Ndula, Miranda; Irving, Helen; Mzihalowa, Themba; Hemingway, Janet; Wondji, Charles S

    2017-02-01

    Insecticide resistance in mosquito populations threatens recent successes in malaria prevention. Elucidating patterns of genetic structure in malaria vectors to predict the speed and direction of the spread of resistance is essential to get ahead of the 'resistance curve' and to avert a public health catastrophe. Here, applying a combination of microsatellite analysis, whole genome sequencing and targeted sequencing of a resistance locus, we elucidated the continent-wide population structure of a major African malaria vector, Anopheles funestus. We identified a major selective sweep in a genomic region controlling cytochrome P450-based metabolic resistance conferring high resistance to pyrethroids. This selective sweep occurred since 2002, likely as a direct consequence of scaled up vector control as revealed by whole genome and fine-scale sequencing of pre- and post-intervention populations. Fine-scaled analysis of the pyrethroid resistance locus revealed that a resistance-associated allele of the cytochrome P450 monooxygenase CYP6P9a has swept through southern Africa to near fixation, in contrast to high polymorphism levels before interventions, conferring high levels of pyrethroid resistance linked to control failure. Population structure analysis revealed a barrier to gene flow between southern Africa and other areas, which may prevent or slow the spread of the southern mechanism of pyrethroid resistance to other regions. By identifying a genetic signature of pyrethroid-based interventions, we have demonstrated the intense selective pressure that control interventions exert on mosquito populations. If this level of selection and spread of resistance continues unabated, our ability to control malaria with current interventions will be compromised.

  20. Genomic Footprints of Selective Sweeps from Metabolic Resistance to Pyrethroids in African Malaria Vectors Are Driven by Scale up of Insecticide-Based Vector Control.

    Directory of Open Access Journals (Sweden)

    Kayla G Barnes

    2017-02-01

    Full Text Available Insecticide resistance in mosquito populations threatens recent successes in malaria prevention. Elucidating patterns of genetic structure in malaria vectors to predict the speed and direction of the spread of resistance is essential to get ahead of the 'resistance curve' and to avert a public health catastrophe. Here, applying a combination of microsatellite analysis, whole genome sequencing and targeted sequencing of a resistance locus, we elucidated the continent-wide population structure of a major African malaria vector, Anopheles funestus. We identified a major selective sweep in a genomic region controlling cytochrome P450-based metabolic resistance conferring high resistance to pyrethroids. This selective sweep occurred since 2002, likely as a direct consequence of scaled up vector control as revealed by whole genome and fine-scale sequencing of pre- and post-intervention populations. Fine-scaled analysis of the pyrethroid resistance locus revealed that a resistance-associated allele of the cytochrome P450 monooxygenase CYP6P9a has swept through southern Africa to near fixation, in contrast to high polymorphism levels before interventions, conferring high levels of pyrethroid resistance linked to control failure. Population structure analysis revealed a barrier to gene flow between southern Africa and other areas, which may prevent or slow the spread of the southern mechanism of pyrethroid resistance to other regions. By identifying a genetic signature of pyrethroid-based interventions, we have demonstrated the intense selective pressure that control interventions exert on mosquito populations. If this level of selection and spread of resistance continues unabated, our ability to control malaria with current interventions will be compromised.

  1. Low Parametric Sensitivity Realizations with relaxed L2-dynamic-range-scaling constraints

    OpenAIRE

    Hilaire , Thibault

    2009-01-01

    This paper presents a new dynamic-range scaling for the implementation of filters/controllers in state-space form. Relaxing the classical L2-scaling constraints by specific fixed-point considerations allows for a higher degree of freedom for the optimal L2-parametric sensitivity problem. However, overflows in the implementation are still prevented. The underlying constrained problem is converted into an unconstrained problem for which a solution can be provided. This leads to realizations whi...

  2. Genome scale metabolic network reconstruction of Spirochaeta cellobiosiphila

    Directory of Open Access Journals (Sweden)

    Bharat Manna

    2017-10-01

    Full Text Available The substantial rise in global energy demand is one of the biggest challenges of this century. Environmental pollution due to the rapid depletion of fossil fuel resources, and its alarming impact on climate change and global warming, have motivated researchers to look for non-petroleum-based, sustainable, eco-friendly, renewable, low-cost energy alternatives such as biofuels. Lignocellulosic biomass is one of the most promising bio-resources, with huge potential to help meet this worldwide energy demand. However, the complex organization of cellulose, hemicellulose, and lignin in lignocellulosic biomass requires extensive pre-treatment and enzymatic hydrolysis followed by fermentation, raising the overall production cost of biofuel. This encourages researchers to design cost-effective approaches for the production of second-generation biofuels. The products of enzymatic hydrolysis of cellulose are mostly glucose monomers or cellobiose units, which are subjected to fermentation. The Spirochaeta genus is a well-known group of obligate or facultative anaerobes living primarily on carbohydrate metabolism. Spirochaeta cellobiosiphila is a facultative anaerobe in this genus, which uses a variety of monosaccharides and disaccharides as energy sources. However, the most rapid growth occurs on cellobiose, and fermentation yields a significant amount of ethanol, acetate, CO2, and H2, and small amounts of formate. It is predicted to be promising microbial machinery for industrial fermentation processes for biofuel production. The metabolic pathways that govern cellobiose metabolism in Spirochaeta cellobiosiphila are yet to be explored, and functional annotation of the genome sequence of Spirochaeta cellobiosiphila is in progress. In this work we aim to map all the metabolic activities for reconstruction of a genome-scale metabolic model of Spirochaeta cellobiosiphila.

  3. Energy partitioning constraints at kinetic scales in low-β turbulence

    Science.gov (United States)

    Gershman, Daniel J.; F.-Viñas, Adolfo; Dorelli, John C.; Goldstein, Melvyn L.; Shuster, Jason; Avanov, Levon A.; Boardsen, Scott A.; Stawarz, Julia E.; Schwartz, Steven J.; Schiff, Conrad; Lavraud, Benoit; Saito, Yoshifumi; Paterson, William R.; Giles, Barbara L.; Pollock, Craig J.; Strangeway, Robert J.; Russell, Christopher T.; Torbert, Roy B.; Moore, Thomas E.; Burch, James L.

    2018-02-01

    Turbulence is a fundamental physical process through which energy injected into a system at large scales cascades to smaller scales. In collisionless plasmas, turbulence provides a critical mechanism for dissipating electromagnetic energy. Here, we present observations of plasma fluctuations in low-β turbulence using data from NASA's Magnetospheric Multiscale mission in Earth's magnetosheath. We provide constraints on the partitioning of turbulent energy density in the fluid, ion-kinetic, and electron-kinetic ranges. Magnetic field fluctuations dominated the energy density spectrum throughout the fluid and ion-kinetic ranges, consistent with previous observations of turbulence in similar plasma regimes. However, at scales shorter than the electron inertial length, fluctuation power in electron kinetic energy significantly exceeded that of the magnetic field, resulting in an electron-motion-regulated cascade at small scales. This dominance is highly relevant for the study of turbulence in highly magnetized laboratory and astrophysical plasmas.

  4. Improving the phenotype predictions of a yeast genome-scale metabolic model by incorporating enzymatic constraints

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Zhang, Xi-Cheng; Nilsson, Avlant

    2017-01-01

    , which act as limitations on metabolic fluxes, are not taken into account. Here, we present GECKO, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance...... and turnover number. We applied GECKO to a Saccharomyces cerevisiae GEM and demonstrated that the new model could correctly describe phenotypes that the previous model could not, particularly under high enzymatic pressure conditions, such as yeast growing on different carbon sources in excess, coping...... with stress, or overexpressing a specific pathway. GECKO also allows direct integration of quantitative proteomics data; by doing so, we significantly reduced the flux variability of the model in over 60% of metabolic reactions. Additionally, the model gives insight into the distribution of enzyme usage between...
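The core constraint-based calculation behind GEMs like the one GECKO extends is a linear program: maximize an objective flux subject to steady-state mass balance (Sv = 0) and capacity bounds, where an enzyme-constrained bound is the product of abundance and turnover number. Below is a minimal sketch with a three-reaction toy network; the stoichiometry, kcat, and abundance values are illustrative assumptions, not taken from the yeast model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
# R1: -> A,  R2: A -> B,  R3: B -> biomass
S = np.array([
    [1.0, -1.0,  0.0],   # mass balance for metabolite A
    [0.0,  1.0, -1.0],   # mass balance for metabolite B
])

# GECKO-style enzyme capacity: flux through R2 <= kcat * enzyme abundance
kcat, enzyme_abundance = 100.0, 0.05        # illustrative values
bounds = [(0.0, 10.0),                      # substrate uptake limit on R1
          (0.0, kcat * enzyme_abundance),   # enzyme-capacity cap on R2
          (0.0, 1000.0)]                    # effectively unbounded R3

# Maximize biomass flux v3; linprog minimizes, so negate the objective
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution, limited by the enzyme cap on R2
```

With these numbers the enzyme cap (5.0) binds before the uptake limit (10.0), so the whole pathway carries a flux of 5.0, illustrating how an enzymatic constraint, not substrate availability, can set the phenotype.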

  5. A systems approach to predict oncometabolites via context-specific genome-scale metabolic networks.

    Directory of Open Access Journals (Sweden)

    Hojung Nam

    2014-09-01

    Full Text Available Altered metabolism in cancer cells has been viewed as a passive response required for a malignant transformation. However, this view has changed through the recently described metabolic oncogenic factors: mutated isocitrate dehydrogenase (IDH), succinate dehydrogenase (SDH), and fumarate hydratase (FH), which produce oncometabolites that competitively inhibit epigenetic regulation. In this study, we demonstrate in silico predictions of oncometabolites that have the potential to dysregulate epigenetic controls in nine types of cancer by incorporating massive-scale genetic mutation information (collected from more than 1,700 cancer genomes) and expression profiling data, and by deploying Recon 2 to reconstruct context-specific genome-scale metabolic models. Our analysis predicted 15 compounds and 24 substructures of potential oncometabolites that could result from loss-of-function and gain-of-function mutations of metabolic enzymes, respectively. These results suggest a substantial potential for discovering unidentified oncometabolites in various forms of cancer.

  6. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    Science.gov (United States)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update and control sequence generation.

  7. Affine-Invariant Geometric Constraints-Based High Accuracy Simultaneous Localization and Mapping

    Directory of Open Access Journals (Sweden)

    Gangchen Hua

    2017-01-01

    Full Text Available In this study we describe a new appearance-based loop-closure detection method for online incremental simultaneous localization and mapping (SLAM) using affine-invariant-based geometric constraints. Unlike other pure bag-of-words-based approaches, our proposed method uses geometric constraints as a supplement to improve accuracy. By establishing an affine-invariant hypothesis, the proposed method excludes incorrect visual words and calculates the dispersion of correctly matched visual words to improve the accuracy of the likelihood calculation. In addition, the camera's intrinsic parameters and distortion coefficients are sufficient for this method; 3D measurement is not necessary. We use the mechanism of Long-Term Memory and Working Memory (WM) to manage the memory. Only a limited size of the WM is used for loop-closure detection; therefore the proposed method is suitable for large-scale real-time SLAM. We tested our method using the CityCenter and Lip6Indoor datasets. Our proposed method can effectively correct the typical false-positive localizations of previous methods, thus achieving better recall ratios and better precision.

  8. Stochastic User Equilibrium Assignment in Schedule-Based Transit Networks with Capacity Constraints

    Directory of Open Access Journals (Sweden)

    Wangtu Xu

    2012-01-01

    Full Text Available This paper proposes a stochastic user equilibrium (SUE) assignment model for a schedule-based transit network with capacity constraints. We consider a situation in which passengers do not have full knowledge of the condition of the network and select paths that minimize a generalized cost function encompassing five components: (1) ride time, which is composed of in-vehicle and waiting times; (2) overload delay; (3) fare; (4) transfer constraints; and (5) departure time difference. We split passenger demands among connections, which are the space-time paths between OD pairs of the network. All transit vehicles have a fixed capacity and operate according to preset timetables. When the capacity constraint of a transit line segment is reached, we show that the Lagrange multipliers of the mathematical programming problem are equivalent to the equilibrium passenger overload delay in the congested transit network. The proposed model can simultaneously predict how passengers choose their transit vehicles to minimize their travel costs and estimate the associated costs in a schedule-based congested transit network. A numerical example is used to illustrate the performance of the proposed model.
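The stochastic split of demand among connections described above is commonly realized with a logit model over generalized costs. The sketch below shows that idea for a single OD pair; the cost values, dispersion parameter, and demand are hypothetical illustrations, and the paper's full model additionally iterates to equilibrium under capacity constraints, which this fragment does not attempt.

```python
import math

# Hypothetical generalized costs (in minute-equivalents) for three
# space-time connections of one OD pair: each value is the sum of ride
# time, overload delay, fare, transfer penalty, and departure-time
# difference for that connection.
costs = [32.0, 35.0, 40.0]
theta = 0.2        # logit dispersion parameter (assumed)
demand = 1000.0    # passengers travelling between this OD pair

# Logit-based stochastic split: share proportional to exp(-theta * cost)
weights = [math.exp(-theta * c) for c in costs]
total = sum(weights)
flows = [demand * w / total for w in weights]
print([round(f, 1) for f in flows])  # cheaper connections attract more demand
```

A capacitated SUE procedure would re-compute the overload-delay component of each cost from these flows and repeat the split until the flows stabilize.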

  9. BFAST: an alignment tool for large scale genome resequencing.

    Directory of Open Access Journals (Sweden)

    Nils Homer

    2009-11-01

    Full Text Available The new generation of massively parallel DNA sequencers, combined with the challenge of whole human genome resequencing, results in the need for rapid and accurate alignment of billions of short DNA sequence reads to a large reference genome. Speed is obviously of great importance, but equally important is maintaining alignment accuracy of short reads, in the 25-100 base range, in the presence of errors and true biological variation. We introduce a new algorithm specifically optimized for this task, as well as a freely available implementation, BFAST, which can align data produced by any of the current sequencing platforms, allows for user-customizable levels of speed and accuracy, supports paired-end data, and provides for efficient parallel and multi-threaded computation on a computer cluster. The new method is based on creating flexible, efficient whole-genome indexes to rapidly map reads to candidate alignment locations, with arbitrary multiple independent indexes allowed to achieve robustness against read errors and sequence variants. The final local alignment uses a Smith-Waterman method, with gaps to support the detection of small indels. We compare BFAST to a selection of large-scale alignment tools -- BLAT, MAQ, SHRiMP, and SOAP -- in terms of both speed and accuracy, using simulated and real-world datasets. We show BFAST can achieve substantially greater sensitivity of alignment in the context of errors and true variants, especially insertions and deletions, and minimize false mappings, while maintaining adequate speed compared to other current methods. We show BFAST can align the amount of data needed to fully resequence a human genome, one billion reads, with high sensitivity and accuracy, on a modest computer cluster in less than 24 hours. BFAST is available at http://bfast.sourceforge.net.
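The final step BFAST uses, a Smith-Waterman local alignment, can be sketched in a few lines. This is the textbook scoring recursion (score-only, linear gap penalty), not BFAST's optimized implementation, and the match/mismatch/gap values are illustrative defaults.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, first row/col are 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are floored at zero
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "GATTACA"))  # 14: seven matches at +2 each
print(smith_waterman("ACGT", "TACG"))        # 6: local match on "ACG"
```

A full aligner would also traceback through H to recover the gapped alignment itself; the score alone suffices to rank candidate locations.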

  10. Bio-succinic acid production: Escherichia coli strains design from genome-scale perspectives

    Directory of Open Access Journals (Sweden)

    Bashir Sajo Mienda

    2017-10-01

    Full Text Available Escherichia coli (E. coli) has been established as a native producer of succinic acid (a platform chemical with diverse applications) via mixed acid fermentation reactions. Genome-scale metabolic models (GEMs) of E. coli have been published with capabilities of predicting strain design strategies for the production of bio-based succinic acid. Proof-of-principle strains are fundamentally constructed as a starting point for systems strategies for industrial strain development. Here, we review for the first time the use of E. coli GEMs for the construction of proof-of-principle strains for increasing succinic acid production. Specific case studies in which E. coli proof-of-principle strains were constructed to increase bio-based succinic acid production from glucose and glycerol carbon sources are highlighted. In addition, we propose systems strategies for industrial strain development that could be applicable to future GEM-guided microbial succinic acid production.
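GEM-guided strain design of the kind reviewed above typically simulates a gene deletion by forcing the corresponding reaction flux to zero and re-solving the flux balance problem. The toy three-reaction network below is a hypothetical illustration of that reasoning, not the published E. coli models: knocking out a competing fermentation pathway reroutes carbon to the succinate export reaction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical):
#   R1: glucose -> A   R2: A -> succinate_out   R3: A -> ethanol_out
# Steady state for intracellular metabolite A: v1 - v2 - v3 = 0
S = np.array([[1.0, -1.0, -1.0]])

def fba(objective_index, knockouts=()):
    """Maximize one flux subject to mass balance and bounds; knocked-out
    reactions are forced to zero, mimicking an in silico gene deletion."""
    bounds = [(0.0, 10.0), (0.0, 1000.0), (0.0, 1000.0)]
    for r in knockouts:
        bounds[r] = (0.0, 0.0)
    c = [0.0, 0.0, 0.0]
    c[objective_index] = -1.0          # linprog minimizes, so negate
    res = linprog(c=c, A_eq=S, b_eq=[0.0], bounds=bounds)
    return res.x

# "Wild type": the cell preferentially exports ethanol via R3
wt = fba(objective_index=2)
# Deleting the ethanol pathway forces all carbon through succinate export
ko = fba(objective_index=1, knockouts=(2,))
print(wt[1], ko[1])   # succinate flux before vs after the knockout
```

Real strain-design workflows run this loop over thousands of reactions in a genome-scale model (e.g. with a dedicated package such as COBRApy) to rank candidate deletions.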

  11. Experimental and analytical comparison of constraint effects due to biaxial loading and shallow-flaws

    International Nuclear Information System (INIS)

    Theiss, T.J.; Bass, B.R.; Bryson, J.W.

    1993-01-01

    A program to develop and evaluate fracture methodologies for the assessment of crack-tip constraint effects on the fracture toughness of reactor pressure vessel (RPV) steels has been initiated in the Heavy-Section Steel Technology (HSST) Program. The focus of the studies described herein is on the evaluation of a micromechanical scaling model based on critical stressed volumes for quantifying crack-tip constraint through applications to experimental data. Data were utilized from single-edge notch bend (SENB) specimens and HSST-developed cruciform beam specimens that were tested in the HSST shallow-crack and biaxial testing programs. Shallow-crack effects and far-field tensile out-of-plane biaxial loading have been identified as constraint issues that influence both fracture toughness and the extent of the toughness scatter band. Results from applications indicate that the micromechanical scaling model can be used successfully to interpret experimental data from the shallow- and deep-crack SENB specimen tests. When applied to the uniaxially and biaxially loaded cruciform specimens, the two methodologies showed some promising features, but also raised several questions concerning the interpretation of constraint conditions in the specimen based on near-tip stress fields. Crack-tip constraint analyses of the shallow-crack cruciform specimen subjected to uniaxial or biaxial loading conditions are shown to represent a significant challenge for these methodologies. Unresolved issues identified from these analyses require resolution as part of a validation process for biaxial loading applications

  12. First constraints on the running of non-Gaussianity.

    Science.gov (United States)

    Becker, Adam; Huterer, Dragan

    2012-09-21

    We use data from the Wilkinson Microwave Anisotropy Probe temperature maps to constrain a scale-dependent generalization of the popular "local" model for primordial non-Gaussianity. In the model where the parameter f_NL is allowed to run with scale k, f_NL(k) = f*_NL (k/k_piv)^(n_fNL), we constrain the running to be n_fNL = 0.30 (+1.9/-1.2) at 95% confidence, marginalized over the amplitude f*_NL. The constraints depend somewhat on the prior probabilities assigned to the two parameters. In the near future, constraints from a combination of Planck and large-scale structure surveys are expected to improve this limit by about an order of magnitude and usefully constrain classes of inflationary models.
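The running model above is a simple power law in scale. The short sketch below evaluates it at a few wavenumbers using the abstract's best-fit running; the amplitude f*_NL and the pivot scale k_piv are illustrative assumptions, not the paper's values.

```python
# Scale-dependent local non-Gaussianity amplitude:
#   f_NL(k) = f*_NL * (k / k_piv) ** n_fNL
f_star = 30.0   # amplitude at the pivot scale (assumed, for illustration)
n_fnl = 0.30    # best-fit running quoted in the abstract
k_piv = 0.02    # pivot wavenumber in h/Mpc (assumed)

def f_nl(k):
    """Evaluate the running f_NL at wavenumber k."""
    return f_star * (k / k_piv) ** n_fnl

for k in (0.002, 0.02, 0.2):
    print(f"k = {k:5.3f}  f_NL = {f_nl(k):6.2f}")
```

A positive n_fNL means the non-Gaussianity amplitude grows toward smaller scales (larger k), which is why small-scale probes such as large-scale structure surveys help tighten the constraint.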

  13. Rapid Prototyping of Microbial Cell Factories via Genome-scale Engineering

    Science.gov (United States)

    Si, Tong; Xiao, Han; Zhao, Huimin

    2014-01-01

    Advances in reading, writing and editing genetic materials have greatly expanded our ability to reprogram biological systems at the resolution of a single nucleotide and on the scale of a whole genome. Such capacity has greatly accelerated the cycles of design, build and test to engineer microbes for efficient synthesis of fuels, chemicals and drugs. In this review, we summarize the emerging technologies that have been applied, or are potentially useful for genome-scale engineering in microbial systems. We will focus on the development of high-throughput methodologies, which may accelerate the prototyping of microbial cell factories. PMID:25450192

  14. SWPhylo – A Novel Tool for Phylogenomic Inferences by Comparison of Oligonucleotide Patterns and Integration of Genome-Based and Gene-Based Phylogenetic Trees

    Science.gov (United States)

    Yu, Xiaoyu; Reva, Oleg N

    2018-01-01

    Modern phylogenetic studies may benefit from the analysis of complete genome sequences of various microorganisms. Evolutionary inferences based on genome-scale analysis are believed to be more accurate than the gene-based alternative. However, the computational complexity of current phylogenomic procedures, the inappropriateness of standard phylogenetic tools for processing genome-wide data, and the lack of reliable substitution models compatible with alignment-free phylogenomic approaches deter microbiologists from using these opportunities. For example, the super-matrix and super-tree approaches of phylogenomics use multiple integrated genomic loci or individual gene-based trees to infer an overall consensus tree. However, these approaches potentially multiply errors of gene annotation and sequence alignment, not to mention the computational complexity and laboriousness of the methods. In this article, we demonstrate that the annotation- and alignment-free comparison of genome-wide tetranucleotide frequencies, termed oligonucleotide usage patterns (OUPs), allowed a fast and reliable inference of phylogenetic trees. These were congruent with the corresponding whole-genome super-matrix trees in terms of tree topology when compared with other known approaches, including 16S ribosomal RNA and GyrA protein sequence comparison, complete genome-based MAUVE, and CVTree methods. A Web-based program to perform the alignment-free OUP-based phylogenomic inferences was implemented at http://swphylo.bi.up.ac.za/. Applicability of the tool was tested on different taxa from subspecies to intergeneric levels. Distinguishing between closely related taxonomic units may be reinforced by providing the program with alignments of marker protein sequences, e.g., GyrA. PMID:29511354
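The alignment-free comparison described above reduces each genome to a vector of tetranucleotide frequencies and compares vectors rather than aligned sequences. The sketch below uses a plain Euclidean distance as a stand-in for SWPhylo's actual dissimilarity measure, and the toy sequences are hypothetical, far shorter than real genomes.

```python
from collections import Counter
from itertools import product

def oup_vector(seq):
    """Normalized tetranucleotide (4-mer) frequency vector of a DNA string."""
    counts = Counter(seq[i:i + 4] for i in range(len(seq) - 3))
    total = sum(counts.values())
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]  # all 256 4-mers
    return [counts[k] / total for k in kmers]                # 0 if absent

def oup_distance(a, b):
    """Euclidean distance between two OUP vectors -- a simple stand-in
    for the dissimilarity used to build alignment-free trees."""
    return sum((x - y) ** 2 for x, y in zip(oup_vector(a), oup_vector(b))) ** 0.5

# Toy "genomes": g1 and g2 differ by one base; g3 has a different composition
g1 = "ACGTACGTACGTACGTAC"
g2 = "ACGTACGAACGTACGTAC"
g3 = "TTTTGGGGCCCCAAAATT"
print(oup_distance(g1, g2) < oup_distance(g1, g3))  # the similar pair is closer
```

A tree-building step would feed the pairwise distance matrix into a clustering method such as neighbor joining; no gene annotation or sequence alignment is needed at any point.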

  15. A novel genome-information content-based statistic for genome-wide association analysis designed for next-generation sequencing data.

    Science.gov (United States)

    Luo, Li; Zhu, Yun; Xiong, Momiao

    2012-06-01

    Genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing the association of genomic variants, including common, low-frequency, and rare variants. Current strategies for association studies are well developed for identifying associations of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the variant-by-variant analysis paradigm for GWAS of common variants to the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing the association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole-genome low-coverage pilot data from the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: the genome-information content-based statistic, the generalized T2, the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ2 test, the weighted-sum statistic, and the variable-threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets.

  16. A Generalized Adsorption Rate Model Based on the Limiting-Component Constraint in Ion-Exchange Chromatographic Separation for Multicomponent Systems

    DEFF Research Database (Denmark)

    such that conventional LDF (linear driving force) type models are extended to inactive zones without losing their generality. Based on a limiting-component constraint, an exchange probability kernel is developed for multicomponent systems. The LDF-type model with the kernel is continuous in time and axial direction....... Two tuning parameters, the concentration layer thickness and the function change rate at the threshold point, are needed for the probability kernels; these are not sensitive to the problems considered....

  17. Spatial scale separation in regional climate modelling

    Energy Technology Data Exchange (ETDEWEB)

    Feser, F.

    2005-07-01

    In this thesis the concept of scale separation is introduced as a tool, first, for improving regional climate model simulations and, second, for explicitly detecting and describing the added value obtained by regional modelling. The basic idea is that global and regional climate models perform best at different spatial scales, so the regional model should not alter the global model's results at large scales. The concept of nudging of large scales, designed for this purpose, controls the large scales within the regional model domain and keeps them close to the global forcing model, while the regional scales are left unchanged. For ensemble simulations, nudging of large scales strongly reduces the divergence of the different simulations compared with the standard-approach ensemble, which occasionally shows large differences between individual realisations. For climate hindcasts this method leads to results that are on average closer to observed states than the standard approach. The analysis of regional climate model simulations can also be improved by separating the results into different spatial domains. This was done by developing and applying digital filters that perform the scale separation effectively without great computational effort. The separation of the results into different spatial scales simplifies model validation and process studies. The search for 'added value' can be conducted on the spatial scales the regional climate model was designed for, giving clearer results than the analysis of unfiltered meteorological fields. To examine the skill of the different simulations, pattern correlation coefficients were calculated between the global reanalyses, the regional climate model simulation and, as a reference, an operational regional weather analysis. The regional climate model simulation driven with large-scale constraints achieved a high increase in similarity to the operational analyses for medium-scale 2 meter

  18. Symmetries of integrable hierarchies and matrix model constraints

    International Nuclear Information System (INIS)

    Vos, K. de

    1992-01-01

    The orbit construction associates a soliton hierarchy to every level-one vertex realization of a simply laced affine Kac-Moody algebra g. We show that the τ-function of such a hierarchy has the (truncated) Virasoro algebra as an algebra of infinitesimal symmetry transformations. To prove this we use an appropriate bilinear form of these hierarchies together with the coset construction of conformal field theory. For A_1^(1) the orbit construction gives either the Toda or the KdV hierarchy. These both occur in the one-matrix model of two-dimensional quantum gravity, before and after the double scaling limit respectively. The truncated Virasoro symmetry algebra is exactly the algebra of constraints of the one-matrix model. The partition function of the one-matrix model is therefore an invariant τ-function. We also consider the case of A_1^(1) at level l>1. Surprisingly, the symmetry algebra in that case is not simply a truncated Casimir algebra. It appears that again only the Virasoro symmetry survives. We speculate on the relation with multi-matrix models. (orig.)

  19. Top-down constraints on disturbance dynamics in the terrestrial carbon cycle: effects at global and regional scales

    Science.gov (United States)

    Bloom, A. A.; Exbrayat, J. F.; van der Velde, I.; Peters, W.; Williams, M.

    2014-12-01

    Large uncertainties surround terrestrial carbon flux estimates on a global scale. In particular, the strongly coupled dynamics between net ecosystem productivity and disturbance C losses are poorly constrained. To gain an improved understanding of ecosystem C dynamics from regional to global scales, we apply a Markov chain Monte Carlo based model-data fusion approach within the CArbon DAta-MOdel fraMework (CARDAMOM). We assimilate MODIS LAI, burned area and plant-trait data, and use the Harmonized World Soil Database (HWSD) and maps of above-ground biomass as prior knowledge for initial conditions. We optimize model parameters based on (a) globally spanning observations and (b) ecological and dynamic constraints that force single parameter values and parameter inter-dependencies to be representative of real-world processes. We determine the spatial and temporal dynamics of major terrestrial C fluxes and model parameter values on a global scale (GPP = 123 +/- 8 Pg C yr-1 and NEE = -1.8 +/- 2.7 Pg C yr-1). We further show that incorporating disturbance fluxes, and accounting for their instantaneous or delayed effects, is of critical importance in constraining global C cycle dynamics, particularly in the tropics. In a higher-resolution case study centred on the Amazon Basin, we show how fires not only trigger large instantaneous emissions of burned matter, but are also responsible for a sustained reduction of up to 50% in plant uptake following the depletion of biomass stocks. The combination of these two fire-induced effects leads to a 1 g C m-2 d-1 reduction in the strength of the net terrestrial carbon sink. Through our simulations at regional and global scales, we advocate the need to assimilate disturbance metrics in global terrestrial carbon cycle models to bridge the gap between globally spanning terrestrial carbon cycle data and the full dynamics of the ecosystem C cycle.
Disturbances are especially important because their quick occurrence may have
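
    The assimilation step described above can be caricatured in a few lines: a Metropolis sampler estimates a single turnover-rate parameter of a toy one-pool carbon model from synthetic observations, with a hard ecological bound on the parameter. This is a minimal sketch of MCMC model-data fusion, not CARDAMOM itself; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def carbon_pool(k, c0=100.0, steps=50):
    """Toy one-pool carbon model: the stock decays at rate k per time step."""
    return c0 * (1.0 - k) ** np.arange(steps)

# Synthetic "observations" generated with a known turnover rate of 0.05
obs = carbon_pool(0.05) + rng.normal(0.0, 1.0, 50)

def log_likelihood(k):
    if not 0.0 < k < 1.0:                 # ecological constraint on the parameter
        return -np.inf
    resid = obs - carbon_pool(k)
    return -0.5 * np.sum(resid ** 2)      # Gaussian errors, unit variance

# Metropolis sampling of the posterior for k
k, samples = 0.5, []
ll = log_likelihood(k)
for _ in range(5000):
    prop = k + rng.normal(0.0, 0.01)
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        k, ll = prop, ll_prop
    samples.append(k)

k_hat = np.mean(samples[2000:])           # posterior mean after burn-in
```

In CARDAMOM the same machinery runs over dozens of parameters per grid cell, with the "ecological and dynamic constraints" of the abstract playing the role of the prior bound above.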

  20. Model-based sensor diagnosis

    International Nuclear Information System (INIS)

    Milgram, J.; Dormoy, J.L.

    1994-09-01

    Running a nuclear power plant involves monitoring data provided by the installation's sensors. Operators and computerized systems then use these data to establish a diagnostic of the plant. However, the instrumentation system is complex, and is not immune to faults and failures. This paper presents a system for detecting sensor failures using a topological description of the installation and a set of component models. This model of the plant implicitly contains relations between sensor data. These relations must always be checked if all the components are functioning correctly. The failure detection task thus consists of checking these constraints. The constraints are extracted in two stages. Firstly, a qualitative model of their existence is built using structural analysis. Secondly, the models are formally handled according to the results of the structural analysis, in order to establish the constraints on the sensor data. This work constitutes an initial step in extending model-based diagnosis, as the information on which it is based is suspect. This work will be followed by surveillance of the detection system. When the instrumentation is assumed to be sound, the unverified constraints indicate errors on the plant model. (authors). 8 refs., 4 figs
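
    The detection principle described above (evaluate the constraints implied by the plant model on live sensor data and flag the ones that fail) can be sketched directly. The junction and pipe constraints below are hypothetical examples, not taken from the paper:

```python
def check_constraints(readings, constraints, tol=1e-3):
    """Evaluate each analytical-redundancy constraint on the sensor readings
    and report the ones whose residual exceeds the tolerance."""
    violated = []
    for name, residual in constraints.items():
        if abs(residual(readings)) > tol:
            violated.append(name)
    return violated

# Constraints implied by a (made-up) plant topology: a junction conserves mass
# flow, and two temperature sensors on the same pipe section should agree.
constraints = {
    "mass_balance": lambda r: r["flow_in"] - (r["flow_a"] + r["flow_b"]),
    "temp_agreement": lambda r: r["temp_1"] - r["temp_2"],
}

healthy = {"flow_in": 10.0, "flow_a": 6.0, "flow_b": 4.0,
           "temp_1": 300.0, "temp_2": 300.0}
faulty = dict(healthy, temp_2=310.0)      # a drifted temperature sensor
```

A violated constraint only localizes the fault to the sensors it involves; as the paper notes, the same residual fires when the model itself is wrong, which is why the unverified constraints can also indicate model errors.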

  1. Rapid prototyping of microbial cell factories via genome-scale engineering.

    Science.gov (United States)

    Si, Tong; Xiao, Han; Zhao, Huimin

    2015-11-15

    Advances in reading, writing and editing genetic materials have greatly expanded our ability to reprogram biological systems at the resolution of a single nucleotide and on the scale of a whole genome. Such capacity has greatly accelerated the cycles of design, build and test to engineer microbes for efficient synthesis of fuels, chemicals and drugs. In this review, we summarize the emerging technologies that have been applied, or are potentially useful for genome-scale engineering in microbial systems. We will focus on the development of high-throughput methodologies, which may accelerate the prototyping of microbial cell factories. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Large-scale model-based assessment of deer-vehicle collision risk.

    Directory of Open Access Journals (Sweden)

    Torsten Hothorn

    Full Text Available Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. 
The measures derived from our model provide valuable information for planning road protection and defining

  3. Observational constraints on successful model of quintessential Inflation

    International Nuclear Information System (INIS)

    Geng, Chao-Qiang; Lee, Chung-Chi; Sami, M.; Saridakis, Emmanuel N.; Starobinsky, Alexei A.

    2017-01-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λ φ^n / M_Pl^n) with n > 1; the model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era, with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study the post-inflationary dynamics and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and on the time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio r < 1.72 × 10^−2 (2.32 × 10^−2), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σ m_ν ≲ 2.5 eV, is significantly weaker than in the standard ΛCDM model.
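
    For orientation, the first slow-roll parameter for this class of potentials follows from the standard definition (a textbook derivation, not reproduced from the paper):

```latex
V(\phi) \propto \exp\!\left(-\lambda\,\phi^{n}/M_{\rm Pl}^{n}\right), \qquad
\epsilon \equiv \frac{M_{\rm Pl}^{2}}{2}\left(\frac{V'}{V}\right)^{2}
         = \frac{\lambda^{2} n^{2}}{2}\left(\frac{\phi}{M_{\rm Pl}}\right)^{2(n-1)}
```

For n > 1 this vanishes as φ → 0, so slow roll holds at small field values and ends when ε reaches unity, after which the steepening potential drives the close-to-scaling behaviour mentioned above.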

  4. A genome-wide, fine-scale map of natural pigmentation variation in Drosophila melanogaster.

    Directory of Open Access Journals (Sweden)

    Héloïse Bastide

    2013-06-01

    Full Text Available Various approaches can be applied to uncover the genetic basis of natural phenotypic variation, each with their specific strengths and limitations. Here, we use a replicated genome-wide association approach (Pool-GWAS) to fine-scale map genomic regions contributing to natural variation in female abdominal pigmentation in Drosophila melanogaster, a trait that is highly variable in natural populations and highly heritable in the laboratory. We examined abdominal pigmentation phenotypes in approximately 8000 female European D. melanogaster, isolating 1000 individuals with extreme phenotypes. We then used whole-genome Illumina sequencing to identify single nucleotide polymorphisms (SNPs) segregating in our sample, and tested these for associations with pigmentation by contrasting allele frequencies between replicate pools of light and dark individuals. We identify two small regions near the pigmentation genes tan and bric-à-brac 1, both corresponding to known cis-regulatory regions, which contain SNPs showing significant associations with pigmentation variation. While the Pool-GWAS approach suffers some limitations, its cost advantage facilitates replication and it can be applied to any non-model system with an available reference genome.

  5. A genome-wide, fine-scale map of natural pigmentation variation in Drosophila melanogaster.

    Science.gov (United States)

    Bastide, Héloïse; Betancourt, Andrea; Nolte, Viola; Tobler, Raymond; Stöbe, Petra; Futschik, Andreas; Schlötterer, Christian

    2013-06-01

    Various approaches can be applied to uncover the genetic basis of natural phenotypic variation, each with their specific strengths and limitations. Here, we use a replicated genome-wide association approach (Pool-GWAS) to fine-scale map genomic regions contributing to natural variation in female abdominal pigmentation in Drosophila melanogaster, a trait that is highly variable in natural populations and highly heritable in the laboratory. We examined abdominal pigmentation phenotypes in approximately 8000 female European D. melanogaster, isolating 1000 individuals with extreme phenotypes. We then used whole-genome Illumina sequencing to identify single nucleotide polymorphisms (SNPs) segregating in our sample, and tested these for associations with pigmentation by contrasting allele frequencies between replicate pools of light and dark individuals. We identify two small regions near the pigmentation genes tan and bric-à-brac 1, both corresponding to known cis-regulatory regions, which contain SNPs showing significant associations with pigmentation variation. While the Pool-GWAS approach suffers some limitations, its cost advantage facilitates replication and it can be applied to any non-model system with an available reference genome.

  6. Ensembl Genomes 2016: more genomes, more complexity.

    Science.gov (United States)

    Kersey, Paul Julian; Allen, James E; Armean, Irina; Boddu, Sanjay; Bolt, Bruce J; Carvalho-Silva, Denise; Christensen, Mikkel; Davis, Paul; Falin, Lee J; Grabmueller, Christoph; Humphrey, Jay; Kerhornou, Arnaud; Khobova, Julia; Aranganathan, Naveen K; Langridge, Nicholas; Lowy, Ernesto; McDowall, Mark D; Maheswari, Uma; Nuhn, Michael; Ong, Chuang Kee; Overduin, Bert; Paulini, Michael; Pedro, Helder; Perry, Emily; Spudich, Giulietta; Tapanari, Electra; Walts, Brandon; Williams, Gareth; Tello-Ruiz, Marcela; Stein, Joshua; Wei, Sharon; Ware, Doreen; Bolser, Daniel M; Howe, Kevin L; Kulesha, Eugene; Lawson, Daniel; Maslen, Gareth; Staines, Daniel M

    2016-01-04

    Ensembl Genomes (http://www.ensemblgenomes.org) is an integrating resource for genome-scale data from non-vertebrate species, complementing the resources for vertebrate genomics developed in the context of the Ensembl project (http://www.ensembl.org). Together, the two resources provide a consistent set of programmatic and interactive interfaces to a rich range of data including reference sequence, gene models, transcriptional data, genetic variation and comparative analysis. This paper provides an update to the previous publications about the resource, with a focus on recent developments. These include the development of new analyses and views to represent polyploid genomes (of which bread wheat is the primary exemplar); and the continued up-scaling of the resource, which now includes over 23 000 bacterial genomes, 400 fungal genomes and 100 protist genomes, in addition to 55 genomes from invertebrate metazoa and 39 genomes from plants. This dramatic increase in the number of included genomes is one part of a broader effort to automate the integration of archival data (genome sequence, but also associated RNA sequence data and variant calls) within the context of reference genomes and make it available through the Ensembl user interfaces. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Model-independent cosmological constraints from growth and expansion

    Science.gov (United States)

    L'Huillier, Benjamin; Shafieloo, Arman; Kim, Hyungjin

    2018-05-01

    Reconstructing the expansion history of the Universe from Type Ia supernovae data, we fit the growth rate measurements and put model-independent constraints on some key cosmological parameters, namely, Ωm, γ, and σ8. The constraints are consistent with those from the concordance model within the framework of general relativity, but the current quality of the data is not sufficient to rule out modified gravity models. Adding the condition that dark energy density should be positive at all redshifts, independently of its equation of state, further constrains the parameters and interestingly supports the concordance model.
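
    The three parameters constrained above enter the growth-rate data through the standard parameterization (background relations, stated here for context):

```latex
f(z) \equiv \frac{d\ln D}{d\ln a} \simeq \Omega_m(z)^{\gamma}, \qquad
\Omega_m(z) = \frac{\Omega_m (1+z)^{3} H_0^{2}}{H^{2}(z)}, \qquad
f\sigma_8(z) = f(z)\,\sigma_8\,\frac{D(z)}{D(0)}
```

with γ ≈ 0.55 expected in general relativity; a measured departure of γ from this value would be a signature of modified gravity, which is why the abstract's constraint on γ matters.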

  8. Probing the genome-scale metabolic landscape of Bordetella pertussis, the causative agent of whooping cough.

    Science.gov (United States)

    Branco Dos Santos, Filipe; Olivier, Brett G; Boele, Joost; Smessaert, Vincent; De Rop, Philippe; Krumpochova, Petra; Klau, Gunnar W; Giera, Martin; Dehottay, Philippe; Teusink, Bas; Goffin, Philippe

    2017-08-25

    Whooping cough is a highly contagious respiratory disease caused by Bordetella pertussis. Despite vaccination, its incidence has been rising alarmingly, and yet the physiology of B. pertussis remains poorly understood. We combined genome-scale metabolic reconstruction, a novel optimization algorithm and experimental data to probe the full metabolic potential of this pathogen, using strain Tohama I as a reference. Experimental validation showed that B. pertussis secretes a significant proportion of nitrogen as arginine and purine nucleosides, which may contribute to modulation of the host response. We also found that B. pertussis can be unexpectedly versatile, being able to metabolize many compounds while displaying minimal nutrient requirements. It can grow without cysteine - using inorganic sulfur sources such as thiosulfate - and it can grow on organic acids such as citrate or lactate as sole carbon sources, providing in vivo demonstration that its TCA cycle is functional. Although the metabolic reconstruction of eight additional strains indicates that the structural genes underlying this metabolic flexibility are widespread, experimental validation suggests a role of strain-specific regulatory mechanisms in shaping metabolic capabilities. Among five alternative strains tested, three were shown to grow on substrate combinations requiring a functional TCA cycle, but only one could use thiosulfate. Finally, the metabolic model was used to rationally design growth media with over two-fold improvements in pertussis toxin production. This study thus provides novel insights into B. pertussis physiology, and highlights the potential, but also the limitations, of models solely based on metabolic gene content. IMPORTANCE The metabolic capabilities of Bordetella pertussis - the causative agent of whooping cough - were investigated from a systems-level perspective. We constructed a comprehensive genome-scale metabolic model for B. pertussis, and challenged its predictions
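
    The core computation behind such genome-scale models, flux balance analysis, reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on an invented three-reaction toy network (not the B. pertussis reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: rows = metabolites (A, B), columns = reactions
#   R1: -> A  (uptake),  R2: A -> B,  R3: B ->  (biomass drain)
S = np.array([[ 1.0, -1.0,  0.0],    # metabolite A
              [ 0.0,  1.0, -1.0]])   # metabolite B

bounds = [(0.0, 10.0),               # uptake limited to 10 flux units
          (0.0, None),
          (0.0, None)]

# FBA: maximize the biomass flux v3 under S v = 0, i.e. minimize -v3
res = linprog(c=[0.0, 0.0, -1.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
growth = -res.fun                    # optimal biomass flux
```

Here the uptake bound caps growth at 10, mirroring how medium composition constrains the predicted phenotype; media design as in the abstract amounts to relaxing or tightening such exchange bounds and re-solving.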

  9. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
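
    As a concrete illustration of the GP models discussed above, genomic prediction can be sketched with plain GBLUP: build a genomic relationship matrix from markers and solve a ridge-type system. The marker coding, shrinkage parameter and simulated data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def gblup(Z_train, y_train, Z_test, lam=1.0):
    """Predict genomic values for test genotypes with GBLUP:
    g_hat = C (G + lam*I)^-1 (y - mu), where G = Z Z' / m is a simple
    genomic relationship matrix and C holds test-vs-train relationships."""
    m = Z_train.shape[1]
    G = Z_train @ Z_train.T / m
    C = Z_test @ Z_train.T / m
    mu = y_train.mean()
    g = np.linalg.solve(G + lam * np.eye(len(y_train)), y_train - mu)
    return mu + C @ g

# Simulated markers (coded -1/0/1) and purely additive phenotypes
Z = rng.integers(-1, 2, size=(120, 300)).astype(float)
beta = rng.normal(0.0, 0.3, 300)
y = Z @ beta + rng.normal(0.0, 0.5, 120)

pred = gblup(Z[:100], y[:100], Z[100:])
accuracy = np.corrcoef(pred, y[100:])[0, 1]
```

The cross-validation accuracies reported in the review are exactly this kind of correlation between predicted and observed values in held-out material.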

  10. Export constraints facing Lesotho-based manufacturing enterprises

    Directory of Open Access Journals (Sweden)

    Motšelisi C. Mokhethi

    2015-07-01

    Full Text Available Orientation: Exporting is preferred by many enterprises as the mode of foreign entry as it requires less commitment of organisational resources and offers flexibility of managerial actions. However, enterprises face a number of challenges when attempting to initiate exports or expand their export operations. Research purpose: This study was undertaken to determine the characteristics and composition of export barriers constraining exporting by Lesotho-based manufacturing enterprises. Motivation for the study: Lesotho is faced with low destination diversity and low diversity in export products. Research design, approach and method: Data was collected from 162 Lesotho-based manufacturing enterprises through a self-administered questionnaire. Main findings: In its findings, the study firstly identified international constraints, distribution constraints and financial constraints as factors constraining exporting. Secondly, it was determined that three exporting constraints, all internal to the enterprise and all related to one factor (namely financial constraint hampered exporting. Lastly, the ANOVA results revealed that the perceptions of export constraints differed according to the enterprise characteristics, enterprise size, ownership and type of industry. Contribution/value-add: With the majority of enterprises in this study being identified as micro-enterprises, the government of Lesotho needs to pay particular attention to addressing the export needs of these enterprises in order to enable them to participate in exporting activities − especially considering that they can play a pivotal role in the alleviation of poverty, job creation and economic rejuvenation.

  11. Visualization of RNA structure models within the Integrative Genomics Viewer.

    Science.gov (United States)

    Busan, Steven; Weeks, Kevin M

    2017-07-01

    Analyses of the interrelationships between RNA structure and function are increasingly important components of genomic studies. The SHAPE-MaP strategy enables accurate RNA structure probing and realistic structure modeling of kilobase-length noncoding RNAs and mRNAs. Existing tools for visualizing RNA structure models are not suitable for efficient analysis of long, structurally heterogeneous RNAs. In addition, structure models are often advantageously interpreted in the context of other experimental data and gene annotation information, for which few tools currently exist. We have developed a module within the widely used and well supported open-source Integrative Genomics Viewer (IGV) that allows visualization of SHAPE and other chemical probing data, including raw reactivities, data-driven structural entropies, and data-constrained base-pair secondary structure models, in context with linear genomic data tracks. We illustrate the usefulness of visualizing RNA structure in the IGV by exploring structure models for a large viral RNA genome, comparing bacterial mRNA structure in cells with its structure under cell- and protein-free conditions, and comparing a noncoding RNA structure modeled using SHAPE data with a base-pairing model inferred through sequence covariation analysis. © 2017 Busan and Weeks; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  12. Solar system constraints on disformal gravity theories

    International Nuclear Information System (INIS)

    Ip, Hiu Yan; Schmidt, Fabian; Sakstein, Jeremy

    2015-01-01

    Disformal theories of gravity are scalar-tensor theories where the scalar couples derivatively to matter via the Jordan frame metric. These models have recently attracted interest in the cosmological context since they admit accelerating solutions. We derive the solution for a static isolated mass in generic disformal gravity theories and transform it into the parameterised post-Newtonian form. This allows us to investigate constraints placed on such theories by local tests of gravity. The tightest constraints come from preferred-frame effects due to the motion of the Solar System with respect to the evolving cosmological background field. The constraints we obtain improve upon the previous solar system constraints by two orders of magnitude, and constrain the scale of the disformal coupling for generic models to ℳ ∼> 100 eV. These constraints render all disformal effects irrelevant for cosmology

  13. Comparison on genomic predictions using GBLUP models and two single-step blending methods with different relationship matrices in the Nordic Holstein population

    DEFF Research Database (Denmark)

    Gao, Hongding; Christensen, Ole Fredslund; Madsen, Per

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information on genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 … 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted …
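
    A common way to put the two matrices on the same scale, and one plausible reading of the adjustment mentioned above (the study's exact procedure may differ), is to rescale G as G* = a + bG so that its average diagonal and average off-diagonal elements match those of the pedigree-based A22 for the genotyped animals:

```python
import numpy as np

def adjust_G(G, A22):
    """Rescale the genomic relationship matrix as G* = a + b*G so that its
    mean diagonal and mean off-diagonal match the pedigree-based relationship
    matrix A22 of the genotyped animals (two equations, two unknowns)."""
    n = G.shape[0]
    off = ~np.eye(n, dtype=bool)
    M = np.array([[1.0, G.diagonal().mean()],
                  [1.0, G[off].mean()]])
    rhs = np.array([A22.diagonal().mean(), A22[off].mean()])
    a, b = np.linalg.solve(M, rhs)
    return a + b * G
```

If A22 really is an affine transform of G, the adjustment recovers it exactly; in practice it only matches the two means, which is the point of putting the matrices "on the same scale" before blending.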

  14. Variable selection models for genomic selection using whole-genome sequence data and singular value decomposition.

    Science.gov (United States)

    Meuwissen, Theo H E; Indahl, Ulf G; Ødegård, Jørgen

    2017-12-27

    Non-linear Bayesian genomic prediction models such as BayesA/B/C/R involve iteration and mostly Markov chain Monte Carlo (MCMC) algorithms, which are computationally expensive, especially when whole-genome sequence (WGS) data are analyzed. Singular value decomposition (SVD) of the genotype matrix can facilitate genomic prediction in large datasets, and can be used to estimate marker effects and their prediction error variances (PEV) in a computationally efficient manner. Here, we developed, implemented, and evaluated a direct, non-iterative method for the estimation of marker effects for the BayesC genomic prediction model. The BayesC model assumes a priori that markers have normally distributed effects with probability π and no effect with probability (1 − π). Marker effects and their PEV are estimated by using SVD and the posterior probability of the marker having a non-zero effect is calculated. These posterior probabilities are used to obtain marker-specific effect variances, which are subsequently used to approximate BayesC estimates of marker effects in a linear model. A computer simulation study was conducted to compare alternative genomic prediction methods, where a single reference generation was used to estimate marker effects, which were subsequently used for 10 generations of forward prediction, for which accuracies were evaluated. SVD-based posterior probabilities of markers having non-zero effects were generally lower than MCMC-based posterior probabilities, but for some regions the opposite occurred, resulting in clear signals for QTL-rich regions. The accuracies of breeding values estimated using SVD- and MCMC-based BayesC analyses were similar across the 10 generations of forward prediction. For an intermediate number of generations (2 to 5) of forward prediction, accuracies obtained with the BayesC model tended to be slightly higher than accuracies obtained using the best linear unbiased prediction of SNP
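
    The non-iterative idea is easiest to see for plain SNP-BLUP (ridge regression of phenotypes on markers), where a single SVD of the genotype matrix yields all marker effects directly; the BayesC extension of the paper layers marker-specific variances on top of this. A minimal sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(2)

def snp_blup_svd(Z, y, lam=10.0):
    """Non-iterative ridge (SNP-BLUP) solve via SVD of the genotype matrix:
    beta_hat = V diag(s / (s^2 + lam)) U' y. The factorization is computed
    once and can be reused for different lambda or different phenotypes."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ y))

# Simulated genotypes and a sparse set of QTL effects
Z = rng.normal(size=(200, 500))
beta = np.zeros(500)
beta[:20] = rng.normal(0.0, 1.0, 20)        # a few markers with non-zero effects
y = Z @ beta + rng.normal(0.0, 1.0, 200)

beta_hat = snp_blup_svd(Z, y)
pred_corr = np.corrcoef(Z @ beta_hat, Z @ beta)[0, 1]
```

Reusing U, s, Vt across phenotypes and shrinkage levels is what makes the SVD route attractive for WGS-scale data, where each MCMC chain would otherwise have to revisit every marker thousands of times.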

  15. Genome Partitioner: A web tool for multi-level partitioning of large-scale DNA constructs for synthetic biology applications.

    Science.gov (United States)

    Christen, Matthias; Del Medico, Luca; Christen, Heinz; Christen, Beat

    2017-01-01

    Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.
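
    A toy version of such partitioning: cut a long design into fixed-size synthesis blocks that share a terminal overlap, so neighbouring blocks can later be joined by homology-based assembly. The block size and overlap below are illustrative, not the tool's actual parameters:

```python
def partition_design(sequence, block=1000, overlap=40):
    """Fragment a long DNA design into ordered synthesis blocks. Consecutive
    blocks share `overlap` terminal bases; returns (start, fragment) pairs."""
    step = block - overlap
    blocks = []
    start = 0
    while start < len(sequence):
        blocks.append((start, sequence[start:start + block]))
        if start + block >= len(sequence):
            break
        start += step
    return blocks

design = "ACGT" * 5000            # a 20 kb test segment, as in the abstract
blocks = partition_design(design)
```

The real Genome Partitioner additionally screens fragment boundaries for synthesis-hostile features and works hierarchically (blocks into segments into the full design); this sketch only shows the basic overlap bookkeeping.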

  16. Genome Partitioner: A web tool for multi-level partitioning of large-scale DNA constructs for synthetic biology applications.

    Directory of Open Access Journals (Sweden)

    Matthias Christen

    Full Text Available Recent advances in lower-cost DNA synthesis techniques have enabled new innovations in the field of synthetic biology. Still, efficient design and higher-order assembly of genome-scale DNA constructs remains a labor-intensive process. Given the complexity, computer assisted design tools that fragment large DNA sequences into fabricable DNA blocks are needed to pave the way towards streamlined assembly of biological systems. Here, we present the Genome Partitioner software implemented as a web-based interface that permits multi-level partitioning of genome-scale DNA designs. Without the need for specialized computing skills, biologists can submit their DNA designs to a fully automated pipeline that generates the optimal retrosynthetic route for higher-order DNA assembly. To test the algorithm, we partitioned a 783 kb Caulobacter crescentus genome design. We validated the partitioning strategy by assembling a 20 kb test segment encompassing a difficult to synthesize DNA sequence. Successful assembly from 1 kb subblocks into the 20 kb segment highlights the effectiveness of the Genome Partitioner for reducing synthesis costs and timelines for higher-order DNA assembly. The Genome Partitioner is broadly applicable to translate DNA designs into ready to order sequences that can be assembled with standardized protocols, thus offering new opportunities to harness the diversity of microbial genomes for synthetic biology applications. The Genome Partitioner web tool can be accessed at https://christenlab.ethz.ch/GenomePartitioner.

  17. Observational constraints on Visser's cosmological model

    International Nuclear Information System (INIS)

    Alves, M. E. S.; Araujo, J. C. N. de; Miranda, O. D.; Wuensche, C. A.; Carvalho, F. C.; Santos, E. M.

    2010-01-01

    Theories of gravity for which gravitons can be treated as massive particles have presently been studied as realistic modifications of general relativity, and can be tested with cosmological observations. In this work, we study the ability of a recently proposed theory with massive gravitons, the so-called Visser theory, to explain the measurements of luminosity distance from the Union2 compilation, the most recent Type-Ia Supernovae (SNe Ia) data set, adopting the current ratio of the total density of nonrelativistic matter to the critical density (Ωm) as a free parameter. We also combine the SNe Ia data with constraints from baryon acoustic oscillations (BAO) and cosmic microwave background (CMB) measurements. We find that, for the allowed interval of values for Ωm, a model based on Visser's theory can produce an accelerated expansion period without any dark energy component, but the combined analysis (SNe Ia+BAO+CMB) shows that the model is disfavored when compared with the ΛCDM model.

  18. Biaxial loading and shallow-flaw effects on crack-tip constraint and fracture toughness

    International Nuclear Information System (INIS)

    Bass, B.R.; Bryson, J.W.; Theiss, T.J.; Rao, M.C.

    1994-01-01

    A program to develop and evaluate fracture methodologies for the assessment of crack-tip constraint effects on fracture toughness of reactor pressure vessel (RPV) steels has been initiated in the Heavy-Section Steel Technology (HSST) Program. Crack-tip constraint is an issue that significantly impacts fracture mechanics technologies employed in safety assessment procedures for commercially licensed nuclear RPVs. The focus of the studies described herein is on the evaluation of two stress-based methodologies for quantifying crack-tip constraint (i.e., J-Q theory and a micromechanical scaling model based on critically stressed volumes) through applications to experimental and fractographic data. Data were utilized from single-edge notch bend (SENB) specimens and HSST-developed cruciform beam specimens that were tested in HSST shallow-crack and biaxial testing programs. Results from applications indicate that both the J-Q methodology and the micromechanical scaling model can be used successfully to interpret experimental data from the shallow- and deep-crack SENB specimen tests. When applied to the uniaxially and biaxially loaded cruciform specimens, the two methodologies showed some promising features, but also raised several questions concerning the interpretation of constraint conditions in the specimen based on near-tip stress fields. Fractographic data taken from the fracture surfaces of the SENB and cruciform specimens are used to assess the relevance of stress-based fracture characterizations to conditions at cleavage initiation sites. Unresolved issues identified from these analyses require resolution as part of a validation process for biaxial loading applications. This report is designated as HSST Report No. 142

  19. Water Constraints in an Electric Sector Capacity Expansion Model

    Energy Technology Data Exchange (ETDEWEB)

    Macknick, Jordan [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cohen, Stuart [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Newmark, Robin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Martinez, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Sullivan, Patrick [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Tidwell, Vince [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-07-17

    This analysis provides a description of the first U.S. national electricity capacity expansion model to incorporate water resource availability and costs as a constraint for the future development of the electricity sector. The Regional Energy Deployment System (ReEDS) model was modified to incorporate water resource availability constraints and costs in each of its 134 Balancing Area (BA) regions along with differences in costs and efficiencies of cooling systems. Water resource availability and cost data are from recently completed research at Sandia National Laboratories (Tidwell et al. 2013b). Scenarios analyzed include a business-as-usual scenario without water constraints as well as four scenarios that include water constraints and allow for different cooling systems and types of water resources to be utilized. This analysis provides insight into where water resource constraints could affect the choice, configuration, or location of new electricity technologies.

  20. Value-based genomics.

    Science.gov (United States)

    Gong, Jun; Pan, Kathy; Fakih, Marwan; Pal, Sumanta; Salgia, Ravi

    2018-03-20

    Advancements in next-generation sequencing have greatly enhanced the development of biomarker-driven cancer therapies. The affordability and availability of next-generation sequencers have allowed for the commercialization of next-generation sequencing platforms that have found widespread use for clinical-decision making and research purposes. Despite the greater availability of tumor molecular profiling by next-generation sequencing at our doorsteps, the achievement of value-based care, or improving patient outcomes while reducing overall costs or risks, in the era of precision oncology remains a looming challenge. In this review, we highlight available data through a pre-established and conceptualized framework for evaluating value-based medicine to assess the cost (efficiency), clinical benefit (effectiveness), and toxicity (safety) of genomic profiling in cancer care. We also provide perspectives on future directions of next-generation sequencing from targeted panels to whole-exome or whole-genome sequencing and describe potential strategies needed to attain value-based genomics.

  1. A gene-based linkage map for Bicyclus anynana butterflies allows for a comprehensive analysis of synteny with the lepidopteran reference genome.

    Directory of Open Access Journals (Sweden)

    Patrícia Beldade

    2009-02-01

    Lepidopterans (butterflies and moths) are a rich and diverse order of insects which, despite their economic impact and unusual biological properties, are relatively underrepresented in terms of genomic resources. The genome of the silkworm Bombyx mori has been fully sequenced, but comparative lepidopteran genomics has been hampered by the scarcity of information for other species. This is especially striking for butterflies, even though they have diverse and derived phenotypes (such as color vision and wing color patterns) and are considered prime models for the evolutionary and developmental analysis of ecologically relevant, complex traits. We focus on Bicyclus anynana butterflies, a laboratory system for studying the diversification of novelties and serially repeated traits. With a panel of 12 small families and a biphasic mapping approach, we first assigned 508 expressed genes to segregation groups and then ordered 297 of them within individual linkage groups. We also coarsely mapped seven color pattern loci. This is the richest gene-based map available for any butterfly species and allowed for a broad-coverage analysis of synteny with the lepidopteran reference genome. Based on 462 pairs of mapped orthologous markers in Bi. anynana and Bo. mori, we observed strong conservation of gene assignment to chromosomes, but also evidence for numerous large- and small-scale chromosomal rearrangements. With gene collections growing for a variety of target organisms, the ability to place those genes in their proper genomic context is paramount. Methods to map expressed genes and to compare maps with relevant model systems are crucial to extend genomic-level analysis outside classical model species. Maps with gene-based markers are useful for comparative genomics and to resolve mapped genomic regions to a tractable number of candidate genes, especially if there is synteny with related model species. This is discussed in relation to the identification of …

  2. An adaptive ES with a ranking based constraint handling strategy

    Directory of Open Access Journals (Sweden)

    Kusakci Ali Osman

    2014-01-01

    To solve a constrained optimization problem, equality constraints can be used to eliminate a problem variable. If that is not feasible, the relations imposed implicitly by the constraints can still be exploited. Most conventional constraint handling methods in Evolutionary Algorithms (EAs) do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a proper search operator, which captures the mentioned implicit correlations, can improve the performance of evolutionary constrained optimization algorithms. To realize this, an Evolution Strategy (ES) with a simplified Covariance Matrix Adaptation (CMA)-based mutation operator is used together with a ranking-based constraint-handling method. The proposed algorithm is tested on 13 benchmark problems as well as on a real-life design problem, and it significantly outperforms conventional ES-based methods.
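The abstract does not spell out the ranking operator, so the sketch below uses the well-known stochastic-ranking scheme as a stand-in for a ranking-based constraint handler, with plain Gaussian mutation and a deterministic step-size decay in place of the paper's simplified CMA machinery. The toy problem and all parameter values are invented for illustration.

```python
import random

random.seed(42)

# Toy constrained problem (not from the paper):
# minimize f(x) = x1^2 + x2^2  subject to  g(x) = 1 - x1 - x2 <= 0
def f(x):
    return x[0] ** 2 + x[1] ** 2

def violation(x):
    return max(0.0, 1.0 - x[0] - x[1])

def stochastic_rank(pop, pf=0.45):
    """Ranking-based constraint handling: adjacent individuals are compared
    by objective value with probability pf (always, if both are feasible),
    otherwise by constraint violation."""
    pop = pop[:]
    for _ in range(len(pop)):
        swapped = False
        for i in range(len(pop) - 1):
            a, b = pop[i], pop[i + 1]
            va, vb = violation(a), violation(b)
            by_objective = (va == 0.0 and vb == 0.0) or random.random() < pf
            worse = f(a) > f(b) if by_objective else va > vb
            if worse:
                pop[i], pop[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return pop

def es_minimize(mu=5, lam=20, sigma=0.3, generations=200):
    """(mu, lambda)-ES: rank offspring stochastically, keep the top mu."""
    parents = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [[xi + random.gauss(0.0, sigma) for xi in random.choice(parents)]
                     for _ in range(lam)]
        parents = stochastic_rank(offspring)[:mu]
        sigma *= 0.98  # simple decay instead of CMA step-size control
    # report the least-violating, then best, surviving parent
    return min(parents, key=lambda x: (violation(x), f(x)))

best = es_minimize()  # optimum of the toy problem is x = (0.5, 0.5), f = 0.5
```

The pf < 0.5 setting biases comparisons toward feasibility, so the population settles on the constraint boundary rather than at the unconstrained minimum.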

  3. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Directory of Open Access Journals (Sweden)

    Wesley K Thompson

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the …

  4. An Empirical Bayes Mixture Model for Effect Size Distributions in Genome-Wide Association Studies.

    Science.gov (United States)

    Thompson, Wesley K; Wang, Yunpeng; Schork, Andrew J; Witoelar, Aree; Zuber, Verena; Xu, Shujing; Werge, Thomas; Holland, Dominic; Andreassen, Ole A; Dale, Anders M

    2015-12-01

    Characterizing the distribution of effects from genome-wide genotyping data is crucial for understanding important aspects of the genetic architecture of complex traits, such as number or proportion of non-null loci, average proportion of phenotypic variance explained per non-null effect, power for discovery, and polygenic risk prediction. To this end, previous work has used effect-size models based on various distributions, including the normal and normal mixture distributions, among others. In this paper we propose a scale mixture of two normals model for effect size distributions of genome-wide association study (GWAS) test statistics. Test statistics corresponding to null associations are modeled as random draws from a normal distribution with zero mean; test statistics corresponding to non-null associations are also modeled as normal with zero mean, but with larger variance. The model is fit via minimizing discrepancies between the parametric mixture model and resampling-based nonparametric estimates of replication effect sizes and variances. We describe in detail the implications of this model for estimation of the non-null proportion, the probability of replication in de novo samples, the local false discovery rate, and power for discovery of a specified proportion of phenotypic variance explained from additive effects of loci surpassing a given significance threshold. We also examine the crucial issue of the impact of linkage disequilibrium (LD) on effect sizes and parameter estimates, both analytically and in simulations. We apply this approach to meta-analysis test statistics from two large GWAS, one for Crohn's disease (CD) and the other for schizophrenia (SZ). A scale mixture of two normals distribution provides an excellent fit to the SZ nonparametric replication effect size estimates. While capturing the general behavior of the data, this mixture model underestimates the tails of the CD effect size distribution. We discuss the implications of
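The scale-mixture-of-two-normals model described above can be sketched on simulated z-scores. The paper fits by minimizing discrepancies against resampling-based nonparametric estimates; the sketch below instead uses a plain EM fit with the null variance fixed at 1, which is simpler but illustrates the same mixture and the resulting local false discovery rate. The mixing proportion and variances are invented for the simulation.

```python
import math
import random

random.seed(0)

# Simulated GWAS-like z-scores (illustrative): 95% null with unit variance,
# 5% non-null drawn from a wider zero-mean normal.
pi1_true, sd_null, sd_alt = 0.05, 1.0, 3.0
z = [random.gauss(0.0, sd_alt if random.random() < pi1_true else sd_null)
     for _ in range(10000)]

def norm_pdf(x, sd):
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

# EM fit of p(z) = (1 - pi1) N(0, 1) + pi1 N(0, var1); the null variance is
# fixed at 1, as for calibrated test statistics.
pi1, var1 = 0.5, 4.0
for _ in range(200):
    resp = []
    for zi in z:
        a = pi1 * norm_pdf(zi, math.sqrt(var1))          # non-null likelihood
        b = (1.0 - pi1) * norm_pdf(zi, 1.0)              # null likelihood
        resp.append(a / (a + b))                         # P(non-null | zi)
    pi1 = sum(resp) / len(z)
    var1 = sum(r * zi * zi for r, zi in zip(resp, z)) / sum(resp)

def lfdr(zi):
    """Local false discovery rate: posterior probability of the null at zi."""
    a = pi1 * norm_pdf(zi, math.sqrt(var1))
    b = (1.0 - pi1) * norm_pdf(zi, 1.0)
    return b / (a + b)
```

Once fitted, `pi1` estimates the non-null proportion and `lfdr` decreases monotonically in |z|, which is what makes threshold-based discovery statements possible.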

  5. Single-polymer dynamics under constraints: scaling theory and computer experiment

    International Nuclear Information System (INIS)

    Milchev, Andrey

    2011-01-01

    The relaxation, diffusion and translocation dynamics of single linear polymer chains in confinement is briefly reviewed with emphasis on the comparison between theoretical scaling predictions and observations from experiment or, most frequently, from computer simulations. Besides cylindrical, spherical and slit-like constraints, related problems such as the chain dynamics in a random medium and the translocation dynamics through a nanopore are also considered. Another particular kind of confinement is imposed by polymer adsorption on attractive surfaces or selective interfaces; a short overview of single-chain dynamics in that setting is also contained in this survey. While both theory and numerical experiments consider predominantly coarse-grained models of self-avoiding linear chain molecules with typically Rouse dynamics, we also note some recent studies which examine the impact of hydrodynamic interactions on polymer dynamics in confinement. In all of the aforementioned cases we focus mainly on the consequences of imposed geometric restrictions on single-chain dynamics and try to check our degree of understanding by assessing the agreement between theoretical predictions and observations. (topical review)

  6. The roles of constraint-based and dedication-based influences on user's continued online shopping behavior.

    Science.gov (United States)

    Chang, Su-Chao; Chou, Chi-Min

    2012-11-01

    The objective of this study was to determine empirically the role of constraint-based and dedication-based influences as drivers of the intention to continue using online shopping websites. Constraint-based influences consist of two variables: trust and perceived switching costs. Dedication-based influences consist of three variables: satisfaction, perceived usefulness, and trust. The current results indicate that both constraint-based and dedication-based influences are important drivers of the intention to continue using online shopping websites. The data also show that trust has the strongest total effect on online shoppers' intention to continue using online shopping websites. In addition, the results indicate that the antecedents of constraint-based influences, technical bonds (e.g., perceived operational competence and perceived website interactivity) and social bonds (e.g., perceived relationship investment, community building, and intimacy), have indirect positive effects on the intention to continue using online shopping websites. Based on these findings, this research suggests that online shopping websites should simultaneously build constraint-based and dedication-based influences to enhance users' continued online shopping behavior.

  7. Catchment scale water resource constraints on UK policies for low-carbon energy system transition

    Science.gov (United States)

    Konadu, D. D.; Fenner, R. A.

    2017-12-01

    The long-term low-carbon energy transition policy of the UK presents national-scale propositions of different low-carbon energy system options that lead to meeting the GHG emissions reduction target of 80% on 1990 levels by 2050. Whilst national-scale assessments suggest that water availability may not be a significant constraint on future thermal power generation systems in this pursuit, these analyses fail to capture the appropriate spatial scale at which water resource decisions are made, i.e. the catchment scale. Water is a local resource with significant spatio-temporal regional and national variability, so any policy-relevant water-energy nexus analysis must reflect these characteristics. This presents a critical challenge for policy-relevant water-energy nexus analysis. This study seeks to overcome this challenge by using a linear spatial-downscaling model to allocate nationally projected water-intensive energy system infrastructure/technologies to the catchment level, and estimating the water requirements for the deployment of these technologies. The model is applied to the UK Committee on Climate Change Carbon Budgets to 2030 as a case study. The paper concludes that whilst national-scale analyses show minimal long-term water-related impacts, catchment-level appraisal of water resource requirements reveals significant constraints in some locations. The approach and results presented in this study thus highlight the importance of bringing together scientific understanding, data and analysis tools to provide better insights for water-energy nexus decisions at the appropriate spatial scale. This is particularly important for water-stressed regions, where the water-energy nexus must be analysed at appropriate spatial resolution to capture the full water resource impact of national energy policy.
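The linear spatial-downscaling step can be sketched as a proportional allocation of nationally projected capacity to catchments, followed by a check of implied cooling-water demand against local availability. Catchment names, weights, water intensities, and availability figures below are all hypothetical placeholders, not data from the study.

```python
# Hypothetical catchment weights and water availability (illustrative only).
catchments = {
    "Thames": {"weight": 0.30, "water_avail_Ml": 5000.0},
    "Severn": {"weight": 0.25, "water_avail_Ml": 12000.0},
    "Trent":  {"weight": 0.25, "water_avail_Ml": 6000.0},
    "Tyne":   {"weight": 0.20, "water_avail_Ml": 9000.0},
}

national_new_capacity_MW = 10000.0   # assumed national thermal additions
water_intensity_Ml_per_MW = 1.8      # assumed annual cooling-water intensity

def downscale(catchments, capacity_MW, intensity):
    """Allocate national capacity to catchments by weight and flag any
    catchment whose implied water demand exceeds local availability."""
    report = {}
    for name, c in catchments.items():
        alloc = c["weight"] * capacity_MW
        demand = alloc * intensity
        report[name] = {
            "capacity_MW": alloc,
            "water_demand_Ml": demand,
            "constrained": demand > c["water_avail_Ml"],
        }
    return report

result = downscale(catchments, national_new_capacity_MW,
                   water_intensity_Ml_per_MW)
```

The point the abstract makes falls out directly: the national total can be comfortably within aggregate water availability while an individual catchment (here the hypothetical "Thames" row) is still flagged as constrained.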

  8. Musa as a Genome Model (Musa sebagai Model Genom)

    Directory of Open Access Journals (Sweden)

    RITA MEGIA

    2005-12-01

    During the meeting in Arlington, USA in 2001, the scientists grouped in PROMUSA agreed to launch the Global Musa Genomics Consortium. The Consortium aims to apply genomics technologies to the improvement of this important crop. This genome project makes banana the third model species, after Arabidopsis and rice, to be analyzed and sequenced. Compared to Arabidopsis and rice, the banana genome provides a unique and powerful insight into structural and functional genomics that could not be found in those two species. This paper discusses these subjects, including the importance of banana as the fourth main food in the world, the evolution and biodiversity of this genetic resource, and its parasites.

  9. Isocurvature constraints on portal couplings

    Energy Technology Data Exchange (ETDEWEB)

    Kainulainen, Kimmo; Nurmi, Sami; Vaskonen, Ville [Department of Physics, University of Jyväskylä, P.O.Box 35 (YFL), FI-40014 University of Jyväskylä (Finland); Tenkanen, Tommi; Tuominen, Kimmo, E-mail: kimmo.kainulainen@jyu.fi, E-mail: sami.t.nurmi@jyu.fi, E-mail: tommi.tenkanen@helsinki.fi, E-mail: kimmo.i.tuominen@helsinki.fi, E-mail: ville.vaskonen@jyu.fi [Department of Physics, University of Helsinki P.O. Box 64, FI-00014, Helsinki (Finland)

    2016-06-01

    We consider portal models which are ultraweakly coupled with the Standard Model, and confront them with observational constraints on dark matter abundance and isocurvature perturbations. We assume the hidden sector to contain a real singlet scalar s and a sterile neutrino ψ coupled to s via a pseudoscalar Yukawa term. During inflation, a primordial condensate consisting of the singlet scalar s is generated, and its contribution to the isocurvature perturbations is imprinted onto the dark matter abundance. We compute the total dark matter abundance including the contributions from condensate decay and nonthermal production from the Standard Model sector. We then use the Planck limit on isocurvature perturbations to derive a novel constraint connecting the dark matter mass and the singlet self-coupling with the scale of inflation: m_DM/GeV ≲ 0.2 λ_s^(3/8) (H_*/10^11 GeV)^(−3/2). This constraint is relevant in most portal models ultraweakly coupled with the Standard Model and containing light singlet scalar fields.
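The quoted bound is a simple power law and can be evaluated directly; the function below just transcribes the abstract's inequality, and the example coupling and inflation scale plugged in are arbitrary illustrative values.

```python
def m_dm_upper_bound_GeV(lambda_s, h_star_GeV):
    """Evaluate the abstract's constraint
    m_DM/GeV <~ 0.2 * lambda_s^(3/8) * (H_*/1e11 GeV)^(-3/2)."""
    return 0.2 * lambda_s ** 0.375 * (h_star_GeV / 1e11) ** -1.5

# e.g. a singlet self-coupling of 0.1 at the reference scale H_* = 1e11 GeV
bound = m_dm_upper_bound_GeV(0.1, 1e11)   # ~0.084 GeV
```

The scaling makes the trade-off explicit: a higher inflation scale tightens the allowed dark matter mass, while a larger self-coupling loosens it slightly.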

  10. Virtual Genome Walking across the 32 Gb Ambystoma mexicanum genome; assembling gene models and intronic sequence.

    Science.gov (United States)

    Evans, Teri; Johnson, Andrew D; Loose, Matthew

    2018-01-12

    Large, repeat-rich genomes present challenges for assembly using short-read technologies. The 32 Gb axolotl genome is estimated to contain ~19 Gb of repetitive DNA, making an assembly from short reads alone effectively impossible. Indeed, this model species has been sequenced to 20× coverage, but the reads could not be conventionally assembled. Using an alternative strategy, we have assembled subsets of these reads into scaffolds describing over 19,000 gene models. We call this method Virtual Genome Walking, as it locally assembles whole-genome reads based on a reference transcriptome, identifying exons and iteratively extending them into surrounding genomic sequence. These assemblies are then linked and refined to generate gene models including upstream, downstream, and intronic genomic sequence. Our assemblies are validated by comparison with previously published axolotl bacterial artificial chromosome (BAC) sequences. Our analyses of axolotl intron length, intron-exon structure, repeat content and synteny provide novel insights into the genic structure of this model species. This resource will enable new experimental approaches in axolotl, such as ChIP-Seq and CRISPR, and aid in future whole-genome sequencing efforts. The assembled sequences and annotations presented here are freely available for download from https://tinyurl.com/y8gydc6n. The software pipeline is available from https://github.com/LooseLab/iterassemble.
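The seed-and-extend idea behind the method can be illustrated with a toy greedy assembler: start from a transcript-derived seed ("exon") and repeatedly append the tail of any read whose prefix overlaps the growing contig. The sequence, read length, and overlap threshold here are invented, and the real pipeline works iteratively on 20× whole-genome read sets with linking and refinement steps this sketch omits.

```python
# Toy genome and overlapping 10-mer reads tiling it every 3 bases.
genome = "ACGTACGGTTCAGGCATTGCAAGCTTGGCAT"
reads = [genome[i:i + 10] for i in range(0, len(genome) - 9, 3)]

def extend_seed(seed, reads, min_overlap=7, max_rounds=100):
    """Greedy rightward extension: append the tail of the first read whose
    prefix matches the contig's suffix by at least min_overlap bases."""
    contig = seed
    for _ in range(max_rounds):
        extension = None
        for r in reads:
            # try the longest overlap first, down to the minimum allowed
            for k in range(len(r) - 1, min_overlap - 1, -1):
                if len(r) > k and contig.endswith(r[:k]):
                    extension = r[k:]
                    break
            if extension:
                break
        if not extension:       # no read extends the contig any further
            break
        contig += extension
    return contig

# Seed with the first "exon" and walk to the end of the toy genome.
contig = extend_seed(genome[:10], reads)
```

The toy sequence was chosen so every 7-mer is unique, which is what lets a naive greedy walk succeed; the abstract's point is precisely that the real genome's ~19 Gb of repeats defeat such simple overlap logic at scale.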

  11. Business models for frugal innovation : the role of Resource-Constraints

    OpenAIRE

    Winterhalter, Stephan; Zeschky, Marco; Gassmann, Oliver; Weiblen, Tobias

    2014-01-01

    Frugal innovation is an extreme case of innovation: radically new applications are developed for an environment of extreme resource and cost constraints. While the phenomenon of frugal innovation has been described from a product perspective, very little is known about how firms organize frugal innovation on a business model level. This study is based on a multiple case study approach investigating five business models for frugal innovation in the context of the medical equipment market in em…

  12. Genome-scale cold stress response regulatory networks in ten Arabidopsis thaliana ecotypes

    DEFF Research Database (Denmark)

    Barah, Pankaj; Jayavelu, Naresh Doni; Rasmussen, Simon

    2013-01-01

    BACKGROUND: Low temperature leads to major crop losses every year. Although several studies have been conducted focusing on diversity of cold tolerance level in multiple phenotypically divergent Arabidopsis thaliana (A. thaliana) ecotypes, genome-scale molecular understanding is still lacking. RESULTS: In this study, we report genome-scale transcript response diversity of 10 A. thaliana ecotypes originating from different geographical locations to non-freezing cold stress (10°C). To analyze the transcriptional response diversity, we initially compared transcriptome changes in all 10 ecotypes … Using sequence data available from the Arabidopsis thaliana 1001 genome project, we further investigated sequence polymorphisms in the core cold stress regulon genes. Significant numbers of non-synonymous amino acid changes were observed in the coding region of the CBF regulon genes. Considering the limited knowledge about …

  13. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea …

  14. Geographical constraints to range-based attacks on links in complex networks

    International Nuclear Information System (INIS)

    Gong Baihua; Liu Jun; Huang Liang; Yang Kongqing; Yang Lei

    2008-01-01

    In this paper, we studied range-based attacks on links in geographically constrained scale-free networks and found that there is a continuous switching of roles between short- and long-range attacks on links when tuning the geographical constraint strength. Our results demonstrate that geography has a significant impact on network efficiency and security; thus one can adjust the geographical structure to optimize the robustness and the efficiency of the networks. We introduce a measurement of the impact of links on the efficiency of the network, and an effective attacking strategy is suggested.
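The "impact of a link on network efficiency" can be sketched with the standard global-efficiency measure (average inverse shortest-path length): remove a link, recompute efficiency, and take the drop. The toy network below (two triangles joined by one bridge) is invented to contrast a long-range link with a redundant short-range one; it is not the paper's geographically constrained scale-free model.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def efficiency(adj):
    """Global efficiency: mean of 1/d(i, j) over ordered node pairs
    (unreachable pairs contribute zero)."""
    nodes = list(adj)
    total = 0.0
    for s in nodes:
        dist = bfs_distances(adj, s)
        total += sum(1.0 / d for t, d in dist.items() if t != s)
    n = len(nodes)
    return total / (n * (n - 1))

def link_impact(adj, u, v):
    """Drop in global efficiency when the link (u, v) is removed."""
    before = efficiency(adj)
    adj[u].discard(v); adj[v].discard(u)
    after = efficiency(adj)
    adj[u].add(v); adj[v].add(u)   # restore the link
    return before - after

# Toy network: two triangles {0,1,2} and {3,4,5} bridged by link 2-3.
adj = {i: set() for i in range(6)}
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a].add(b); adj[b].add(a)

impact_bridge = link_impact(adj, 2, 3)  # removing it disconnects the network
impact_local = link_impact(adj, 0, 1)   # a redundant short-range link
```

Ranking links by this impact is one natural way to formalize which links an attack (or a defense budget) should prioritize.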

  15. Observational constraints on successful model of quintessential Inflation

    Energy Technology Data Exchange (ETDEWEB)

    Geng, Chao-Qiang [Chongqing University of Posts and Telecommunications, Chongqing, 400065 (China); Lee, Chung-Chi [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Sami, M. [Centre for Theoretical Physics, Jamia Millia Islamia, New Delhi 110025 (India); Saridakis, Emmanuel N. [Physics Division, National Technical University of Athens, 15780 Zografou Campus, Athens (Greece); Starobinsky, Alexei A., E-mail: geng@phys.nthu.edu.tw, E-mail: lee.chungchi16@gmail.com, E-mail: sami@iucaa.ernet.in, E-mail: Emmanuel_Saridakis@baylor.edu, E-mail: alstar@landau.ac.ru [L. D. Landau Institute for Theoretical Physics RAS, Moscow 119334 (Russian Federation)

    2017-06-01

    We study quintessential inflation using a generalized exponential potential V(φ) ∝ exp(−λφ^n/M_Pl^n) with n > 1; the model admits slow-roll inflation at early times and leads to close-to-scaling behaviour in the post-inflationary era with an exit to dark energy at late times. We present detailed investigations of the inflationary stage in the light of the Planck 2015 results, study post-inflationary dynamics and analytically confirm the existence of an approximately scaling solution. Additionally, assuming that standard massive neutrinos are non-minimally coupled makes the field φ dominant once again at late times, giving rise to the present accelerated expansion of the Universe. We derive observational constraints on the field and time-dependent neutrino masses. In particular, for n = 6 (8), the parameter λ is constrained to be log λ > −7.29 (−11.7); the model produces a spectral index of the power spectrum of primordial scalar (matter density) perturbations of n_s = 0.959 ± 0.001 (0.961 ± 0.001) and a tiny tensor-to-scalar ratio, r < 1.72 × 10^(−2) (2.32 × 10^(−2)), respectively. Consequently, the upper bound on possible values of the sum of neutrino masses, Σm_ν ≲ 2.5 eV, is significantly enhanced compared to that in the standard ΛCDM model.

  16. A constraint on Planck-scale modifications to electrodynamics with CMB polarization data

    International Nuclear Information System (INIS)

    Gubitosi, Giulia; Pagano, Luca; Amelino-Camelia, Giovanni; Melchiorri, Alessandro; Cooray, Asantha

    2009-01-01

    We show that the Cosmic Microwave Background (CMB) polarization data gathered by the BOOMERanG 2003 flight and WMAP provide an opportunity to investigate in-vacuo birefringence, of a type expected in some quantum pictures of space-time, with a sensitivity that extends even beyond the desired Planck-scale energy. In order to render this constraint more transparent we rely on a well-studied phenomenological model of quantum-gravity-induced birefringence, in which one easily establishes that effects introduced at the Planck scale would amount to values of a dimensionless parameter, denoted by ξ and defined with respect to the Planck energy, which are roughly of order 1. By combining BOOMERanG and WMAP data we estimate ξ ≅ −0.110 ± 0.075 at the 68% c.l. Moreover, we forecast the sensitivity to ξ achievable by future CMB polarization experiments (PLANCK, Spider, EPIC), which, in the absence of systematics, will be at the 1-σ confidence level 8.5 × 10^(−4) (PLANCK), 6.1 × 10^(−3) (Spider), and 1.0 × 10^(−5) (EPIC), respectively. The cosmic-variance-limited sensitivity from the CMB is 6.1 × 10^(−6).

  17. A multi scale model for small scale plasticity

    International Nuclear Information System (INIS)

    Zbib, Hussein M.

    2002-01-01

    Full text: A framework for investigating size-dependent small-scale plasticity phenomena and related material instabilities at various length scales, ranging from the nano-microscale to the mesoscale, is presented. The model is based on fundamental physical laws that govern dislocation motion and their interaction with various defects and interfaces. In particular, a multi-scale model is developed merging two scales: the nano-microscale, where plasticity is determined by explicit three-dimensional dislocation dynamics analysis providing the material length scale, and the continuum scale, where energy transport is based on basic continuum mechanics laws. The result is a hybrid simulation model coupling discrete dislocation dynamics with finite element analyses. With this hybrid approach, one can address complex size-dependent problems, including dislocation boundaries, dislocations in heterogeneous structures, dislocation interaction with interfaces and associated shape changes and lattice rotations, as well as deformation in nano-structured materials, localized deformation and shear banding.

  18. Constraints on models with a break in the primordial power spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Li Hong, E-mail: hongli@mail.ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Science, P.O. Box 918-4, Beijing 100049 (China); Theoretical Physics Center for Science Facilities (TPCSF), Chinese Academy of Science (China); Kavli Institute for Theoretical Physics, Chinese Academy of Science, Beijing 100190 (China); Xia Junqing [Scuola Internazionale Superiore di Studi Avanzati, Via Bonomea 265, I-34136 Trieste (Italy); Brandenberger, Robert [Department of Physics, McGill University, 3600 University Street, Montreal, QC, H3A 2T8 (Canada); Institute of High Energy Physics, Chinese Academy of Science, P.O. Box 918-4, Beijing 100049 (China); Theoretical Physics Center for Science Facilities (TPCSF), Chinese Academy of Science (China); Kavli Institute for Theoretical Physics, Chinese Academy of Science, Beijing 100190 (China); Zhang Xinmin [Institute of High Energy Physics, Chinese Academy of Science, P.O. Box 918-4, Beijing 100049 (China); Theoretical Physics Center for Science Facilities (TPCSF), Chinese Academy of Science (China)

    2010-07-05

    One of the characteristics of the 'Matter Bounce' scenario, an alternative to cosmological inflation for producing a scale-invariant spectrum of primordial adiabatic fluctuations on large scales, is a break in the power spectrum at a characteristic scale, below which the spectral index changes from n_s = 1 to n_s = 3. We study the constraints which current cosmological data place on the location of such a break, and more generally on the position of the break and the slope at length scales smaller than the break. The observational data we use include the WMAP five-year data set (WMAP5), other CMB data from BOOMERanG, CBI, VSA, and ACBAR, large-scale structure data from the Sloan Digital Sky Survey (SDSS, their luminous red galaxies sample), Type Ia Supernovae data (the 'Union' compilation), and the Sloan Digital Sky Survey Lyman-α forest power spectrum (Lyα) data. We employ the Markov Chain Monte Carlo method to constrain the features in the primordial power spectrum which are motivated by the matter bounce model. We give an upper limit on the length scale where the break in the spectrum occurs.

  19. Constraints on models with a break in the primordial power spectrum

    International Nuclear Information System (INIS)

    Li Hong; Xia Junqing; Brandenberger, Robert; Zhang Xinmin

    2010-01-01

    One of the characteristics of the 'Matter Bounce' scenario, an alternative to cosmological inflation for producing a scale-invariant spectrum of primordial adiabatic fluctuations on large scales, is a break in the power spectrum at a characteristic scale, below which the spectral index changes from n_s = 1 to n_s = 3. We study the constraints which current cosmological data place on the location of such a break, and more generally on the position of the break and the slope at length scales smaller than the break. The observational data we use include the WMAP five-year data set (WMAP5), other CMB data from BOOMERanG, CBI, VSA, and ACBAR, large-scale structure data from the Sloan Digital Sky Survey (SDSS, their luminous red galaxies sample), Type Ia Supernovae data (the 'Union' compilation), and the Sloan Digital Sky Survey Lyman-α forest power spectrum (Lyα) data. We employ the Markov Chain Monte Carlo method to constrain the features in the primordial power spectrum which are motivated by the matter bounce model. We give an upper limit on the length scale where the break in the spectrum occurs.
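The broken spectrum being constrained can be written as a continuous piecewise power law, P(k) ∝ k^(n_s − 1), scale-invariant on large scales and steeply blue below the break. The function below is an illustrative transcription of that shape; the amplitude, units, and example break wavenumber are placeholders, not fitted values from the paper.

```python
def primordial_power(k, k_break, amplitude=1.0):
    """Piecewise power law P(k) ~ k^(n_s - 1), continuous at the break:
    n_s = 1 (scale-invariant) on large scales (k <= k_break) and n_s = 3
    on smaller length scales (k > k_break). Conventions are illustrative."""
    if k <= k_break:
        return amplitude
    return amplitude * (k / k_break) ** 2   # n_s - 1 = 2 above the break

k_break = 0.01   # hypothetical break wavenumber, in arbitrary units
```

Because small length scales correspond to large k, the index change "below the break scale" shows up as the k² branch; the data analysis then bounds how large k_break can be.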

  20. An Alternative Methodological Approach for Cost-Effectiveness Analysis and Decision Making in Genomic Medicine.

    Science.gov (United States)

    Fragoulakis, Vasilios; Mitropoulou, Christina; van Schaik, Ron H; Maniadakis, Nikolaos; Patrinos, George P

    2016-05-01

    Genomic Medicine aims to improve therapeutic interventions and diagnostics and the quality of life of patients, and also to rationalize healthcare costs. To reach this goal, careful assessment and identification of evidence gaps for public health genomics priorities are required so that a more efficient healthcare environment is created. Here, we propose a public health genomics-driven approach to adjust the classical healthcare decision-making process with an alternative methodological approach of cost-effectiveness analysis, which is particularly helpful for genomic medicine interventions. By combining classical cost-effectiveness analysis with budget constraints, social preferences, and patient ethics, we demonstrate the application of this model, the Genome Economics Model (GEM), based on a previously reported genome-guided intervention from a developing country environment. The model and the attendant rationale provide a practical guide by which all major healthcare stakeholders could ensure the sustainability of funding for genome-guided interventions, their adoption and coverage by health insurance funds, and prioritization of Genomic Medicine research, development, and innovation, given the restriction of budgets, particularly in developing countries and low-income healthcare settings in developed countries. The implications of the GEM for policy makers interested in Genomic Medicine and new health technology and innovation assessment are also discussed.

  1. Observational constraints on cosmological models with Chaplygin gas and quadratic equation of state

    International Nuclear Information System (INIS)

    Sharov, G.S.

    2016-01-01

    Observational manifestations of accelerated expansion of the universe, in particular, recent data for Type Ia supernovae, baryon acoustic oscillations, for the Hubble parameter H(z) and cosmic microwave background constraints are described with different cosmological models. We compare the ΛCDM, the models with generalized and modified Chaplygin gas and the model with quadratic equation of state. For these models we estimate optimal model parameters and their permissible errors with different approaches to calculation of the sound horizon scale r_s(z_d). Among the considered models the best value of χ² is achieved for the model with quadratic equation of state, but it has 2 additional parameters in comparison with the ΛCDM and therefore is not favored by the Akaike information criterion.
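The Akaike information criterion invoked above penalizes extra parameters: in the form commonly used for such model comparisons, AIC = χ²_min + 2k, so a model must improve χ² by more than 2 per added parameter to come out ahead. A minimal sketch (the numerical values below are hypothetical, not taken from the paper):

```python
def aic(chi2_min, n_params):
    """Akaike information criterion: AIC = chi^2_min + 2k.
    Lower is better; each extra parameter costs 2 units of chi^2."""
    return chi2_min + 2 * n_params
```

With hypothetical numbers: a quadratic-EoS-like model at χ² = 99 with 8 parameters scores AIC = 115 and loses to a ΛCDM-like model at χ² = 100 with 6 parameters (AIC = 112), despite its better fit.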

  2. A simple model for determining photoelectron-generated radiation scaling laws

    International Nuclear Information System (INIS)

    Dipp, T.M.

    1993-12-01

    The generation of radiation via photoelectrons induced off of a conducting surface was explored using a simple model to determine fundamental scaling laws. The model is one-dimensional (small-spot) and uses monoenergetic, nonrelativistic photoelectrons emitted normal to the illuminated conducting surface. Simple steady-state radiation, frequency, and maximum orbital distance equations were derived using small-spot radiation equations, a sin²-type modulation function, and simple photoelectron dynamics. The result is a system of equations for various scaling laws, which, along with model and user constraints, are simultaneously solved using techniques similar to linear programming. Typical conductors illuminated by low-power sources producing photons with energies less than 5.0 eV are readily modeled by this small-spot, steady-state analysis, which shows they generally produce low-efficiency (η_rs ≲ 10^-10.5) pure photoelectron-induced radiation. However, the small-spot theory predicts that the total conversion efficiency for incident photon power to photoelectron-induced radiated power can go higher than 10^-5.5 for typical real conductors if photons having energies of 15 eV and higher are used, and should go even higher still if the small-spot limit of this theory is exceeded as well. Overall, the simple theory equations, model constraint equations, and solution techniques presented provide a foundation for understanding, predicting, and optimizing the generated radiation, and the simple theory equations provide scaling laws to compare with computational and laboratory experimental data.

  3. SECOM: A novel hash seed and community detection based-approach for genome-scale protein domain identification

    KAUST Repository

    Fan, Ming

    2012-06-28

    With rapid advances in the development of DNA sequencing technologies, a plethora of high-throughput genome and proteome data from a diverse spectrum of organisms have been generated. The functional annotation and evolutionary history of proteins are usually inferred from domains predicted from the genome sequences. Traditional database-based domain prediction methods cannot identify novel domains, however, and alignment-based methods, which look for recurring segments in the proteome, are computationally demanding. Here, we propose a novel genome-wide domain prediction method, SECOM. Instead of conducting all-against-all sequence alignment, SECOM first indexes all the proteins in the genome by using a hash seed function. Local similarity can thus be detected and encoded into a graph structure, in which each node represents a protein sequence and each edge weight represents the shared hash seeds between the two nodes. SECOM then formulates the domain prediction problem as an overlapping community-finding problem in this graph. A backward graph percolation algorithm that efficiently identifies the domains is proposed. We tested SECOM on five recently sequenced genomes of aquatic animals. Our tests demonstrated that SECOM was able to identify most of the known domains identified by InterProScan. When compared with the alignment-based method, SECOM showed higher sensitivity in detecting putative novel domains, while it was also three orders of magnitude faster. For example, SECOM was able to predict a novel sponge-specific domain in nucleoside-triphosphatase (NTPases). Furthermore, SECOM discovered two novel domains, likely of bacterial origin, that are taxonomically restricted to sea anemone and hydra. SECOM is an open-source program and available at http://sfb.kaust.edu.sa/Pages/Software.aspx. © 2012 Fan et al.
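The indexing-and-graph step described above can be sketched as follows, using raw k-mers with identity hashing as a stand-in for SECOM's hash seed function; the seed length and toy sequences are illustrative assumptions, not SECOM's actual parameters:

```python
from collections import defaultdict
from itertools import combinations

def shared_seed_graph(proteins, k=5):
    """Index protein sequences by k-mer 'seeds' and weight each protein
    pair by the number of seeds they share, yielding the edge weights of
    a similarity graph on which communities (domains) could be sought."""
    seeds = defaultdict(set)
    for name, seq in proteins.items():
        for i in range(len(seq) - k + 1):
            seeds[seq[i:i + k]].add(name)
    weights = defaultdict(int)
    for members in seeds.values():        # each shared seed adds 1 to the edge
        for a, b in combinations(sorted(members), 2):
            weights[(a, b)] += 1
    return dict(weights)
```

In SECOM itself the nodes are whole proteins and overlapping community detection (a backward graph percolation algorithm) is then run on this weighted graph; that step is not sketched here.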

  4. SECOM: A novel hash seed and community detection based-approach for genome-scale protein domain identification

    KAUST Repository

    Fan, Ming; Wong, Ka-Chun; Ryu, Tae Woo; Ravasi, Timothy; Gao, Xin

    2012-01-01

    With rapid advances in the development of DNA sequencing technologies, a plethora of high-throughput genome and proteome data from a diverse spectrum of organisms have been generated. The functional annotation and evolutionary history of proteins are usually inferred from domains predicted from the genome sequences. Traditional database-based domain prediction methods cannot identify novel domains, however, and alignment-based methods, which look for recurring segments in the proteome, are computationally demanding. Here, we propose a novel genome-wide domain prediction method, SECOM. Instead of conducting all-against-all sequence alignment, SECOM first indexes all the proteins in the genome by using a hash seed function. Local similarity can thus be detected and encoded into a graph structure, in which each node represents a protein sequence and each edge weight represents the shared hash seeds between the two nodes. SECOM then formulates the domain prediction problem as an overlapping community-finding problem in this graph. A backward graph percolation algorithm that efficiently identifies the domains is proposed. We tested SECOM on five recently sequenced genomes of aquatic animals. Our tests demonstrated that SECOM was able to identify most of the known domains identified by InterProScan. When compared with the alignment-based method, SECOM showed higher sensitivity in detecting putative novel domains, while it was also three orders of magnitude faster. For example, SECOM was able to predict a novel sponge-specific domain in nucleoside-triphosphatase (NTPases). Furthermore, SECOM discovered two novel domains, likely of bacterial origin, that are taxonomically restricted to sea anemone and hydra. SECOM is an open-source program and available at http://sfb.kaust.edu.sa/Pages/Software.aspx. © 2012 Fan et al.

  5. Models for inflation with a low supersymmetry-breaking scale

    International Nuclear Information System (INIS)

    Binetruy, P.; California Univ., Santa Barbara; Mahajan, S.; California Univ., Berkeley

    1986-01-01

    We present models where the same scalar field is responsible for inflation and for the breaking of supersymmetry. The scale of supersymmetry breaking is related to the slope of the potential in the plateau region described by the scalar field during the slow rollover, and the gravitino mass can therefore be kept as small as M_W, the mass of the weak gauge boson. We show that such a result is stable under radiative corrections. We describe the inflationary scenario corresponding to the simplest of these models and show that no major problem arises, except for a violation of the thermal constraint (stabilization of the field in the plateau region at high temperature). We discuss the possibility of introducing a second scalar field to satisfy this constraint. (orig.)

  6. De Novo Assembly of Complete Chloroplast Genomes from Non-model Species Based on a K-mer Frequency-Based Selection of Chloroplast Reads from Total DNA Sequences

    Directory of Open Access Journals (Sweden)

    Shairul Izan

    2017-08-01

    Full Text Available Whole Genome Shotgun (WGS) sequences of plant species often contain an abundance of reads that are derived from the chloroplast genome. Up to now these reads have generally been identified and assembled into chloroplast genomes based on homology to chloroplasts from related species. This re-sequencing approach may select against structural differences between the genomes, especially in non-model species for which no close relatives have been sequenced before. The alternative approach is to de novo assemble the chloroplast genome from total genomic DNA sequences. In this study, we used k-mer frequency tables to identify and extract the chloroplast reads from the WGS reads and assemble these using a highly integrated and automated custom pipeline. Our strategy includes steps aimed at optimizing assemblies and filling gaps which are left due to coverage variation in the WGS dataset. We have successfully de novo assembled three complete chloroplast genomes from plant species with a range of nuclear genome sizes to demonstrate the universality of our approach: Solanum lycopersicum (0.9 Gb), Aegilops tauschii (4 Gb) and Paphiopedilum henryanum (25 Gb). We also highlight the need to optimize the choice of k and the amount of data used. This new and cost-effective method for de novo short read assembly will facilitate the study of complete chloroplast genomes with more accurate analyses and inferences, especially in non-model plant genomes.
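The k-mer frequency idea can be sketched as follows: chloroplast-derived reads sit at much higher coverage than nuclear reads in a total-DNA library, so reads whose k-mers are frequent across the whole read set are chloroplast candidates. The median-frequency rule and the thresholds below are illustrative assumptions, not the authors' pipeline:

```python
from collections import Counter

def kmer_table(reads, k):
    """Count every k-mer across all reads (the k-mer frequency table)."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def high_coverage_reads(reads, k, min_median_count):
    """Select reads whose median k-mer frequency meets a threshold --
    a proxy for the high-copy chloroplast fraction of a WGS library."""
    counts = kmer_table(reads, k)
    selected = []
    for r in reads:
        freqs = sorted(counts[r[i:i + k]] for i in range(len(r) - k + 1))
        if freqs and freqs[len(freqs) // 2] >= min_median_count:
            selected.append(r)
    return selected
```

Choosing k and the coverage threshold is exactly the tuning the abstract highlights; real pipelines would set the threshold from the bimodal k-mer frequency histogram rather than a fixed constant.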

  7. A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery

    Directory of Open Access Journals (Sweden)

    M. Babul Hasan

    2007-01-01

    The IFP can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning horizon models are presented.

  8. On the potential of models for location and scale for genome-wide DNA methylation data.

    Science.gov (United States)

    Wahl, Simone; Fenske, Nora; Zeilinger, Sonja; Suhre, Karsten; Gieger, Christian; Waldenberger, Melanie; Grallert, Harald; Schmid, Matthias

    2014-07-03

    With the help of epigenome-wide association studies (EWAS), increasing knowledge on the role of epigenetic mechanisms such as DNA methylation in disease processes is obtained. In addition, EWAS aid the understanding of behavioral and environmental effects on DNA methylation. In terms of statistical analysis, specific challenges arise from the characteristics of methylation data. First, methylation β-values represent proportions with skewed and heteroscedastic distributions. Thus, traditional modeling strategies assuming a normally distributed response might not be appropriate. Second, recent evidence suggests that not only mean differences but also variability in site-specific DNA methylation associates with diseases, including cancer. The purpose of this study was to compare different modeling strategies for methylation data in terms of model performance and performance of downstream hypothesis tests. Specifically, we used the generalized additive models for location, scale and shape (GAMLSS) framework to compare beta regression with Gaussian regression on raw, binary logit and arcsine square root transformed methylation data, with and without modeling a covariate effect on the scale parameter. Using simulated and real data from a large population-based study and an independent sample of cancer patients and healthy controls, we show that beta regression does not outperform competing strategies in terms of model performance. In addition, Gaussian models for location and scale showed an improved performance as compared to models for location only. The best performance was observed for the Gaussian model on binary logit transformed β-values, referred to as M-values. Our results further suggest that models for location and scale are specifically sensitive towards violations of the distribution assumption and towards outliers in the methylation data. Therefore, a resampling procedure is proposed as a mode of inference and shown to diminish type I error rate in
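The binary logit transform to M-values, on which the Gaussian location-scale model performed best in this comparison, can be sketched as follows; the boundary guard eps is an added safeguard for β-values at exactly 0 or 1, not part of the study's definition:

```python
import math

def m_value(beta, eps=1e-6):
    """Binary logit (M-value) transform of a methylation beta-value,
    a proportion in (0, 1): M = log2(beta / (1 - beta)).
    eps clamps the 0/1 boundaries where the logit diverges."""
    b = min(max(beta, eps), 1.0 - eps)
    return math.log2(b / (1.0 - b))
```

On the M-value scale the skewed, heteroscedastic β distribution becomes closer to Gaussian, which is what makes Gaussian models for location and scale viable here.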

  9. RegPrecise 3.0--a resource for genome-scale exploration of transcriptional regulation in bacteria.

    Science.gov (United States)

    Novichkov, Pavel S; Kazakov, Alexey E; Ravcheev, Dmitry A; Leyn, Semen A; Kovaleva, Galina Y; Sutormin, Roman A; Kazanov, Marat D; Riehl, William; Arkin, Adam P; Dubchak, Inna; Rodionov, Dmitry A

    2013-11-01

    bacterial genomes. Analytical capabilities include exploration of: regulon content, structure and function; TF binding site motifs; conservation and variations in genome-wide regulatory networks across all taxonomic groups of Bacteria. RegPrecise 3.0 was selected as a core resource on transcriptional regulation of the Department of Energy Systems Biology Knowledgebase, an emerging software and data environment designed to enable researchers to collaboratively generate, test and share new hypotheses about gene and protein functions, perform large-scale analyses, and model interactions in microbes, plants, and their communities.

  10. A model for AGN variability on multiple time-scales

    Science.gov (United States)

    Sartori, Lia F.; Schawinski, Kevin; Trakhtenbrot, Benny; Caplar, Neven; Treister, Ezequiel; Koss, Michael J.; Urry, C. Megan; Zhang, C. E.

    2018-05-01

    We present a framework to link and describe active galactic nuclei (AGN) variability on a wide range of time-scales, from days to billions of years. In particular, we concentrate on the AGN variability features related to changes in black hole fuelling and accretion rate. In our framework, the variability features observed in different AGN at different time-scales may be explained as realisations of the same underlying statistical properties. In this context, we propose a model to simulate the evolution of AGN light curves with time based on the probability density function (PDF) and power spectral density (PSD) of the Eddington ratio (L/LEdd) distribution. Motivated by general galaxy population properties, we propose that the PDF may be inspired by the L/LEdd distribution function (ERDF), and that a single (or limited number of) ERDF+PSD set may explain all observed variability features. After outlining the framework and the model, we compile a set of variability measurements in terms of structure function (SF) and magnitude difference. We then combine the variability measurements on a SF plot ranging from days to Gyr. The proposed framework enables constraints on the underlying PSD and the ability to link AGN variability on different time-scales, therefore providing new insights into AGN variability and black hole growth phenomena.
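The structure function used above to compile variability measurements can be sketched as a first-order SF: the RMS magnitude difference over all epoch pairs at a given time lag. The binning convention below is an illustrative assumption:

```python
import numpy as np

def structure_function(times, mags, tau, dtau):
    """First-order structure function: RMS magnitude difference of all
    epoch pairs whose separation falls within dtau/2 of the lag tau."""
    t = np.asarray(times, dtype=float)
    m = np.asarray(mags, dtype=float)
    dt = np.abs(t[:, None] - t[None, :])
    dm = m[:, None] - m[None, :]
    # upper triangle only, so each pair is counted once
    mask = np.triu(np.abs(dt - tau) <= dtau / 2.0, k=1)
    if not mask.any():
        return np.nan                     # no pairs in this lag bin
    return np.sqrt(np.mean(dm[mask] ** 2))
```

Evaluating this over lags from days to Gyr (on simulated light curves drawn from a chosen PDF+PSD) is what populates an SF plot of the kind the authors assemble.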

  11. The constraints

    International Nuclear Information System (INIS)

    Jones, P.M.S.

    1987-01-01

    There are considerable incentives for the use of nuclear in preference to other sources for base load electricity generation in most of the developed world. These are economic, strategic, environmental and climatic. However, there are two potential constraints which could hinder the development of nuclear power to its full economic potential. These are public opinion and financial regulations which distort the nuclear economic advantage. The concerns of the anti-nuclear lobby are over safety, (especially following the Chernobyl accident), the management of radioactive waste, the potential effects of large scale exposure of the population to radiation and weapons proliferation. These are discussed. The financial constraint is over two factors, the availability of funds and the perception of cost, both of which are discussed. (U.K.)

  12. Multi-scale Drivers of Variations in Atmospheric Evaporative Demand Based on Observations and Physically-based Modeling

    Science.gov (United States)

    Peng, L.; Sheffield, J.; Li, D.

    2015-12-01

    Evapotranspiration (ET) is a key link between the availability of water resources and climate change and climate variability. Variability of ET has important environmental and socioeconomic implications for managing hydrological hazards, food and energy production. Although there have been many observational and modeling studies of ET, how ET has varied and the drivers of the variations at different temporal scales remain elusive. Much of the uncertainty comes from the atmospheric evaporative demand (AED), which is the combined effect of radiative and aerodynamic controls. The inconsistencies among modeled AED estimates and the limited observational data may originate from multiple sources including the limited time span and uncertainties in the data. To fully investigate and untangle the intertwined drivers of AED, we present a spectrum analysis to identify key controls of AED across multiple temporal scales. We use long-term records of observed pan evaporation for 1961-2006 from 317 weather stations across China and physically-based model estimates of potential evapotranspiration (PET). The model estimates are based on surface meteorology and radiation derived from reanalysis, satellite retrievals and station data. Our analyses show that temperature plays a dominant role in regulating variability of AED at the inter-annual scale. At the monthly and seasonal scales, the primary control of AED shifts from radiation in humid regions to humidity in dry regions. Unlike many studies focusing on the spatial pattern of ET drivers based on a traditional supply and demand framework, this study underlines the importance of temporal scales when discussing controls of ET variations.

  13. Integrated reservoir characterization: Improvement in heterogeneities stochastic modelling by integration of additional external constraints

    Energy Technology Data Exchange (ETDEWEB)

    Doligez, B.; Eschard, R. [Institut Francais du Petrole, Rueil Malmaison (France); Geffroy, F. [Centre de Geostatistique, Fontainebleau (France)] [and others

    1997-08-01

    The classical approach to constructing reservoir models is to start with a fine-scale geological model informed with petrophysical properties. Scaling-up techniques then yield a reservoir model compatible with fluid-flow simulators. Geostatistical modelling techniques are widely used to build the geological models before scaling-up. These methods provide equiprobable images of the area under investigation, which honor the well data and whose variability matches the variability computed from the data. At an appraisal phase, when few data are available, or when the wells are insufficient to describe all the heterogeneities and the behavior of the field, additional constraints are needed to obtain a more realistic geological model. For example, seismic data or stratigraphic models can provide average reservoir information with excellent areal coverage but poor vertical resolution. Recent advances in modelling techniques now make it possible to integrate this type of additional external information to constrain the simulations. In particular, 2D or 3D seismic-derived information grids, or sand-shale ratio maps from stratigraphic models, can be used as external drifts to compute the geological image of the reservoir at the fine scale. Examples are presented to illustrate the use of these new tools, their impact on the final reservoir model, and their sensitivity to some key parameters.

  14. Genome-enabled Modeling of Microbial Biogeochemistry using a Trait-based Approach. Does Increasing Metabolic Complexity Increase Predictive Capabilities?

    Science.gov (United States)

    King, E.; Karaoz, U.; Molins, S.; Bouskill, N.; Anantharaman, K.; Beller, H. R.; Banfield, J. F.; Steefel, C. I.; Brodie, E.

    2015-12-01

    The biogeochemical functioning of ecosystems is shaped in part by genomic information stored in the subsurface microbiome. Cultivation-independent approaches allow us to extract this information through reconstruction of thousands of genomes from a microbial community. Analysis of these genomes, in turn, gives an indication of the organisms present and their functional roles. However, metagenomic analyses can currently deliver thousands of different genomes that range in abundance/importance, requiring the identification and assimilation of key physiologies and metabolisms to be represented as traits for successful simulation of subsurface processes. Here we focus on incorporating -omics information into BioCrunch, a genome-informed trait-based model that represents the diversity of microbial functional processes within a reactive transport framework. This approach models the rate of nutrient uptake and the thermodynamics of coupled electron donors and acceptors for a range of microbial metabolisms including heterotrophs and chemolithotrophs. Metabolism of exogenous substrates fuels catabolic and anabolic processes, with the proportion of energy used for cellular maintenance, respiration, biomass development, and enzyme production based upon dynamic intracellular and environmental conditions. This internal resource partitioning represents a trade-off against biomass formation and results in microbial community emergence across a fitness landscape. Biocrunch was used here in simulations that included organisms and metabolic pathways derived from a dataset of ~1200 non-redundant genomes reflecting a microbial community in a floodplain aquifer. Metagenomic data was directly used to parameterize trait values related to growth and to identify trait linkages associated with respiration, fermentation, and key enzymatic functions such as plant polymer degradation. Simulations spanned a range of metabolic complexities and highlight benefits originating from simulations

  15. A constraints-induced model of park choice

    NARCIS (Netherlands)

    Stemerding, M.P.; Oppewal, H.; Timmermans, H.J.P.

    1999-01-01

    Conjoint choice models have been used widely in the consumer-choice literature as an approach to measure and predict consumer-choice behavior. These models typically assume that consumer preferences and choice rules are independent from any constraints that might impact the behavior of interest.

  16. Design and Evaluation of the User-Adapted Program Scheduling system based on Bayesian Network and Constraint Satisfaction

    Science.gov (United States)

    Iwasaki, Hirotoshi; Sega, Shinichiro; Hiraishi, Hironori; Mizoguchi, Fumio

    In recent years, large amounts of music content can be stored in mobile computing devices, such as portable digital music players and car navigation systems. Moreover, various information content such as news or traffic information can be acquired anywhere, at any time, over cellular and wireless LAN connections. However, usability issues arise from the simple interfaces of mobile computing devices, and retrieving and selecting such content poses safety issues, especially while driving. It is therefore important for a mobile system to automatically recommend content adapted to the user's preference and situation. In this paper, we present user-adapted program scheduling, which generates sequences of content (programs) suited to the user's preference and situation based on a Bayesian network and the Constraint Satisfaction Problem (CSP) technique. We also describe the design and evaluation of its realization system, the Personal Program Producer (P3). First, preference, such as the genre ratio of content in a program, is learned as a Bayesian network model from simple operations such as skip behavior. A model including each content item tends to become large-scale, so to keep it small we present a model separation method that losslessly compresses the model. Using the model, probabilistic distributions of preference are inferred to generate constraints, and finally a program satisfying the constraints is produced. This kind of CSP raises the issue that the number of variables is not fixed; to handle a variable number of variables, we propose a method using metavariables. To evaluate the above methods, we applied them to P3 on a car navigation system. User evaluations confirmed that P3 can produce programs that users prefer and can adapt them to the user.

  17. Seismogenic Potential of a Gouge-filled Fault and the Criterion for Its Slip Stability: Constraints From a Microphysical Model

    Science.gov (United States)

    Chen, Jianye; Niemeijer, A. R.

    2017-12-01

    Physical constraints for the parameters of the rate-and-state friction (RSF) laws have been mostly lacking. We presented such constraints based on a microphysical model and demonstrated the general applicability to granular fault gouges deforming under hydrothermal conditions in a companion paper. In this paper, we examine the transition velocities for contrasting frictional behavior (i.e., strengthening to weakening and vice versa) and the slip stability of the model. The model predicts a steady state friction coefficient that increases with slip rate at very low and high slip rates and decreases in between. This allows the transition velocities to be theoretically obtained, along with the unstable slip regime (Vs→w < V < Vw→s), the critical stiffness (Kc) and dimension (Wc), and the static stress drop (Δμs) associated with self-sustained oscillations or stick slips. Numerical implementation of the model predicts frictional behavior that exhibits consecutive transitions from stable sliding, via periodic oscillations, to unstable stick slips with decreasing elastic stiffness or loading rate, and gives Kc, Wc, Δμs, Vs→w, and Vw→s values that are consistent with the analytical predictions. General scaling relations of these parameters given by the model are consistent with previous interpretations in the context of RSF laws and agree well with previous experiments, testifying to high validity. From these physics-based expressions that allow a more reliable extrapolation to natural conditions, we discuss the seismological implications for natural faults and present topics for future work.

  18. Genomic selection: genome-wide prediction in plant improvement.

    Science.gov (United States)

    Desta, Zeratsion Abera; Ortiz, Rodomiro

    2014-09-01

    Association analysis is used to measure relations between markers and quantitative trait loci (QTL). Such estimates ignore genes with small effects that underpin quantitative traits. By contrast, genome-wide selection estimates marker effects across the whole genome on the target population based on a prediction model developed in the training population (TP). Whole-genome prediction models estimate all marker effects in all loci and capture small QTL effects. Here, we review several genomic selection (GS) models with respect to both the prediction accuracy and genetic gain from selection. Phenotypic selection or marker-assisted breeding protocols can be replaced by selection based on whole-genome predictions, in which phenotyping updates the model to build up prediction accuracy. Copyright © 2014 Elsevier Ltd. All rights reserved.
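The core idea of whole-genome prediction, estimating all marker effects jointly with shrinkage rather than testing markers one at a time, can be sketched in an RR-BLUP-like form. The closed-form ridge solution below is a generic stand-in, not any specific GS model from the review:

```python
import numpy as np

def ridge_marker_effects(X, y, lam):
    """RR-BLUP-style shrinkage estimate of all marker effects at once:
    beta_hat = (X'X + lam*I)^(-1) X'y, where X is the n x p marker
    matrix of the training population and y its phenotypes."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def genomic_predict(X_new, beta_hat):
    """Predict (genomic estimated breeding values of) selection
    candidates from their markers alone, without phenotyping them."""
    return X_new @ beta_hat
```

Shrinkage (lam > 0) is what lets p >> n marker effects be estimated at all, capturing many small QTL effects instead of only the few that pass an association-test threshold.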

  19. A Constraint-Based Model for Fast Post-Disaster Emergency Vehicle Routing

    Directory of Open Access Journals (Sweden)

    Roberto Amadini

    2013-12-01

    Full Text Available Disasters like terrorist attacks, earthquakes, hurricanes, and volcano eruptions are usually unpredictable events that affect a high number of people. We propose an approach that could be used as a decision support tool for a post-disaster response that allows the assignment of victims to hospitals and organizes their transportation via emergency vehicles. By exploiting the synergy between Mixed Integer Programming and Constraint Programming techniques, we are able to compute the routing of the vehicles so as to rescue many more victims than both heuristic-based and complete approaches in a very reasonable time.

  20. Beyond mechanistic interaction: Value-based constraints on meaning in language

    Directory of Open Access Journals (Sweden)

    Joanna Rączaszek-Leonardi

    2015-10-01

    Full Text Available According to situated, embodied, distributed approaches to cognition, language is a crucial means for structuring social interactions. Recent approaches that emphasize the coordinative function of language treat language as a system of replicable constraints that work both on individuals and on interactions. In this paper we argue that integrating the replicable-constraints approach to language with the ecological view of values allows for a deeper insight into processes of meaning creation in interaction. Such a synthesis of these frameworks draws attention to important sources of structuring interactions beyond the sheer efficiency of a collective system in its current task situation. Most importantly, the workings of linguistic constraints are shown to be embedded in more general fields of values, which are realized on multiple time-scales. Since the ontogenetic timescale offers a convenient window into the process of the emergence of linguistic constraints, we present illustrations of concrete mechanisms through which values may become embodied in language use in development.

  1. Mechanistic modeling of aberrant energy metabolism in human disease

    Directory of Open Access Journals (Sweden)

    Vineet Sangar

    2012-10-01

    Full Text Available Dysfunction in energy metabolism—including in pathways localized to the mitochondria—has been implicated in the pathogenesis of a wide array of disorders, ranging from cancer to neurodegenerative diseases to type II diabetes. The inherent complexities of energy and mitochondrial metabolism present a significant obstacle in the effort to understand the role that these molecular processes play in the development of disease. To help unravel these complexities, systems biology methods have been applied to develop an array of computational metabolic models, ranging from mitochondria-specific processes to genome-scale cellular networks. These constraint-based models can efficiently simulate aspects of normal and aberrant metabolism in various genetic and environmental conditions. Development of these models leverages—and also provides a powerful means to integrate and interpret—information from a wide range of sources including genomics, proteomics, metabolomics, and enzyme kinetics. Here, we review a variety of mechanistic modeling studies that explore metabolic functions, deficiency disorders, and aberrant biochemical pathways in mitochondria and related regions in the cell.
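The constraint-based simulation these models perform reduces, in its simplest form, to flux balance analysis: maximize an objective flux subject to the steady-state constraint S·v = 0 and flux bounds. A minimal sketch using SciPy's linear-programming solver (the two-reaction toy network in the test is hypothetical, not a published model):

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, objective_index):
    """Toy flux balance analysis: maximize one flux (e.g. biomass)
    subject to steady state (S v = 0) and per-reaction flux bounds.
    S is the m x n stoichiometric matrix; lb/ub are bound vectors."""
    n = S.shape[1]
    c = np.zeros(n)
    c[objective_index] = -1.0            # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x
```

Genome-scale models differ only in size (thousands of reactions) and in how S and the bounds are curated from genomic and biochemical evidence; the optimization itself is the same linear program.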

  2. Revealing less derived nature of cartilaginous fish genomes with their evolutionary time scale inferred with nuclear genes.

    Directory of Open Access Journals (Sweden)

    Adina J Renz

    Full Text Available Cartilaginous fishes, divided into Holocephali (chimaeras) and Elasmobranchii (sharks, rays and skates), occupy a key phylogenetic position among extant vertebrates for reconstructing vertebrate evolutionary history. An accurate evolutionary time scale is indispensable for a better understanding of the relationship between phenotypic and molecular evolution of cartilaginous fishes. However, our current knowledge of the time scale of cartilaginous fish evolution relies largely on estimates from mitochondrial DNA sequences. In this study, making the best use of the still partial but large-scale sequencing data for cartilaginous fish species, we estimate the divergence times between the major cartilaginous fish lineages using nuclear genes. By rigorous orthology assessment based on available genomic and transcriptomic sequence resources for cartilaginous fishes, we selected 20 protein-coding genes in the nuclear genome, spanning 2973 amino acid residues. Our Bayesian analysis yielded a mean divergence time of 421 Ma, in the late Silurian, for the Holocephali-Elasmobranchii split, and 306 Ma, in the late Carboniferous, for the split between sharks and rays/skates. Applying these results and other documented divergence times, we measured the relative evolutionary rate of the Hox A cluster sequences in the cartilaginous fish lineages, which proved lower than in tetrapod lineages by a factor of at least 2.4. The obtained time scale enables mapping of phenotypic and molecular changes in a quantitative framework. It is of great interest to corroborate the less derived nature of cartilaginous fish at the molecular level as a genome-wide phenomenon.

  3. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets

    NARCIS (Netherlands)

    Levering, J.; Fiedler, T.; Sieg, A.; van Grinsven, K.W.A.; Hering, S.; Veith, N.; Olivier, B.G.; Klett, L.; Hugenholtz, J.; Teusink, B.; Kreikemeyer, B.; Kummer, U.

    2016-01-01

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes

  4. Genome sequence analysis of the model grass Brachypodium distachyon: insights into grass genome evolution

    Energy Technology Data Exchange (ETDEWEB)

    Schulman, Al

    2009-08-09

    Three subfamilies of grasses, the Ehrhartoideae (rice), the Panicoideae (maize, sorghum, sugar cane and millet), and the Pooideae (wheat, barley and cool-season forage grasses), provide the basis of human nutrition and are poised to become major sources of renewable energy. Here we describe the complete genome sequence of the wild grass Brachypodium distachyon (Brachypodium), the first member of the Pooideae subfamily to be completely sequenced. Comparison of the Brachypodium, rice and sorghum genomes reveals a precise sequence-based history of genome evolution across a broad diversity of the grass family and identifies nested insertions of whole chromosomes into centromeric regions as a predominant mechanism driving chromosome evolution in the grasses. The relatively compact genome of Brachypodium is maintained by a balance of retroelement replication and loss. The complete genome sequence of Brachypodium, coupled with its exceptional promise as a model system for grass research, will support the development of new energy and food crops.

  5. A Constraint programming-based genetic algorithm for capacity output optimization

    Directory of Open Access Journals (Sweden)

    Kate Ean Nee Goh

    2014-10-01

    Full Text Available Purpose: The manuscript presents an investigation into a constraint programming-based genetic algorithm (CPGA) for capacity output optimization in a back-end semiconductor manufacturing company. Design/methodology/approach: In the first stage, constraint programming defining the relationships between variables was formulated into the objective function. A genetic algorithm model was created in the second stage to optimize capacity output. Three demand scenarios were applied to test the robustness of the proposed algorithm. Findings: CPGA improved both machine utilization and capacity output once the minimum requirements of a demand scenario were fulfilled. Capacity outputs of the three scenarios were improved by 157%, 7%, and 69%, respectively. Research limitations/implications: The work relates to aggregate planning of machine capacity in a single case study; the constraints and constructed scenarios were therefore industry-specific. Practical implications: Capacity planning in a semiconductor manufacturing facility needs to consider multiple interacting constraints on resource availability, process flow and product demand. The findings show that CPGA is a practical and efficient way to optimize capacity output and allows the company to review its capacity with quick feedback. Originality/value: The work integrates two contemporary computational methods for a real industry application conventionally reliant on human judgement.
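To make the two-stage idea concrete, here is a toy sketch. All numbers, constraints, and the fitness shape below are invented for illustration, not taken from the study: hard constraint-programming-style relations penalize infeasible capacity plans, while a genetic algorithm searches for the feasible plan with the highest weighted output.

```python
import random

random.seed(42)  # deterministic toy run

# Hypothetical capacity problem (illustrative numbers, not from the study):
CAPS  = [50, 60, 40, 80]         # per-machine capacity ceilings
YIELD = [0.9, 0.8, 0.95, 0.85]   # good-unit yield per machine

def feasible(x):
    # Constraint layer: hard relations every plan must satisfy, e.g.
    # machines 0 and 1 must jointly meet a minimum demand of 60 units.
    return all(0 <= x[i] <= CAPS[i] for i in range(4)) and x[0] + x[1] >= 60

def fitness(x):
    # Weighted capacity output; infeasible plans are penalized, not discarded.
    total = sum(x[i] * YIELD[i] for i in range(4))
    return total if feasible(x) else total - 1000

def crossover(a, b):
    cut = random.randrange(1, 4)
    return a[:cut] + b[cut:]

def mutate(x):
    y = list(x)
    i = random.randrange(4)
    y[i] = random.randint(0, CAPS[i])
    return y

pop = [[random.randint(0, CAPS[i]) for i in range(4)] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                     # elitism keeps the best plans alive
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = max(pop, key=fitness)
# best approaches the all-at-capacity plan [50, 60, 40, 80]
```

Real CPGA implementations propagate the constraints to prune the search space rather than merely penalizing violations, but the division of labor is the same.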

  6. Genome-wide comparative analysis of codon usage bias and codon context patterns among cyanobacterial genomes.

    Science.gov (United States)

    Prabha, Ratna; Singh, Dhananjaya P; Sinha, Swati; Ahmad, Khurshid; Rai, Anil

    2017-04-01

    With the increasing accumulation of genomic sequence information for prokaryotes, the study of codon usage bias has gained renewed attention. The purpose of this study was to examine codon selection patterns within and across cyanobacterial species belonging to diverse taxonomic orders and habitats. We performed a detailed comparative analysis of cyanobacterial genomes with respect to codon bias. Our analysis shows that in cyanobacterial genomes, A- and/or T-ending codons are used predominantly, whereas G- and/or C-ending codons are largely avoided. Variation in the codon context usage of cyanobacterial genes corresponded to the clustering of cyanobacteria by GC content. Analysis of the codon adaptation index (CAI) and synonymous codon usage order (SCUO) revealed that the majority of genes show low codon bias. Codon selection patterns in cyanobacterial genomes point to compositional constraint as the major influencing factor: although mutational constraint may play some role in shaping codon usage bias in cyanobacteria, compositional constraint in terms of genomic GC composition, coupled with environmental factors, governs the codon selection pattern in cyanobacterial genomes. Copyright © 2016 Elsevier B.V. All rights reserved.
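The quantities behind such analyses are straightforward to compute. A toy sketch (partial codon table and an invented sequence; a real CAI computation additionally needs a reference set of highly expressed genes) of GC content at third codon positions and relative synonymous codon usage (RSCU):

```python
from collections import Counter

# Toy codon-bias metrics (illustrative only).
LEU = ["TTA", "TTG", "CTT", "CTC", "CTA", "CTG"]  # leucine synonym family

def codons(seq):
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

def gc3(seq):
    # GC content at third codon positions: low GC3 means A/T-ending codons
    # dominate, the pattern reported above for cyanobacterial genes.
    third = [c[2] for c in codons(seq)]
    return sum(b in "GC" for b in third) / len(third)

def rscu(seq, family):
    # Relative synonymous codon usage: observed count / count expected if
    # all synonyms were used equally (1.0 = no bias within the family).
    counts = Counter(c for c in codons(seq) if c in family)
    total = sum(counts.values())
    return {c: (counts[c] * len(family) / total if total else 0.0)
            for c in family}

seq = "TTATTATTGCTG" * 3                  # toy AT-leaning "gene"
print(gc3(seq), rscu(seq, LEU)["TTA"])    # 0.5 3.0
```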

  7. HOROPLAN: computer-assisted nurse scheduling using constraint-based programming.

    Science.gov (United States)

    Darmoni, S J; Fajner, A; Mahé, N; Leforestier, A; Vondracek, M; Stelian, O; Baldenweck, M

    1995-01-01

    Nurse scheduling is a difficult and time-consuming task. The schedule has to determine the day-to-day shift assignments of each nurse for a specified period in a way that satisfies the given requirements as far as possible while taking the wishes of the nurses into account as closely as possible. This paper presents a constraint-based, artificial-intelligence approach by describing a prototype implementation developed with the Charme language and the first results of its use in the Rouen University Hospital. Horoplan implements non-cyclical constraint-based scheduling using some heuristics. Four levels of constraints were defined to give maximum flexibility: the French level (e.g. number of hours worked in a year), the hospital level (e.g. specific days off), the department level (e.g. specific shifts) and the care-unit level (e.g. specific patterns for weekends). Some constraints must always be satisfied and cannot be overruled, while others can be overruled at a certain cost. Rescheduling is possible at any time, especially in the case of an unscheduled absence.
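The layered hard/soft constraint scheme can be sketched in a few lines. This is not HOROPLAN or Charme: the roster, the hard coverage rule, and the soft-constraint costs below are invented for illustration, and a real system would use constraint propagation instead of the brute-force search shown here.

```python
import itertools

# Toy three-nurse, three-day roster (hypothetical constraints).
NURSES, DAYS, SHIFTS = ["A", "B", "C"], range(3), ["day", "night", "off"]

def hard_ok(schedule):
    # Hard constraint (never overruled): each day needs a day AND a night shift.
    return all({schedule[n][d] for n in NURSES} >= {"day", "night"}
               for d in DAYS)

def soft_cost(schedule):
    # Soft constraints accrue a cost instead of failing outright:
    # a night shift directly followed by a day shift costs 2;
    # going the whole period without a day off costs 1.
    cost = 0
    for n in NURSES:
        row = [schedule[n][d] for d in DAYS]
        cost += 2 * sum(a == "night" and b == "day"
                        for a, b in zip(row, row[1:]))
        cost += 1 * ("off" not in row)
    return cost

# Brute-force search over all rosters (27^3 candidates) for the cheapest one.
best, best_cost = None, None
for rows in itertools.product(itertools.product(SHIFTS, repeat=3), repeat=3):
    schedule = {n: list(r) for n, r in zip(NURSES, rows)}
    if hard_ok(schedule):
        c = soft_cost(schedule)
        if best_cost is None or c < best_cost:
            best, best_cost = schedule, c
print(best_cost)  # 0: a roster satisfying every soft constraint exists here
```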

  8. Including Overweight or Obese Students in Physical Education: A Social Ecological Constraint Model

    Science.gov (United States)

    Li, Weidong; Rukavina, Paul

    2012-01-01

    In this review, we propose a social ecological constraint model to study inclusion of overweight or obese students in physical education by integrating key concepts and assumptions from ecological constraint theory in motor development and social ecological models in health promotion and behavior. The social ecological constraint model proposes…

  9. Tie Points Extraction for SAR Images Based on Differential Constraints

    Science.gov (United States)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct-match ratio of image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then corresponding layers of the pyramids are matched from top to bottom. In this process, similarity is measured by the normalized cross-correlation (NCC) algorithm, calculated over a rectangular window whose long side is parallel to the azimuth direction. False matches are removed by the differential-constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct-match ratio and accuracy of the proposed method.
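The matching step is easy to state in code. A sketch of NCC matching with an azimuth-elongated window (pure NumPy, exhaustive search; window sizes and the test image are illustrative, and production code would restrict the search region and add the pyramid and DC-RANSAC stages):

```python
import numpy as np

def ncc(window, patch):
    # Normalized cross-correlation of two equally sized image chips.
    w = window - window.mean()
    p = patch - patch.mean()
    denom = np.sqrt((w * w).sum() * (p * p).sum())
    return float((w * p).sum() / denom) if denom else 0.0

def best_match(template, image):
    # Exhaustive NCC search; a window elongated along rows mimics the
    # "long side parallel to azimuth" geometry described above.
    th, tw = template.shape
    scores = np.full((image.shape[0] - th + 1, image.shape[1] - tw + 1), -1.0)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            scores[r, c] = ncc(template, image[r:r + th, c:c + tw])
    return np.unravel_index(np.argmax(scores), scores.shape)

rng = np.random.default_rng(0)
image = rng.standard_normal((40, 30))
template = image[10:26, 5:13].copy()   # 16x8 chip, long side in "azimuth"
print(best_match(template, image))     # recovers the true offset (10, 5)
```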

  10. The Sequenced Angiosperm Genomes and Genome Databases.

    Science.gov (United States)

    Chen, Fei; Dong, Wei; Zhang, Jiawei; Guo, Xinyue; Chen, Junhao; Wang, Zhengjia; Lin, Zhenguo; Tang, Haibao; Zhang, Liangsheng

    2018-01-01

    Angiosperms, the flowering plants, provide essential resources for human life, such as food, energy, oxygen, and materials, and they have shaped the evolution of humans, animals, and the planet itself. Despite the numerous advances in genome sequencing and reporting, no review has covered all the released angiosperm genomes and the genome databases available for data sharing. Based on the rapid advances and innovations in database construction over the last few years, here we provide a comprehensive review of three major types of angiosperm genome databases: databases for a single species, for a specific angiosperm clade, and for multiple angiosperm species. The scope, tools, and data of each type of database and their features are concisely discussed. Genome databases for a single species or a clade of species are especially popular with specific groups of researchers, while a regularly updated comprehensive database is more powerful for addressing major scientific questions at the genome scale. Considering the low coverage of flowering plants in any available database, we propose the construction of a comprehensive database to facilitate large-scale comparative studies of angiosperm genomes and to promote collaborative studies of important questions in plant biology.

  11. Machine Translation Using Constraint-Based Synchronous Grammar

    Institute of Scientific and Technical Information of China (English)

    WONG Fai; DONG Mingchui; HU Dongcheng

    2006-01-01

    A synchronous grammar based on the formalism of context-free grammar was developed by generalizing the first component of a production, which models the source text. Unlike other synchronous grammars, this grammar allows multiple target productions to be associated with a single production rule, which can be used to guide a parser to infer different possible translational equivalences for a recognized input string according to the feature constraints of symbols in the pattern. An extended generalized LR algorithm was adapted to parsing the proposed formalism to analyze the syntactic structure of a language. The grammar was used as the basis for building a machine translation system for Portuguese-to-Chinese translation. The empirical results show that the grammar is more expressive when modeling the translational equivalences of parallel texts for machine translation and grammar-rewriting applications.

  12. Multi-time, multi-scale correlation functions in turbulence and in turbulent models

    NARCIS (Netherlands)

    Biferale, L.; Boffetta, G.; Celani, A.; Toschi, F.

    1999-01-01

    A multifractal-like representation for multi-time, multi-scale velocity correlation in turbulence and dynamical turbulent models is proposed. The importance of subleading contributions to time correlations is highlighted. The fulfillment of the dynamical constraints due to the equations of motion is

  13. Constraints on supersymmetric flavour models from b→sγ

    International Nuclear Information System (INIS)

    Olive, Keith A.; Velasco-Sevilla, L.

    2008-01-01

    We consider the effects of departures from minimal flavour violation (MFV) in the context of CMSSM-like theories. Second- and third-generation off-diagonal elements in the Yukawa, sfermion, and trilinear mass matrices are taken to be non-zero at the GUT scale. These are run down together with the MSSM parameters to the electroweak scale. We apply constraints from fermion masses and CKM matrix elements to limit the range of the new free parameters of the model, and determine the effect of the departure from MFV on the branching ratio of b→sγ. We find that only when the expansion parameter in the down-squark sector is relatively large is there a noticeable effect, which tends to relax the lower limit from b→sγ on the universal gaugino mass. We also find that the expansion parameter associated with the slepton sector needs to be smaller than the corresponding parameter in the down-squark sector in order to comply with the bound imposed by the branching ratio of τ→μγ.

  14. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    Science.gov (United States)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.
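The core device, an elliptic differential filter, can be sketched in one dimension (a schematic, not the paper's formulation or its dynamic procedure): the filtered field u̅ solves (I − Δ²∂²/∂x²)u̅ = u on a periodic domain, which damps a mode of wavenumber k by roughly 1/(1 + (kΔ)²); the modelled subgrid velocity is then u′ = u − u̅.

```python
import numpy as np

# 1-D elliptic differential filter on a periodic domain (schematic only):
# solve (I - Delta^2 d^2/dx^2) u_bar = u, then u' = u - u_bar is the
# modelled subgrid-scale velocity; Delta plays the role of the filter width.
n, L, Delta = 128, 2 * np.pi, 0.3
h = L / n
x = np.linspace(0, L, n, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(10 * x)     # large-scale + small-scale content

lap = (np.eye(n, k=1) + np.eye(n, k=-1) - 2 * np.eye(n)) / h**2
lap[0, -1] = lap[-1, 0] = 1 / h**2       # periodic wrap-around stencil
A = np.eye(n) - Delta**2 * lap
u_bar = np.linalg.solve(A, u)            # filtered (resolved-scale) field
u_prime = u - u_bar                      # subgrid-scale velocity estimate
# A mode of wavenumber k is damped by ~1/(1 + (k*Delta)^2): the k=1
# content survives almost intact while the k=10 content is cut ~10x.
```

The paper's contribution is choosing the filter-width parameter dynamically from consistency constraints on the subgrid energetics rather than fixing it as done here.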

  15. Solution-based targeted genomic enrichment for precious DNA samples

    Directory of Open Access Journals (Sweden)

    Shearer Aiden

    2012-05-01

    Full Text Available Background: Solution-based targeted genomic enrichment (TGE) protocols permit selective sequencing of genomic regions of interest on a massively parallel scale. These protocols could be improved by: (1) modifying or eliminating time-consuming steps; (2) increasing yield to reduce input DNA requirements and excessive PCR cycling; and (3) enhancing reproducibility. Results: We developed a solution-based TGE method for downstream Illumina sequencing in a non-automated workflow, adding standard Illumina barcode indexes during the post-hybridization amplification to allow for sample pooling prior to sequencing. The method utilizes Agilent SureSelect baits, primers and hybridization reagents for the capture, off-the-shelf reagents for the library preparation steps, and adaptor oligonucleotides for Illumina paired-end sequencing purchased directly from an oligonucleotide manufacturing company. Conclusions: This solution-based TGE method for Illumina sequencing is optimized for small or medium-sized laboratories and addresses the weaknesses of standard protocols by reducing the amount of input DNA required, increasing capture yield, optimizing efficiency, and improving reproducibility.

  16. THE CLOWER CONSTRAINTS MODEL DARI SURPLUS ATAU DEFISIT FISKAL PEMERINTAH

    Directory of Open Access Journals (Sweden)

    Jonni Manurung

    2006-01-01

    Full Text Available This study aims to build empirical models and new hypotheses relating broad money, the fiscal surplus or deficit, the general price index or inflation rate, demand for the monetary base, and demand for bank deposits. It also seeks the optimal bank-deposit interest rate at given values of broad money, the fiscal surplus or deficit, the general price index, demand for the monetary base, and demand for bank deposits. The model comprises the central bank balance and an intertemporal budget constraint under maximized expected utility from holding monetary base and bank deposits. Stabilization of the fiscal surplus or deficit is evaluated with respect to changes in the required reserve ratio, Gross Domestic Product, the general price index and the interest rate. The results show that the required reserve ratio, Gross Domestic Product, the general price index and the interest rate strongly affect the fiscal surplus or deficit, with the contributions of the required reserve ratio and the interest rate being relatively large. These results show that the Clower-constraint model can explain the need for coordination of fiscal and monetary policy. Fiscal policy remains weak, slowing the real business cycle and producing high inflation and interest rates; monetary policy, on the other hand, is strong and yields a relatively large fiscal surplus. Prudence on the part of the government and the monetary authority is needed to build fiscal and monetary policies that create a dynamic economy, lower inflation, a lower required reserve ratio and interest rate, and a dynamic monetary-fiscal equilibrium.

  17. Toward genome-scale models of the Chinese hamster ovary cells: incentives, status and perspectives

    DEFF Research Database (Denmark)

    Kaas, Christian Schrøder; Fan, Yuzhou; Weilguny, Dietmar

    2014-01-01

    Bioprocessing of the important Chinese hamster ovary (CHO) cell lines used for the production of biopharmaceuticals stands at the brink of several redefining events. In 2011, the field entered the genomics era, which has accelerated omics-based phenotyping of the cell lines. In this review we...

  18. Leading CFT constraints on multi-critical models in d>2

    Energy Technology Data Exchange (ETDEWEB)

    Codello, Alessandro [CP-Origins, University of Southern Denmark,Campusvej 55, 5230 Odense M (Denmark); INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Safari, Mahmoud [INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Dipartimento di Fisica e Astronomia, Università di Bologna,via Irnerio 46, 40126 Bologna (Italy); Vacca, Gian Paolo [INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy); Zanusso, Omar [Theoretisch-Physikalisches Institut, Friedrich-Schiller-Universität Jena,Max-Wien-Platz 1, 07743 Jena (Germany); INFN - Sezione di Bologna,via Irnerio 46, 40126 Bologna (Italy)

    2017-04-21

    We consider the family of renormalizable scalar QFTs with self-interacting potentials of highest monomial ϕ^m below their upper critical dimensions d_c = 2m/(m−2), and study them using a combination of CFT constraints, Schwinger-Dyson equation and the free theory behavior at the upper critical dimension. For even integers m≥4 these theories coincide with the Landau-Ginzburg description of multi-critical phenomena and interpolate with the unitary minimal models in d=2, while for odd m the theories are non-unitary and start at m=3 with the Lee-Yang universality class. For all the even potentials and for the Lee-Yang universality class, we show how the assumption of conformal invariance is enough to compute the scaling dimensions of the local operators ϕ^k and of some families of structure constants in either the coupling’s or the ϵ-expansion. For all other odd potentials we express some scaling dimensions and structure constants in the coupling’s expansion.
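As a quick sanity check on the quoted formula (a restatement of standard special cases, not additional results from the paper), the upper critical dimension reproduces the familiar multi-critical landmarks:

```latex
\[
d_c(m) = \frac{2m}{m-2}:
\qquad d_c(3) = 6 \ \text{(Lee-Yang)},
\quad d_c(4) = 4 \ (\phi^4\text{ / Ising}),
\quad d_c(6) = 3 \ \text{(tricritical)}.
\]
```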

  19. Leading CFT constraints on multi-critical models in d > 2

    DEFF Research Database (Denmark)

    Codello, Alessandro; Safari, Mahmoud; Vacca, Gian Paolo

    2017-01-01

    We consider the family of renormalizable scalar QFTs with self-interacting potentials of highest monomial ϕ^m below their upper critical dimensions d_c = 2m/(m−2), and study them using a combination of CFT constraints, Schwinger-Dyson equation and the free theory behavior at the upper critical dimension... For even integers m ≥ 4 these theories coincide with the Landau-Ginzburg description of multi-critical phenomena and interpolate with the unitary minimal models in d = 2, while for odd m the theories are non-unitary and start at m = 3 with the Lee-Yang universality class. For all the even potentials... and for the Lee-Yang universality class, we show how the assumption of conformal invariance is enough to compute the scaling dimensions of the local operators ϕ^k and of some families of structure constants in either the coupling’s or the ϵ-expansion. For all other odd potentials we express some scaling dimensions...

  20. Cloud computing for genomic data analysis and collaboration.

    Science.gov (United States)

    Langmead, Ben; Nellore, Abhinav

    2018-04-01

    Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.

  1. Parallel constraint satisfaction in memory-based decisions.

    Science.gov (United States)

    Glöckner, Andreas; Hodges, Sara D

    2011-01-01

    Three studies sought to investigate decision strategies in memory-based decisions and to test the predictions of the parallel constraint satisfaction (PCS) model of decision making (Glöckner & Betsch, 2008). Time pressure was manipulated, and the model was compared against simple heuristics (take-the-best and equal-weight) and a weighted additive strategy. From PCS we predicted that fast intuitive decision making is based on compensatory information integration and that decision time increases and confidence decreases with increasing inconsistency in the decision task. In line with these predictions, we observed predominant use of compensatory strategies under all time-pressure conditions, even with decision times as short as 1.7 s. For a substantial number of participants, choices and decision times were best explained by PCS, but there was also evidence for the use of simple heuristics. The time-pressure manipulation did not significantly affect decision strategies. Overall, the results highlight intuitive, automatic processes in decision making and support the idea that human information-processing capabilities are less severely bounded than often assumed.
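The PCS idea, options and cues as nodes whose activations settle under excitatory and inhibitory links, can be sketched as a small iterative network. All weights and inputs below are invented for illustration; Glöckner and Betsch's model uses a particular update rule and parameterization not reproduced here.

```python
import numpy as np

# Nodes 0-1 are competing options, nodes 2-4 are cues; weights are invented.
W = np.zeros((5, 5))
W[0, 1] = W[1, 0] = -0.2                 # options inhibit each other
for cue, opt, w in [(2, 0, 0.3), (3, 0, 0.2), (4, 1, 0.25)]:
    W[cue, opt] = W[opt, cue] = w        # bidirectional cue-option support
ext = np.array([0, 0, 0.1, 0.1, 0.1])    # external input activates the cues

a = np.zeros(5)
for _ in range(200):                     # spread activation until it settles
    a = np.clip(0.9 * a + W @ a + ext, -1.0, 1.0)
# Option 0, backed by two cues, ends up more active than option 1: the
# network "decides" for it in one parallel, compensatory settling process.
```

The key property this illustrates is compensatory integration: all cues influence the outcome simultaneously, rather than being inspected one at a time as in take-the-best.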

  2. The perennial ryegrass GenomeZipper: targeted use of genome resources for comparative grass genomics.

    Science.gov (United States)

    Pfeifer, Matthias; Martis, Mihaela; Asp, Torben; Mayer, Klaus F X; Lübberstedt, Thomas; Byrne, Stephen; Frei, Ursula; Studer, Bruno

    2013-02-01

    Whole-genome sequences established for model and major crop species constitute a key resource for advanced genomic research. For outbreeding forage and turf grass species like ryegrasses (Lolium spp.), such resources have yet to be developed. Here, we present a model of the perennial ryegrass (Lolium perenne) genome on the basis of conserved synteny to barley (Hordeum vulgare) and the model grass genome Brachypodium (Brachypodium distachyon) as well as rice (Oryza sativa) and sorghum (Sorghum bicolor). A transcriptome-based genetic linkage map of perennial ryegrass served as a scaffold to establish the chromosomal arrangement of syntenic genes from model grass species. This scaffold revealed a high degree of synteny and macrocollinearity and was then utilized to anchor a collection of perennial ryegrass genes in silico to their predicted genome positions. This resulted in the unambiguous assignment of 3,315 out of 8,876 previously unmapped genes to the respective chromosomes. In total, the GenomeZipper incorporates 4,035 conserved grass gene loci, which were used for the first genome-wide sequence divergence analysis between perennial ryegrass, barley, Brachypodium, rice, and sorghum. The perennial ryegrass GenomeZipper is an ordered, information-rich genome scaffold, facilitating map-based cloning and genome assembly in perennial ryegrass and closely related Poaceae species. It also represents a milestone in describing synteny between perennial ryegrass and fully sequenced model grass genomes, thereby increasing our understanding of genome organization and evolution in the most important temperate forage and turf grass species.

  3. Sonication-based isolation and enrichment of Chlorella protothecoides chloroplasts for illumina genome sequencing

    Energy Technology Data Exchange (ETDEWEB)

    Angelova, Angelina [University of Arizona; Park, Sang-Hycuk [University of Arizona; Kyndt, John [Bellevue University; Fitzsimmons, Kevin [University of Arizona; Brown, Judith K [University of Arizona

    2013-09-01

    With the increasing world demand for biofuel, a number of oleaginous algal species are being considered as renewable sources of oil. Chlorella protothecoides Krüger synthesizes triacylglycerols (TAGs) as storage compounds that can be converted into renewable fuel via an anabolic pathway that is poorly understood. The paucity of algal chloroplast genome sequences has been an important constraint to chloroplast transformation and to studying gene expression in TAG pathways. In this study, intact chloroplasts were released from algal cells using sonication followed by sucrose-gradient centrifugation, resulting in a 2.36-fold enrichment of chloroplasts from C. protothecoides based on qPCR analysis. The C. protothecoides chloroplast genome (cpDNA) was sequenced on the Illumina HiSeq 2000 platform and found to be 84,576 bp (≈84.6 kb) in size, with a GC content of 30.8%. This is the first report of an optimized protocol that uses a sonication step, followed by sucrose-gradient centrifugation, to release and enrich intact chloroplasts from a microalga (C. protothecoides) of sufficient quality to permit chloroplast genome sequencing with high coverage while minimizing nuclear genome contamination. The approach is expected to guide chloroplast isolation from other oleaginous algal species for a variety of uses that benefit from chloroplast enrichment, ranging from biochemical analysis to genomics studies.

  4. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
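The sub-library idea is easy to demonstrate with the stdlib sqlite3 module. Table and column names here are invented placeholders, not the schema of the unit's seqdb_demo database:

```python
import sqlite3

# Toy sequence store (illustrative schema and data).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT)")
db.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("P1", "E. coli",    "MKTAYIAK"),
    ("P2", "E. coli",    "MALWMRLL"),
    ("P3", "H. sapiens", "MSDNGPQN"),
])

# Carve out a taxon-restricted sub-library: searching only where homologs
# are expected shrinks the library and sharpens the search statistics.
subset = db.execute(
    "SELECT acc, seq FROM protein WHERE taxon = ?", ("E. coli",)
).fetchall()
print(len(subset))  # 2
```

The same pattern extends to storing similarity-search hits in a second table and joining them back against annotations for large-scale comparative analyses.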

  5. Revisiting the chlorophyll biosynthesis pathway using genome scale metabolic model of Oryza sativa japonica

    Science.gov (United States)

    Chatterjee, Ankita; Kundu, Sudip

    2015-01-01

    Chlorophyll is one of the most important pigments in green plants, and rice is one of the major food crops consumed worldwide. We curated the existing genome-scale metabolic model (GSM) of the rice leaf by incorporating a new compartment, reactions and transporters. We used this modified GSM to elucidate how chlorophyll is synthesized in a leaf through a series of biochemical reactions spanning different organelles, using inorganic macronutrients and light energy. We predicted the essential reactions and associated genes of chlorophyll synthesis and validated them against existing experimental evidence. Further, ammonia is known to be the preferred nitrogen source in rice paddy fields; ammonia entering the plant is assimilated in the root and leaf. The present work focuses on rice leaf metabolism. We studied the relative importance of ammonia transporters through the chloroplast and the cytosol and their interplay with other intracellular transporters. Ammonia assimilation in the leaves is carried out by the enzyme glutamine synthetase (GS), which is present in the cytosol (GS1) and chloroplast (GS2). Our results provide a possible explanation of why GS2 mutants show normal growth under minimal photorespiration but appear chlorotic when exposed to air. PMID:26443104
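Essential-reaction prediction of the kind described above can be caricatured in a few lines. The network below is invented, and the criterion is simple producibility (can the target still be made from the medium after deleting a reaction?) rather than the flux balance analysis a real GSM study would run:

```python
# Toy reaction-essentiality scan (hypothetical reactions, illustrative names).
REACTIONS = {
    "R_upt": ({"glc_ext"}, {"glc"}),      # uptake from the medium
    "R_gly": ({"glc"}, {"pyr"}),
    "R_alt": ({"glc"}, {"pyr"}),          # isozyme-like alternative route
    "R_glu": ({"pyr"}, {"glu"}),
    "R_chl": ({"pyr", "glu"}, {"chl"}),   # "chlorophyll" target reaction
}

def producible(reactions, medium, target):
    # Expand the metabolite pool until no reaction adds anything new.
    pool = set(medium)
    changed = True
    while changed:
        changed = False
        for subs, prods in reactions.values():
            if subs <= pool and not prods <= pool:
                pool |= prods
                changed = True
    return target in pool

essential = [r for r in REACTIONS
             if not producible({k: v for k, v in REACTIONS.items() if k != r},
                               {"glc_ext"}, "chl")]
print(sorted(essential))  # ['R_chl', 'R_glu', 'R_upt']: the isozymes
                          # R_gly/R_alt back each other up, so neither appears
```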

  6. Why does yeast ferment? A flux balance analysis study.

    NARCIS (Netherlands)

    Simeonides, E.; Murabito, E.; Smalbone, K.; Westerhoff, H.V.

    2010-01-01

    Advances in biological techniques have led to the availability of genome-scale metabolic reconstructions for yeast. The size and complexity of such networks impose limits on what types of analyses one can perform. Constraint-based modelling overcomes some of these restrictions by using

  7. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion for optimizing multi-environment trials so as to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which uses genome-wide information to predict the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy in various environments is to combine ecophysiological and genetic modelling through crop growth models (CGMs) that incorporate genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and evaluated on simulated and real data, using wheat phenology as an example. A MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracy. METs defined with OptiMET were, on average, more efficient in terms of parameter estimation quality than random METs composed of twice as many environments. OptiMET is thus a valuable tool for determining the optimal experimental conditions to best exploit METs and the phenotyping tools currently being developed.

  8. A constraint-based approach to intelligent support of nuclear reactor design

    International Nuclear Information System (INIS)

    Furuta, Kazuo

    1993-01-01

    Constraints are a powerful representation for formulating and solving design problems; a constraint-based approach to intelligent support of nuclear reactor design is proposed. We first discuss the features of the approach and then present the architecture of a nuclear reactor design support system under development. In this system, the knowledge base contains constraints useful for structuring the design space in the form of object class definitions, and several types of constraint resolvers are provided as design support subsystems. The adopted methods of constraint resolution are explained in detail. The usefulness of the approach is demonstrated on two design problems: design window search and multiobjective optimization in nuclear reactor design. (orig./HP)

  9. Coalescent-based genome analyses resolve the early branches of the euarchontoglires.

    Directory of Open Access Journals (Sweden)

    Vikas Kumar

    Full Text Available Despite numerous large-scale phylogenomic studies, certain parts of the mammalian tree are extraordinarily difficult to resolve. We used the coding regions from 19 completely sequenced genomes to study the relationships within the super-clade Euarchontoglires (Primates, Rodentia, Lagomorpha, Dermoptera and Scandentia because the placement of Scandentia within this clade is controversial. The difficulty in resolving this issue is due to the short time spans between the early divergences of Euarchontoglires, which may cause incongruent gene trees. The conflict in the data can be depicted by network analyses and the contentious relationships are best reconstructed by coalescent-based analyses. This method is expected to be superior to analyses of concatenated data in reconstructing a species tree from numerous gene trees. The total concatenated dataset used to study the relationships in this group comprises 5,875 protein-coding genes (9,799,170 nucleotides from all orders except Dermoptera (flying lemurs. Reconstruction of the species tree from 1,006 gene trees using coalescent models placed Scandentia as sister group to the primates, which is in agreement with maximum likelihood analyses of concatenated nucleotide sequence data. Additionally, both analytical approaches favoured the Tarsier to be sister taxon to Anthropoidea, thus belonging to the Haplorrhine clade. When divergence times are short such as in radiations over periods of a few million years, even genome scale analyses struggle to resolve phylogenetic relationships. On these short branches processes such as incomplete lineage sorting and possibly hybridization occur and make it preferable to base phylogenomic analyses on coalescent methods.

  10. Data-based Non-Markovian Model Inference

    Science.gov (United States)

    Ghil, Michael

    2015-04-01

    This talk concentrates on obtaining stable and efficient data-based models for simulation and prediction in the geosciences and life sciences. The proposed model derivation relies on using a multivariate time series of partial observations from a large-dimensional system, and the resulting low-order models are compared with the optimal closures predicted by the non-Markovian Mori-Zwanzig formalism of statistical physics. Multilayer stochastic models (MSMs) are introduced as both a very broad generalization and a time-continuous limit of existing multilevel, regression-based approaches to data-based closure, in particular of empirical model reduction (EMR). We show that the multilayer structure of MSMs can provide a natural Markov approximation to the generalized Langevin equation (GLE) of the Mori-Zwanzig formalism. A simple correlation-based stopping criterion for an EMR-MSM model is derived to assess how well it approximates the GLE solution. Sufficient conditions are given for the nonlinear cross-interactions between the constitutive layers of a given MSM to guarantee the existence of a global random attractor. This existence ensures that no blow-up can occur for a very broad class of MSM applications. The EMR-MSM methodology is first applied to a conceptual, nonlinear, stochastic climate model of coupled slow and fast variables, in which only slow variables are observed. The resulting reduced model with energy-conserving nonlinearities captures the main statistical features of the slow variables, even when there is no formal scale separation and the fast variables are quite energetic. Second, an MSM is shown to successfully reproduce the statistics of a partially observed, generalized Lotka-Volterra model of population dynamics in its chaotic regime. The positivity constraint on the solutions' components replaces here the quadratic-energy-preserving constraint of fluid-flow problems and it successfully prevents blow-up. This work is based on a close

  11. A constraint on Planck-scale modifications to electrodynamics with CMB polarization data

    Energy Technology Data Exchange (ETDEWEB)

    Gubitosi, Giulia; Pagano, Luca; Amelino-Camelia, Giovanni; Melchiorri, Alessandro [Physics Department, University of Rome 'La Sapienza' and Sezione Roma1 INFN, P.le Aldo Moro 2, 00185 Rome (Italy)]; Cooray, Asantha, E-mail: giulia.gubitosi@roma1.infn.it, E-mail: luca.pagano@roma1.infn.it, E-mail: giovanni.amelino-camelia@roma1.infn.it, E-mail: alessandro.melchiorri@roma1.infn.it, E-mail: acooray@uci.edu [Center for Cosmology, Dept. of Physics and Astronomy, University of California Irvine, Irvine, CA 92697 (United States)]

    2009-08-01

    We show that the Cosmic Microwave Background (CMB) polarization data gathered by the BOOMERanG 2003 flight and WMAP provide an opportunity to investigate in-vacuo birefringence, of a type expected in some quantum pictures of space-time, with a sensitivity that extends even beyond the desired Planck-scale energy. In order to render this constraint more transparent we rely on a well-studied phenomenological model of quantum-gravity-induced birefringence, in which one easily establishes that effects introduced at the Planck scale would amount to values of a dimensionless parameter ξ, defined with respect to the Planck energy, roughly of order 1. By combining BOOMERanG and WMAP data we estimate ξ ≅ −0.110 ± 0.075 at the 68% c.l. Moreover, we forecast the sensitivity to ξ achievable by future CMB polarization experiments (PLANCK, Spider, EPIC), which, in the absence of systematics, will be 8.5 × 10⁻⁴ (PLANCK), 6.1 × 10⁻³ (Spider) and 1.0 × 10⁻⁵ (EPIC) at the 1σ confidence level, respectively. The cosmic-variance-limited sensitivity from the CMB is 6.1 × 10⁻⁶.

  12. A Physiologically Based, Multi-Scale Model of Skeletal Muscle Structure and Function

    Science.gov (United States)

    Röhrle, O.; Davidson, J. B.; Pullan, A. J.

    2012-01-01

    Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle’s response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modeling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle’s response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modeling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibers and their grouping. Together with a well-established model of motor-unit recruitment, the electro-physiological behavior of single muscle fibers within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenization. The effect of homogenization has been investigated by varying the number of embedded skeletal muscle fibers and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the tibialis anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modeling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behavior ranging from motor-unit recruitment to force generation and fatigue. PMID:22993509

  13. A physiologically based, multi-scale model of skeletal muscle structure and function

    Directory of Open Access Journals (Sweden)

    Oliver eRöhrle

    2012-09-01

    Models of skeletal muscle can be classified as phenomenological or biophysical. Phenomenological models predict the muscle's response to a specified input based on experimental measurements. Prominent phenomenological models are the Hill-type muscle models, which have been incorporated into rigid-body modelling frameworks, and three-dimensional continuum-mechanical models. Biophysically based models attempt to predict the muscle's response as emerging from the underlying physiology of the system. In this contribution, the conventional biophysically based modelling methodology is extended to include several structural and functional characteristics of skeletal muscle. The result is a physiologically based, multi-scale skeletal muscle finite element model that is capable of representing detailed, geometrical descriptions of skeletal muscle fibres and their grouping. Together with a well-established model of motor unit recruitment, the electro-physiological behaviour of single muscle fibres within motor units is computed and linked to a continuum-mechanical constitutive law. The bridging between the cellular level and the organ level has been achieved via a multi-scale constitutive law and homogenisation. The effect of homogenisation has been investigated by varying the number of embedded skeletal muscle fibres and/or motor units and computing the resulting exerted muscle forces while applying the same excitatory input. All simulations were conducted using an anatomically realistic finite element model of the Tibialis Anterior muscle. Given the fact that the underlying electro-physiological cellular muscle model is capable of modelling metabolic fatigue effects such as potassium accumulation in the T-tubular space and inorganic phosphate build-up, the proposed framework provides a novel simulation-based way to investigate muscle behaviour ranging from motor unit recruitment to force generation and fatigue.

  14. Constraint-based deadlock checking of high-level specifications

    DEFF Research Database (Denmark)

    Hallerstede, Stefan; Leuschel, Michael

    2011-01-01

    Establishing the absence of deadlocks is important in many applications of formal methods. The use of model checking for finding deadlocks in formal models is limited because in many industrial applications the state space is either infinite or much too large to be explored exhaustively. In this … ProB's Prolog kernel, such as reification of membership and arithmetic constraints. ProB typically finds counter-examples to deadlock-freedom constraints, a formula of about 900 partly nested conjunctions and disjunctions, among them 80 arithmetic and 150 set-theoretic predicates (in total a formula of 30 pages…

  15. CRISPR/Cas9 Based Genome Editing of Penicillium chrysogenum.

    Science.gov (United States)

    Pohl, C; Kiel, J A K W; Driessen, A J M; Bovenberg, R A L; Nygård, Y

    2016-07-15

    CRISPR/Cas9 based systems have emerged as versatile platforms for precision genome editing in a wide range of organisms. Here we have developed powerful CRISPR/Cas9 tools for marker-based and marker-free genome modifications in Penicillium chrysogenum, a model filamentous fungus and industrially relevant cell factory. The developed CRISPR/Cas9 toolbox is highly flexible and allows editing of new targets with minimal cloning efforts. The Cas9 protein and the sgRNA can be either delivered during transformation, as preassembled CRISPR-Cas9 ribonucleoproteins (RNPs) or expressed from an AMA1 based plasmid within the cell. The direct delivery of the Cas9 protein with in vitro synthesized sgRNA to the cells allows for a transient method for genome engineering that may rapidly be applicable for other filamentous fungi. The expression of Cas9 from an AMA1 based vector was shown to be highly efficient for marker-free gene deletions.

  16. PGen: large-scale genomic variations analysis workflow and browser in SoyKB.

    Science.gov (United States)

    Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti

    2016-10-06

    With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops to detect genome-scale genetic variations and to apply that knowledge towards trait improvement. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and the Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB) ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 SNPs and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage; 297,245 non-synonymous SNPs and 3,330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects, bringing the total to more than 500 germplasm lines, have also been integrated. These SNPs are being utilized for trait improvement using genotype-to-phenotype prediction approaches developed in-house. To make the NGS data easy to browse and access, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB that provides easy access to SNP and downstream analysis results for soybean researchers. The PGen workflow has been optimized for the most
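    The SNP/indel split reported above can be illustrated with a toy classifier over REF/ALT alleles; the rule and the example records are simplified stand-ins, not PGen's actual implementation:

```python
def classify_variant(ref, alt):
    """Classify a REF/ALT allele pair: SNP, indel (length change), or MNP."""
    if len(ref) == 1 and len(alt) == 1:
        return "SNP"
    return "indel" if len(ref) != len(alt) else "MNP"

# Invented example records in (REF, ALT) form.
variants = [("A", "G"), ("AT", "A"), ("C", "CGG"), ("AG", "TC")]
counts = {}
for ref, alt in variants:
    kind = classify_variant(ref, alt)
    counts[kind] = counts.get(kind, 0) + 1
print(counts)  # {'SNP': 1, 'indel': 2, 'MNP': 1}
```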

  17. APT cost scaling: Preliminary indications from a Parametric Costing Model (PCM)

    International Nuclear Information System (INIS)

    Krakowski, R.A.

    1995-01-01

    A Parametric Costing Model (PCM) has been created and evaluated as a first step in quantitatively understanding important design options for the Accelerator Production of Tritium (APT) concept. This model couples key economic and technical elements of APT in a two-parameter search over beam energy and beam power that minimizes costs within a range of operating constraints. The costing and engineering depth of the PCM is minimal at the present "entry level" and is intended only to demonstrate the potential for a more detailed, cost-based integrating design tool. After describing the present basis of the PCM and giving an example of a single parametric scaling run derived from it, the impacts of choices related to resistive versus superconducting accelerator structures and cost of electricity versus plant availability ("load curve") are reported. Areas of further development and application are suggested.
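    The two-parameter search described above can be sketched as a constrained grid search; the cost relations and numbers below are invented placeholders, not the report's costing equations:

```python
# Hypothetical cost surface over beam energy and beam power, minimized under a
# simple operating constraint; all coefficients are illustrative assumptions.
import itertools

def plant_cost(energy, power):       # beam energy [GeV], beam power [MW]
    accel = 50.0 * energy            # accelerator capital grows with energy
    rf    = 30.0 * power / energy    # current-driven RF cost falls with energy
    ops   = 2.0 * power              # electricity roughly tracks beam power
    return accel + rf + ops

candidates = [(e, p)
              for e, p in itertools.product([0.8, 1.0, 1.3, 1.7], [80, 100, 120])
              if p >= 100]           # constraint: enough power for production
best = min(candidates, key=lambda ep: plant_cost(*ep))
print(best)  # cheapest feasible point: highest energy, lowest feasible power
```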

  18. MSOAR 2.0: Incorporating tandem duplications into ortholog assignment based on genome rearrangement

    Directory of Open Access Journals (Sweden)

    Zhang Liqing

    2010-01-01

    Background: Ortholog assignment is a critical and fundamental problem in comparative genomics, since orthologs are considered to be functional counterparts in different species and can be used to infer the molecular functions of one species from those of another. MSOAR is a recently developed high-throughput system for assigning one-to-one orthologs between closely related species on a genome scale. It attempts to reconstruct the evolutionary history of the input genomes in terms of genome rearrangement and gene duplication events. It assumes that a gene duplication event inserts the duplicated gene into the genome at a random location (the random duplication model). In practice, however, biologists believe that genes are often duplicated by tandem duplications, where the duplicated gene is located next to the original copy (the tandem duplication model). Results: In this paper, we develop MSOAR 2.0, an improved system for one-to-one ortholog assignment. For a pair of input genomes, the system first focuses on the tandemly duplicated genes of each genome and tries to identify among them those that were duplicated after the speciation (the so-called inparalogs), using a simple phylogenetic tree reconciliation method. For each such set of tandemly duplicated inparalogs, all but one gene are deleted from the genome concerned (because they cannot possibly appear in any one-to-one ortholog pair), and MSOAR is invoked. Using both simulated and real data experiments, we show that MSOAR 2.0 achieves better sensitivity and specificity than MSOAR. In comparison with the well-known genome-scale ortholog assignment tool InParanoid, the Ensembl ortholog database, and the orthology information extracted from the well-known whole-genome multiple alignment program MultiZ, MSOAR 2.0 shows the highest sensitivity. Although the specificity of MSOAR 2.0 is slightly worse than that of InParanoid in the real data experiments
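    The tandem-duplication preprocessing step can be sketched as a scan over gene order that keeps one representative per run of adjacent same-family genes; gene names and family labels are invented, and MSOAR 2.0's actual tree-reconciliation test for inparalogs is not modeled:

```python
# Collapse runs of tandemly duplicated genes, keeping the first copy of each run.
def collapse_tandem(genes, family):
    """genes: list of gene ids in chromosomal order; family: gene id -> family."""
    kept = []
    for g in genes:
        if kept and family[kept[-1]] == family[g]:
            continue                      # adjacent same-family copy: drop it
        kept.append(g)
    return kept

order = ["g1", "g2", "g3", "g4", "g5"]
family = {"g1": "A", "g2": "B", "g3": "B", "g4": "B", "g5": "C"}
print(collapse_tandem(order, family))  # ['g1', 'g2', 'g5']
```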

  19. Ancient bacterial endosymbionts of insects: Genomes as sources of insight and springboards for inquiry.

    Science.gov (United States)

    Wernegreen, Jennifer J

    2017-09-15

    Ancient associations between insects and bacteria provide models to study intimate host-microbe interactions. Currently, a wealth of genome sequence data for long-term, obligately intracellular (primary) endosymbionts of insects reveals profound genomic consequences of this specialized bacterial lifestyle. Those consequences include severe genome reduction and extreme base compositions. This minireview highlights the utility of genome sequence data to understand how, and why, endosymbionts have been pushed to such extremes, and to illuminate the functional consequences of such extensive genome change. While the static snapshots provided by individual endosymbiont genomes are valuable, comparative analyses of multiple genomes have shed light on evolutionary mechanisms. Namely, genome comparisons have told us that selection is important in fine-tuning gene content, but at the same time, mutational pressure and genetic drift contribute to genome degradation. Examples from Blochmannia, the primary endosymbiont of the ant tribe Camponotini, illustrate the value and constraints of genome sequence data, and exemplify how genomes can serve as a springboard for further comparative and experimental inquiry. Copyright © 2017. Published by Elsevier Inc.

  20. A medium term bulk production cost model based on decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, A.; Munoz, L. [Univ. Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica; Martinez-Corcoles, F.; Martin-Corrochano, V. [IBERDROLA, Madrid (Spain)

    1995-11-01

    This model provides the minimum variable cost subject to operating constraints (generation, transmission and fuel constraints). Generation constraints include the power reserve margin with respect to the system peak load, Kirchhoff's first law at each node, hydro energy scheduling, maintenance scheduling, and generation limitations. Transmission constraints cover Kirchhoff's second law and transmission limitations. The generation and transmission economic dispatch is approximated by the linearized (also called DC) load flow, and network losses are included as a nonlinear approximation. Fuel constraints include minimum consumption quotas and fuel scheduling for domestic coal thermal plants. This production costing problem is formulated as a large-scale nonlinear optimization problem solved by the generalized Benders decomposition method: the master problem determines the inter-period decisions, i.e., maintenance, fuel and hydro scheduling, and each subproblem solves the intra-period decisions, i.e., the generation and transmission economic dispatch for one period. The model has been implemented in GAMS, a mathematical programming language. An application to the large-scale Spanish electric power system is presented. 11 refs
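    Stripped of the network, reserve and fuel detail, a single-period subproblem of the kind described is an ordinary economic dispatch linear program; the unit costs and limits below are invented:

```python
# Toy one-period economic dispatch: two thermal units meet a fixed demand at
# minimum variable cost (a stand-in for the intra-period subproblems; the
# Benders master problem is not modeled).
import numpy as np
from scipy.optimize import linprog

cost  = np.array([20.0, 35.0])     # variable cost [$/MWh] of units 1 and 2
p_max = np.array([150.0, 200.0])   # generation limits [MW]
demand = 250.0

# Minimize cost @ p  subject to  p1 + p2 = demand,  0 <= p <= p_max.
res = linprog(cost, A_eq=[[1.0, 1.0]], b_eq=[demand],
              bounds=[(0.0, p_max[0]), (0.0, p_max[1])], method="highs")
dispatch = res.x
print(dispatch)  # cheap unit runs at full output: [150, 100]
```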

  1. Complex networks-based energy-efficient evolution model for wireless sensor networks

    Energy Technology Data Exchange (ETDEWEB)

    Zhu Hailin [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China)], E-mail: zhuhailin19@gmail.com; Luo Hong [Beijing Key Laboratory of Intelligent Telecommunications Software and Multimedia, Beijing University of Posts and Telecommunications, P.O. Box 106, Beijing 100876 (China); Peng Haipeng; Li Lixiang; Luo Qun [Information Secure Center, State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, P.O. Box 145, Beijing 100876 (China)

    2009-08-30

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks. The first model constructs the network according to the connectivity and remaining energy of each sensor node; it can therefore produce scale-free networks, which are tolerant of random errors. In the second model, we not only consider the remaining energy but also introduce a constraint on the number of links per node. This model makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.
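    The first model's growth rule can be sketched as preferential attachment weighted by degree times remaining energy; the network size, energy values and decay factor below are illustrative assumptions, not the paper's exact equations:

```python
# Energy-aware preferential attachment: each new node links to an existing
# node with probability proportional to degree * remaining energy.
import random

def grow_network(n, seed=0):
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}                    # start from a single link 0-1
    energy = {0: 1.0, 1: 1.0}
    for new in range(2, n):
        weights = {v: degree[v] * energy[v] for v in degree}
        total = sum(weights.values())
        target = None
        while target is None:                # roulette-wheel selection
            r, acc = rng.random() * total, 0.0
            for v, w in weights.items():
                acc += w
                if r <= acc:
                    target = v
                    break
        degree[target] += 1
        degree[new] = 1
        energy[new] = rng.uniform(0.5, 1.0)  # heterogeneous battery levels
        energy[target] *= 0.99               # serving a new link costs energy
    return degree

deg = grow_network(200)
```

    High-degree, high-energy nodes keep attracting links, which is what produces the heavy-tailed degree distribution noted in the abstract.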

  2. Complex networks-based energy-efficient evolution model for wireless sensor networks

    International Nuclear Information System (INIS)

    Zhu Hailin; Luo Hong; Peng Haipeng; Li Lixiang; Luo Qun

    2009-01-01

    Based on complex networks theory, we present two self-organized energy-efficient models for wireless sensor networks. The first model constructs the network according to the connectivity and remaining energy of each sensor node; it can therefore produce scale-free networks, which are tolerant of random errors. In the second model, we not only consider the remaining energy but also introduce a constraint on the number of links per node. This model makes the energy consumption of the whole network more balanced. Finally, we present numerical experiments for the two models.

  3. Fixing the EW scale in supersymmetric models after the Higgs discovery

    CERN Document Server

    Ghilencea, D M

    2013-01-01

    TeV-scale supersymmetry was originally introduced to solve the hierarchy problem and therefore fix the electroweak (EW) scale in the presence of quantum corrections. Numerical methods testing SUSY models often report a good likelihood L (or χ² = −2 ln L) to fit the data, including the EW scale itself (m_Z), with a simultaneously large fine-tuning, i.e. a large variation of this scale under a small variation of the SUSY parameters. We argue that this is inconsistent and we identify the origin of the problem. Our claim is that the likelihood (or χ²) to fit the data that is usually reported for such models does not account for the χ² cost of fixing the EW scale. When this constraint is implemented, the likelihood (or χ²) receives a significant correction (δχ²) that worsens the current data fits of SUSY models. We estimate this correction for the following models: the constrained MSSM (CMSSM), models with non-universal gaugino masses (NUGM) or Higgs soft masses (NUHM1, NUHM2), the NMSSM and the …

  4. Construction and analysis of a genome-scale metabolic network for Bacillus licheniformis WX-02.

    Science.gov (United States)

    Guo, Jing; Zhang, Hong; Wang, Cheng; Chang, Ji-Wei; Chen, Ling-Ling

    2016-05-01

    We constructed the genome-scale metabolic network of Bacillus licheniformis (B. licheniformis) WX-02 by combining genomic annotation, high-throughput phenotype microarray (PM) experiments and literature-based metabolic information. The accuracy of the metabolic network was assessed by an OmniLog PM experiment. The final metabolic model iWX1009 contains 1009 genes, 1141 metabolites and 1762 reactions, and the predicted metabolic phenotypes showed an agreement rate of 76.8% with experimental PM data. In addition, key metabolic features such as growth yield, utilization of different substrates and essential genes were identified by flux balance analysis. A total of 195 essential genes were predicted from LB medium, among which 149 were verified with the experimental essential gene set of B. subtilis 168. With the removal of 5 reactions from the network, pathways for poly-γ-glutamic acid (γ-PGA) synthesis were optimized and the γ-PGA yield reached 83.8 mmol/h. Furthermore, the important metabolites and pathways related to γ-PGA synthesis and bacterium growth were comprehensively analyzed. The present study provides valuable clues for exploring the metabolisms and metabolic regulation of γ-PGA synthesis in B. licheniformis WX-02. Copyright © 2016 Institut Pasteur. Published by Elsevier Masson SAS. All rights reserved.
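    Essential-reaction prediction by flux balance analysis, as used above, can be sketched on an invented three-reaction linear pathway: delete each reaction by zeroing its flux bounds and test whether growth collapses (the iWX1009 network itself is, of course, far larger):

```python
# Single-deletion essentiality scan on a toy pathway: r0 uptake -> A,
# r1: A -> B, r2: B -> biomass. Network and bounds are illustrative only.
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])
base_bounds = [(0, 10.0), (0, None), (0, None)]
BIOMASS = 2

def max_growth(knockout=None):
    bounds = list(base_bounds)
    if knockout is not None:
        bounds[knockout] = (0.0, 0.0)       # deletion: force the flux to zero
    c = np.zeros(3)
    c[BIOMASS] = -1.0                       # maximize biomass flux
    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return -res.fun

wild_type = max_growth()
essential = [r for r in range(3) if max_growth(r) < 0.05 * wild_type]
print(essential)  # every step of a linear pathway is essential: [0, 1, 2]
```

    In a real genome-scale model the deletion is applied per gene, via gene-protein-reaction rules, rather than per reaction.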

  5. Lot Sizing Based on Stochastic Demand and Service Level Constraint

    Directory of Open Access Journals (Sweden)

    hajar shirneshan

    2012-06-01

    Considering its applications, stochastic lot sizing is a significant subject in production planning. Moreover, from a manager's viewpoint the concept of a service level is more applicable than a shortage cost. In this paper, the stochastic multi-period, multi-item, capacitated lot sizing problem is investigated under a service level constraint. First, the single-item model with a service level constraint and no capacity constraint is developed and solved with a dynamic programming algorithm, yielding the optimal solution. The model is then generalized to the multi-item problem with a capacity constraint. The stochastic multi-period, multi-item, capacitated lot sizing problem is NP-hard, so it cannot be solved by exact optimization approaches; simulated annealing is therefore applied. Finally, a low-level criterion is used to evaluate the efficiency of the model.
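    A deterministic Wagner-Whitin-style dynamic program illustrates the single-item recursion in spirit; the demands and costs below are invented, and the paper's stochastic demand and service-level constraint are not modeled here:

```python
# Single-item uncapacitated lot sizing by DP: choose order periods to trade
# setup cost against holding cost (deterministic simplification).
def lot_size(demand, setup, hold):
    T = len(demand)
    best = [0.0] + [float("inf")] * T      # best[t]: min cost covering periods < t
    for t in range(1, T + 1):
        for s in range(t):                 # last order placed in period s
            holding = sum(hold * (k - s) * demand[k] for k in range(s, t))
            best[t] = min(best[t], best[s] + setup + holding)
    return best[T]

print(lot_size([20, 50, 10, 40], setup=100.0, hold=1.0))  # 270.0
```

    Here the optimum orders in periods 0 and 3, paying two setups plus the holding cost of carrying periods 1-2 demand from period 0.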

  6. Metabolic model for the filamentous ‘Candidatus Microthrix parvicella’ based on genomic and metagenomic analyses

    DEFF Research Database (Denmark)

    McIlroy, Simon Jon; Kristiansen, Rikke; Albertsen, Mads

    2013-01-01

    … acids as triacylglycerols. Utilisation of trehalose and/or polyphosphate stores or partial oxidation of long-chain fatty acids may supply the energy required for anaerobic lipid uptake and storage. Comparing the genome sequence of this isolate with metagenomes from two full-scale wastewater treatment

  7. The large-scale blast score ratio (LS-BSR pipeline: a method to rapidly compare genetic content between bacterial genomes

    Directory of Open Access Journals (Sweden)

    Jason W. Sahl

    2014-04-01

    Background. As whole-genome sequence data from bacterial isolates become cheaper to generate, computational methods are needed to correlate sequence data with biological observations. Here we present the large-scale BLAST score ratio (LS-BSR) pipeline, which rapidly compares the genetic content of hundreds to thousands of bacterial genomes and returns a matrix that describes the relatedness of all coding sequences (CDSs) in all genomes surveyed. This matrix can be easily parsed in order to identify genetic relationships between bacterial genomes. Although pipelines have been published that group peptides by sequence similarity, no other software performs the rapid, large-scale, full-genome comparative analyses carried out by LS-BSR. Results. To demonstrate the utility of the method, the LS-BSR pipeline was tested on 96 Escherichia coli and Shigella genomes; the pipeline ran in 163 min using 16 processors, a greater than 7-fold speedup compared to using a single processor. The BSR values for each CDS, which indicate a relative level of relatedness, were then mapped to each genome on an independent core-genome single nucleotide polymorphism (SNP) based phylogeny. Comparisons were then used to identify clade-specific CDS markers and to validate the LS-BSR pipeline based on molecular markers that delineate between classical E. coli pathogenic variant (pathovar) designations. Scalability tests demonstrated that the LS-BSR pipeline can process 1,000 E. coli genomes in 27–57 h, depending upon the alignment method, using 16 processors. Conclusions. LS-BSR is an open-source, parallel implementation of the BSR algorithm, enabling rapid comparison of the genetic content of large numbers of genomes. The results of the pipeline can be used to identify specific markers between user-defined phylogenetic groups, and to identify the loss and/or acquisition of genetic information between bacterial isolates.
Taxa-specific genetic markers can then be translated
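The core quantity behind the pipeline, the BLAST score ratio, is simple to state: a CDS's raw alignment score against each target genome is divided by its score against itself, giving a value near 1.0 for conserved genes and near 0 for absent ones. A minimal sketch, assuming the raw scores have already been produced by an aligner (the toy scores and names below are illustrative, not taken from the paper):

```python
# Minimal sketch of the BLAST score ratio (BSR) calculation underlying LS-BSR.
# Assumes raw alignment scores are already available (here as plain dicts);
# a real pipeline would obtain them from BLAST/BLAT. Names are illustrative.

def bsr_matrix(self_scores, hit_scores):
    """Return {cds: {genome: BSR}} where BSR = hit score / self score."""
    matrix = {}
    for cds, self_score in self_scores.items():
        matrix[cds] = {
            genome: round(score / self_score, 3)
            for genome, score in hit_scores.get(cds, {}).items()
        }
    return matrix

# Toy data: one CDS aligned back to itself (self score) and to two genomes.
self_scores = {"cdsA": 200.0}
hit_scores = {"cdsA": {"genome1": 198.0, "genome2": 40.0}}
print(bsr_matrix(self_scores, hit_scores))  # → {'cdsA': {'genome1': 0.99, 'genome2': 0.2}}
```

A CDS with BSR near 0.99 in one genome and near 0.2 in another, as here, is exactly the kind of clade-differentiating marker the pipeline is used to find.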

  8. GenColors-based comparative genome databases for small eukaryotic genomes.

    Science.gov (United States)

    Felder, Marius; Romualdi, Alessandro; Petzold, Andreas; Platzer, Matthias; Sühnel, Jürgen; Glöckner, Gernot

    2013-01-01

    Many sequence data repositories can give a quick and easily accessible overview on genomes and their annotations. Less widespread is the possibility to compare related genomes with each other in a common database environment. We have previously described the GenColors database system (http://gencolors.fli-leibniz.de) and its applications to a number of bacterial genomes such as Borrelia, Legionella, Leptospira and Treponema. This system has an emphasis on genome comparison. It combines data from related genomes and provides the user with an extensive set of visualization and analysis tools. Eukaryote genomes are normally larger than prokaryote genomes and thus pose additional challenges for such a system. We have, therefore, adapted GenColors to also handle larger datasets of small eukaryotic genomes and to display eukaryotic gene structures. Further recent developments include whole genome views, genome list options and, for bacterial genome browsers, the display of horizontal gene transfer predictions. Two new GenColors-based databases for two fungal species (http://fgb.fli-leibniz.de) and for four social amoebas (http://sacgb.fli-leibniz.de) were set up. Both new resources open up a single entry point for related genomes for the amoebozoa and fungal research communities and other interested users. Comparative genomics approaches are greatly facilitated by these resources.

  9. Exploring the Spatial-Temporal Disparities of Urban Land Use Economic Efficiency in China and Its Influencing Factors under Environmental Constraints Based on a Sequential Slacks-Based Model

    Directory of Open Access Journals (Sweden)

    Hualin Xie

    2015-07-01

    Full Text Available Using a sequential slack-based measure (SSBM) model, this paper analyzes the spatiotemporal disparities of urban land use economic efficiency (ULUEE) under environmental constraints, and its influencing factors, in 270 cities across China from 2003–2012. The main results are as follows: (1) The average ULUEE for Chinese cities is only 0.411, and out of the 270 cities, only six are consistently efficient in urban land use over the study period. Most cities have considerable room to improve the economic output of their secondary and tertiary industries, as well as their environmental protection work; (2) The eastern region of China enjoys the highest ULUEE, followed by the western and central regions. Super-scale cities show the best performance of the four city scales, followed by large-scale, small-scale and medium-scale cities. Cities with relatively developed economies and less pollutant discharge generally have better ULUEE; (3) The slack variable analysis shows that most cities suffer from labor surplus, over-development, excessive pollutant discharge, economic output shortage, and unreasonable use of funds, with the last being the most serious; (4) The regression results on the influencing factors show that improvements in per capita GDP and land use intensity help to raise ULUEE. The urbanization rate and the share of foreign enterprises' output in the total output of the secondary and tertiary industries have such an effect only in some regions and city scales. The land management policy and land leasing policy have a negative impact on ULUEE in all three regions and four city scales; (5) Some targeted policy goals are proposed, including the reduction of surplus labor and paying more attention to environmental protection. Most importantly, effective implementation of land management policies from the central government, and stopping blind leasing of land to make up the local government's financial deficit, would be very

  10. Inflationary magnetogenesis with added helicity: constraints from non-Gaussianities

    Science.gov (United States)

    Caprini, Chiara; Chiara Guzzetti, Maria; Sorbo, Lorenzo

    2018-06-01

    In previous work (Caprini and Sorbo 2014 J. Cosmol. Astropart. Phys. JCAP10(2014)056), two of us proposed a model of inflationary magnetogenesis based on a rolling auxiliary field, able both to account for the magnetic fields inferred from the (non-)observation of gamma-rays from blazars and to start the galactic dynamo, without incurring any strong-coupling or strong-backreaction regime. Here we evaluate the correction to the scalar spectrum and bispectrum, with respect to single-field slow-roll inflation, generated in that scenario. The strongest constraints on the model originate from the non-observation of a scalar bispectrum. Nevertheless, even when those constraints are taken into consideration, the scenario can successfully account for the observed magnetic fields as long as the energy scale of inflation is smaller than GeV, under some conditions on the slow roll of the auxiliary scalar field.

  11. Genomic prediction of complex human traits: relatedness, trait architecture and predictive meta-models

    Science.gov (United States)

    Spiliopoulou, Athina; Nagy, Reka; Bermingham, Mairead L.; Huffman, Jennifer E.; Hayward, Caroline; Vitart, Veronique; Rudan, Igor; Campbell, Harry; Wright, Alan F.; Wilson, James F.; Pong-Wong, Ricardo; Agakov, Felix; Navarro, Pau; Haley, Chris S.

    2015-01-01

    We explore the prediction of individuals' phenotypes for complex traits using genomic data. We compare several widely used prediction models, including Ridge Regression, LASSO and Elastic Nets estimated from cohort data, and polygenic risk scores constructed using published summary statistics from genome-wide association meta-analyses (GWAMA). We evaluate the interplay between relatedness, trait architecture and optimal marker density, by predicting height, body mass index (BMI) and high-density lipoprotein level (HDL) in two data cohorts, originating from Croatia and Scotland. We empirically demonstrate that dense models are better when all genetic effects are small (height and BMI) and target individuals are related to the training samples, while sparse models predict better in unrelated individuals and when some effects have moderate size (HDL). For HDL sparse models achieved good across-cohort prediction, performing similarly to the GWAMA risk score and to models trained within the same cohort, which indicates that, for predicting traits with moderately sized effects, large sample sizes and familial structure become less important, though still potentially useful. Finally, we propose a novel ensemble of whole-genome predictors with GWAMA risk scores and demonstrate that the resulting meta-model achieves higher prediction accuracy than either model on its own. We conclude that although current genomic predictors are not accurate enough for diagnostic purposes, performance can be improved without requiring access to large-scale individual-level data. Our methodologically simple meta-model is a means of performing predictive meta-analysis for optimizing genomic predictions and can be easily extended to incorporate multiple population-level summary statistics or other domain knowledge. PMID:25918167
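The proposed meta-model combines a whole-genome predictor with a GWAMA risk score. One way to sketch the idea, not necessarily the paper's exact estimation procedure, is to regress the phenotype on the two sets of predictions and use the fitted weights to blend them; all data below are synthetic:

```python
# Hedged sketch of a predictive "meta-model": combine a whole-genome
# predictor with a published risk score by least-squares regression of the
# phenotype on both predictions. Pure Python; the data are synthetic and the
# paper's actual ensemble method may differ.

def fit_meta_weights(p1, p2, y):
    """Least-squares weights (w1, w2) for y ~ w1*p1 + w2*p2 (no intercept),
    solved from the 2x2 normal equations."""
    a = sum(x * x for x in p1)
    b = sum(x * z for x, z in zip(p1, p2))
    d = sum(z * z for z in p2)
    e = sum(x * t for x, t in zip(p1, y))
    f = sum(z * t for z, t in zip(p2, y))
    det = a * d - b * b
    return (e * d - b * f) / det, (a * f - b * e) / det

# Synthetic example: the phenotype is an equal blend of the two predictors,
# so the fitted meta-weights recover 0.5 and 0.5.
p1 = [1.0, 2.0, 3.0, 4.0]          # e.g. whole-genome predictor output
p2 = [0.5, 1.0, 2.5, 3.0]          # e.g. GWAMA risk score
y = [0.5 * x + 0.5 * z for x, z in zip(p1, p2)]
w1, w2 = fit_meta_weights(p1, p2, y)
print(round(w1, 3), round(w2, 3))  # → 0.5 0.5
```

In practice the weights would be fitted on held-out individuals and the blended predictor evaluated on a separate test set.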

  12. In silico method for modelling metabolism and gene product expression at genome scale

    Energy Technology Data Exchange (ETDEWEB)

    Lerman, Joshua A.; Hyduke, Daniel R.; Latif, Haythem; Portnoy, Vasiliy A.; Lewis, Nathan E.; Orth, Jeffrey D.; Rutledge, Alexandra C.; Smith, Richard D.; Adkins, Joshua N.; Zengler, Karsten; Palsson, Bernard O.

    2012-07-03

    Transcription and translation use raw materials and energy generated metabolically to create the macromolecular machinery responsible for all cellular functions, including metabolism. A biochemically accurate model of molecular biology and metabolism will facilitate comprehensive and quantitative computation of an organism's molecular constitution as a function of genetic and environmental parameters. Here we formulate a model of metabolism and macromolecular expression. Prototyping it using the simple microorganism Thermotoga maritima, we show that our model accurately simulates variations in cellular composition and gene expression. Moreover, through in silico comparative transcriptomics, the model allows the discovery of new regulons and the improvement of genome and transcription unit annotations. Our method presents a framework for investigating molecular biology and cellular physiology in silico and may allow quantitative interpretation of multi-omics data sets in the context of an integrated biochemical description of an organism.
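Models of this kind are solved as linear programs: maximize a cellular objective over fluxes subject to linear mass-balance and capacity constraints. As a self-contained illustration (not the paper's formulation), the sketch below solves a two-flux toy problem exactly by enumerating vertices of the feasible polygon, so no external LP solver is needed; the "metabolism" and "expression" fluxes and the resource numbers are invented:

```python
# Toy constraint-based optimization: maximize c.v subject to A v <= b.
# For two fluxes the LP optimum lies at a vertex of the feasible polygon,
# so we enumerate pairwise constraint intersections (Cramer's rule) and
# keep the feasible vertex with the best objective. Illustrative only.
import itertools

def solve_lp_2d(A, b, c):
    """Maximize c.v subject to A v <= b, for v in R^2 (vertex enumeration)."""
    best, best_v = None, None
    for (a1, b1), (a2, b2) in itertools.combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraints: no unique intersection
        v = ((b1 * a2[1] - a1[1] * b2) / det,
             (a1[0] * b2 - b1 * a2[0]) / det)
        # Keep the vertex only if it satisfies every constraint.
        if all(ai[0] * v[0] + ai[1] * v[1] <= bi + 1e-9
               for ai, bi in zip(A, b)):
            val = c[0] * v[0] + c[1] * v[1]
            if best is None or val > best:
                best, best_v = val, v
    return best, best_v

# Toy model: v = (v_metab, v_expr). "Growth" needs both metabolic and
# gene-expression flux; each draws on two shared resource budgets.
A = [(-1, 0), (0, -1),   # non-negativity: v >= 0
     (1, 2), (2, 1)]     # two resource constraints
b = [0, 0, 10, 10]
growth, fluxes = solve_lp_2d(A, b, c=(1, 1))
print(growth, fluxes)    # optimum 20/3 at v = (10/3, 10/3)
```

Genome-scale models pose the same problem with thousands of fluxes and rely on industrial LP solvers, but the geometry is identical.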

  13. Correlation-based decimation in constraint satisfaction problems

    International Nuclear Information System (INIS)

    Higuchi, Saburo; Mezard, Marc

    2010-01-01

    We study hard constraint satisfaction problems using decimation algorithms based on mean-field approximations. The message-passing approach is used to estimate, besides the usual one-variable marginals, the pair correlation functions. The identification of strongly correlated pairs allows the use of a new decimation procedure, in which the relative orientation of a pair of variables is fixed. We apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems where the usual belief-propagation-guided decimation performs poorly. The pair-decimation approach provides a significant improvement.
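The paper estimates pair correlations via message passing; as a self-contained stand-in, the sketch below computes the same connected correlation, ⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩, exactly by enumerating the solutions of a tiny toy CSP (the constraint and variable names are invented). A strongly correlated pair is the one a pair-decimation step would fix by its relative orientation rather than fixing a single variable:

```python
# Exact pair correlations on a toy CSP, as a stand-in for the message-passing
# estimates used in correlation-based decimation. Enumeration is feasible
# only for tiny instances; the example is illustrative.
from itertools import product

def solutions(n, constraints):
    """All assignments in {0,1}^n satisfying every constraint."""
    return [x for x in product((0, 1), repeat=n)
            if all(c(x) for c in constraints)]

def pair_correlation(sols, i, j):
    """<xi xj> - <xi><xj> over the uniform distribution on solutions."""
    m = len(sols)
    mi = sum(x[i] for x in sols) / m
    mj = sum(x[j] for x in sols) / m
    mij = sum(x[i] * x[j] for x in sols) / m
    return mij - mi * mj

# Toy CSP on 3 variables: x0 == x1 (hard constraint), x2 unconstrained.
cons = [lambda x: x[0] == x[1]]
sols = solutions(3, cons)
# x0,x1 are perfectly correlated (0.25 for uniform binary variables);
# x0,x2 are independent (0.0). Decimation would fix x1 = x0 and simplify.
print(pair_correlation(sols, 0, 1), pair_correlation(sols, 0, 2))  # → 0.25 0.0
```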

  14. Model-independent constraints on modified gravity from current data and from the Euclid and SKA future surveys

    Energy Technology Data Exchange (ETDEWEB)

    Taddei, Laura; Martinelli, Matteo; Amendola, Luca, E-mail: taddei@thphys.uni-heidelberg.de, E-mail: martinelli@lorentz.leidenuniv.nl, E-mail: amendola@thphys.uni-heidelberg.de [Institut für Theoretische Physik, Ruprecht-Karls-Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    The aim of this paper is to constrain modified gravity with redshift space distortion observations and supernovae measurements. Compared with a standard ΛCDM analysis, we include three additional free parameters, namely the initial conditions of the matter perturbations, the overall perturbation normalization, and a scale-dependent modified gravity parameter modifying the Poisson equation, in an attempt to perform a more model-independent analysis. First, we constrain the Poisson parameter Y (also called G_eff) by using currently available fσ_8 data and the recent SN catalog JLA. We find that the inclusion of the additional free parameters makes the constraints significantly weaker than when fixing them to the standard cosmological value. Second, we forecast future constraints on Y by using the predicted growth-rate data for Euclid and SKA missions. Here again we point out the weakening of the constraints when the additional parameters are included. Finally, we adopt as modified gravity Poisson parameter the specific Horndeski form, and use scale-dependent forecasts to build an exclusion plot for the Yukawa potential akin to the ones realized in laboratory experiments, both for the Euclid and the SKA surveys.

  15. Model-independent constraints on modified gravity from current data and from the Euclid and SKA future surveys

    International Nuclear Information System (INIS)

    Taddei, Laura; Martinelli, Matteo; Amendola, Luca

    2016-01-01

    The aim of this paper is to constrain modified gravity with redshift space distortion observations and supernovae measurements. Compared with a standard ΛCDM analysis, we include three additional free parameters, namely the initial conditions of the matter perturbations, the overall perturbation normalization, and a scale-dependent modified gravity parameter modifying the Poisson equation, in an attempt to perform a more model-independent analysis. First, we constrain the Poisson parameter Y (also called G_eff) by using currently available fσ_8 data and the recent SN catalog JLA. We find that the inclusion of the additional free parameters makes the constraints significantly weaker than when fixing them to the standard cosmological value. Second, we forecast future constraints on Y by using the predicted growth-rate data for Euclid and SKA missions. Here again we point out the weakening of the constraints when the additional parameters are included. Finally, we adopt as modified gravity Poisson parameter the specific Horndeski form, and use scale-dependent forecasts to build an exclusion plot for the Yukawa potential akin to the ones realized in laboratory experiments, both for the Euclid and the SKA surveys.

  16. Integrated Genome-Based Studies of Shewanella Ecophysiology

    Energy Technology Data Exchange (ETDEWEB)

    Margrethe H. Serres

    2012-06-29

    Shewanella oneidensis MR-1 is a motile, facultative γ-Proteobacterium with remarkable respiratory versatility; it can utilize a range of organic and inorganic compounds as terminal electron acceptors for anaerobic metabolism. The ability to effectively reduce nitrate, S0, polyvalent metals and radionuclides has established MR-1 as an important model dissimilatory metal-reducing microorganism for genome-based investigations of biogeochemical transformation of metals and radionuclides of concern at U.S. Department of Energy (DOE) sites nationwide. Metal-reducing bacteria such as Shewanella also have a highly developed capacity for extracellular transfer of respiratory electrons to solid-phase Fe and Mn oxides, as well as directly to anode surfaces in microbial fuel cells. More broadly, Shewanellae are recognized free-living microorganisms and members of microbial communities involved in the decomposition of organic matter and the cycling of elements in aquatic and sedimentary systems. To function and compete in environments that are subject to spatial and temporal change, Shewanella must be able to sense and respond to such changes, and therefore require relatively robust sensing and regulation systems. The overall goal of this project is to apply the tools of genomics, leveraging the availability of genome sequence for 18 additional strains of Shewanella, to better understand the ecophysiology and speciation of respiratory-versatile members of this important genus. To understand these systems we propose to use genome-based approaches to investigate Shewanella as a system of integrated networks; first describing key cellular subsystems - those involved in signal transduction, regulation, and metabolism - then building towards understanding the function of whole cells and, eventually, cells within populations. As a general approach, this project will employ complementary "top-down" - bioinformatics-based genome functional predictions, high

  17. Cloud-based interactive analytics for terabytes of genomic variants data.

    Science.gov (United States)

    Pan, Cuiping; McInnes, Gregory; Deflaux, Nicole; Snyder, Michael; Bingham, Jonathan; Datta, Somalee; Tsao, Philip S

    2017-12-01

    Large-scale genomic sequencing is now widely used to decipher questions in diverse realms such as biological function, human disease, evolution, ecosystems, and agriculture. Given the quantity and diversity of the data these studies harbor, a robust and scalable data-handling and analysis solution is desired. We present interactive analytics using a cloud-based columnar database built on Dremel to perform information compression, comprehensive quality control, and biological information retrieval on large volumes of genomic data. We demonstrate that such Big Data computing paradigms can provide orders-of-magnitude faster turnaround for common genomic analyses, transforming long-running batch jobs submitted via a Linux shell into questions that can be asked from a web browser in seconds. Using this method, we assessed a study population of 475 deeply sequenced human genomes for genomic call rate, genotype and allele frequency distribution, variant density across the genome, and pharmacogenomic information. Our analysis framework is implemented in Google Cloud Platform and BigQuery. Code is available at https://github.com/StanfordBioinformatics/mvp_aaa_codelabs. cuiping@stanford.edu or ptsao@stanford.edu. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
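The quality-control measures described (call rate, allele frequency) are aggregations over variant records. As a hedged illustration of what the BigQuery SQL computes, the sketch below derives the same per-site statistics in plain Python over a few toy genotype records; the record layout is invented for the example:

```python
# Per-site variant QC statistics of the kind the paper computes at scale in
# BigQuery, here over a handful of toy records. Genotypes are (a1, a2) pairs
# with alleles 0 (reference), 1 (alternate), or None (no call).

def site_stats(genotypes):
    """Return (call_rate, alt_allele_frequency) for one variant site."""
    called = [g for g in genotypes if None not in g]
    call_rate = len(called) / len(genotypes)
    alt_count = sum(a for g in called for a in g)       # alternate alleles
    af = alt_count / (2 * len(called)) if called else 0.0
    return call_rate, af

# Four samples: one het, one hom-alt, one hom-ref, one missing call.
gts = [(0, 1), (1, 1), (0, 0), (None, None)]
print(site_stats(gts))  # → (0.75, 0.5): 3/4 called, 3 alt alleles out of 6
```

Running the equivalent aggregation as SQL over a columnar store is what turns this from a per-site loop into an interactive whole-genome query.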

  18. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    A general method to obtain the homogenized response of metal-matrix composites is developed. It is assumed that the microscopic scale is sufficiently small compared to the macroscopic scale that the macro response does not affect the micromechanical model. Therefore, the microscopic scale… Hill-Mandel's energy principle is used to find macroscopic operators based on micro-mechanical analyses using the finite element method under generalized plane strain conditions. A phenomenological macroscopic model for metal matrix composites is developed based on constitutive operators describing the elastic… to plastic deformation. The macroscopic operators found can be used to model metal matrix composites on the macroscopic scale using a hierarchical multi-scale approach. Finally, decohesion under tension and shear loading is studied using a cohesive law for the interface between matrix and fiber.

  19. WormBase: Annotating many nematode genomes.

    Science.gov (United States)

    Howe, Kevin; Davis, Paul; Paulini, Michael; Tuli, Mary Ann; Williams, Gary; Yook, Karen; Durbin, Richard; Kersey, Paul; Sternberg, Paul W

    2012-01-01

    WormBase (www.wormbase.org) has been serving the scientific community for over 11 years as the central repository for genomic and genetic information on the soil nematode Caenorhabditis elegans. The resource has evolved from its beginnings as a database housing the genomic sequence and the genetic and physical maps of a single species, and now represents the breadth and diversity of nematode research, currently serving genome sequence and annotation for around 20 nematodes. In this article, we focus on WormBase's role in genome sequence annotation, describing how we annotate and integrate data from a growing collection of nematode species and strains. We also review our approaches to sequence curation, and discuss the impact on annotation quality of large functional genomics projects such as modENCODE.

  20. Model-Based Integration and Interpretation of Data

    DEFF Research Database (Denmark)

    Petersen, Johannes

    2004-01-01

    Data integration and interpretation play a crucial role in supervisory control. The paper defines a set of generic inference steps for the data integration and interpretation process based on a three-layer model of system representations. The three-layer model is used to clarify the combination… of constraint and object-centered representations of the work domain, throwing new light on the basic principles underlying the data integration and interpretation process of Rasmussen's abstraction hierarchy as well as other model-based approaches combining constraint and object-centered representations. Based…