WorldWideScience

Sample records for integrative computational biology

  1. Integrating interactive computational modeling in biology curricula.

    Directory of Open Access Journals (Sweden)

    Tomáš Helikar

    2015-03-01

While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.
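
Cell Collective models are, at their core, executable logical networks. As a flavor of what students build, here is a minimal synchronous Boolean network in Python; the three-node feedback circuit and its update rules are invented for illustration and are not taken from the paper.

```python
# Minimal synchronous Boolean network simulator, in the spirit of the
# logical models built in Cell Collective. The toy signaling circuit and
# its update rules below are hypothetical illustrations.

def step(state):
    """One synchronous update: every node reads the previous state."""
    return {
        "antigen":   state["antigen"],                        # external input, held fixed
        "tcr":       state["antigen"],                        # receptor follows antigen
        "effector":  state["tcr"] and not state["inhibitor"], # activation gated by feedback
        "inhibitor": state["effector"],                       # negative feedback loop
    }

def simulate(initial, n_steps=8):
    states = [initial]
    for _ in range(n_steps):
        states.append(step(states[-1]))
    return states

if __name__ == "__main__":
    init = {"antigen": True, "tcr": False, "effector": False, "inhibitor": False}
    for t, s in enumerate(simulate(init)):
        print(t, {k: int(v) for k, v in s.items()})
```

Running it shows the oscillation produced by the negative feedback loop, the kind of dynamic behavior students can "build and break" interactively.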

  2. Integrating interactive computational modeling in biology curricula.

    Science.gov (United States)

    Helikar, Tomáš; Cutucache, Christine E; Dahlquist, Lauren M; Herek, Tyler A; Larson, Joshua J; Rogers, Jim A

    2015-03-01

    While the use of computer tools to simulate complex processes such as computer circuits is normal practice in fields like engineering, the majority of life sciences/biological sciences courses continue to rely on the traditional textbook and memorization approach. To address this issue, we explored the use of the Cell Collective platform as a novel, interactive, and evolving pedagogical tool to foster student engagement, creativity, and higher-level thinking. Cell Collective is a Web-based platform used to create and simulate dynamical models of various biological processes. Students can create models of cells, diseases, or pathways themselves or explore existing models. This technology was implemented in both undergraduate and graduate courses as a pilot study to determine the feasibility of such software at the university level. First, a new (In Silico Biology) class was developed to enable students to learn biology by "building and breaking it" via computer models and their simulations. This class and technology also provide a non-intimidating way to incorporate mathematical and computational concepts into a class with students who have a limited mathematical background. Second, we used the technology to mediate the use of simulations and modeling modules as a learning tool for traditional biological concepts, such as T cell differentiation or cell cycle regulation, in existing biology courses. Results of this pilot application suggest that there is promise in the use of computational modeling and software tools such as Cell Collective to provide new teaching methods in biology and contribute to the implementation of the "Vision and Change" call to action in undergraduate biology education by providing a hands-on approach to biology.

  3. Computational biology

    DEFF Research Database (Denmark)

    Hartmann, Lars Røeboe; Jones, Neil; Simonsen, Jakob Grue

    2011-01-01

    Computation via biological devices has been the subject of close scrutiny since von Neumann’s early work some 60 years ago. In spite of the many relevant works in this field, the notion of programming biological devices seems to be, at best, ill-defined. While many devices are claimed or proved t...

  4. iTools: a framework for classification, categorization and integration of computational biology resources.

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2008-05-01

The advancement of the computational biology field hinges on progress in three fundamental directions--the development of new computational algorithms, the availability of informatics resource management infrastructures and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources--data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space-and-time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community and this infrastructure is rapidly growing. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may utilize these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources. The first one is based on an ontology of computational biology resources, and the second one is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project both in terms of the source code development as well as its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long
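
The heart of such an infrastructure is a searchable metadata registry over typed resources. A toy Python sketch of that idea follows; the schema fields and example entries are hypothetical and are not the actual iTools metadata model.

```python
# Toy resource-metadata registry illustrating the kind of classification
# and search an infrastructure like iTools provides. Schema and entries
# are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    kind: str                       # "data" | "software" | "web-service"
    domains: set = field(default_factory=set)

REGISTRY = [
    Resource("GenomeAlignerX", "software", {"sequence-analysis"}),
    Resource("BrainAtlasDB", "data", {"neuroimaging"}),
    Resource("PathwayLookup", "web-service", {"systems-biology", "sequence-analysis"}),
]

def find(kind=None, domain=None):
    """Filter the registry by resource type and/or scientific domain."""
    return [r for r in REGISTRY
            if (kind is None or r.kind == kind)
            and (domain is None or domain in r.domains)]

print([r.name for r in find(domain="sequence-analysis")])
```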

  5. Delivering The Benefits of Chemical-Biological Integration in Computational Toxicology at the EPA (ACS Fall meeting)

    Science.gov (United States)

    Abstract: Researchers at the EPA’s National Center for Computational Toxicology integrate advances in biology, chemistry, and computer science to examine the toxicity of chemicals and help prioritize chemicals for further research based on potential human health risks. The intent...

  6. CoreFlow: A computational platform for integration, analysis and modeling of complex biological data

    DEFF Research Database (Denmark)

    Pasculescu, Adrian; Schoof, Erwin; Creixell, Pau

    2014-01-01

A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, ... decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion ...

  7. Computing health quality measures using Informatics for Integrating Biology and the Bedside.

    Science.gov (United States)

    Klann, Jeffrey G; Murphy, Shawn N

    2013-04-19

The Health Quality Measures Format (HQMF) is a Health Level 7 (HL7) standard for expressing computable Clinical Quality Measures (CQMs). Creating tools to process HQMF queries in clinical databases will become increasingly important as the United States moves forward with its Health Information Technology Strategic Plan to Stages 2 and 3 of the Meaningful Use incentive program (MU2 and MU3). Informatics for Integrating Biology and the Bedside (i2b2) is one of the analytical databases used as part of the Office of the National Coordinator (ONC)'s Query Health platform to move toward this goal. Our goal is to integrate i2b2 with the Query Health HQMF architecture, to prepare for other HQMF use cases (such as MU2 and MU3), and to articulate the functional overlap between i2b2 and HQMF. Therefore, we analyze the structure of HQMF, and then we apply this understanding to HQMF computation on the i2b2 clinical analytical database platform. Specifically, we develop a translator between two query languages, HQMF and i2b2, so that the i2b2 platform can compute HQMF queries. We use the HQMF structure of queries for aggregate reporting, which define clinical data elements and the temporal and logical relationships between them. We use the i2b2 XML format, which allows flexible querying of a complex clinical data repository in an easy-to-understand domain-specific language. The translator can represent nearly any i2b2-XML query as HQMF and execute in i2b2 nearly any HQMF query expressible in i2b2-XML. This translator is part of the freely available reference implementation of the Query Health initiative. We analyze limitations of the conversion and find it covers many, but not all, of the complex temporal and logical operators required by quality measures. HQMF is an expressive language for defining quality measures, and it will be important to understand and implement for CQM computation, in both meaningful use and population health. However, its current form might allow
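
The essence of the translator is a walk over one query representation that emits the other. The sketch below maps a simplified logical criteria tree to an i2b2-style XML panel list; the element and attribute names are simplified stand-ins, as the real HQMF and i2b2 schemas are far richer.

```python
# Sketch of the core idea behind a query-language translator: walk a small
# abstract syntax tree of clinical criteria and emit an i2b2-style XML
# panel list. Names here are simplified stand-ins for the real HQMF and
# i2b2 XML schemas.
import xml.etree.ElementTree as ET

# (operator, [(concept, negated), ...]) -- AND of two coded criteria
query = ("AND", [("diabetes_dx", False), ("insulin_rx", True)])

def to_i2b2(ast):
    op, criteria = ast
    root = ET.Element("query_definition")
    for concept, negated in criteria:   # panels are ANDed, per i2b2 convention
        panel = ET.SubElement(root, "panel", invert="1" if negated else "0")
        ET.SubElement(panel, "item").set("concept_key", concept)
    return ET.tostring(root, encoding="unicode")

print(to_i2b2(query))
```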

  8. Computational aspects of systematic biology.

    Science.gov (United States)

    Lilburn, Timothy G; Harrison, Scott H; Cole, James R; Garrity, George M

    2006-06-01

    We review the resources available to systematic biologists who wish to use computers to build classifications. Algorithm development is in an early stage, and only a few examples of integrated applications for systematic biology are available. The availability of data is crucial if systematic biology is to enter the computer age.

  9. CoreFlow: a computational platform for integration, analysis and modeling of complex biological data.

    Science.gov (United States)

    Pasculescu, Adrian; Schoof, Erwin M; Creixell, Pau; Zheng, Yong; Olhovsky, Marina; Tian, Ruijun; So, Jonathan; Vanderlaan, Rachel D; Pawson, Tony; Linding, Rune; Colwill, Karen

    2014-04-04

    A major challenge in mass spectrometry and other large-scale applications is how to handle, integrate, and model the data that is produced. Given the speed at which technology advances and the need to keep pace with biological experiments, we designed a computational platform, CoreFlow, which provides programmers with a framework to manage data in real-time. It allows users to upload data into a relational database (MySQL), and to create custom scripts in high-level languages such as R, Python, or Perl for processing, correcting and modeling this data. CoreFlow organizes these scripts into project-specific pipelines, tracks interdependencies between related tasks, and enables the generation of summary reports as well as publication-quality images. As a result, the gap between experimental and computational components of a typical large-scale biology project is reduced, decreasing the time between data generation, analysis and manuscript writing. CoreFlow is being released to the scientific community as an open-sourced software package complete with proteomics-specific examples, which include corrections for incomplete isotopic labeling of peptides (SILAC) or arginine-to-proline conversion, and modeling of multiple/selected reaction monitoring (MRM/SRM) results. CoreFlow was purposely designed as an environment for programmers to rapidly perform data analysis. These analyses are assembled into project-specific workflows that are readily shared with biologists to guide the next stages of experimentation. Its simple yet powerful interface provides a structure where scripts can be written and tested virtually simultaneously to shorten the life cycle of code development for a particular task. The scripts are exposed at every step so that a user can quickly see the relationships between the data, the assumptions that have been made, and the manipulations that have been performed. Since the scripts use commonly available programming languages, they can easily be
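
As an example of the proteomics-specific corrections mentioned above, here is one common mixing model for incomplete SILAC labeling, written as a standalone Python function; the actual CoreFlow scripts may implement a different model.

```python
# Correction for incomplete SILAC labeling under a simple two-channel
# mixing model: with labeling efficiency p, a fraction (1-p) of the
# "heavy" population appears in the light channel. This is a sketch of
# the kind of correction CoreFlow ships, not its exact implementation.

def corrected_hl_ratio(h_obs, l_obs, p):
    """Return the true heavy/light ratio from observed intensities.

    Model: h_obs = p*H and l_obs = L + (1-p)*H, hence
           H/L = h_obs / (p*l_obs - (1-p)*h_obs).
    """
    denom = p * l_obs - (1 - p) * h_obs
    if denom <= 0:
        raise ValueError("intensities inconsistent with labeling efficiency")
    return h_obs / denom

# With perfect labeling (p=1) the correction is the identity:
assert abs(corrected_hl_ratio(2.0, 1.0, 1.0) - 2.0) < 1e-12
# With 90% labeling, an apparent 2:1 ratio is an underestimate:
print(corrected_hl_ratio(2.0, 1.0, 0.9))  # ~2.86
```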

  10. An Integrated Bioinformatics and Computational Biology Approach Identifies New BH3-Only Protein Candidates.

    Science.gov (United States)

    Hawley, Robert G; Chen, Yuzhong; Riz, Irene; Zeng, Chen

    2012-05-04

    In this study, we utilized an integrated bioinformatics and computational biology approach in search of new BH3-only proteins belonging to the BCL2 family of apoptotic regulators. The BH3 (BCL2 homology 3) domain mediates specific binding interactions among various BCL2 family members. It is composed of an amphipathic α-helical region of approximately 13 residues that has only a few amino acids that are highly conserved across all members. Using a generalized motif, we performed a genome-wide search for novel BH3-containing proteins in the NCBI Consensus Coding Sequence (CCDS) database. In addition to known pro-apoptotic BH3-only proteins, 197 proteins were recovered that satisfied the search criteria. These were categorized according to α-helical content and predictive binding to BCL-xL (encoded by BCL2L1) and MCL-1, two representative anti-apoptotic BCL2 family members, using position-specific scoring matrix models. Notably, the list is enriched for proteins associated with autophagy as well as a broad spectrum of cellular stress responses such as endoplasmic reticulum stress, oxidative stress, antiviral defense, and the DNA damage response. Several potential novel BH3-containing proteins are highlighted. In particular, the analysis strongly suggests that the apoptosis inhibitor and DNA damage response regulator, AVEN, which was originally isolated as a BCL-xL-interacting protein, is a functional BH3-only protein representing a distinct subclass of BCL2 family members.
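
The search itself reduces to scanning protein sequences with a sequence pattern and then scoring the hits. The sketch below uses an illustrative simplification of a BH3-like pattern (a leucine, three arbitrary residues, then Gly followed by Asp/Glu); it is not the exact generalized motif or the PSSM scoring used in the study.

```python
# Minimal motif scan in the spirit of the genome-wide BH3 search described
# above. The regular expression is an illustrative simplification of a
# BH3-like core, not the study's actual generalized motif.
import re

BH3_LIKE = re.compile(r"L.{3}G[DE]")

def scan(proteins):
    """Yield (protein_id, start, matched_peptide) for every motif hit."""
    for pid, seq in proteins.items():
        for m in BH3_LIKE.finditer(seq):
            yield pid, m.start(), m.group()

# The first sequence contains the BH3 core of human BID ("...LAQVGD...");
# the second is a hypothetical negative control.
toy = {
    "BID_frag": "EDIIRNIARHLAQVGDSMDRSIPPG",
    "control":  "MKTAYIAKQRQISFVKSHFSRQLEERLGLIE",
}
for hit in scan(toy):
    print(hit)
```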

  11. S100A4 and its role in metastasis – computational integration of data on biological networks.

    Science.gov (United States)

    Buetti-Dinh, Antoine; Pivkin, Igor V; Friedman, Ran

    2015-08-01

Characterising signal transduction networks is fundamental to our understanding of biology. However, redundancy and different types of feedback mechanisms make it difficult to understand how variations of the network components contribute to a biological process. In silico modelling of signalling interactions therefore becomes increasingly useful for the development of successful therapeutic approaches. Unfortunately, quantitative information cannot be obtained for all of the proteins or complexes that comprise the network, which limits the usability of computational models. We developed a flexible computational framework for the analysis of biological signalling networks. We demonstrate our approach by studying the mechanism of metastasis promotion by the S100A4 protein, and suggest therapeutic strategies. The advantage of the proposed method is that only limited information (the type of interaction between species) is required to set up a steady-state network model. This permits a straightforward integration of experimental information, where the lack of detail is compensated by efficient sampling of the parameter space. We investigated regulatory properties of the S100A4 network and the role of different key components. The results show that S100A4 enhances the activity of matrix metalloproteinases (MMPs), causing higher cell dissociation. Moreover, it leads to an increased stability of the pathological state. Thus, avoiding metastasis in S100A4-expressing tumours requires multiple target inhibition. Moreover, the analysis could explain the previous failure of MMP inhibitors in clinical trials. Finally, our method is applicable to a wide range of biological questions that can be represented as directional networks.
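
The key idea, needing only interaction signs, can be miniaturized as follows: sample interaction strengths with fixed signs, relax each sampled network to a steady state, and ask whether a perturbation's effect is robust across samples. The three-node chain and the saturating update rule below are invented for illustration.

```python
# Sketch of sign-constrained parameter sampling for a steady-state network
# model, the general strategy described above. The toy network
# (S100A4 -> MMP -> cell dissociation) and its update rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
SIGNS = {("S100A4", "MMP"): +1, ("MMP", "dissociation"): +1}

def steady_state(s100a4_level, weights, n_iter=200):
    x = {"S100A4": s100a4_level, "MMP": 0.0, "dissociation": 0.0}
    for _ in range(n_iter):
        for (src, dst), sign in SIGNS.items():
            drive = sign * weights[(src, dst)] * x[src]
            x[dst] = np.tanh(max(drive, 0.0))   # saturating activation
    return x

# Only the sign of each interaction is fixed; strengths are sampled.
effects = []
for _ in range(100):
    w = {edge: rng.uniform(0.1, 2.0) for edge in SIGNS}
    low, high = steady_state(0.1, w), steady_state(1.0, w)
    effects.append(high["dissociation"] - low["dissociation"])
print("S100A4 raises dissociation in", sum(e > 0 for e in effects), "of 100 samples")
```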

  12. Computational biology for ageing

    Science.gov (United States)

    Wieser, Daniela; Papatheodorou, Irene; Ziehm, Matthias; Thornton, Janet M.

    2011-01-01

    High-throughput genomic and proteomic technologies have generated a wealth of publicly available data on ageing. Easy access to these data, and their computational analysis, is of great importance in order to pinpoint the causes and effects of ageing. Here, we provide a description of the existing databases and computational tools on ageing that are available for researchers. We also describe the computational approaches to data interpretation in the field of ageing including gene expression, comparative and pathway analyses, and highlight the challenges for future developments. We review recent biological insights gained from applying bioinformatics methods to analyse and interpret ageing data in different organisms, tissues and conditions. PMID:21115530

  13. Michael Levitt and Computational Biology

    Science.gov (United States)

Synopsis: resources with Michael Levitt, PhD, professor of structural biology at the Stanford University School of Medicine. Levitt's early work pioneered computational structural biology, which helped to predict

  14. Computational Systems Chemical Biology

    OpenAIRE

    Oprea, Tudor I.; May, Elebeoba E.; Leitão, Andrei; Tropsha, Alexander

    2011-01-01

There is a critical need for improving the level of chemistry awareness in systems biology. The data and information related to modulation of genes and proteins by small molecules continue to accumulate at the same time as simulation tools in systems biology and whole-body physiologically-based pharmacokinetics (PBPK) continue to evolve. We called this emerging area at the interface between chemical biology and systems biology "systems chemical biology" (SCB) (Oprea et al., 2007).

  15. Integrative radiation systems biology

    International Nuclear Information System (INIS)

    Unger, Kristian

    2014-01-01

Maximisation of the ratio of normal tissue preservation and tumour cell reduction is the main concept of radiotherapy alone or combined with chemo-, immuno- or biologically targeted therapy. The foremost parameter influencing this ratio is radiation sensitivity, and its modulation towards a more efficient killing of tumour cells and a better preservation of normal tissue at the same time is the overall aim of modern therapy schemes. Nevertheless, this requires a deep understanding of the molecular mechanisms of radiation sensitivity in order to identify its key players as potential therapeutic targets. Moreover, the success of conventional approaches that tried to statistically associate altered radiation sensitivity with any molecular phenotype such as gene expression proved to be somewhat limited, since the number of clinically used targets is rather sparse. However, currently a paradigm shift is taking place from purely frequentist association analysis to the rather holistic systems biology approach that seeks to mathematically model the system to be investigated and to allow the prediction of an altered phenotype as a function of a single biomarker or a signature of biomarkers. Integrative systems biology also considers the data from different molecular levels such as the genome, transcriptome or proteome in order to partially or fully comprehend the causal chain of molecular mechanisms. An example of the application of this concept, currently carried out at the Clinical Cooperation Group “Personalized Radiotherapy in Head and Neck Cancer” of the Helmholtz-Zentrum München and the LMU Munich, is described. This review article strives to provide a compact overview on the state of the art of systems biology, its actual challenges, potential applications, chances and limitations in radiation oncology research working towards improved personalised therapy concepts using this relatively new methodology.

  16. Data integration in biological research: an overview.

    Science.gov (United States)

    Lapatas, Vasileios; Stefanidakis, Michalis; Jimenez, Rafael C; Via, Allegra; Schneider, Maria Victoria

    2015-12-01

Data sharing, integration and annotation are essential to ensure the reproducibility of the analysis and interpretation of the experimental findings. Often these activities are perceived as a role that bioinformaticians and computer scientists have to take with little or no input from the experimental biologist. On the contrary, biological researchers, being the producers and often the end users of such data, play a major role in enabling biological data integration. The quality and usefulness of data integration depend on the existence and adoption of standards, shared formats, and mechanisms that are suitable for biological researchers to submit and annotate the data, so that they can be easily searched, conveniently linked and consequently used for further biological analysis and discovery. Here, we provide background on what data integration is from a computational science point of view, how it has been applied to biological research, which key aspects contributed to its success, and future directions.

  17. Integrative Radiation Biology

    Energy Technology Data Exchange (ETDEWEB)

    Barcellos-Hoff, Mary Helen [New York University School of Medicine, NY (United States)

    2015-02-27

We plan to study tissue-level mechanisms important to human breast radiation carcinogenesis. We propose that the cell biology of irradiated tissues reveals a coordinated multicellular damage response program in which individual cell contributions are primarily directed towards suppression of carcinogenesis and reestablishment of homeostasis. We identified transforming growth factor β1 (TGFβ) as a pivotal signal. Notably, we have discovered that TGFβ suppresses genomic instability by controlling the intrinsic DNA damage response and centrosome integrity. However, TGFβ also mediates disruption of microenvironment interactions, which drive epithelial to mesenchymal transition in irradiated human mammary epithelial cells. This apparent paradox of positive and negative controls by TGFβ is the topic of the present proposal. First, we postulate that these phenotypes manifest differentially following fractionated or chronic exposures; second, that the interactions of multiple cell types in tissues modify the responses evident in single-cell-type culture models. The goals are to: 1) study the effect of low dose rate and fractionated radiation exposure in combination with TGFβ on the irradiated phenotype and genomic instability of non-malignant human epithelial cells; and 2) determine whether stromal-epithelial interactions suppress the irradiated phenotype in cell culture and the humanized mammary mouse model. These data will be used to 3) develop a systems biology model that integrates radiation effects across multiple levels of tissue organization and time. Modeling multicellular radiation responses coordinated via extracellular signaling could have a significant impact on the extrapolation of human health risks from high dose to low dose/rate radiation exposure.

  18. Applicability of Computational Systems Biology in Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning; Hadrup, Niels; Audouze, Karine Marie Laure

    2014-01-01

Systems biology as a research field has emerged within the last few decades. Systems biology, often defined as the antithesis of the reductionist approach, integrates information about individual components of a biological system. In integrative systems biology, large data sets from various sources and databases are used to model and predict effects of chemicals on, for instance, human health. In toxicology, computational systems biology enables identification of important pathways and molecules from large data sets; tasks that can be extremely laborious when performed by a classical literature search. ... be used to establish hypotheses on links between the chemical and human diseases. Such information can also be applied for designing more intelligent animal/cell experiments that can test the established hypotheses. Here, we describe how and why to apply an integrative systems biology method ...

  19. A computational systems biology software platform for multiscale modeling and simulation: Integrating whole-body physiology, disease biology, and molecular reaction networks

    Directory of Open Access Journals (Sweden)

    Thomas eEissing

    2011-02-01

Today, in silico studies and trial simulations already complement experimental approaches in pharmaceutical R&D and have become indispensable tools for decision making and communication with regulatory agencies. While biology is multi-scale by nature, project work and software tools usually focus on isolated aspects of drug action, such as pharmacokinetics at the organism scale or pharmacodynamic interaction on the molecular level. We present a modeling and simulation software platform consisting of PK-Sim® and MoBi®, capable of building and simulating models that integrate across biological scales. A prototypical multiscale model for the progression of a pancreatic tumor and its response to pharmacotherapy is constructed and virtual patients are treated with a prodrug activated by hepatic metabolization. Tumor growth is driven by signal transduction leading to cell cycle transition and proliferation. Free tumor concentrations of the active metabolite inhibit Raf kinase in the signaling cascade and thereby cell cycle progression. In a virtual clinical study, the individual therapeutic outcome of the chemotherapeutic intervention is simulated for a large population with heterogeneous genomic background. In this way, the platform allows efficient model building and integration of biological knowledge and prior data from all biological scales. Experimental in vitro model systems can be linked with observations in animal experiments and clinical trials. The interplay between patients, diseases, and drugs and topics with high clinical relevance such as the role of pharmacogenomics, drug-drug or drug-metabolite interactions can be addressed using this mechanistic, insight-driven multiscale modeling approach.
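
A drastically reduced caricature of such a multiscale model helps fix the idea: a one-compartment pharmacokinetic model of prodrug activation coupled to logistic tumor growth inhibited by the active metabolite. The structure and all parameter values below are illustrative; PK-Sim® and MoBi® build far more detailed physiology-based models.

```python
# Toy two-scale model in the spirit of the example above: simple prodrug
# activation/elimination kinetics coupled to logistic tumor growth whose
# rate is inhibited by the active metabolite. All values are illustrative.

def simulate(days=60, dt=0.01, dose=1.0, k_act=0.5, k_el=0.3,
             r=0.15, K=1.0, ic50=0.2):
    prodrug, metab, tumor = dose, 0.0, 0.05
    t = 0.0
    while t < days:
        d_pro = -k_act * prodrug                       # hepatic activation
        d_met = k_act * prodrug - k_el * metab         # formation / elimination
        inhib = 1.0 / (1.0 + metab / ic50)             # signaling-level inhibition
        d_tum = r * inhib * tumor * (1.0 - tumor / K)  # logistic tumor growth
        prodrug += dt * d_pro; metab += dt * d_met; tumor += dt * d_tum
        t += dt
    return tumor

# A crude "virtual population": vary the hepatic activation rate, the kind
# of parameter a pharmacogenomic difference would change.
for k in (0.1, 0.5, 2.0):
    print(f"k_act={k}: tumor burden after 60 days = {simulate(k_act=k):.3f}")
```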

  20. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded even the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  1. Synthetic biology: engineering molecular computers

    CERN Multimedia

    CERN. Geneva

    2018-01-01

Complicated systems cannot survive the rigors of a chaotic environment without balancing mechanisms that sense, decide upon and counteract the disturbances exerted on them. This is especially true of living organisms, which competition has forced into incredible complexity, escalating in turn their need for self-control. Therefore, they compute. Can we harness biological mechanisms to create artificial computing systems? Biology offers several levels of design abstraction: molecular machines, cells, organisms... ranging from the more easily defined to the more inherently complex. At the bottom of this stack we find the nucleic acids, RNA and DNA, with their digital structure and relatively precise interactions. They are central enablers for designing artificial biological systems, at the confluence of engineering and biology, in the field we call synthetic biology. In the first part, we follow their trail towards an overview of building computing machines with molecules -- and in the second part, we take up the case study of iGEM Greece 201...

  2. Integrated Biological Control

    International Nuclear Information System (INIS)

    JOHNSON, A.R.

    2002-01-01

Biological control is any activity taken to prevent, limit, clean up, or remediate potential environmental, health and safety, or workplace quality impacts from plants, animals, or microorganisms. At Hanford the principal emphasis of biological control is to prevent the transport of radioactive contamination by biological vectors (plants, animals, or microorganisms), and where necessary, control and clean up resulting contamination. Other aspects of biological control at Hanford include industrial weed control (e.g., tumbleweeds), noxious weed control (invasive, non-native plant species), and pest control (undesirable animals such as rodents and stinging insects; and microorganisms such as molds that adversely affect the quality of the workplace environment). Biological control activities may be either preventive (a priori) or in response to existing contamination spread (a posteriori). Surveillance activities, including ground, vegetation, flying insect, and other surveys, and a priori control actions, such as herbicide spraying and placing biological barriers, are important in preventing radioactive contamination spread. If surveillance discovers that biological vectors have spread radioactive contamination, a posteriori control measures, such as fixing contamination, followed by cleanup and removal of the contamination to an approved disposal location are typical response functions. In some cases remediation following the contamination cleanup and removal is necessary. Biological control activities for industrial weeds, noxious weeds and pests have similar modes of prevention and response.

  3. Semantic Web meets Integrative Biology: a survey.

    Science.gov (United States)

    Chen, Huajun; Yu, Tong; Chen, Jake Y

    2013-01-01

Integrative Biology (IB) uses experimental or computational quantitative technologies to characterize biological systems at the molecular, cellular, tissue and population levels. IB typically involves the integration of the data, knowledge and capabilities across disciplinary boundaries in order to solve complex problems. We identify a series of bioinformatics problems posed by interdisciplinary integration: (i) data integration that interconnects structured data across related biomedical domains; (ii) ontology integration that brings jargons, terminologies and taxonomies from various disciplines into a unified network of ontologies; (iii) knowledge integration that integrates disparate knowledge elements from multiple sources; (iv) service integration that builds applications out of services provided by different vendors. We argue that IB can benefit significantly from the integration solutions enabled by Semantic Web (SW) technologies. The SW enables scientists to share content beyond the boundaries of applications and websites, resulting in a web of data that is meaningful and understandable to any computer. In this review, we provide insight into how SW technologies can be used to build open, standardized and interoperable solutions for interdisciplinary integration on a global basis. We present a rich set of case studies in systems biology, integrative neuroscience, bio-pharmaceutics and translational medicine, to highlight the technical features and benefits of SW applications in IB.
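
The data-integration case is easy to demonstrate concretely: once two sources use shared identifiers in RDF, a single SPARQL query spans both. A minimal Python sketch using the rdflib library, with an invented vocabulary and URIs:

```python
# Minimal Semantic Web-style integration with rdflib: facts from two
# hypothetical sources merge into one RDF graph and are queried together.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/bio/")
g = Graph()
# "Source 1": a gene-disease association
g.add((EX.TP53, EX.associatedWith, EX.LiFraumeniSyndrome))
# "Source 2": a gene-pathway annotation from a different resource
g.add((EX.TP53, EX.participatesIn, EX.ApoptosisPathway))

# One query now spans both sources because they share the TP53 identifier.
results = g.query("""
    PREFIX ex: <http://example.org/bio/>
    SELECT ?disease ?pathway WHERE {
        ex:TP53 ex:associatedWith ?disease .
        ex:TP53 ex:participatesIn ?pathway .
    }""")
for row in results:
    print(row.disease, row.pathway)
```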

  4. Computation of integral bases

    NARCIS (Netherlands)

    Bauch, J.H.P.

    2015-01-01

Let $A$ be a Dedekind domain, $K$ the fraction field of $A$, and $f \in A[x]$ a monic irreducible separable polynomial. For a given non-zero prime ideal $\mathfrak{p}$ of $A$ we present in this paper a new method to compute a $\mathfrak{p}$-integral basis of the extension of $K$ determined by $f$.
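
For orientation, the standard notion behind this abstract can be written out; the following is a textbook-style definition, not quoted from the paper.

```latex
% Textbook definition (not quoted from the paper): let $\theta$ be a root
% of $f$, $L = K(\theta)$, and $\mathcal{O}_L$ the integral closure of $A$
% in $L$. A $\mathfrak{p}$-integral basis is a family
% $\alpha_1,\dots,\alpha_n \in \mathcal{O}_L$ that is an
% $A_\mathfrak{p}$-basis of $\mathcal{O}_L$ localized at $\mathfrak{p}$:
\[
  \mathcal{O}_L \otimes_A A_{\mathfrak{p}}
    \;=\; \bigoplus_{i=1}^{n} A_{\mathfrak{p}}\,\alpha_i ,
  \qquad n = [L : K].
\]
```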

  5. How Computers are Arming biology!

    Indian Academy of Sciences (India)

In-vitro to In-silico: How Computers are Arming Biology! Geetha Sugumaran and Sushila Rajagopal. Face to Face, Resonance – Journal of Science Education, Volume 23, Issue 1, January 2018, pp. 83–102.

  6. An Integrated Approach to Biology

    Indian Academy of Sciences (India)

An Integrated Approach to Biology. Aniket Bhattacharya. General Article, Resonance – Journal of Science Education, Volume 16, Issue 8, August 2011, pp. 742–753. Permanent link: https://www.ias.ac.in/article/fulltext/reso/016/08/0742-0753

  7. And So It Grows: Using a Computer-Based Simulation of a Population Growth Model to Integrate Biology & Mathematics

    Science.gov (United States)

    Street, Garrett M.; Laubach, Timothy A.

    2013-01-01

    We provide a 5E structured-inquiry lesson so that students can learn more of the mathematics behind the logistic model of population biology. By using models and mathematics, students understand how population dynamics can be influenced by relatively simple changes in the environment.
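
For readers who want the model itself: the logistic equation dN/dt = rN(1 - N/K) can be integrated in a few lines of Python with Euler steps; the parameter values here are arbitrary.

```python
# The logistic population model referenced in the lesson, integrated with
# simple Euler steps. Growth slows as N approaches the carrying capacity K.

def logistic(n0=10.0, r=0.4, K=500.0, dt=0.1, steps=300):
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += dt * r * n * (1.0 - n / K)   # dN/dt = r*N*(1 - N/K)
        trajectory.append(n)
    return trajectory

traj = logistic()
print(f"N(0)={traj[0]:.0f}, N(30)={traj[-1]:.1f} (carrying capacity K=500)")
```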

  8. The case for biological quantum computer elements

    Science.gov (United States)

    Baer, Wolfgang; Pizzi, Rita

    2009-05-01

An extension to von Neumann's analysis of quantum theory suggests self-measurement is a fundamental process of Nature. By mapping the quantum computer to the brain architecture we will argue that the cognitive experience results from a measurement of a quantum memory maintained by biological entities. The insight provided by this mapping suggests quantum effects are not restricted to small atomic and nuclear phenomena but are an integral part of our own cognitive experience and further that the architecture of a quantum computer system parallels that of a conscious brain. We will then review the suggestions for biological quantum elements in basic neural structures and address the decoherence objection by arguing for a self-measurement event model of Nature. We will argue that to a first-order approximation the universe is composed of isolated self-measurement events, which guarantees coherence. Controlled decoherence is treated as the input/output interactions between quantum elements of a quantum computer and the quantum memory maintained by biological entities cognizant of the quantum calculation results. Lastly, we will present stem-cell-based neuron experiments conducted by one of us with the aim of demonstrating the occurrence of quantum effects in living neural networks and discuss future research projects intended to reach this objective.

  9. Integrative biological analysis for neuropsychopharmacology.

    Science.gov (United States)

    Emmett, Mark R; Kroes, Roger A; Moskal, Joseph R; Conrad, Charles A; Priebe, Waldemar; Laezza, Fernanda; Meyer-Baese, Anke; Nilsson, Carol L

    2014-01-01

    Although advances in psychotherapy have been made in recent years, drug discovery for brain diseases such as schizophrenia and mood disorders has stagnated. The need for new biomarkers and validated therapeutic targets in the field of neuropsychopharmacology is widely unmet. The brain is the most complex part of human anatomy from the standpoint of number and types of cells, their interconnections, and circuitry. To better meet patient needs, improved methods to approach brain studies by understanding functional networks that interact with the genome are being developed. The integrated biological approaches--proteomics, transcriptomics, metabolomics, and glycomics--have a strong record in several areas of biomedicine, including neurochemistry and neuro-oncology. Published applications of an integrated approach to projects of neurological, psychiatric, and pharmacological natures are still few but show promise to provide deep biological knowledge derived from cells, animal models, and clinical materials. Future studies that yield insights based on integrated analyses promise to deliver new therapeutic targets and biomarkers for personalized medicine.

  10. 2K09 and thereafter : the coming era of integrative bioinformatics, systems biology and intelligent computing for functional genomics and personalized medicine research

    Science.gov (United States)

    2010-01-01

    Significant interest exists in establishing synergistic research in bioinformatics, systems biology and intelligent computing. Supported by the United States National Science Foundation (NSF), International Society of Intelligent Biological Medicine (http://www.ISIBM.org), International Journal of Computational Biology and Drug Design (IJCBDD) and International Journal of Functional Informatics and Personalized Medicine, the ISIBM International Joint Conferences on Bioinformatics, Systems Biology and Intelligent Computing (ISIBM IJCBS 2009) attracted more than 300 papers and 400 researchers and medical doctors world-wide. It was the only inter/multidisciplinary conference aimed to promote synergistic research and education in bioinformatics, systems biology and intelligent computing. The conference committee was very grateful for the valuable advice and suggestions from honorary chairs, steering committee members and scientific leaders including Dr. Michael S. Waterman (USC, Member of United States National Academy of Sciences), Dr. Chih-Ming Ho (UCLA, Member of United States National Academy of Engineering and Academician of Academia Sinica), Dr. Wing H. Wong (Stanford, Member of United States National Academy of Sciences), Dr. Ruzena Bajcsy (UC Berkeley, Member of United States National Academy of Engineering and Member of United States Institute of Medicine of the National Academies), Dr. Mary Qu Yang (United States National Institutes of Health and Oak Ridge, DOE), Dr. Andrzej Niemierko (Harvard), Dr. A. Keith Dunker (Indiana), Dr. Brian D. Athey (Michigan), Dr. Weida Tong (FDA, United States Department of Health and Human Services), Dr. Cathy H. Wu (Georgetown), Dr. Dong Xu (Missouri), Drs. Arif Ghafoor and Okan K Ersoy (Purdue), Dr. Mark Borodovsky (Georgia Tech, President of ISIBM), Dr. Hamid R. Arabnia (UGA, Vice-President of ISIBM), and other scientific leaders. The committee presented the 2009 ISIBM Outstanding Achievement Awards to Dr. Joydeep Ghosh (UT

  11. Integrating phosphoproteomics in systems biology

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2014-07-01

Phosphorylation of serine, threonine and tyrosine plays significant roles in cellular signal transduction and in modifying multiple protein functions. Phosphoproteins are coordinated and regulated by a network of kinases, phosphatases and phospho-binding proteins, which modify the phosphorylation states, recognize unique phosphopeptides, or target proteins for degradation. Detailed and complete information on the structure and dynamics of these networks is required to better understand fundamental mechanisms of cellular processes and diseases. High-throughput technologies have been developed to investigate phosphoproteomes in model organisms and human diseases. Among them, mass spectrometry (MS)-based technologies are the major platforms and have been widely applied, which has led to explosive growth of phosphoproteomic data in recent years. New bioinformatics tools are needed to analyze and make sense of these data. Moreover, most research has focused on individual phosphoproteins and kinases. To gain a more complete knowledge of cellular processes, systems biology approaches, including pathways and networks modeling, have to be applied to integrate all components of the phosphorylation machinery, including kinases, phosphatases, their substrates, and phospho-binding proteins. This review presents the latest developments of bioinformatics methods and attempts to apply systems biology to analyze phosphoproteomics data generated by MS-based technologies. Challenges and future directions in this field will also be discussed.

  12. Computation of integral bases

    NARCIS (Netherlands)

    Bauch, J.D.

    2016-01-01

Let A be a Dedekind domain, K the fraction field of A, and f ∈ A[x] a monic irreducible separable polynomial. For a given non-zero prime ideal p of A we present in this paper a new characterization of a p-integral basis of the extension of K determined by f. This characterization yields in an

  13. The Virtual Institute for Integrative Biology (VIIB)

    International Nuclear Information System (INIS)

    Rivera, G.; Gonzalez-Nieto, F.; Perez-Acle, T.; Isea, R.; Holmes, D. S.

    2007-01-01

The Virtual Institute for Integrative Biology (VIIB) is a Latin American initiative for achieving global collaborative e-Science in the areas of bioinformatics, genome biology, systems biology, metagenomics, medical applications and nanobiotechnology. The scientific agenda of VIIB includes: construction of databases for comparative genomics, the AlterORF database for the discovery of alternate open reading frames in genomes, bioinformatics services and protein simulations for biotechnological and medical applications. Human resource development has been promoted through co-sponsored students and shared teaching and seminars via video conferencing. E-Science challenges include: interoperability and connectivity concerns, high-performance computing limitations, and the development of customized computational frameworks and flexible workflows to efficiently exploit shared resources without causing impediments to the user. Outreach programs include training workshops and classes for high school teachers and students and the new Adopt-a-Gene initiative. The VIIB has proved an effective way for small teams to transcend the critical mass problem, to overcome geographic limitations, to harness the power of large-scale, collaborative science and improve the visibility of Latin American science. It may provide a useful paradigm for developing further e-Science initiatives in Latin America and other emerging regions. (Author)

  14. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    Science.gov (United States)

    Bonham, Kevin S; Stefan, Melanie I

    2017-10-01

    While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.
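
The core computation in a study like this is simple to state: the share of female authors at a given authorship position across a corpus. A toy Python sketch with hypothetical, pre-labeled input follows; the actual study inferred gender from author names at a much larger scale.

```python
# Share of female authors by authorship position across a set of papers.
# The input format and the gender labels are hypothetical stand-ins for
# name-based gender inference over PubMed/arXiv records.
from collections import Counter

# Each paper: ordered list of (author, gender), gender in {"F", "M", None}
papers = [
    [("A. Smith", "F"), ("B. Jones", "M"), ("C. Lee", "F")],
    [("D. Chen", "M"), ("E. Park", "F")],
]

def share_female(position):
    """position: 'first' or 'last'; unknown genders are excluded."""
    counts = Counter()
    for authors in papers:
        _, gender = authors[0] if position == "first" else authors[-1]
        if gender is not None:
            counts[gender] += 1
    total = counts["F"] + counts["M"]
    return counts["F"] / total if total else float("nan")

print("first-author female share:", share_female("first"))
print("last-author  female share:", share_female("last"))
```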

  15. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    Directory of Open Access Journals (Sweden)

    Kevin S Bonham

    2017-10-01

While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.

  16. Application of computational intelligence to biology

    CERN Document Server

    Sekhar, Akula

    2016-01-01

This book is a contribution of translational and allied research to the proceedings of the International Conference on Computational Intelligence and Soft Computing. It explains how various computational intelligence techniques can be applied to investigate various biological problems. It is a good read for research scholars, engineers, medical doctors and bioinformatics researchers.

  17. Biology as an Integrating Natural Science Domain

    Indian Academy of Sciences (India)

Biology as an Integrating Natural Science Domain: A Proposal for BSc (Hons) in Integrated Biology. Kambadur Muralidhar. Classroom, Resonance – Journal of Science Education, Volume 13, Issue 3, March 2008, pp. 272–276.

  18. Integrating systems biology models and biomedical ontologies.

    Science.gov (United States)

    Hoehndorf, Robert; Dumontier, Michel; Gennari, John H; Wimalaratne, Sarala; de Bono, Bernard; Cook, Daniel L; Gkoutos, Georgios V

    2011-08-11

    Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms.
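
Any SBML-to-ontology converter must first extract the ontology identifiers embedded in a model's RDF annotations. The sketch below performs just that first step with the Python standard library on an invented miniature model; the SBML Harvester itself is a separate and far more complete tool.

```python
# Pull MIRIAM-style ontology references out of SBML annotations, the first
# step of any SBML-to-OWL conversion. The tiny model below is invented for
# illustration and is not a fully valid SBML/RDF document.
import xml.etree.ElementTree as ET

SBML = """<sbml xmlns="http://www.sbml.org/sbml/level2">
  <model><listOfSpecies><species id="p53">
    <annotation><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
      <rdf:li rdf:resource="http://identifiers.org/uniprot/P04637"/>
    </rdf:RDF></annotation>
  </species></listOfSpecies></model></sbml>"""

RDF_NS = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
root = ET.fromstring(SBML)
for li in root.iter(f"{RDF_NS}li"):
    print("ontology reference:", li.get(f"{RDF_NS}resource"))
```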

  19. Computational structural biology: methods and applications

    National Research Council Canada - National Science Library

    Schwede, Torsten; Peitsch, Manuel Claude

    2008-01-01

    ... sequencing reinforced the observation that structural information is needed to understand the detailed function and mechanism of biological molecules such as enzyme reactions and molecular recognition events. Furthermore, structures are obviously key to the design of molecules with new or improved functions. In this context, computational structural biology...

  20. The fusion of biology, computer science, and engineering: towards efficient and successful synthetic biology.

    Science.gov (United States)

    Linshiz, Gregory; Goldberg, Alex; Konry, Tania; Hillson, Nathan J

    2012-01-01

    Synthetic biology is a nascent field that emerged in earnest only around the turn of the millennium. It aims to engineer new biological systems and impart new biological functionality, often through genetic modifications. The design and construction of new biological systems is a complex, multistep process, requiring multidisciplinary collaborative efforts from "fusion" scientists who have formal training in computer science or engineering, as well as hands-on biological expertise. The public has high expectations for synthetic biology and eagerly anticipates the development of solutions to the major challenges facing humanity. This article discusses laboratory practices and the conduct of research in synthetic biology. It argues that the fusion science approach, which integrates biology with computer science and engineering best practices, including standardization, process optimization, computer-aided design and laboratory automation, miniaturization, and systematic management, will increase the predictability and reproducibility of experiments and lead to breakthroughs in the construction of new biological systems. The article also discusses several successful fusion projects, including the development of software tools for DNA construction design automation, recursive DNA construction, and the development of integrated microfluidics systems.

  1. Deterministic computation of functional integrals

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.

    1995-09-01

A new method of numerical integration in functional spaces is described. This method is based on the rigorous definition of a functional integral in a complete separable metric space and on the use of approximation formulas which we constructed for this kind of integral. The method is applicable to the solution of some partial differential equations and to the calculation of various characteristics in quantum physics. No preliminary discretization of space and time is required in this method, as well as no simplifying assumptions like semi-classical, mean field approximations, collective excitations, introduction of "short-time" propagators, etc. are necessary in our approach. The constructed approximation formulas satisfy the condition of being exact on a given class of functionals, namely polynomial functionals of a given degree. The employment of these formulas replaces the evaluation of a functional integral by computation of the "ordinary" (Riemannian) integral of a low dimension, thus allowing the use of the preferable deterministic algorithms (normally Gaussian quadratures) in computations rather than traditional stochastic (Monte Carlo) methods which are commonly used for solution of the problem under consideration. The results of application of the method to computation of the Green function of the Schroedinger equation in imaginary time as well as the study of some models of Euclidean quantum mechanics are presented. The comparison with results of other authors shows that our method gives significant (by an order of magnitude) economy of computer time and memory versus other known methods while providing the results with the same or better accuracy. The functional measure of the Gaussian type is considered and some of its particular cases, namely conditional Wiener measure in quantum statistical mechanics and functional measure in a Schwartz distribution space in two-dimensional quantum field theory are studied in detail. Numerical examples demonstrating the
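
The trade at the center of this method, deterministic quadrature versus Monte Carlo for integrals against a Gaussian measure, can be demonstrated in one dimension with NumPy; the integrand is a simple stand-in, not one of the paper's examples.

```python
# Deterministic Gauss-Hermite quadrature versus plain Monte Carlo for a
# Gaussian expectation, the low-dimensional "ordinary" integral that the
# method above reduces functional integrals to.
import numpy as np

f = lambda x: np.cos(x)    # E[cos(X)] for X ~ N(0,1); exact value is e^{-1/2}

# Deterministic: probabilists' Gauss-Hermite quadrature with 20 nodes
nodes, weights = np.polynomial.hermite_e.hermegauss(20)
det = np.sum(weights * f(nodes)) / np.sqrt(2 * np.pi)

# Stochastic: plain Monte Carlo with 10^5 samples
mc = np.mean(f(np.random.default_rng(0).normal(size=100_000)))

exact = np.exp(-0.5)
print(f"quadrature error: {abs(det - exact):.2e}, "
      f"Monte Carlo error: {abs(mc - exact):.2e}")
```

With only 20 deterministic nodes the quadrature error is many orders of magnitude below the Monte Carlo error, which is the economy of computer time the abstract reports.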

  2. Deep Learning and Applications in Computational Biology

    KAUST Repository

    Zeng, Jianyang

    2016-01-01

... post-transcriptional gene regulation. Though numerous computational methods have been developed for modeling RBP binding preferences, discovering a complete structural representation of the RBP targets by integrating their available structural features in all three

  3. Graphics processing units in bioinformatics, computational biology and systems biology.

    Science.gov (United States)

    Nobile, Marco S; Cazzaniga, Paolo; Tangherloni, Andrea; Besozzi, Daniela

    2017-09-01

Several studies in Bioinformatics, Computational Biology and Systems Biology rely on the definition of physico-chemical or mathematical models of biological systems at different scales and levels of complexity, ranging from the interaction of atoms in single molecules up to genome-wide interaction networks. Traditional computational methods and software tools developed in these research fields share a common trait: they can be computationally demanding on Central Processing Units (CPUs), therefore limiting their applicability in many circumstances. To overcome this issue, general-purpose Graphics Processing Units (GPUs) are gaining increasing attention from the scientific community, as they can considerably reduce the running time required by standard CPU-based software, and allow more intensive investigations of biological systems. In this review, we present a collection of GPU tools recently developed to perform computational analyses in life science disciplines, emphasizing the advantages and the drawbacks in the use of these parallel architectures. The complete list of GPU-powered tools here reviewed is available at http://bit.ly/gputools. © The Author 2016. Published by Oxford University Press.
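
The pattern shared by many of the reviewed tools is to offload array arithmetic to the GPU while keeping NumPy-style code. A sketch using CuPy follows; it assumes a CUDA-capable GPU and the cupy package for the GPU path, and otherwise falls back to NumPy.

```python
# GPU-offloading pattern: the same array expression runs on CPU (NumPy)
# or GPU (CuPy) because the two libraries expose a nearly identical API.
import numpy as np

try:
    import cupy as xp          # GPU arrays (requires a CUDA GPU)
    on_gpu = True
except ImportError:
    xp = np                    # CPU fallback
    on_gpu = False

# Toy workload: pairwise squared Euclidean distances between 2,000 random
# "expression profiles" of length 500.
profiles = xp.asarray(
    np.random.default_rng(0).standard_normal((2000, 500)).astype(np.float32))
sq = (profiles * profiles).sum(axis=1)
dists = sq[:, None] + sq[None, :] - 2.0 * profiles @ profiles.T
print("backend:", "GPU" if on_gpu else "CPU",
      "| mean squared distance:", float(dists.mean()))
```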

  4. Integrative approaches to computational biomedicine

    Science.gov (United States)

    Coveney, Peter V.; Diaz-Zuccarini, Vanessa; Graf, Norbert; Hunter, Peter; Kohl, Peter; Tegner, Jesper; Viceconti, Marco

    2013-01-01

    The new discipline of computational biomedicine is concerned with the application of computer-based techniques and particularly modelling and simulation to human health. Since 2007, this discipline has been synonymous, in Europe, with the name given to the European Union's ambitious investment in integrating these techniques with the eventual aim of modelling the human body as a whole: the virtual physiological human. This programme and its successors are expected, over the next decades, to transform the study and practice of healthcare, moving it towards the priorities known as ‘4P's’: predictive, preventative, personalized and participatory medicine.

  5. Integrating rehabilitation engineering technology with biologics.

    Science.gov (United States)

    Collinger, Jennifer L; Dicianno, Brad E; Weber, Douglas J; Cui, Xinyan Tracy; Wang, Wei; Brienza, David M; Boninger, Michael L

    2011-06-01

    Rehabilitation engineers apply engineering principles to improve function or to solve challenges faced by persons with disabilities. It is critical to integrate the knowledge of biologics into the process of rehabilitation engineering to advance the field and maximize potential benefits to patients. Some applications in particular demonstrate the value of a symbiotic relationship between biologics and rehabilitation engineering. In this review we illustrate how researchers working with neural interfaces and integrated prosthetics, assistive technology, and biologics data collection are currently integrating these 2 fields. We also discuss the potential for further integration of biologics and rehabilitation engineering to deliver the best technologies and treatments to patients. Engineers and clinicians must work together to develop technologies that meet clinical needs and are accessible to the intended patient population. Copyright © 2011 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.

  6. UC Merced Center for Computational Biology Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Colvin, Michael; Watanabe, Masakatsu

    2010-11-30

    Final report for the UC Merced Center for Computational Biology. The Center for Computational Biology (CCB) was established to support multidisciplinary scientific research and academic programs in computational biology at the new University of California campus in Merced. In 2003, the growing gap between biology research and education was documented in a report from the National Academy of Sciences, Bio2010: Transforming Undergraduate Education for Future Research Biologists. We believed that a new type of undergraduate and graduate program in the biological sciences, one that emphasized biological concepts and considered biology as an information science, would have a dramatic impact in enabling the transformation of biology. UC Merced, as the newest UC campus and the first new U.S. research university of the 21st century, was ideally suited to adopt an alternative strategy: to create new Biological Sciences majors and a graduate group that incorporated the strong computational and mathematical vision articulated in the Bio2010 report. CCB aimed to leverage this strong commitment at UC Merced to develop a new educational program based on the principle of biology as a quantitative, model-driven science. We also expected that the center would enable the dissemination of computational biology course materials to other universities and feeder institutions, and foster research projects that exemplify a mathematical and computation-based approach to the life sciences. As this report describes, the CCB has been successful in achieving these goals, and multidisciplinary computational biology is now an integral part of UC Merced undergraduate, graduate and research programs in the life sciences. The CCB began in fall 2004 with the aid of an award from the U.S. Department of Energy (DOE), under its Genomes to Life program of support for the development of research and educational infrastructure in the modern biological sciences. This report to DOE describes the research and academic programs

  7. Computational Tools for Stem Cell Biology.

    Science.gov (United States)

    Bian, Qin; Cahan, Patrick

    2016-12-01

    For over half a century, the field of developmental biology has leveraged computation to explore mechanisms of developmental processes. More recently, computational approaches have been critical in the translation of high throughput data into knowledge of both developmental and stem cell biology. In the past several years, a new subdiscipline of computational stem cell biology has emerged that synthesizes the modeling of systems-level aspects of stem cells with high-throughput molecular data. In this review, we provide an overview of this new field and pay particular attention to the impact that single cell transcriptomics is expected to have on our understanding of development and our ability to engineer cell fate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Micro-Computers in Biology Inquiry.

    Science.gov (United States)

    Barnato, Carolyn; Barrett, Kathy

    1981-01-01

    Describes the modification of computer programs (BISON and POLLUT) to accommodate species and areas indigenous to the Pacific Coast area. Suggests that these programs, suitable for PET microcomputers, may foster a long-term, ongoing, inquiry-directed approach in biology. (DS)

  9. An integrated methodological approach to the computer-assisted gas chromatographic screening of basic drugs in biological fluids using nitrogen selective detection.

    Science.gov (United States)

    Dugal, R; Massé, R; Sanchez, G; Bertrand, M J

    1980-01-01

    This paper presents the methodological aspects of a computerized system for the gas-chromatographic screening and primary identification of central nervous system stimulants and narcotic analgesics (including some of their respective metabolites) extracted from urine. The operating conditions of a selective nitrogen detector for optimized analytical functions are discussed, particularly the effect of carrier and fuel gas on the detector's sensitivity to nitrogen-containing molecules and discriminating performance toward biological matrix interferences. Application of simple extraction techniques, combined with rapid derivatization procedures, computer data acquisition, and reduction of chromatographic data are presented. Results show that this system approach allows for the screening of several drugs and their metabolites in a short amount of time. The reliability and stability of the system have been tested by analyzing several thousand samples for doping control at major international sporting events and for monitoring drug intake in addicts participating in a rehabilitation program. Results indicate that these techniques can be used and adapted to many different analytical toxicology situations.

  10. Ranked retrieval of Computational Biology models.

    Science.gov (United States)

    Henkel, Ron; Endler, Lukas; Peters, Andre; Le Novère, Nicolas; Waltemath, Dagmar

    2010-08-11

    The study of biological systems demands computational support. If targeting a biological problem, the reuse of existing computational models can save time and effort. Deciding on potentially suitable models, however, becomes more challenging with the increasing number of computational models available, and even more so when considering the models' growing complexity. Firstly, among a set of potential model candidates it is difficult to decide on the model that best suits one's needs. Secondly, it is hard to grasp the nature of an unknown model listed in a search result set, and to judge how well it fits the particular problem one has in mind. Here we present an improved search approach for computational models of biological processes. It is based on existing retrieval and ranking methods from Information Retrieval. The approach incorporates annotations suggested by MIRIAM, and additional meta-information. It is now part of the search engine of BioModels Database, a standard repository for computational models. The introduced concept and implementation are, to our knowledge, the first application of Information Retrieval techniques to model search in Computational Systems Biology. Using the example of BioModels Database, it was shown that the approach is feasible and extends the current possibilities to search for relevant models. The advantages of our system over existing solutions are that we incorporate a rich set of meta-information, and that we provide the user with a relevance ranking of the models found for a query. Better search capabilities in model databases are expected to have a positive effect on the reuse of existing models.
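
    As a toy illustration of the retrieval idea described above, the sketch below ranks a handful of models by TF-IDF cosine similarity between a query and free-text annotations. The model identifiers and annotation strings are invented, and scikit-learn stands in for the production search engine, which uses richer MIRIAM-based meta-information.

        # Toy ranked retrieval over model annotations (hypothetical IDs and
        # annotation text; not the BioModels Database implementation).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        models = {
            "MODEL_A": "cell cycle cyclin cdk oscillation mitosis",
            "MODEL_B": "glycolysis yeast metabolic flux atp",
            "MODEL_C": "mapk signalling cascade phosphorylation erk",
        }

        vectorizer = TfidfVectorizer()
        doc_matrix = vectorizer.fit_transform(models.values())

        query_vec = vectorizer.transform(["cell cycle regulation"])
        scores = cosine_similarity(query_vec, doc_matrix).ravel()

        # Sort models by descending relevance, as a ranking engine would.
        for model_id, score in sorted(zip(models, scores), key=lambda p: -p[1]):
            print(f"{model_id}\t{score:.3f}")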

  11. Computational biology and bioinformatics in Nigeria.

    Science.gov (United States)

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. In this developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups composed of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  12. Computational biology and bioinformatics in Nigeria.

    Directory of Open Access Journals (Sweden)

    Segun A Fatumo

    2014-04-01

    Full Text Available Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. In this developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups composed of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  13. Application of computational systems biology to explore environmental toxicity hazards

    DEFF Research Database (Denmark)

    Audouze, Karine Marie Laure; Grandjean, Philippe

    2011-01-01

    Background: Computer-based modeling is part of a new approach to predictive toxicology. Objectives: We investigated the usefulness of an integrated computational systems biology approach in a case study involving the isomers and metabolites of the pesticide dichlorodiphenyltrichloroethane (DDT......) to ascertain their possible links to relevant adverse effects. Methods: We extracted chemical-protein association networks for each DDT isomer and its metabolites using ChemProt, a disease chemical biology database that includes both binding and gene expression data, and we explored protein-protein interactions...... using a human interactome network. To identify associated dysfunctions and diseases, we integrated protein-disease annotations into the protein complexes using the Online Mendelian Inheritance in Man database and the Comparative Toxicogenomics Database. Results: We found 175 human proteins linked to p,p´-DDT...
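
    The workflow can be pictured as a small graph computation: link chemicals to proteins, expand through protein-protein interactions, then read off disease annotations. The sketch below uses invented identifiers purely to illustrate the expansion step; it is not the ChemProt/OMIM/CTD pipeline itself.

        # Sketch of the network-expansion idea: chemical-protein associations,
        # one protein-protein interaction hop, then disease annotations.
        # All associations below are made up for illustration.
        import networkx as nx

        chem_protein = [("p,p'-DDT", "PROT1"), ("p,p'-DDE", "PROT2")]
        ppi = [("PROT1", "PROT3"), ("PROT2", "PROT4")]
        protein_disease = {"PROT3": "endocrine dysfunction", "PROT4": "fibrosis"}

        g = nx.Graph()
        g.add_edges_from(chem_protein + ppi)

        for chem in ("p,p'-DDT", "p,p'-DDE"):
            # proteins within two steps: direct targets plus one PPI hop
            near = nx.single_source_shortest_path_length(g, chem, cutoff=2)
            diseases = {protein_disease[n] for n in near if n in protein_disease}
            print(chem, "->", diseases)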

  14. Integrative Systems Biology Applied to Toxicology

    DEFF Research Database (Denmark)

    Kongsbak, Kristine Grønning

    associated with combined exposure to multiple chemicals. Testing all possible combinations of the tens of thousands of environmental chemicals is impractical. This PhD project was launched to apply existing computational systems biology methods to toxicological research. In this thesis, I present in three...... of a system, thereby suggesting new ways of thinking about specific toxicological endpoints. Furthermore, computational methods can serve as valuable input for the hypothesis-generating phase of the preparation of a research project....

  15. 2nd Colombian Congress on Computational Biology and Bioinformatics

    CERN Document Server

    Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan

    2014-01-01

    This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress CCBCOL, after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and its integration with Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data which needs to be organized, analyzed and stored to understand phenomena associated with living organisms related to their evolution, behavior in different ecosystems, and the development of applications that can be derived from this analysis.

  16. Integrated Network Analysis and Effective Tools in Plant Systems Biology

    Directory of Open Access Journals (Sweden)

    Atsushi eFukushima

    2014-11-01

    Full Text Available One of the ultimate goals in plant systems biology is to elucidate the genotype-phenotype relationship in plant cellular systems. Integrated network analysis that combines omics data with mathematical models has received particular attention. Here we focus on the latest cutting-edge computational advances that facilitate their combination. We highlight (1) network visualization tools, (2) pathway analyses, (3) genome-scale metabolic reconstruction, and (4) the integration of high-throughput experimental data and mathematical models. Multi-omics data that span the genome, transcriptome, proteome, and metabolome, together with mathematical models, are expected to integrate and expand our knowledge of complex plant metabolism.

  17. Calculation of integrated biological response in brachytherapy

    International Nuclear Information System (INIS)

    Dale, Roger G.; Coles, Ian P.; Deehan, Charles; O'Donoghue, Joseph A.

    1997-01-01

    Purpose: To present analytical methods for calculating or estimating the integrated biological response in brachytherapy applications, and which allow for the presence of dose gradients. Methods and Materials: The approach uses linear-quadratic (LQ) formulations to identify an equivalent biologically effective dose (BED_eq) which, if applied to a specified tissue volume, would produce the same biological effect as that achieved by a given brachytherapy application. For simple geometrical cases, BED multiplying factors have been derived which allow the equivalent BED for tumors to be estimated from a single BED value calculated at a dose reference point. For more complex brachytherapy applications a voxel-by-voxel determination of the equivalent BED will be more accurate. Equations are derived which when incorporated into brachytherapy software would facilitate such a process. Results: At both high and low dose rates, the BEDs calculated at the dose reference point are shown to be lower than the true values by an amount which depends primarily on the magnitude of the prescribed dose; the BED multiplying factors are higher for smaller prescribed doses. The multiplying factors are less dependent on the assumed radiobiological parameters. In most clinical applications involving multiple sources, particularly those in multiplanar arrays, the multiplying factors are likely to be smaller than those derived here for single sources. The overall suggestion is that the radiobiological consequences of dose gradients in well-designed brachytherapy treatments, although important, may be less significant than is sometimes supposed. The modeling exercise also demonstrates that the integrated biological effect associated with fractionated high-dose-rate (FHDR) brachytherapy will usually be different from that for an 'equivalent' continuous low-dose-rate (CLDR) regime. For practical FHDR regimes involving relatively small numbers of fractions, the integrated biological effect to
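
    For the fractionated case, the underlying LQ quantity is BED = n d (1 + d/(alpha/beta)). The sketch below computes BED voxel by voxel for a toy dose fall-off and then collapses it to a single equivalent value via survival-weighted averaging; that averaging step is a simple stand-in assumption, not the multiplying-factor method derived in the paper.

        # Voxel-by-voxel BED for a fractionated treatment using the standard
        # LQ expression BED = n*d*(1 + d/(alpha/beta)). The "equivalent" BED
        # here is a survival-weighted average with an assumed alpha, a
        # shortcut rather than the paper's method. All values are made up.
        import numpy as np

        alpha_beta = 10.0    # Gy, typical tumour alpha/beta ratio
        alpha = 0.3          # 1/Gy, assumed for the averaging step
        n_fractions = 6

        d_voxel = np.array([8.0, 6.5, 5.2, 4.1, 3.3])   # Gy per fraction

        bed_voxel = n_fractions * d_voxel * (1.0 + d_voxel / alpha_beta)

        # Uniform BED that would give the same mean cell survival.
        survival = np.exp(-alpha * bed_voxel)
        bed_eq = -np.log(survival.mean()) / alpha

        print("per-voxel BED (Gy):", np.round(bed_voxel, 1))
        print(f"equivalent BED: {bed_eq:.1f} Gy")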

  18. Computational intelligence, medicine and biology selected links

    CERN Document Server

    Zaitseva, Elena

    2015-01-01

    This book contains an interesting and state-of-the-art collection of chapters presenting several examples of attempts to develop modern tools utilizing computational intelligence in different real-life problems encountered by humans. Reasoning, prediction, modeling, optimization, decision making, etc. need modern, soft and intelligent algorithms, methods and methodologies to solve, in efficient ways, problems appearing in human activity. The contents of the book are divided into two parts. Part I, consisting of four chapters, is devoted to selected links of computational intelligence, medicine, health care and biomechanics. Several problems are considered: estimation of healthcare system reliability, classification of ultrasound thyroid images, application of fuzzy logic to measure weight status and central fatness, and deriving kinematics directly from video records. Part II, also consisting of four chapters, is devoted to selected links of computational intelligence and biology. The common denominato...

  19. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar

    2016-03-21

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.
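
    A minimal sketch of the aspect-combination idea follows: score two models by the Jaccard overlap of their biological annotations and of their network edges, mixed with a tunable weight. The aspect choice, identifiers, and weighting are all illustrative assumptions, not a measure proposed in the paper.

        # Combined model-similarity score from two aspects: annotation
        # overlap and reaction-network edge overlap (invented identifiers).
        def jaccard(a, b):
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        model_a = {
            "annotations": {"GO:0007049", "CHEBI:17234"},
            "edges": {("glucose", "g6p"), ("g6p", "f6p")},
        }
        model_b = {
            "annotations": {"GO:0007049", "CHEBI:15422"},
            "edges": {("glucose", "g6p"), ("f6p", "fbp")},
        }

        w = 0.5  # weight between annotation and structural aspects
        score = (w * jaccard(model_a["annotations"], model_b["annotations"])
                 + (1 - w) * jaccard(model_a["edges"], model_b["edges"]))
        print(f"combined similarity: {score:.2f}")   # 0.5*(1/3) + 0.5*(1/3) = 0.33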

  20. Notions of similarity for computational biology models

    KAUST Repository

    Waltemath, Dagmar; Henkel, Ron; Hoehndorf, Robert; Kacprowski, Tim; Knuepfer, Christian; Liebermeister, Wolfram

    2016-01-01

    Computational models used in biology are rapidly increasing in complexity, size, and numbers. To build such large models, researchers need to rely on software tools for model retrieval, model combination, and version control. These tools need to be able to quantify the differences and similarities between computational models. However, depending on the specific application, the notion of similarity may greatly vary. A general notion of model similarity, applicable to various types of models, is still missing. Here, we introduce a general notion of quantitative model similarities, survey the use of existing model comparison methods in model building and management, and discuss potential applications of model comparison. To frame model comparison as a general problem, we describe a theoretical approach to defining and computing similarities based on different model aspects. Potentially relevant aspects of a model comprise its references to biological entities, network structure, mathematical equations and parameters, and dynamic behaviour. Future similarity measures could combine these model aspects in flexible, problem-specific ways in order to mimic users' intuition about model similarity, and to support complex model searches in databases.

  1. Multiobjective optimization in bioinformatics and computational biology.

    Science.gov (United States)

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts" giving rise to multiple objectives; these are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
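
    At the core of any such application is Pareto dominance. The brute-force filter below extracts the non-dominated set for two minimization objectives (say, model error versus model complexity); the candidate values are invented, and real multiobjective optimizers use far more efficient algorithms than this sketch.

        # Minimal Pareto (non-dominated) filter for two minimization
        # objectives; brute force is fine for small candidate sets.
        def dominates(p, q):
            """p dominates q: no worse in all objectives, better in one."""
            return (all(a <= b for a, b in zip(p, q))
                    and any(a < b for a, b in zip(p, q)))

        candidates = [(0.9, 2), (0.7, 5), (0.5, 9), (0.8, 4), (0.6, 9)]

        pareto = [p for p in candidates
                  if not any(dominates(q, p) for q in candidates if q != p)]
        print(pareto)   # (0.6, 9) is dominated by (0.5, 9) and dropped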

  2. Data Integration and Mining for Synthetic Biology Design.

    Science.gov (United States)

    Mısırlı, Göksel; Hallinan, Jennifer; Pocock, Matthew; Lord, Phillip; McLaughlin, James Alastair; Sauro, Herbert; Wipat, Anil

    2016-10-21

    One aim of synthetic biologists is to create novel and predictable biological systems from simpler modular parts. This approach is currently hampered by a lack of well-defined and characterized parts and devices. There is, however, a wealth of existing biological information in the literature and in numerous biological databases, which can be used to identify and characterize biological parts and their design constraints. This information is spread among these sources in many different formats. New computational approaches are required to make this information available in an integrated format that is more amenable to data mining. A tried and tested approach to this problem is to map disparate data sources into a single data set, with common syntax and semantics, to produce a data warehouse or knowledge base. Ontologies have been used extensively in the life sciences, providing this common syntax and semantics as a model for a given biological domain, in a fashion that is amenable to computational analysis and reasoning. Here, we present an ontology for applications in synthetic biology design, SyBiOnt, which facilitates the modeling of information about biological parts and their relationships. SyBiOnt was used to create the SyBiOntKB knowledge base, incorporating and building upon existing life sciences ontologies and standards. The reasoning capabilities of ontologies were then applied to automate the mining of biological parts from this knowledge base. We propose that this approach will be useful to speed up synthetic biology design and ultimately help facilitate the automation of the biological engineering life cycle.
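
    The flavour of an ontology-backed parts knowledge base can be sketched with rdflib: assert a few typed parts and query them with SPARQL. The namespace, class, and property names below are invented for illustration; SyBiOnt defines its own vocabulary.

        # Tiny RDF knowledge base of synthetic-biology parts plus a SPARQL
        # query (hypothetical vocabulary, not SyBiOnt's actual terms).
        from rdflib import Graph, Namespace, Literal, RDF

        EX = Namespace("http://example.org/parts#")
        g = Graph()

        g.add((EX.pBAD, RDF.type, EX.Promoter))
        g.add((EX.pBAD, EX.inducedBy, Literal("arabinose")))
        g.add((EX.gfp, RDF.type, EX.CodingSequence))

        # Find all promoters and what induces them.
        q = """
        PREFIX ex: <http://example.org/parts#>
        SELECT ?part ?inducer WHERE {
            ?part a ex:Promoter ;
                  ex:inducedBy ?inducer .
        }
        """
        for part, inducer in g.query(q):
            print(part, inducer)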

  3. Computing chemical organizations in biological networks.

    Science.gov (United States)

    Centler, Florian; Kaleta, Christoph; di Fenizio, Pietro Speroni; Dittrich, Peter

    2008-07-15

    Novel techniques are required to analyze computational models of intracellular processes as they increase steadily in size and complexity. The theory of chemical organizations has recently been introduced as such a technique that links the topology of biochemical reaction network models to their dynamical repertoire. The network is decomposed into algebraically closed and self-maintaining subnetworks called organizations. They form a hierarchy representing all feasible system states including all steady states. We present three algorithms to compute the hierarchy of organizations for network models provided in SBML format. Two of them compute the complete organization hierarchy, while the third one uses heuristics to obtain a subset of all organizations for large models. While the constructive approach computes the hierarchy starting from the smallest organization in a bottom-up fashion, the flux-based approach employs self-maintaining flux distributions to determine organizations. A runtime comparison on 16 different network models of natural systems showed that neither of the two exhaustive algorithms is superior in all cases. Studying a 'genome-scale' network model with 762 species and 1193 reactions, we demonstrate how the organization hierarchy helps to uncover the model structure and allows the model's quality to be evaluated, for example by detecting components and subsystems of the model whose maintenance is not explained by the model. All data and a Java implementation that plugs into the Systems Biology Workbench are available from http://www.minet.uni-jena.de/csb/prj/ot/tools.
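
    The two defining properties can be checked directly on a toy network: a species set is closed if its applicable reactions produce nothing outside it, and self-maintaining if some strictly positive flux over the applicable reactions gives every member a non-negative net production. The brute-force sketch below (using scipy's LP solver for the flux test) only illustrates the definitions; it is not one of the three algorithms from the paper.

        # Brute-force organization check on a toy network:
        # a+b -> 2b, b -> c, c -> a.
        from itertools import combinations
        from scipy.optimize import linprog

        species = ["a", "b", "c"]
        reactions = [({"a": 1, "b": 1}, {"b": 2}),
                     ({"b": 1}, {"c": 1}),
                     ({"c": 1}, {"a": 1})]

        def closed(S):
            return all(set(prod) <= S
                       for react, prod in reactions if set(react) <= S)

        def self_maintaining(S):
            applicable = [i for i, (react, _) in enumerate(reactions)
                          if set(react) <= S]
            if not applicable:
                return True
            # Net production of every species in S must be >= 0: -N v <= 0.
            A_ub = [[-(reactions[i][1].get(s, 0) - reactions[i][0].get(s, 0))
                     for i in applicable] for s in S]
            bounds = [(1e-6, None)] * len(applicable)  # strictly positive flux
            res = linprog(c=[0] * len(applicable), A_ub=A_ub,
                          b_ub=[0] * len(S), bounds=bounds)
            return res.success

        for r in range(len(species) + 1):
            for S in map(set, combinations(species, r)):
                if closed(S) and self_maintaining(S):
                    print("organization:", sorted(S) or "{}")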

  4. WISB: Warwick Integrative Synthetic Biology Centre.

    Science.gov (United States)

    McCarthy, John

    2016-06-15

    Synthetic biology promises to create high-impact solutions to challenges in the areas of biotechnology, human/animal health, the environment, energy, materials and food security. Equally, synthetic biologists create tools and strategies that have the potential to help us answer important fundamental questions in biology. Warwick Integrative Synthetic Biology (WISB) pursues both of these mutually complementary 'build to apply' and 'build to understand' approaches. This is reflected in our research structure, in which a core theme on predictive biosystems engineering develops underpinning understanding as well as next-generation experimental/theoretical tools, and these are then incorporated into three applied themes in which we engineer biosynthetic pathways, microbial communities and microbial effector systems in plants. WISB takes a comprehensive approach to training, education and outreach. For example, WISB is a partner in the EPSRC/BBSRC-funded U.K. Doctoral Training Centre in synthetic biology, we have developed a new undergraduate module in the subject, and we have established five WISB Research Career Development Fellowships to support young group leaders. Research in Ethical, Legal and Societal Aspects (ELSA) of synthetic biology is embedded in our centre activities. WISB has been highly proactive in building an international research and training network that includes partners in Barcelona, Boston, Copenhagen, Madrid, Marburg, São Paulo, Tartu and Valencia. © 2016 The Author(s).

  5. Computational Modeling of Biological Systems From Molecules to Pathways

    CERN Document Server

    2012-01-01

    Computational modeling is emerging as a powerful new approach for studying and manipulating biological systems. Many diverse methods have been developed to model, visualize, and rationally alter these systems at various length scales, from atomic resolution to the level of cellular pathways. Processes taking place at larger time and length scales, such as molecular evolution, have also greatly benefited from new breeds of computational approaches. Computational Modeling of Biological Systems: From Molecules to Pathways provides an overview of established computational methods for the modeling of biologically and medically relevant systems. It is suitable for researchers and professionals working in the fields of biophysics, computational biology, systems biology, and molecular medicine.

  6. Toward computational cumulative biology by combining models of biological datasets.

    Science.gov (United States)

    Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel

    2014-01-01

    A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations; for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database.
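
    One way to picture the decomposition step is non-negative regression of a new profile onto archived dataset summaries. The sketch below uses mean profiles and scipy's NNLS purely as an illustration; the paper's engine combines probabilistic models rather than raw means, and all dataset names are invented.

        # Decompose a new expression profile as a non-negative combination
        # of archived dataset "models" (here just mean profiles), then rank
        # the archive by contribution.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        archive = {                        # hypothetical per-dataset profiles
            "liver":  rng.random(50),
            "thymus": rng.random(50),
            "t_cell": rng.random(50),
        }

        basis = np.column_stack(list(archive.values()))
        new_dataset = 0.7 * archive["t_cell"] + 0.3 * archive["thymus"]

        weights, residual = nnls(basis, new_dataset)
        for name, w in sorted(zip(archive, weights), key=lambda p: -p[1]):
            print(f"{name}\t{w:.2f}")      # t_cell ~0.70, thymus ~0.30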

  7. Integrated Computer System of Management in Logistics

    Science.gov (United States)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims to present a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates these computer-aided management systems is that of electronic data interchange.

  8. Utilizing Computer Integration to Assist Nursing

    OpenAIRE

    Hujcs, Marianne

    1990-01-01

    As the use of computers in health care continues to increase, methods of using these computers to assist nursing practice are also increasing. This paper describes how integration within a hospital information system (HIS) contributed to the development of a report format and computer-generated alerts used by nurses. Discussion also includes how the report and alerts impact those nurses providing bedside care, as well as how integration of an HIS creates challenges for nursing.

  9. Cyber integrated MEMS microhand for biological applications

    Science.gov (United States)

    Weissman, Adam; Frazier, Athena; Pepen, Michael; Lu, Yen-Wen; Yang, Shanchieh Jay

    2009-05-01

    Anthropomorphous robotic hands at microscales have been developed to receive information and perform tasks for biological applications. To emulate a human hand's dexterity, the microhand requires a master-slave interface with a wearable controller, force sensors, and perception displays for tele-manipulation. Recognizing the constraints and complexity imposed on developing a feedback interface during miniaturization, this project addresses the need by creating an integrated cyber environment incorporating sensors with a microhand, haptic/visual displays, and an object model, to emulate the human hand's psychophysical perception at microscale.

  10. Data mining and data integration in biology

    DEFF Research Database (Denmark)

    Ólason, Páll Ísólfur

    2008-01-01

    They also necessitate new ways of data preparation as established methods for sequence sets are often useless when dealing with sets of sequence pairs. Therefore careful analysis on the sequence level as well as the integrated network level is needed to benchmark these data prior to use. The networks, which...... between molecules, the essence of systems biology. Internet technologies are very important in this respect as bioinformatics labs around the world generate staggering amounts of novel annotations, increasing the importance of on-line processing and distributed systems. One of the most important new data...... types in proteomics is protein-protein interactions. Interactions between the functional elements in the cell are a natural place to start when integrating protein annotations with the aim of gaining a systems view of the cell. Interaction data, however, are notoriously biased, erroneous and incomplete...

  11. Computer Models and Automata Theory in Biology and Medicine

    CERN Document Server

    Baianu, I C

    2004-01-01

    The applications of computers to biological and biomedical problem solving go back to the very beginnings of computer science, automata theory [1], and mathematical biology [2]. With the advent of more versatile and powerful computers, biological and biomedical applications of computers have proliferated so rapidly that it would be virtually impossible to compile a comprehensive review of all developments in this field. Limitations of computer simulations in biology have also come under close scrutiny, and claims have been made that biological systems have limited information processing power [3]. Such general conjectures do not, however, deter biologists and biomedical researchers from developing new computer applications in biology and medicine. Microprocessors are being widely employed in biological laboratories both for automatic data acquisition/processing and modeling; one particular area, which is of great biomedical interest, involves fast digital image processing and is already established for rout...

  12. An Integrated Biological Control System At Hanford

    International Nuclear Information System (INIS)

    Johnson, A.R.; Caudill, J.G.; Giddings, R.F.; Rodriguez, J.M.; Roos, R.C.; Wilde, J.W.

    2010-01-01

    In 1999 an integrated biological control system was instituted at the U.S. Department of Energy's Hanford Site. Successes and changes to the program needed to be communicated to a large and diverse mix of organizations and individuals. Efforts at communication are directed toward the following: Hanford Contractors (Liquid or Tank Waste, Solid Waste, Environmental Restoration, Science and Technology, Site Infrastructure), General Hanford Employees, and the Hanford Advisory Board (Native American Tribes, Environmental Groups, Local Citizens, Washington State and Oregon State regulatory agencies). Communication was done through direct interface meetings, individual communication where appropriate, and broadly sharing program reports. The objectives of the communication efforts were to have the program well coordinated with Hanford contractors, and to have the program understood well enough that all stakeholders would have confidence in the work performed by the program to reduce or eliminate the spread of radioactive contamination by biotic vectors. Communication of the successes and changes to the integrated biological control system instituted in 1999 at the Department of Energy's Hanford Site has required regular interfaces with not only a diverse group of Hanford contractors (i.e., those responsible for liquid or tank waste, solid wastes, environmental restoration, science and technology, and site infrastructure) and general Hanford employees, but also with a consortium of designated stakeholders organized as the Hanford Advisory Board (i.e., Native American tribes, various environmental groups, local citizens, Washington state and Oregon regulatory agencies, etc.). Direct interface meetings, individual communication where appropriate, and transparency of the biological control program were the methods and outcome of this effort.

  13. AN INTEGRATED BIOLOGICAL CONTROL SYSTEM AT HANFORD

    Energy Technology Data Exchange (ETDEWEB)

    JOHNSON AR; CAUDILL JG; GIDDINGS RF; RODRIGUEZ JM; ROOS RC; WILDE JW

    2010-02-11

    In 1999 an integrated biological control system was instituted at the U.S. Department of Energy's Hanford Site. Successes and changes to the program needed to be communicated to a large and diverse mix of organizations and individuals. Efforts at communication are directed toward the following: Hanford Contractors (Liquid or Tank Waste, Solid Waste, Environmental Restoration, Science and Technology, Site Infrastructure), General Hanford Employees, and the Hanford Advisory Board (Native American Tribes, Environmental Groups, Local Citizens, Washington State and Oregon State regulatory agencies). Communication was done through direct interface meetings, individual communication where appropriate, and broadly sharing program reports. The objectives of the communication efforts were to have the program well coordinated with Hanford contractors, and to have the program understood well enough that all stakeholders would have confidence in the work performed by the program to reduce or eliminate the spread of radioactive contamination by biotic vectors. Communication of the successes and changes to the integrated biological control system instituted in 1999 at the Department of Energy's Hanford Site has required regular interfaces with not only a diverse group of Hanford contractors (i.e., those responsible for liquid or tank waste, solid wastes, environmental restoration, science and technology, and site infrastructure) and general Hanford employees, but also with a consortium of designated stakeholders organized as the Hanford Advisory Board (i.e., Native American tribes, various environmental groups, local citizens, Washington state and Oregon regulatory agencies, etc.). Direct interface meetings, individual communication where appropriate, and transparency of the biological control program were the methods and outcome of this effort.

  14. Informing biological design by integration of systems and synthetic biology.

    Science.gov (United States)

    Smolke, Christina D; Silver, Pamela A

    2011-03-18

    Synthetic biology aims to make the engineering of biology faster and more predictable. In contrast, systems biology focuses on the interaction of myriad components and how these give rise to the dynamic and complex behavior of biological systems. Here, we examine the synergies between these two fields. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Computing one of Victor Moll's irresistible integrals with computer algebra

    Directory of Open Access Journals (Sweden)

    Christoph Koutschan

    2008-04-01

    Full Text Available We investigate a certain quartic integral from V. Moll's book “Irresistible Integrals” and demonstrate how it can be solved by computer algebra methods, namely by using non-commutative Gröbner bases. We present recent implementations in the computer algebra systems SINGULAR and MATHEMATICA.
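
    The quartic integral in question is commonly written as N(a; m) = the integral from 0 to infinity of dx/(x^4 + 2a x^2 + 1)^(m+1); for m = 0 its closed form is pi/(2^(3/2) sqrt(a + 1)). Assuming that is indeed the integral meant, the sketch below cross-checks the closed form numerically with mpmath rather than re-deriving it via Gröbner bases.

        # Numerical sanity check of the m = 0 closed form of Moll's quartic
        # integral (assumed form; the paper derives such identities
        # symbolically with non-commutative Groebner bases).
        import mpmath as mp

        def quartic_integral(a):
            return mp.quad(lambda x: 1 / (x**4 + 2 * a * x**2 + 1), [0, mp.inf])

        def closed_form(a):
            return mp.pi / (2**1.5 * mp.sqrt(a + 1))

        for a in (0.5, 1.0, 3.0):
            print(a, mp.nstr(quartic_integral(a), 10), mp.nstr(closed_form(a), 10))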

  16. Bibliography for computer security, integrity, and safety

    Science.gov (United States)

    Bown, Rodney L.

    1991-01-01

    A bibliography of computer security, integrity, and safety issues is given. The bibliography is divided into the following sections: recent national publications; books; journal, magazine articles, and miscellaneous reports; conferences, proceedings, and tutorials; and government documents and contractor reports.

  17. Integrated computer aided design simulation and manufacture

    OpenAIRE

    Diko, Faek

    1989-01-01

    Computer Aided Design (CAD) and Computer Aided Manufacture (CAM) have been investigated and developed over the past twenty years as standalone systems. A large number of very powerful but independent packages have been developed for Computer Aided Design, Analysis and Manufacture. However, in most cases these packages have poor facilities for communicating with other packages. Recently attempts have been made to develop integrated CAD/CAM systems and many software companies a...

  18. Deep Learning and Applications in Computational Biology

    KAUST Repository

    Zeng, Jianyang

    2016-01-26

    RNA-binding proteins (RBPs) play important roles in the post-transcriptional control of RNAs. Identifying RBP binding sites and characterizing RBP binding preferences are key steps toward understanding the basic mechanisms of the post-transcriptional gene regulation. Though numerous computational methods have been developed for modeling RBP binding preferences, discovering a complete structural representation of the RBP targets by integrating their available structural features in all three dimensions is still a challenging task. In this work, we develop a general and flexible deep learning framework for modeling structural binding preferences and predicting binding sites of RBPs, which takes (predicted) RNA tertiary structural information into account for the first time. Our framework constructs a unified representation that characterizes the structural specificities of RBP targets in all three dimensions, which can be further used to predict novel candidate binding sites and discover potential binding motifs. Through testing on the real CLIP-seq datasets, we have demonstrated that our deep learning framework can automatically extract effective hidden structural features from the encoded raw sequence and structural profiles, and predict accurate RBP binding sites. In addition, we have conducted the first study to show that integrating the additional RNA tertiary structural features can improve the model performance in predicting RBP binding sites, especially for the polypyrimidine tract-binding protein (PTB), which also provides new evidence to support the view that RBPs may own specific tertiary structural binding preferences. In particular, the tests on the internal ribosome entry site (IRES) segments yield satisfactory results with experimental support from the literature and further demonstrate the necessity of incorporating RNA tertiary structural information into the prediction model. The source code of our approach can be found at https://github.com/thucombio/deepnet-rbp.
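
    The sequence-level core of such a framework can be sketched as a small 1D convolutional network over one-hot RNA; structure profiles would enter as extra input channels. Everything below (sizes, filters, sequences) is a toy stand-in for the actual architecture.

        # Minimal 1D-CNN scoring of one-hot RNA sequences (toy shapes; the
        # published framework also encodes structural profiles).
        import torch
        import torch.nn as nn

        def one_hot(seq, alphabet="ACGU"):
            idx = torch.tensor([alphabet.index(c) for c in seq])
            return nn.functional.one_hot(idx, len(alphabet)).T.float()  # (4, L)

        model = nn.Sequential(
            nn.Conv1d(4, 8, kernel_size=6),   # motif-detector filters
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),          # strongest motif match per filter
            nn.Flatten(),
            nn.Linear(8, 1),
            nn.Sigmoid(),                     # binding-site probability
        )

        batch = torch.stack([one_hot("ACGUACGUACGUACGU"),
                             one_hot("UUUUCUUCUUUUCUUU")])  # (2, 4, 16)
        print(model(batch).squeeze(-1))       # untrained scores in (0, 1)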

  19. Call Centre- Computer Telephone Integration

    Directory of Open Access Journals (Sweden)

    Dražen Kovačević

    2012-10-01

    Full Text Available Call centres largely came into being as a result of consumer needs converging with enabling technology, and by companies recognising the revenue opportunities generated by meeting those needs, thereby increasing customer satisfaction. Regardless of the specific application or activity of a Call centre, customer satisfaction with the interaction is critical to the revenue generated or protected by the Call centre. Physically, a Call centre set-up is a place that includes a computer, telephone and supervisor station. A Call centre can be available 24 hours a day - when the customer wants to make a purchase, needs information, or simply wishes to register a complaint.

  20. Teaching the fundamentals of biological data integration using classroom games.

    Directory of Open Access Journals (Sweden)

    Maria Victoria Schneider

    Full Text Available This article aims to introduce the nature of data integration to life scientists. Generally, the subject of data integration is not discussed outside the field of computational science, and it is covered in little detail, or even neglected, when teaching and training trainees. End users (here defined as wet-lab trainees, clinicians, and lab researchers) will mostly interact with bioinformatics resources and tools through web interfaces that mask the user from the data integration processes. However, the lack of formal training or acquaintance with even simple database concepts and terminology often results in a real obstacle to the full comprehension of the resources and tools the end users wish to access. Understanding how data integration works is fundamental to empowering trainees to see the limitations as well as the possibilities when exploring, retrieving, and analysing biological data from databases. Here we introduce a game-based learning activity for training/teaching the topic of data integration that trainers/educators can adopt and adapt for their classroom. In particular we provide an example using DAS (Distributed Annotation System) as a method for data integration.

  1. Modeling Cancer Metastasis using Global, Quantitative and Integrative Network Biology

    DEFF Research Database (Denmark)

    Schoof, Erwin; Erler, Janine

    understanding of molecular processes which are fundamental to tumorigenesis. In Article 1, we propose a novel framework for how cancer mutations can be studied by taking into account their effect at the protein network level. In Article 2, we demonstrate how global, quantitative data on phosphorylation dynamics...... can be generated using MS, and how this can be modeled using a computational framework for deciphering kinase-substrate dynamics. This framework is described in depth in Article 3, and covers the design of KinomeXplorer, which allows the prediction of kinases responsible for modulating observed...... phosphorylation dynamics in a given biological sample. In Chapter III, we move into Integrative Network Biology, where, by combining two fundamental technologies (MS & NGS), we can obtain more in-depth insights into the links between cellular phenotype and genotype. Article 4 describes the proof...

  2. A first attempt to bring computational biology into advanced high school biology classrooms.

    Science.gov (United States)

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology unit on genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.
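
    A classroom exercise of the kind such a unit aims at might be a Wright-Fisher simulation of genetic drift, short enough for students to modify. The parameters below are illustrative; this is a reconstruction of the style of activity, not the published curriculum itself.

        # Classroom-sized Wright-Fisher simulation of genetic drift:
        # track one allele's frequency as each generation is a random
        # sample of the previous one.
        import random

        def wright_fisher(pop_size=100, p0=0.5, generations=50):
            p = p0
            trajectory = [p]
            for _ in range(generations):
                # Each of the pop_size gene copies is drawn from the parents.
                copies = sum(random.random() < p for _ in range(pop_size))
                p = copies / pop_size
                trajectory.append(p)
            return trajectory

        random.seed(1)
        traj = wright_fisher()
        print(" -> ".join(f"{p:.2f}" for p in traj[::10]))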

  3. 7th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Nanni, Loris; Rocha, Miguel; Fdez-Riverola, Florentino

    2013-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable, and the trend is for this pace to increase. In fact, the need for computational techniques that can efficiently handle the huge amounts of data produced by the new experimental techniques in Biology is still increasing, driven by new advances in Next Generation Sequencing, several types of the so-called omics data and image acquisition, just to name a few. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Within this scenario of increasing data availability, Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in the last decades. Indeed, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we ...

  4. Biological condition gradient: Applying a framework for determining the biological integrity of coral reefs

    Science.gov (United States)

    The goals of the U.S. Clean Water Act (CWA) are to restore and maintain the chemical, physical and biological integrity of water resources. Although clean water is a goal, another is to safeguard biological communities by defining levels of biological integrity to protect aquatic...

  5. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    Science.gov (United States)

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in the life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are needed, as are efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.
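
    The kernel at the heart of such a case study is classical: Smith-Waterman local alignment. A plain serial version with a linear gap penalty is sketched below; HPC platforms vectorize or tile this same recurrence across CPU cores, GPUs, or accelerators. The scoring parameters are illustrative.

        # Serial Smith-Waterman local alignment (score only) with a linear
        # gap penalty; HPC implementations parallelize this recurrence.
        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i, j] = max(0,
                                  H[i - 1, j - 1] + s,  # (mis)match
                                  H[i - 1, j] + gap,    # gap in b
                                  H[i, j - 1] + gap)    # gap in a
            return int(H.max())                         # best local score

        print(smith_waterman("ACACACTA", "AGCACACA"))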

  6. An integrated computational tool for precipitation simulation

    Science.gov (United States)

    Cao, W.; Zhang, F.; Chen, S.-L.; Zhang, C.; Chang, Y. A.

    2011-07-01

    Computer-aided materials design is of increasing interest because the conventional approach, relying solely on experimentation, is no longer viable within the constraint of available resources. Modeling of microstructure and mechanical properties during precipitation plays a critical role in understanding the behavior of materials and thus accelerating the development of materials. Nevertheless, an integrated computational tool coupling reliable thermodynamic calculation, kinetic simulation, and property prediction of multi-component systems for industrial applications is rarely available. In this regard, we are developing a software package, PanPrecipitation, under the framework of integrated computational materials engineering to simulate precipitation kinetics. It is seamlessly integrated with the thermodynamic calculation engine, PanEngine, to obtain accurate thermodynamic properties and atomic mobility data necessary for precipitation simulation.
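
    The kind of kinetic quantity such a simulation produces can be illustrated with the classical Johnson-Mehl-Avrami-Kolmogorov relation X(t) = 1 - exp(-(k t)^n) for the fraction transformed. PanPrecipitation couples full thermodynamic and mobility data; the constants below are purely illustrative.

        # Fraction transformed from the JMAK relation (illustrative values,
        # not PanPrecipitation output).
        import math

        k, n = 1e-3, 2.5           # rate constant (1/s) and Avrami exponent
        for t in (60, 600, 1800, 3600, 7200):
            x = 1 - math.exp(-(k * t) ** n)
            print(f"t = {t:5d} s   transformed fraction = {x:.3f}")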

  7. Biocellion: accelerating computer simulation of multicellular biological system models.

    Science.gov (United States)

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-11-01

    Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
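
    The agent-based flavour of such simulations can be conveyed with a tiny grid model of cell sorting: two cell types swap places when that increases like-neighbour contact, a greatly simplified differential-adhesion rule. This is a toy in the spirit of the cell-sorting case study, not Biocellion code.

        # Toy grid-based cell-sorting simulation: greedy swaps that increase
        # like-neighbour contact (crude stand-in for differential adhesion).
        import random

        random.seed(0)
        N = 20
        grid = [[random.choice("AB") for _ in range(N)] for _ in range(N)]

        def like_neighbours(i, j):
            t = grid[i][j]
            return sum(grid[(i + di) % N][(j + dj) % N] == t
                       for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

        for _ in range(20000):                  # random swap attempts
            i, j = random.randrange(N), random.randrange(N)
            di, dj = random.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            k, l = (i + di) % N, (j + dj) % N
            before = like_neighbours(i, j) + like_neighbours(k, l)
            grid[i][j], grid[k][l] = grid[k][l], grid[i][j]
            after = like_neighbours(i, j) + like_neighbours(k, l)
            if after < before:                  # undo swaps that hurt sorting
                grid[i][j], grid[k][l] = grid[k][l], grid[i][j]

        print("\n".join("".join(row) for row in grid))  # A/B clusters emerge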

  8. Integrating Computer-Mediated Communication Strategy Instruction

    Science.gov (United States)

    McNeil, Levi

    2016-01-01

    Communication strategies (CSs) play important roles in resolving problematic second language interaction and facilitating language learning. While studies in face-to-face contexts demonstrate the benefits of communication strategy instruction (CSI), there have been few attempts to integrate computer-mediated communication and CSI. The study…

  9. 8th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Santana, Juan

    2014-01-01

    Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists who have a strong background in the biological and computational sciences. In this context, the interaction of researche...

  10. 11th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Mohamad, Mohd; Rocha, Miguel; Paz, Juan; Pinto, Tiago

    2017-01-01

    Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and constantly evolving, distinct types of omics data technologies, have created an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information and requires tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists with a strong background in the biological and computational sciences. In this context, the interaction of r...

  11. 10th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Mayo, Francisco; Paz, Juan

    2016-01-01

    Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists who have a strong background in the biological and computational sciences. In this context, the interaction of researche...

  12. Applications of membrane computing in systems and synthetic biology

    CERN Document Server

    Gheorghe, Marian; Pérez-Jiménez, Mario

    2014-01-01

    Membrane Computing was introduced as a computational paradigm in Natural Computing. The models introduced, called Membrane (or P) Systems, provide a coherent platform to describe and study living cells as computational systems. Membrane Systems have been investigated for their computational aspects and employed to model problems in other fields, such as Computer Science, Linguistics, Biology, Economics, Computer Graphics and Robotics. Their inherent parallelism, heterogeneity and intrinsic versatility allow them to model a broad range of processes and phenomena, and also make them an efficient means of solving and analyzing problems in novel ways. Membrane Computing has been used to model biological systems, over time becoming a mature modeling paradigm whose modeling and predictive capabilities are comparable to those of more established models in this area. This book is the result of the need to collect, in an organic way, different facets of this paradigm. The chapters of this book, together with the web pages accompanying th...
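
    The core computational object here is easy to make concrete: a membrane holding a multiset of symbol objects, rewritten by rules applied in a maximally parallel way. Below is a minimal, hypothetical Python sketch of one evolution step of a single-membrane system (the alphabet and rules are invented; real P systems add nested membranes, communication and dissolution):

    ```python
    from collections import Counter

    # Invented rules over objects {a, b, c}:  a,b -> c   and   c -> a,a
    RULES = [
        (Counter("ab"), Counter("c")),
        (Counter("c"),  Counter("aa")),
    ]

    def step(state: Counter) -> Counter:
        """One maximally parallel step: rules fire while their left-hand
        sides can still be taken; products appear only in the next step."""
        available, produced = state.copy(), Counter()
        fired = True
        while fired:
            fired = False
            for lhs, rhs in RULES:
                if all(available[o] >= n for o, n in lhs.items()):
                    available -= lhs
                    produced += rhs
                    fired = True
        return available + produced

    state = Counter("aabb")
    for _ in range(3):
        state = step(state)
        print(dict(state))    # {'c': 2} -> {'a': 4} -> {'a': 4}
    ```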

  13. Reconstruction of biological networks based on life science data integration

    Directory of Open Access Journals (Sweden)

    Kormeier Benjamin

    2010-06-01

    Full Text Available For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we motivate the applications BioDWH - an integration toolkit for building life science data warehouses, CardioVINEdb - an information system for biological data in cardiovascular disease - and VANESA - a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease-related gene-regulated biological network is also presented.

  14. Reconstruction of biological networks based on life science data integration.

    Science.gov (United States)

    Kormeier, Benjamin; Hippe, Klaus; Arrigo, Patrizio; Töpel, Thoralf; Janowski, Sebastian; Hofestädt, Ralf

    2010-10-27

    For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we motivate the applications BioDWH--an integration toolkit for building life science data warehouses, CardioVINEdb--an information system for biological data in cardiovascular disease--and VANESA--a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease-related gene-regulated biological network is also presented.

  15. Probabilistic data integration and computational complexity

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.; Mosegaard, K.

    2016-12-01

    Inverse problems in Earth Sciences typically refer to the problem of inferring information about properties of the Earth from observations of geophysical data (the result of nature's solution to the 'forward' problem). This problem can be formulated more generally as a problem of 'integration of information'. A probabilistic formulation of data integration is in principle simple: if all information available (from e.g. geology, geophysics, remote sensing, chemistry…) can be quantified probabilistically, then different algorithms exist that allow solving the data integration problem, either through an analytical description of the combined probability function or by sampling it. In practice, however, probabilistic data integration may not be easy to apply successfully. This may be related to the use of sampling methods, which are known to be computationally costly. But another source of computational complexity is related to how the individual types of information are quantified. In one case, a data integration problem is demonstrated where the goal is to determine the existence of buried channels in Denmark, based on multiple sources of geo-information. One type of information being too informative (and hence conflicting) leads to a difficult sampling problem with unrealistic uncertainty. Resolving this conflict prior to data integration leads to an easy data integration problem, with no biases. In another case, it is demonstrated how imperfections in the description of the geophysical forward model (related to solving the wave equation) can lead to a difficult data integration problem, with severe bias in the results. If the modeling error is accounted for, the data integration problem becomes relatively easy, with no apparent biases. Both examples demonstrate that biased information can have a dramatic effect on the computational efficiency of solving a data integration problem and lead to biased results, and under
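
    The conflict-then-resolve pattern described above can be illustrated with a toy one-parameter example (all densities, numbers and Metropolis settings below are invented for illustration; this is not the authors' algorithm):

    ```python
    import math
    import random

    # Two probabilistic constraints on one Earth property m (say, a depth).
    def log_p_geology(m):
        return -0.5 * ((m - 10.0) / 2.0) ** 2        # broad: 10 +/- 2

    def log_p_geophysics(m, sigma):
        return -0.5 * ((m - 14.0) / sigma) ** 2      # centred at 14

    def metropolis(log_post, n=20000, step=1.0):
        """Plain Metropolis sampler for the combined probability."""
        m, out = 12.0, []
        lp = log_post(m)
        for _ in range(n):
            cand = m + random.gauss(0.0, step)
            lp_cand = log_post(cand)
            if math.log(random.random() + 1e-12) < lp_cand - lp:
                m, lp = cand, lp_cand
            out.append(m)
        return sum(out) / len(out)

    # Conflicting case: an overconfident source (sigma = 0.1) drags the
    # combined probability into a region both sources find unlikely.
    print(metropolis(lambda m: log_p_geology(m) + log_p_geophysics(m, 0.1)))
    # Resolved case: inflating the overstated precision makes integration easy.
    print(metropolis(lambda m: log_p_geology(m) + log_p_geophysics(m, 2.0)))
    ```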

  16. Uncertainty in biology: a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows us to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of these areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate stude...

  17. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    OpenAIRE

    Karlheinz Schwarz; Rainer Breitling; Christian Allen

    2013-01-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...

  18. Modelling, abstraction, and computation in systems biology: A view from computer science.

    Science.gov (United States)

    Melham, Tom

    2013-04-01

    Systems biology is centrally engaged with computational modelling across multiple scales and at many levels of abstraction. Formal modelling, precise and formalised abstraction relationships, and computation also lie at the heart of computer science--and over the past decade a growing number of computer scientists have been bringing their discipline's core intellectual and computational tools to bear on biology in fascinating new ways. This paper explores some of the apparent points of contact between the two fields, in the context of a multi-disciplinary discussion on conceptual foundations of systems biology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Integrating quantitative thinking into an introductory biology course improves students' mathematical reasoning in biological contexts.

    Science.gov (United States)

    Hester, Susan; Buxner, Sanlyn; Elfring, Lisa; Nagy, Lisa

    2014-01-01

    Recent calls for improving undergraduate biology education have emphasized the importance of students learning to apply quantitative skills to biological problems. Motivated by students' apparent inability to transfer their existing quantitative skills to biological contexts, we designed and taught an introductory molecular and cell biology course in which we integrated application of prerequisite mathematical skills with biology content and reasoning throughout all aspects of the course. In this paper, we describe the principles of our course design and present illustrative examples of course materials integrating mathematics and biology. We also designed an outcome assessment made up of items testing students' understanding of biology concepts and their ability to apply mathematical skills in biological contexts and administered it as a pre/postcourse test to students in the experimental section and other sections of the same course. Precourse results confirmed students' inability to spontaneously transfer their prerequisite mathematics skills to biological problems. Pre/postcourse outcome assessment comparisons showed that, compared with students in other sections, students in the experimental section made greater gains on integrated math/biology items. They also made comparable gains on biology items, indicating that integrating quantitative skills into an introductory biology course does not have a deleterious effect on students' biology learning.

  20. Integrating Quantitative Thinking into an Introductory Biology Course Improves Students’ Mathematical Reasoning in Biological Contexts

    Science.gov (United States)

    Hester, Susan; Buxner, Sanlyn; Elfring, Lisa; Nagy, Lisa

    2014-01-01

    Recent calls for improving undergraduate biology education have emphasized the importance of students learning to apply quantitative skills to biological problems. Motivated by students’ apparent inability to transfer their existing quantitative skills to biological contexts, we designed and taught an introductory molecular and cell biology course in which we integrated application of prerequisite mathematical skills with biology content and reasoning throughout all aspects of the course. In this paper, we describe the principles of our course design and present illustrative examples of course materials integrating mathematics and biology. We also designed an outcome assessment made up of items testing students’ understanding of biology concepts and their ability to apply mathematical skills in biological contexts and administered it as a pre/postcourse test to students in the experimental section and other sections of the same course. Precourse results confirmed students’ inability to spontaneously transfer their prerequisite mathematics skills to biological problems. Pre/postcourse outcome assessment comparisons showed that, compared with students in other sections, students in the experimental section made greater gains on integrated math/biology items. They also made comparable gains on biology items, indicating that integrating quantitative skills into an introductory biology course does not have a deleterious effect on students’ biology learning. PMID:24591504

  1. Path-integral computation of superfluid densities

    International Nuclear Information System (INIS)

    Pollock, E.L.; Ceperley, D.M.

    1987-01-01

    The normal and superfluid densities are defined by the response of a liquid to sample boundary motion. The free-energy change due to uniform boundary motion can be calculated by path-integral methods from the distribution of the winding number of the paths around a periodic cell. This provides a conceptually and computationally simple way of calculating the superfluid density for any Bose system. The linear-response formulation relates the superfluid density to the momentum-density correlation function, which has a short-ranged part related to the normal density and, in the case of a superfluid, a long-ranged part whose strength is proportional to the superfluid density. These facts are discussed in the context of path-integral computations and demonstrated for liquid ⁴He along the saturated vapor-pressure curve. Below the experimental superfluid transition temperature the computed superfluid fractions agree with the experimental values to within the statistical uncertainties of a few percent in the computations. The computed transition is broadened by finite-sample-size effects
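
    For reference, the winding-number estimator at the core of this method is commonly written as follows (the standard form for a periodic cubic cell of side L in three dimensions, with N particles at inverse temperature β; given here from the general literature rather than quoted from the paper):

    ```latex
    \frac{\rho_s}{\rho} = \frac{m \, L^{2} \, \langle \mathbf{W}^{2} \rangle}{3 \hbar^{2} \beta N},
    \qquad
    \mathbf{W} = \frac{1}{L} \sum_{i=1}^{N} \left( \mathbf{r}_i(\beta\hbar) - \mathbf{r}_i(0) \right)
    ```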

  2. The computational linguistics of biological sequences

    Energy Technology Data Exchange (ETDEWEB)

    Searls, D. [Univ. of Pennsylvania, Philadelphia, PA (United States)

    1995-12-31

    This tutorial was one of eight tutorials selected to be presented at the Third International Conference on Intelligent Systems for Molecular Biology, which was held in the United Kingdom from July 16 to 19, 1995. Protein sequences are analogous to nucleic acid sequences in many respects, particularly in their folding behavior. Proteins have a much richer variety of interactions, but in theory the same linguistic principles could come to bear in describing dependencies between distant residues that arise by virtue of three-dimensional structure. This tutorial will concentrate on nucleic acid sequences.
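
    The linguistic point is that base pairing in a folded molecule creates nested long-range dependencies, which regular expressions cannot express but a context-free grammar can (rules of the form S -> aSu | uSa | cSg | gSc | loop). A toy recognizer for the language of such a grammar, with invented thresholds and no bulges or mismatches, might look like this:

    ```python
    # Nested Watson-Crick pairing: the outermost bases must pair, then the
    # next ones in, and so on -- exactly the structure of a CFG derivation.
    PAIRS = {("a", "u"), ("u", "a"), ("c", "g"), ("g", "c")}

    def is_stem_loop(seq, min_stem=2, min_loop=3):
        """True if seq is a perfect hairpin: a paired stem around a loop."""
        i, j, stem = 0, len(seq) - 1, 0
        while i < j and (seq[i], seq[j]) in PAIRS:
            i, j, stem = i + 1, j - 1, stem + 1
        return stem >= min_stem and (j - i + 1) >= min_loop

    print(is_stem_loop("gcgcaaaagcgc"))  # True: gcgc stem around aaaa loop
    print(is_stem_loop("gcaaaacc"))      # False: outer bases do not all pair
    ```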

  3. Using a Computer Animation to Teach High School Molecular Biology

    Science.gov (United States)

    Rotbain, Yosi; Marbach-Ad, Gili; Stavy, Ruth

    2008-01-01

    We present an active way to use a computer animation in secondary molecular genetics class. For this purpose we developed an activity booklet that helps students to work interactively with a computer animation which deals with abstract concepts and processes in molecular biology. The achievements of the experimental group were compared with those…

  4. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    Directory of Open Access Journals (Sweden)

    Karlheinz Schwarz

    2013-09-01

    Full Text Available Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section a further focusing will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...

  5. Integrated computer-aided design using minicomputers

    Science.gov (United States)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software system, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum transfer rate of 4800 bits/sec to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large-area space platform, a National Transonic Facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting and finite element analysis capability, CAD/CAM provides options to produce automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configurations.

  6. Novel opportunities for computational biology and sociology in drug discovery

    Science.gov (United States)

    Yao, Lixia; Evans, James A.; Rzhetsky, Andrey

    2013-01-01

    Current drug discovery is impossible without sophisticated modeling and computation. In this review we outline previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy–industry links for scientific and human benefit. Attention to these opportunities could promise punctuated advance and will complement the well-established computational work on which drug discovery currently relies. PMID:20349528

  7. Novel opportunities for computational biology and sociology in drug discovery

    Science.gov (United States)

    Yao, Lixia

    2009-01-01

    Drug discovery today is impossible without sophisticated modeling and computation. In this review we touch on previous advances in computational biology and, by tracing the steps involved in pharmaceutical development, explore a range of novel, high-value opportunities for computational innovation in modeling the biological process of disease and the social process of drug discovery. These opportunities include text mining for new drug leads, modeling molecular pathways and predicting the efficacy of drug cocktails, analyzing genetic overlap between diseases and predicting alternative drug use. Computation can also be used to model research teams and innovative regions and to estimate the value of academy-industry ties for scientific and human benefit. Attention to these opportunities could promise punctuated advance, and will complement the well-established computational work on which drug discovery currently relies. PMID:19674801

  8. PathSys: integrating molecular interaction graphs for systems biology

    Directory of Open Access Journals (Sweden)

    Raval Alpan

    2006-02-01

    Full Text Available Abstract Background The goal of information integration in systems biology is to combine information from a number of databases and data sets, which are obtained from both high- and low-throughput experiments, under one data management scheme such that the cumulative information provides greater biological insight than is possible with individual information sources considered separately. Results Here we present PathSys, a graph-based system for creating a combined database of interaction networks and for generating an integrated view of biological mechanisms. We used PathSys to integrate over 14 curated and publicly contributed data sources for the budding yeast (S. cerevisiae) and the Gene Ontology. A number of exploratory questions were formulated as a combination of relational and graph-based queries to the integrated database. Thus, PathSys is a general-purpose, scalable, graph-data warehouse of biological information, complete with a graph manipulation and query language, a storage mechanism and a generic data-importing mechanism through schema-mapping. Conclusion Results from several test studies demonstrate the effectiveness of the approach in retrieving biologically interesting relations between genes and proteins and the networks connecting them, and the utility of PathSys as a scalable graph-based warehouse for interaction-network integration and as a hypothesis generator system. The PathSys client software, named BiologicalNetworks, developed for navigation and analyses of molecular networks, is available as a Java Web Start application at http://brak.sdsc.edu/pub/BiologicalNetworks.
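
    The flavour of combining relational and graph-based queries over an integrated warehouse can be sketched with networkx (node names, edge types and sources below are invented; this is a stand-in, not the PathSys schema or API):

    ```python
    import networkx as nx

    # Integrate records from several curated sources into one typed graph.
    g = nx.MultiDiGraph()
    records = [
        ("Swi4",  "CLN2", "activates",      "regulatory DB"),
        ("CLN2",  "Cln2", "encodes",        "genome annotation"),
        ("Cdc28", "Cln2", "binds",          "PPI DB"),
        ("Cdc28", "Sic1", "phosphorylates", "PPI DB"),
    ]
    for a, b, kind, source in records:
        g.add_edge(a, b, kind=kind, source=source)

    # Graph query: everything downstream of the transcription factor Swi4.
    print(nx.descendants(g, "Swi4"))            # {'CLN2', 'Cln2'}

    # Relational-style query: edges of one kind, with their evidence source.
    for u, v, d in g.edges(data=True):
        if d["kind"] == "binds":
            print(u, "binds", v, "according to", d["source"])
    ```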

  9. Advances in Integrated Computational Materials Engineering "ICME"

    Science.gov (United States)

    Hirsch, Jürgen

    The methods of Integrated Computational Materials Engineering that were developed and successfully applied for Aluminium have been constantly improved. The main aspects and recent advances of integrated material and process modeling are simulations of material properties, such as strength and forming properties, and of the specific microstructure evolution during processing (rolling, extrusion, annealing), under the influence of material constitution and process variations, through the production process down to the final application. Examples are discussed for the through-process simulation of microstructures and related properties of Aluminium sheet, including DC ingot casting, pre-heating and homogenization, hot and cold rolling, and final annealing. New results are included on the simulation of solution annealing and age hardening of 6xxx alloys for automotive applications. Physically based quantitative descriptions and computer-assisted evaluation methods are new ICME means of integrating new simulation tools, also for customer applications such as heat-affected zones in the welding of age-hardening alloys. Also discussed is the estimation of the effect of specific elements, increasingly relevant as recycling volumes grow even for high-end Aluminium products, a topic of special interest to the Aluminium-producing industries.

  10. Integrated biological, chemical and physical processes kinetic ...

    African Journals Online (AJOL)

    ... for C and N removal, only gas and liquid phase processes were considered for this integrated model. ... kLA value for the aeration system, which affects the pH in the anoxic and aerobic reactors through CO2 gas exchange. ... Water SA Vol.

  11. XIV Mediterranean Conference on Medical and Biological Engineering and Computing

    CERN Document Server

    Christofides, Stelios; Pattichis, Constantinos

    2016-01-01

    This volume presents the proceedings of Medicon 2016, held in Paphos, Cyprus. Medicon 2016 is the XIV in the series of regional meetings of the International Federation of Medical and Biological Engineering (IFMBE) in the Mediterranean. The goal of Medicon 2016 is to provide updated information on the state of the art in Medical and Biological Engineering and Computing under the main theme “Systems Medicine for the Delivery of Better Healthcare Services”. Medical and Biological Engineering and Computing cover complementary disciplines that hold great promise for the advancement of research and development in complex medical and biological systems. Research and development in these areas are impacting science and technology by advancing fundamental concepts in translational medicine, by helping us understand human physiology and function at multiple levels, and by improving tools and techniques for the detection, prevention and treatment of disease. Medicon 2016 provides a common platform for the cross fer...

  12. The Virtual Cell: a software environment for computational cell biology.

    Science.gov (United States)

    Loew, L M; Schaff, J C

    2001-10-01

    The newly emerging field of computational cell biology requires software tools that address the needs of a broad community of scientists. Cell biological processes are controlled by an interacting set of biochemical and electrophysiological events that are distributed within complex cellular structures. Computational modeling is familiar to researchers in fields such as molecular structure, neurobiology and metabolic pathway engineering, and is rapidly emerging in the area of gene expression. Although some of these established modeling approaches can be adapted to address problems of interest to cell biologists, relatively few software development efforts have been directed at the field as a whole. The Virtual Cell is a computational environment designed for cell biologists as well as for mathematical biologists and bioengineers. It serves to aid the construction of cell biological models and the generation of simulations from them. The system enables the formulation of both compartmental and spatial models, the latter with either idealized or experimentally derived geometries of one, two or three dimensions.

  13. Revision history aware repositories of computational models of biological systems.

    Science.gov (United States)

    Miller, Andrew K; Yu, Tommy; Britten, Randall; Cooling, Mike T; Lawson, James; Cowan, Dougal; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Nickerson, David P; Nunns, Geo; Wimalaratne, Sarala M; Nielsen, Poul M F

    2011-01-14

    Building repositories of computational models of biological systems ensures that published models are available for both education and further research, and can provide a source of smaller, previously verified models to integrate into a larger model. One problem with earlier repositories has been the limitations in facilities to record the revision history of models. Often, these facilities are limited to a linear series of versions which were deposited in the repository. This is problematic for several reasons. Firstly, there are many instances in the history of biological systems modelling where an 'ancestral' model is modified by different groups to create many different models. With a linear series of versions, if the changes made to one model are merged into another model, the merge appears as a single item in the history. This hides useful revision history information, and also makes further merges much more difficult, as there is no record of which changes have or have not already been merged. In addition, a long series of individual changes made outside of the repository are also all merged into a single revision when they are put back into the repository, making it difficult to separate out individual changes. Furthermore, many earlier repositories only retain the revision history of individual files, rather than of a group of files. This is an important limitation to overcome, because some types of models, such as CellML 1.1 models, can be developed as a collection of modules, each in a separate file. The need for revision history is widely recognised for computer software, and a lot of work has gone into developing version control systems and distributed version control systems (DVCSs) for tracking the revision history. However, to date, there has been no published research on how DVCSs can be applied to repositories of computational models of biological systems. We have extended the Physiome Model Repository software to be fully revision history aware
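
    The crux is that revision history should form a graph in which every merge records all of its parents; a hypothetical miniature (not the Physiome Model Repository code) makes the benefit concrete:

    ```python
    # Each revision stores its files and *all* parent revisions, so a merge
    # of two model lineages is explicit rather than a single opaque version.
    class Revision:
        def __init__(self, files, parents=()):
            self.files = files        # e.g. {"cellcycle.cellml": "<model/>"}
            self.parents = list(parents)

    root   = Revision({"cellcycle.cellml": "v0"})
    fork_a = Revision({"cellcycle.cellml": "v0 + group A changes"}, [root])
    fork_b = Revision({"cellcycle.cellml": "v0 + group B changes"}, [root])
    merge  = Revision({"cellcycle.cellml": "v0 + A + B"}, [fork_a, fork_b])

    def ancestors(rev):
        seen, stack = set(), list(rev.parents)
        while stack:
            r = stack.pop()
            if r not in seen:
                seen.add(r)
                stack.extend(r.parents)
        return seen

    # "Has group B's work already been merged here?" is answerable -- with a
    # linear series of deposited versions, this information is lost.
    print(fork_b in ancestors(merge))   # True
    ```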

  14. Revision history aware repositories of computational models of biological systems

    Directory of Open Access Journals (Sweden)

    Nickerson David P

    2011-01-01

    Full Text Available Abstract Background Building repositories of computational models of biological systems ensures that published models are available for both education and further research, and can provide a source of smaller, previously verified models to integrate into a larger model. One problem with earlier repositories has been the limitations in facilities to record the revision history of models. Often, these facilities are limited to a linear series of versions which were deposited in the repository. This is problematic for several reasons. Firstly, there are many instances in the history of biological systems modelling where an 'ancestral' model is modified by different groups to create many different models. With a linear series of versions, if the changes made to one model are merged into another model, the merge appears as a single item in the history. This hides useful revision history information, and also makes further merges much more difficult, as there is no record of which changes have or have not already been merged. In addition, a long series of individual changes made outside of the repository are also all merged into a single revision when they are put back into the repository, making it difficult to separate out individual changes. Furthermore, many earlier repositories only retain the revision history of individual files, rather than of a group of files. This is an important limitation to overcome, because some types of models, such as CellML 1.1 models, can be developed as a collection of modules, each in a separate file. The need for revision history is widely recognised for computer software, and a lot of work has gone into developing version control systems and distributed version control systems (DVCSs) for tracking the revision history. However, to date, there has been no published research on how DVCSs can be applied to repositories of computational models of biological systems. Results We have extended the Physiome Model

  15. Biologics in dermatology: An integrated review

    Directory of Open Access Journals (Sweden)

    Virendra N Sehgal

    2014-01-01

    Full Text Available The advent of biologics in the dermatologic treatment armamentarium has added refreshing dimensions, for it is a major breakthrough. Several agents are now available for use. It is therefore imperative to succinctly comprehend their pharmacokinetics for their apt use. A concerted endeavor has been made to delve into this subject. The major groups of biologics have been covered and include: drugs acting against TNF-α, Alefacept, Ustekinumab, Rituximab, IVIG and Omalizumab. The relevant pharmacokinetic characteristics have been detailed. Their respective label (approved) and off-label (unapproved) indications have been defined, highlighting their dosage protocol, availability and mode of administration. The evidence level of each indication has also been discussed to apprise the clinician of their current and prospective uses. Individual anti-TNF drugs are not identical in their actions, and often one is superior to the others in a particular disease. Hence, the section on anti-TNF agents discusses the literature on each drug separately, and not as a group. The limitations for their use have also been clearly brought out.

  16. Bioconductor: open software development for computational biology and bioinformatics

    DEFF Research Database (Denmark)

    Gentleman, R.C.; Carey, V.J.; Bates, D.M.

    2004-01-01

    The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.

  17. Integrating cell biology and proteomic approaches in plants.

    Science.gov (United States)

    Takáč, Tomáš; Šamajová, Olga; Šamaj, Jozef

    2017-10-03

    Significant improvements in protein extraction, separation, mass spectrometry and bioinformatics have nurtured the advancement of proteomics during the past years. The usefulness of proteomics in the investigation of biological problems can be enhanced by integration with other experimental methods from cell biology, genetics, biochemistry, pharmacology, molecular biology and other omics approaches, including transcriptomics and metabolomics. This review aims to summarize current trends integrating cell biology and proteomics in plant science. Cell biology approaches are most frequently used in proteomic studies investigating subcellular and developmental proteomes; however, they have also been employed in proteomic studies exploring abiotic and biotic stress responses, vesicular transport, the cytoskeleton and protein posttranslational modifications. They are used either for detailed cellular or ultrastructural characterization of the object subjected to proteomic study, for validation of proteomic results, or to expand proteomic data. In this respect, a broad spectrum of methods is employed to support proteomic studies, including ultrastructural electron microscopy studies, histochemical staining, immunochemical localization, in vivo imaging of fluorescently tagged proteins and visualization of protein-protein interactions. Thus, cell biological observations on fixed or living cell compartments, cells, tissues and organs are feasible, and in some cases fundamental, for the validation and complementation of proteomic data. Validation of proteomic data by independent experimental methods requires the development of new complementary approaches. The benefits of cell biology methods and techniques are not sufficiently highlighted in current proteomic studies. This encouraged us to review the most popular cell biology methods used in proteomic studies and to evaluate their relevance and potential for proteomic data validation and enrichment of purely proteomic analyses. We also provide examples of

  18. Computational Biomechanics Theoretical Background and Biological/Biomedical Problems

    CERN Document Server

    Tanaka, Masao; Nakamura, Masanori

    2012-01-01

    Rapid developments have taken place in biological/biomedical measurement and imaging technologies as well as in computer analysis and information technologies. The increase in data obtained with such technologies invites the reader into a virtual world that represents realistic biological tissue or organ structures in digital form and allows for simulation and what is called “in silico medicine.” This volume is the third in a textbook series and covers both the basics of continuum mechanics of biosolids and biofluids and the theoretical core of computational methods for continuum mechanics analyses. Several biomechanics problems are provided for better understanding of computational modeling and analysis. Topics include the mechanics of solid and fluid bodies, fundamental characteristics of biosolids and biofluids, computational methods in biomechanics analysis/simulation, practical problems in orthopedic biomechanics, dental biomechanics, ophthalmic biomechanics, cardiovascular biomechanics, hemodynamics...

  19. Integration of genomic information with biological networks using Cytoscape.

    Science.gov (United States)

    Bauer-Mehren, Anna

    2013-01-01

    Cytoscape is an open-source software for visualizing, analyzing, and modeling biological networks. This chapter explains how to use Cytoscape to analyze the functional effect of sequence variations in the context of biological networks such as protein-protein interaction networks and signaling pathways. The chapter is divided into five parts: (1) obtaining information about the functional effect of sequence variation in a Cytoscape-readable format, (2) loading and displaying different types of biological networks in Cytoscape, (3) integrating the genomic information (SNPs and mutations) with the biological networks, and (4) analyzing the effect of the genomic perturbation on the network structure using Cytoscape built-in functions; finally, (5) we briefly outline how the integrated data can help in building mathematical network models for analyzing the effect of the sequence variation on the dynamics of the biological system. Each part is illustrated by step-by-step instructions on an example use case and visualized by many screenshots and figures.
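
    Steps (3) and (4), overlaying variant data on a network and asking what a perturbation does to its structure, can be imitated outside Cytoscape in a few lines of networkx (gene names, the SNP identifier and effects are invented; this is a stand-in sketch, not Cytoscape's interface):

    ```python
    import networkx as nx

    # A toy protein-protein interaction network.
    net = nx.Graph([("TP53", "MDM2"), ("MDM2", "CDKN1A"), ("TP53", "BAX")])

    # Step 3: attach genomic information to nodes as attributes.
    nx.set_node_attributes(
        net, {"TP53": {"snp": "rs0000000", "effect": "deleterious"}})

    # Step 4: analyze the structural effect of losing the damaged node.
    damaged = [n for n, d in net.nodes(data=True)
               if d.get("effect") == "deleterious"]
    net.remove_nodes_from(damaged)
    print(list(nx.connected_components(net)))  # the network falls apart
    ```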

  20. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But even if computational bio-models themselves are represented precisely in terms of mathematical expressions, their full meaning is not yet formally specified and is only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand, it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand, the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework, we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  1. COMPUTER INTEGRATED MANUFACTURING: OVERVIEW OF MODERN STANDARDS

    Directory of Open Access Journals (Sweden)

    A. Pupena

    2016-09-01

    Full Text Available The article deals with the modern international standards ISA-95 and ISA-88 on the development of computer integrated manufacturing. The scope of the standards is shown in the context of a hierarchical model of the enterprise. The article is organized so as to describe the essence of the standards in the light of their basic descriptive models: product definition, resources, schedules and actual performance of production activity. The product definition is described via the hierarchical representation of products at the various levels of management. Much attention is given to describing equipment as a resource type, since it forms a logical thread through all these standards. For example, the batch process control standard shows the relationship between the definition of a product and the equipment on which it is made. The article presents the ERP-MES/MOM-SCADA planning hierarchy (in terms of the ISA-95 standard), which traces the decomposition of enterprise-wide production plans into specific operations at the automated process control system (APCS) level. Reporting of actual production performance at the MES/MOM level is considered in terms of KPIs. A generalized picture of operational activity at the MES/MOM level is given via diagrams of the relationships among activities and the information flows between functions. The article concludes by substantiating the need for the dissemination, approval and further development of the ISA-88 and ISA-95 standards in Ukraine. The article is an overview and can be useful to specialists in computer-integrated control systems and the management of industrial enterprises, to system integrators and to suppliers.

  2. COGMIR: A computer model for knowledge integration

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Z.X.

    1988-01-01

    This dissertation explores some aspects of knowledge integration, namely, the accumulation of scientific knowledge and the performance of analogical reasoning on the acquired knowledge. Knowledge to be integrated is conveyed by paragraph-like pieces referred to as documents. By incorporating some results from cognitive science, the Deutsch-Kraft model of information retrieval is extended to a model for knowledge engineering, which integrates acquired knowledge and performs intelligent retrieval. The resulting computer model is termed COGMIR, which stands for a COGnitive Model for Intelligent Retrieval. A scheme, named query invoked memory reorganization, is used in COGMIR for knowledge integration. Unlike some other schemes, which realize knowledge integration through subjective understanding by representing new knowledge in terms of existing knowledge, the proposed scheme suggests recording, at storage time, only the possible connections among knowledge acquired from different documents. The actual binding of the knowledge acquired from different documents is deferred to query time. There is only one way to store knowledge and numerous ways to utilize it. Each document can be represented as a whole as well as by its meaning. In addition, since facts are constructed from the documents, document retrieval and fact retrieval are treated in a unified way. When the requested knowledge is not available, query invoked memory reorganization can generate suggestions based on available knowledge through analogical reasoning. This is done by revising the algorithms developed for document retrieval and fact retrieval, and by incorporating Gentner's structure mapping theory. Analogical reasoning is treated as a natural extension of intelligent retrieval, so that two previously separate research areas are combined. A case study is provided. All the components are implemented as list structures similar to relational databases.
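
    A toy rendition of query invoked memory reorganization (a hypothetical miniature, not the dissertation's implementation): at storage time only possible connections between documents are recorded, and the actual binding happens when a query arrives.

    ```python
    # Documents are stored whole; shared terms are noted as *possible* links.
    docs = {
        "d1": "aspirin inhibits cox enzymes",
        "d2": "cox enzymes produce prostaglandins",
    }
    index = {}                      # term -> documents mentioning it
    for doc_id, text in docs.items():
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)

    def query(term):
        """Binding deferred to query time: follow recorded connections now."""
        hits = index.get(term, set())
        related = {other for d in hits for t in docs[d].split()
                   for other in index[t]} - hits
        return hits, related

    print(query("aspirin"))  # ({'d1'}, {'d2'}): bound via the shared term 'cox'
    ```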

  3. Catalyzing Inquiry at the Interface of Computing and Biology

    Energy Technology Data Exchange (ETDEWEB)

    John Wooley; Herbert S. Lin

    2005-10-30

    This study is the first comprehensive NRC study that suggests a high-level intellectual structure for Federal agencies for supporting work at the biology/computing interface. The report seeks to establish the intellectual legitimacy of a fundamentally cross-disciplinary collaboration between biologists and computer scientists. That is, while some universities are increasingly favorable to research at the intersection, life science researchers at other universities are strongly impeded in their efforts to collaborate. This report addresses these impediments and describes proven strategies for overcoming them. An important feature of the report is the use of well-documented examples that describe clearly to individuals not trained in computer science the value and usage of computing across the biological sciences, from genes and proteins to networks and pathways, from organelles to cells, and from individual organisms to populations and ecosystems. It is hoped that these examples will be useful to students in the life sciences to motivate (continued) study in computer science that will enable them to be more facile users of computing in their future biological studies.

  4. National Ignition Facility integrated computer control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J. LLNL

    1998-01-01

    The NIF design team is developing the Integrated Computer Control System (ICCS), which is based on an object-oriented software framework applicable to event-driven control systems. The framework provides an open, extensible architecture that is sufficiently abstract to construct future mission-critical control systems. The ICCS will become operational when the first 8 out of 192 beams are activated in mid 2000. The ICCS consists of 300 front-end processors attached to 60,000 control points coordinated by a supervisory system. Computers running either Solaris or VxWorks are networked over a hybrid configuration of switched fast Ethernet and asynchronous transfer mode (ATM). ATM carries digital motion video from sensors to operator consoles. Supervisory software is constructed by extending the reusable framework components for each specific application. The framework incorporates services for database persistence, system configuration, graphical user interface, status monitoring, event logging, scripting language, alert management, and access control. More than twenty collaborating software applications are derived from the common framework. The framework is interoperable among different kinds of computers and functions as a plug-in software bus by leveraging a common object request brokering architecture (CORBA). CORBA transparently distributes the software objects across the network. Because of the pivotal role played, CORBA was tested to ensure adequate performance

  5. Computational Biology Support: RECOMB Conference Series (Conference Support)

    Energy Technology Data Exchange (ETDEWEB)

    Michael Waterman

    2006-06-15

    This funding provided support for student and postdoctoral attendance at the annual RECOMB conference from 2001 to 2005. The RECOMB conference series was founded in 1997 to provide a scientific forum for theoretical advances in computational biology and their applications in molecular biology and medicine. The conference series aims at attracting research contributions in all areas of computational molecular biology. Typical, but not exclusive, topics of interest are: Genomics, Molecular sequence analysis, Recognition of genes and regulatory elements, Molecular evolution, Protein structure, Structural genomics, Gene Expression, Gene Networks, Drug Design, Combinatorial libraries, Computational proteomics, and Structural and functional genomics. The origins of the conference lie on the mathematical and computational side of the field, and there remains a certain focus on computational advances. However, the effective application of computational techniques to biological innovation is also an important aspect of the conference. The conference has had a growing number of attendees, topping 300 in recent years and at times exceeding 500. The conference program includes between 30 and 40 contributed papers, selected by an international program committee of around 30 experts during a rigorous review process rivaling the editorial procedure of top-rate scientific journals. In previous years, paper selection has been made from up to 130--200 submissions from well over a dozen countries. 10-page extended abstracts of the contributed papers are collected in a volume published by ACM Press and Springer, and are available at the conference. Full versions of a selection of the papers are published annually in a special issue of the Journal of Computational Biology devoted to the RECOMB conference. A further point in the program is a lively poster session; from 120 to 300 posters have been presented each year since RECOMB 2000. One of the highlights of each RECOMB conference is a

  6. Systems Biology as an Integrated Platform for Bioinformatics, Systems Synthetic Biology, and Systems Metabolic Engineering

    Directory of Open Access Journals (Sweden)

    Bor-Sen Chen

    2013-10-01

    Full Text Available Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering.

  7. Systems Biology as an Integrated Platform for Bioinformatics, Systems Synthetic Biology, and Systems Metabolic Engineering

    Science.gov (United States)

    Chen, Bor-Sen; Wu, Chia-Chou

    2013-01-01

    Systems biology aims at achieving a system-level understanding of living organisms and applying this knowledge to various fields such as synthetic biology, metabolic engineering, and medicine. System-level understanding of living organisms can be derived from insight into: (i) system structure and the mechanism of biological networks such as gene regulation, protein interactions, signaling, and metabolic pathways; (ii) system dynamics of biological networks, which provides an understanding of stability, robustness, and transduction ability through system identification, and through system analysis methods; (iii) system control methods at different levels of biological networks, which provide an understanding of systematic mechanisms to robustly control system states, minimize malfunctions, and provide potential therapeutic targets in disease treatment; (iv) systematic design methods for the modification and construction of biological networks with desired behaviors, which provide system design principles and system simulations for synthetic biology designs and systems metabolic engineering. This review describes current developments in systems biology, systems synthetic biology, and systems metabolic engineering for engineering and biology researchers. We also discuss challenges and future prospects for systems biology and the concept of systems biology as an integrated platform for bioinformatics, systems synthetic biology, and systems metabolic engineering. PMID:24709875

  8. 7th World Congress on Nature and Biologically Inspired Computing

    CERN Document Server

    Engelbrecht, Andries; Abraham, Ajith; Plessis, Mathys; Snášel, Václav; Muda, Azah

    2016-01-01

    World Congress on Nature and Biologically Inspired Computing (NaBIC) is organized to discuss the state-of-the-art as well as to address various issues with respect to Nurturing Intelligent Computing Towards Advancement of Machine Intelligence. This Volume contains the papers presented in the Seventh World Congress (NaBIC’15) held in Pietermaritzburg, South Africa during December 01-03, 2015. The 39 papers presented in this Volume were carefully reviewed and selected. The Volume would be a valuable reference to researchers, students and practitioners in the computational intelligence field.

  9. Inter-level relations in computer science, biology, and psychology

    NARCIS (Netherlands)

    Boogerd, F.; Bruggeman, F.; Jonker, C.M.; Looren de Jong, H.; Tamminga, A.; Treur, J.; Westerhoff, H.V.; Wijngaards, W.C.A.

    2002-01-01

    Investigations into inter-level relations in computer science, biology and psychology call for an empirical turn in the philosophy of mind. Rather than concentrate on a priori discussions of inter-level relations between 'completed' sciences, a case is made for the actual study of the way

  10. Inter-level relations in computer science, biology and psychology

    NARCIS (Netherlands)

    Boogerd, F.C.; Bruggeman, F.J.; Jonker, C.M.; Looren De Jong, H.; Tamminga, A.M.; Treur, J.; Westerhoff, H.V.; Wijngaards, W.C.A.

    2002-01-01

    Investigations into inter-level relations in computer science, biology and psychology call for an empirical turn in the philosophy of mind. Rather than concentrate on a priori discussions of inter-level relations between "completed" sciences, a case is made for the actual study of the way

  11. Inter-level relations in computer science, biology, and psychology

    NARCIS (Netherlands)

    Boogerd, Fred; Bruggeman, Frank; Jonker, Catholijn; Looren de Jong, Huib; Tamminga, Allard; Treur, Jan; Westerhoff, Hans; Wijngaards, Wouter

    2002-01-01

    Investigations into inter-level relations in computer science, biology and psychology call for an *empirical* turn in the philosophy of mind. Rather than concentrate on *a priori* discussions of inter-level relations between “completed” sciences, a case is made for the actual study of the way

  12. Filling the gap between biology and computer science.

    Science.gov (United States)

    Aguilar-Ruiz, Jesús S; Moore, Jason H; Ritchie, Marylyn D

    2008-07-17

    This editorial introduces BioData Mining, a new journal which publishes research articles related to advances in computational methods and techniques for the extraction of useful knowledge from heterogeneous biological data. We outline the aims and scope of the journal, introduce the publishing model and describe the open peer review policy, which fosters interaction within the research community.

  13. Biology Students Building Computer Simulations Using StarLogo TNG

    Science.gov (United States)

    Smith, V. Anne; Duncan, Ishbel

    2011-01-01

    Confidence is an important issue for biology students in handling computational concepts. This paper describes a practical in which honours-level bioscience students simulate complex animal behaviour using StarLogo TNG, a freely-available graphical programming environment. The practical consists of two sessions, the first of which guides students…

  14. Biological data integration: wrapping data and tools.

    Science.gov (United States)

    Lacroix, Zoé

    2002-06-01

    Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web, as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second, builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views, mapping the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to seamlessly access data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces.
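
    The two tasks can be caricatured in a short sketch (an invented flat-file source and schema, not the authors' OPM-based system): a search view names what the source can filter on, and an XML step builds the virtual structure.

    ```python
    import csv
    import io
    from xml.etree.ElementTree import Element, SubElement, tostring

    FLAT_FILE = "id,symbol,organism\n1,CDC28,yeast\n2,TP53,human\n"

    class FlatFileWrapper:
        """Wrap a flat file behind a search view with XML output."""
        searchable = {"symbol", "organism"}    # the source's capabilities

        def query(self, **criteria):
            # Task 1: retrieve, honouring the source's limited capabilities.
            unsupported = set(criteria) - self.searchable
            if unsupported:
                raise ValueError(f"cannot filter on {unsupported}")
            rows = csv.DictReader(io.StringIO(FLAT_FILE))
            hits = [r for r in rows
                    if all(r[k] == v for k, v in criteria.items())]
            # Task 2: build the expected output in the virtual structure.
            root = Element("genes")
            for r in hits:
                gene = SubElement(root, "gene", id=r["id"])
                SubElement(gene, "symbol").text = r["symbol"]
            return tostring(root, encoding="unicode")

    print(FlatFileWrapper().query(organism="yeast"))
    ```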

  15. Dovetailing biology and chemistry: integrating the Gene Ontology with the ChEBI chemical ontology

    Science.gov (United States)

    2013-01-01

    Background The Gene Ontology (GO) facilitates the description of the action of gene products in a biological context. Many GO terms refer to chemical entities that participate in biological processes. To facilitate accurate and consistent systems-wide biological representation, it is necessary to integrate the chemical view of these entities with the biological view of GO functions and processes. We describe a collaborative effort between the GO and the Chemical Entities of Biological Interest (ChEBI) ontology developers to ensure that the representation of chemicals in the GO is both internally consistent and in alignment with the chemical expertise captured in ChEBI. Results We have examined and integrated the ChEBI structural hierarchy into the GO resource through computationally-assisted manual curation of both GO and ChEBI. Our work has resulted in the creation of computable definitions of GO terms that contain fully defined semantic relationships to corresponding chemical terms in ChEBI. Conclusions The set of logical definitions using both the GO and ChEBI has already been used to automate aspects of GO development and has the potential to allow the integration of data across the domains of biology and chemistry. These logical definitions are available as an extended version of the ontology from http://purl.obolibrary.org/obo/go/extensions/go-plus.owl. PMID:23895341
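
    The shape of such a computable definition can be conveyed in description-logic notation (an illustrative paraphrase using the has_participant relation, not the exact axiom shipped in go-plus.owl):

    ```latex
    \text{GO:0006006 (glucose metabolic process)} \;\equiv\;
      \text{GO:0008152 (metabolic process)} \;\sqcap\;
      \exists\,\text{has\_participant}.\,\text{CHEBI:17234 (glucose)}
    ```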

  16. Integrating biological redesign: where synthetic biology came from and where it needs to go.

    Science.gov (United States)

    Way, Jeffrey C; Collins, James J; Keasling, Jay D; Silver, Pamela A

    2014-03-27

    Synthetic biology seeks to extend approaches from engineering and computation to redesign of biology, with goals such as generating new chemicals, improving human health, and addressing environmental issues. Early on, several guiding principles of synthetic biology were articulated, including design according to specification, separation of design from fabrication, use of standardized biological parts and organisms, and abstraction. We review the utility of these principles over the past decade in light of the field's accomplishments in building complex systems based on microbial transcription and metabolism and describe the progress in mammalian cell engineering. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Pegasys: software for executing and integrating analyses of biological sequences

    Directory of Open Access Journals (Sweden)

    Lett Drew

    2004-04-01

    Abstract Background We present Pegasys – a flexible, modular and customizable software system that facilitates the execution of heterogeneous biological sequence analysis tools and the integration of their data. Results The Pegasys system includes numerous tools for pair-wise and multiple sequence alignment, ab initio gene prediction, RNA gene detection, masking repetitive sequences in genomic DNA, as well as filters for database formatting and processing raw output from various analysis tools. We introduce a novel data structure for creating workflows of sequence analyses and a unified data model to store its results. The software allows users to dynamically create analysis workflows at run-time by manipulating a graphical user interface. All non-serially dependent analyses are executed in parallel on a compute cluster for efficiency of data generation. The uniform data model and backend relational database management system of Pegasys allow results of heterogeneous programs included in the workflow to be integrated and exported into General Feature Format for further analyses in GFF-dependent tools, or GAME XML for import into the Apollo genome editor. The modularity of the design allows new tools to be added to the system with little programmer overhead. The database application programming interface allows programmatic access to the data stored in the backend through SQL queries. Conclusions The Pegasys system enables biologists and bioinformaticians to create and manage sequence analysis workflows. The software is released under the Open Source GNU General Public License. All source code and documentation are available for download at http://bioinformatics.ubc.ca/pegasys/.

  18. Integrated optical circuits for numerical computation

    Science.gov (United States)

    Verber, C. M.; Kenan, R. P.

    1983-01-01

    The development of integrated optical circuits (IOC) for numerical-computation applications is reviewed, with a focus on the use of systolic architectures. The basic architecture criteria for optical processors are shown to be the same as those proposed by Kung (1982) for VLSI design, and the advantages of IOCs over bulk techniques are indicated. The operation and fabrication of electrooptic grating structures are outlined, and the application of IOCs of this type to an existing 32-bit, 32-Mbit/sec digital correlator, a proposed matrix multiplier, and a proposed pipeline processor for polynomial evaluation is discussed. The problems arising from the inherent nonlinearity of electrooptic gratings are considered. Diagrams and drawings of the application concepts are provided.

  19. ASCR Cybersecurity for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Peisert, Sean

    2015-02-27

    The Department of Energy (DOE) has the responsibility to address the energy, environmental, and nuclear security challenges that face our nation. Much of DOE’s enterprise involves distributed, collaborative teams; a significant fraction involves “open science,” which depends on multi-institutional, often international collaborations that must access or share significant amounts of information between institutions and over networks around the world. The mission of the Office of Science is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security of the United States. The ability of DOE to execute its responsibilities depends critically on its ability to assure the integrity and availability of scientific facilities and computer systems, and of the scientific, engineering, and operational software and data that support its mission.

  20. BiologicalNetworks 2.0 - an integrative view of genome biology data

    Directory of Open Access Journals (Sweden)

    Ponomarenko Julia

    2010-12-01

    Abstract Background A significant problem in the study of mechanisms of an organism's development is the elucidation of interrelated factors which are making an impact on the different levels of the organism, such as genes, biological molecules, cells, and cell systems. Numerous sources of heterogeneous data which exist for these subsystems are still not integrated sufficiently to give researchers a straightforward opportunity to analyze them together in the same frame of study. Systematic application of data integration methods is also hampered by such factors as the orthogonal nature of the integrated data and naming problems. Results Here we report on a new version of BiologicalNetworks, a research environment for the integral visualization and analysis of heterogeneous biological data. BiologicalNetworks can be queried for properties of thousands of different types of biological entities (genes/proteins, promoters, COGs, pathways, binding sites, and others) and their relations (interactions, co-expression, co-citations, and others). The system includes the build-pathways infrastructure for molecular interactions/relations and module discovery in high-throughput experiments. Also implemented in BiologicalNetworks are the Integrated Genome Viewer and Comparative Genomics Browser applications, which allow for the search and analysis of gene regulatory regions and their conservation in multiple species in conjunction with molecular pathways/networks, experimental data and functional annotations. Conclusions The new release of BiologicalNetworks together with its back-end database introduces extensive functionality for a more efficient integrated multi-level analysis of microarray, sequence, regulatory, and other data. BiologicalNetworks is freely available at http://www.biologicalnetworks.org.

  1. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, the advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  2. FFTF integrated leak rate computer system

    International Nuclear Information System (INIS)

    Hubbard, J.A.

    1987-01-01

    The Fast Flux Test Facility (FFTF) is a liquid-metal-cooled test reactor located on the Hanford site. The FFTF is the only reactor of this type designed and operated to meet the licensing requirements of the Nuclear Regulatory Commission. Unique characteristics of the FFTF that present special challenges related to leak rate testing include thin-wall containment vessel construction, cover gas systems that penetrate containment, and a low-pressure design basis accident. The successful completion of the third FFTF integrated leak rate test 5 days ahead of schedule and 10% under budget was a major achievement for the Westinghouse Hanford Company. The success of this operational safety test was due in large part to a special local area network (LAN) of three IBM PC/XT computers, which monitored the sensor data, calculated the containment vessel leak rate, and displayed test results. The equipment configuration allowed continuous monitoring of the progress of the test independent of the data acquisition and analysis functions, and it also provided overall improved system reliability by permitting immediate switching to backup computers in the event of equipment failure.

  3. Discovery of novel bacterial toxins by genomics and computational biology.

    Science.gov (United States)

    Doxey, Andrew C; Mansfield, Michael J; Montecucco, Cesare

    2018-06-01

    Hundreds and hundreds of bacterial protein toxins are presently known. Traditionally, toxin identification begins with pathological studies of bacterial infectious disease. Following identification and cultivation of a bacterial pathogen, the protein toxin is purified from the culture medium and its pathogenic activity is studied using the methods of biochemistry and structural biology, cell biology, tissue and organ biology, and appropriate animal models, supplemented by bioimaging techniques. The ongoing and explosive development of high-throughput DNA sequencing and bioinformatic approaches has set in motion a revolution in many fields of biology, including microbiology. One consequence is that genes encoding novel bacterial toxins can be identified by bioinformatic and computational methods based on previous knowledge accumulated from studies of the biology and pathology of thousands of known bacterial protein toxins. Starting from the paradigmatic cases of diphtheria toxin, tetanus and botulinum neurotoxins, this review discusses traditional experimental approaches as well as bioinformatics and genomics-driven approaches that facilitate the discovery of novel bacterial toxins. We discuss recent work on the identification of novel botulinum-like toxins from genera such as Weissella, Chryseobacterium, and Enterococcus, and the implications of these computationally identified toxins in the field. Finally, we discuss the promise of metagenomics in the discovery of novel toxins and their ecological niches, and present data suggesting the existence of uncharacterized, botulinum-like toxin genes in insect gut metagenomes. Copyright © 2018. Published by Elsevier Ltd.

  4. An integrative approach to inferring biologically meaningful gene modules

    Directory of Open Access Journals (Sweden)

    Wang Kai

    2011-07-01

    Abstract Background The ability to construct biologically meaningful gene networks and modules is critical for contemporary systems biology. Though recent studies have demonstrated the power of using gene modules to shed light on the functioning of complex biological systems, most modules in these networks have shown little association with meaningful biological function. We have devised a method which directly incorporates gene ontology (GO) annotation in the construction of gene modules in order to gain better functional association. Results We have devised a method, the Semantic Similarity-Integrated approach for Modularization (SSIM), that integrates various gene-gene pairwise similarity values, including information obtained from gene expression, protein-protein interactions and GO annotations, in the construction of modules using affinity propagation clustering. We demonstrated the performance of the proposed method using data from two complex biological responses: (1) the osmotic shock response in Saccharomyces cerevisiae, and (2) the prion-induced pathogenic mouse model. In comparison with two previously reported algorithms, modules identified by SSIM showed significantly stronger association with biological functions. Conclusions The incorporation of semantic similarity based on GO annotation with gene expression and protein-protein interaction data can greatly enhance the functional relevance of inferred gene modules. In addition, the SSIM approach can also reveal the hierarchical structure of gene modules to gain a broader functional view of the biological system. Hence, the proposed method can facilitate comprehensive and in-depth analysis of high throughput experimental data at the gene network level.
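
    The record does not include the SSIM implementation, but its core move, fusing several pairwise similarity matrices and clustering the result with affinity propagation, can be sketched with scikit-learn. The random matrices and the 50/50 weighting below are placeholders for real expression and GO semantic similarities, not the published method's actual inputs.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(0)
        n_genes = 50

        # Stand-ins for the real inputs: expression correlation and GO semantic similarity.
        expr_sim = np.corrcoef(rng.normal(size=(n_genes, 20)))
        go_sim = rng.uniform(0, 1, size=(n_genes, n_genes))
        go_sim = (go_sim + go_sim.T) / 2  # symmetrize

        # Integrate the two views with an (assumed) 50/50 weighting.
        combined = 0.5 * expr_sim + 0.5 * go_sim

        # Affinity propagation accepts a precomputed similarity matrix directly.
        labels = AffinityPropagation(affinity="precomputed", random_state=0).fit(combined).labels_
        print(f"{labels.max() + 1} modules found")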

  5. Interdisciplinary research and education at the biology-engineering-computer science interface: a perspective.

    Science.gov (United States)

    Tadmor, Brigitta; Tidor, Bruce

    2005-09-01

    Progress in the life sciences, including genome sequencing and high-throughput experimentation, offers an opportunity for understanding biology and medicine from a systems perspective. This 'new view', which complements the more traditional component-based approach, involves the integration of biological research with approaches from engineering disciplines and computer science. The result is more than a new set of technologies. Rather, it promises a fundamental reconceptualization of the life sciences based on the development of quantitative and predictive models to describe crucial processes. To achieve this change, learning communities are being formed at the interface of the life sciences, engineering and computer science. Through these communities, research and education will be integrated across disciplines and the challenges associated with multidisciplinary team-based science will be addressed.

  6. Physical integrity: the missing link in biological monitoring and TMDLs.

    Science.gov (United States)

    Asmus, Brenda; Magner, Joseph A; Vondracek, Bruce; Perry, Jim

    2009-12-01

    The Clean Water Act mandates that the chemical, physical, and biological integrity of our nation's waters be maintained and restored. Physical integrity has often been defined as physical habitat integrity, and as such, data collected during biological monitoring programs focus primarily on habitat quality. However, we argue that channel stability is a more appropriate measure of physical integrity and that channel stability is a foundational element of physical habitat integrity in low-gradient alluvial streams. We highlight assessment tools that could supplement stream assessments and the Total Maximum Daily Load stressor identification process: field surveys of bankfull cross-sections; longitudinal thalweg profiles; particle size distribution; and regionally calibrated, visual, stream stability assessments. Benefits of measuring channel stability include a more informed selection of reference or best attainable stream condition for an Index of Biotic Integrity, establishment of a baseline for monitoring changes in present and future condition, and indication of channel stability for investigations of chemical and biological impairments associated with sediment discontinuity and loss of habitat quality.

  7. Tav4SB: integrating tools for analysis of kinetic models of biological systems.

    Science.gov (United States)

    Rybiński, Mikołaj; Lula, Michał; Banasik, Paweł; Lasota, Sławomir; Gambin, Anna

    2012-04-05

    Progress in the modeling of biological systems strongly relies on the availability of specialized computer-aided tools. To that end, the Taverna Workbench eases integration of software tools for life science research and provides a common workflow-based framework for computational experiments in Biology. The Taverna services for Systems Biology (Tav4SB) project provides a set of new Web service operations, which extend the functionality of the Taverna Workbench in the domain of systems biology. Tav4SB operations allow you to perform numerical simulations or model checking of, respectively, deterministic or stochastic semantics of biological models. On top of this functionality, Tav4SB enables the construction of high-level experiments. As an illustration of the possibilities offered by our project we apply multi-parameter sensitivity analysis. To visualize the results of model analysis a flexible plotting operation is provided as well. Tav4SB operations are executed in a simple grid environment, integrating heterogeneous software such as Mathematica, PRISM and SBML ODE Solver. The user guide, contact information, full documentation of available Web service operations, workflows and other additional resources can be found at the Tav4SB project's Web page: http://bioputer.mimuw.edu.pl/tav4sb/. The Tav4SB Web service provides a set of integrated tools in a domain for which Web-based applications are still not as widely available as for other areas of computational biology. Moreover, we extend the dedicated hardware base for the computationally expensive task of simulating cellular models. Finally, we promote the standardization of models and experiments as well as accessibility and usability of remote services.

  8. Multilevel functional genomics data integration as a tool for understanding physiology: a network biology perspective.

    Science.gov (United States)

    Davidsen, Peter K; Turan, Nil; Egginton, Stuart; Falciani, Francesco

    2016-02-01

    The overall aim of physiological research is to understand how living systems function in an integrative manner. Consequently, the discipline of physiology has since its infancy attempted to link multiple levels of biological organization. Increasingly this has involved mathematical and computational approaches, typically to model a small number of components spanning several levels of biological organization. With the advent of "omics" technologies, which can characterize the molecular state of a cell or tissue (intended as the level of expression and/or activity of its molecular components), the number of molecular components we can quantify has increased exponentially. Paradoxically, the unprecedented amount of experimental data has made it more difficult to derive conceptual models underlying essential mechanisms regulating mammalian physiology. We present an overview of state-of-the-art methods currently used to identify biological networks underlying genome-wide responses. These are based on a data-driven approach that relies on advanced computational methods designed to "learn" biology from observational data. In this review, we illustrate an application of these computational methodologies using a case study integrating an in vivo model representing the transcriptional state of hypoxic skeletal muscle with a clinical study representing muscle wasting in chronic obstructive pulmonary disease patients. The broader application of these approaches to modeling multiple levels of biological data in the context of modern physiology is discussed. Copyright © 2016 the American Physiological Society.

  9. Computational Biology and the Limits of Shared Vision

    DEFF Research Database (Denmark)

    Carusi, Annamaria

    2011-01-01

    … of cases is necessary in order to gain a better perspective on social sharing of practices, and on what other factors this sharing is dependent upon. The article presents the case of currently emerging inter-disciplinary visual practices in the domain of computational biology, where the sharing of visual practices would be beneficial to the collaborations necessary for the research. Computational biology includes sub-domains where visual practices are coming to be shared across disciplines, and those where this is not occurring, and where the practices of others are resisted. A significant point …, its domain of study. Social practices alone are not sufficient to account for the shaping of evidence. The philosophy of Merleau-Ponty is introduced as providing an alternative framework for thinking of the complex inter-relations between all of these factors. This philosophy enables us…

  10. Milkweed Seed Dispersal: A Means for Integrating Biology and Physics.

    Science.gov (United States)

    Bisbee, Gregory D.; Kaiser, Cheryl A.

    1997-01-01

    Describes an activity that integrates biology and physics concepts by experimenting with the seed dispersal of common milkweed or similar wind-dispersed seeds. Student teams collect seeds and measure several parameters, review principles of trajectory motion, perform experiments, and graph data. Students examine the ideas of…

  11. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
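
    The quoted 10^12 figure checks out against the numbers given in the abstract (a back-of-the-envelope verification, in LaTeX notation):

        \frac{10^{16}\,\mathrm{ops/s} \;/\; \left(20\,\mathrm{W} \times 1200\,\mathrm{cm^3}\right)}
             {10^{15}\,\mathrm{ops/s} \;/\; \left(3\times10^{6}\,\mathrm{W} \times 1.5\times10^{9}\,\mathrm{cm^3}\right)}
        \;\approx\; \frac{4.2\times10^{11}}{2.2\times10^{-1}} \;\approx\; 2\times10^{12}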

  12. Computing paths and cycles in biological interaction graphs

    Directory of Open Access Journals (Sweden)

    von Kamp Axel

    2009-06-01

    Abstract Background Interaction graphs (signed directed graphs) provide an important qualitative modeling approach for Systems Biology. They enable the analysis of causal relationships in cellular networks and can even be useful for predicting qualitative aspects of systems dynamics. Fundamental issues in the analysis of interaction graphs are the enumeration of paths and cycles (feedback loops) and the calculation of shortest positive/negative paths. These computational problems have been discussed only to a minor extent in the context of Systems Biology and in particular the shortest signed paths problem requires algorithmic developments. Results We first review algorithms for the enumeration of paths and cycles and show that these algorithms are superior to a recently proposed enumeration approach based on elementary-modes computation. The main part of this work deals with the computation of shortest positive/negative paths, an NP-complete problem for which only very few algorithms are described in the literature. We propose extensions and several new algorithm variants for computing either exact results or approximations. Benchmarks with various concrete biological networks show that exact results can sometimes be obtained in networks with several hundred nodes. A class of even larger graphs can still be treated exactly by a new algorithm combining exhaustive and simple search strategies. For graphs where the computation of exact solutions becomes time-consuming or infeasible, we devised an approximative algorithm with polynomial complexity. Strikingly, in realistic networks (where a comparison with exact results was possible) this algorithm delivered results that are very close or equal to the exact values. This phenomenon can probably be attributed to the particular topology of cellular signaling and regulatory networks which contain a relatively low number of negative feedback loops. Conclusion The calculation of shortest positive
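
    For simple paths the shortest positive/negative path problem is NP-complete, as the abstract notes, but the walk relaxation is polynomial via a standard parity-layer construction: run breadth-first search over (node, parity-of-negative-edges) states. The sketch below illustrates only that textbook device, not the authors' algorithms, and the toy network is invented.

        from collections import deque

        def shortest_signed_walk(edges, source, target, want_sign):
            """BFS over (node, parity) states, where parity counts negative
            edges modulo 2: parity 0 = positive walk, 1 = negative walk.
            Returns the length of the shortest walk of the requested sign
            from source to target, or None. Note: this solves the walk
            relaxation; the simple-path variant is NP-complete."""
            adj = {}
            for u, v, sign in edges:  # sign is +1 (activation) or -1 (inhibition)
                adj.setdefault(u, []).append((v, sign))
            goal = (target, 0 if want_sign > 0 else 1)
            dist = {(source, 0): 0}
            queue = deque([(source, 0)])
            while queue:
                node, parity = queue.popleft()
                if (node, parity) == goal:
                    return dist[(node, parity)]
                for nxt, sign in adj.get(node, []):
                    state = (nxt, parity ^ (1 if sign < 0 else 0))
                    if state not in dist:
                        dist[state] = dist[(node, parity)] + 1
                        queue.append(state)
            return None

        # Toy network: A activates B, B activates C, A inhibits C.
        edges = [("A", "B", +1), ("B", "C", +1), ("A", "C", -1)]
        print(shortest_signed_walk(edges, "A", "C", want_sign=-1))  # -> 1
        print(shortest_signed_walk(edges, "A", "C", want_sign=+1))  # -> 2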

  13. Exploiting graphics processing units for computational biology and bioinformatics.

    Science.gov (United States)

    Payne, Joshua L; Sinnott-Armstrong, Nicholas A; Moore, Jason H

    2010-09-01

    Advances in the video gaming industry have led to the production of low-cost, high-performance graphics processing units (GPUs) that possess more memory bandwidth and computational capability than central processing units (CPUs), the standard workhorses of scientific computing. With the recent release of general-purpose GPUs and NVIDIA's GPU programming language, CUDA, graphics engines are being adopted widely in scientific computing applications, particularly in the fields of computational biology and bioinformatics. The goal of this article is to concisely present an introduction to GPU hardware and programming, aimed at the computational biologist or bioinformaticist. To this end, we discuss the primary differences between GPU and CPU architecture, introduce the basics of the CUDA programming language, and discuss important CUDA programming practices, such as the proper use of coalesced reads, data types, and memory hierarchies. We highlight each of these topics in the context of computing the all-pairs distance between instances in a dataset, a common procedure in numerous disciplines of scientific computing. We conclude with a runtime analysis of the GPU and CPU implementations of the all-pairs distance calculation. We show our final GPU implementation to outperform the CPU implementation by a factor of 1700.
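
    The all-pairs distance computation used as the running example is embarrassingly parallel, which is what makes it a good fit for GPUs. For orientation, here is a CPU-side NumPy version of the same calculation (an illustration of the data-parallel structure, not the article's CUDA code):

        import numpy as np

        def all_pairs_sq_euclidean(X):
            """Squared Euclidean distance between every pair of rows in X,
            via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b."""
            sq = np.einsum("ij,ij->i", X, X)           # per-row squared norms
            D = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
            np.maximum(D, 0.0, out=D)                  # clamp tiny negatives from round-off
            return D

        X = np.random.default_rng(0).normal(size=(1000, 32))
        D = all_pairs_sq_euclidean(X)
        print(D.shape)  # (1000, 1000)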

  14. Computer generation of integrands for Feynman parametric integrals

    International Nuclear Information System (INIS)

    Cvitanovic, Predrag

    1973-01-01

    TECO text editing language, available on PDP-10 computers, is used for the generation and simplification of Feynman integrals. This example shows that TECO can be a useful computational tool in complicated calculations where similar algebraic structures recur many times

  15. Integrating ICT with education: using computer games to enhance ...

    African Journals Online (AJOL)

    Integrating ICT with education: using computer games to enhance learning mathematics at undergraduate level. ... This research seeks to look into ways in which computer games as ICT tools can be used to ...

  16. Towards Integration of Biological and Physiological Functions at Multiple Levels

    Directory of Open Access Journals (Sweden)

    Taishin Nomura

    2010-12-01

    An aim of systems physiology today is to establish logical and quantitative bridges between phenomenological attributes of physiological entities such as cells and organs and physical attributes of biological entities, i.e., biological molecules, allowing us to describe and better understand physiological functions in terms of underlying biological functions. This article illustrates possible schemata that can be used for promoting systems physiology by integrating quantitative knowledge of biological and physiological functions at multiple levels of time and space with the use of information technology infrastructure. Emphasis is placed on systematic, modular, hierarchical, and standardized descriptions of mathematical models of the functions and the advantages of using them.

  17. Integrated pathway clusters with coherent biological themes for target prioritisation.

    Directory of Open Access Journals (Sweden)

    Yi-An Chen

    Prioritising candidate genes for further experimental characterisation is an essential, yet challenging task in biomedical research. One way of achieving this goal is to identify specific biological themes that are enriched within the gene set of interest to obtain insights into the biological phenomena under study. Biological pathway data have been particularly useful in identifying functional associations of genes and/or gene sets. However, biological pathway information as compiled in varied repositories often differs in scope and content, preventing a more effective and comprehensive characterisation of gene sets. Here we describe a new approach to constructing biologically coherent gene sets from pathway data in major public repositories and employing them for functional analysis of large gene sets. We first revealed significant overlaps in gene content between different pathways and then defined a clustering method based on the shared gene content and the similarity of gene overlap patterns. We established the biological relevance of the constructed pathway clusters using independent quantitative measures and we finally demonstrated the effectiveness of the constructed pathway clusters in comparative functional enrichment analysis of gene sets associated with diverse human diseases gathered from the literature. The pathway clusters and gene mappings have been integrated into the TargetMine data warehouse and are likely to provide a concise, manageable and biologically relevant means of functional analysis of gene sets and to facilitate candidate gene prioritisation.

  18. Integrating publicly-available data to generate computationally ...

    Science.gov (United States)

    The adverse outcome pathway (AOP) framework provides a way of organizing knowledge related to the key biological events that result in a particular health outcome. For the majority of environmental chemicals, the availability of curated pathways characterizing potential toxicity is limited. Methods are needed to assimilate large amounts of available molecular data and quickly generate putative AOPs for further testing and use in hazard assessment. A graph-based workflow was used to facilitate the integration of multiple data types to generate computationally-predicted (cp) AOPs. Edges between graph entities were identified through direct experimental or literature information or computationally inferred using frequent itemset mining. Data from the TG-GATEs and ToxCast programs were used to channel large-scale toxicogenomics information into a cpAOP network (cpAOPnet) of over 20,000 relationships describing connections between chemical treatments, phenotypes, and perturbed pathways measured by differential gene expression and high-throughput screening targets. Sub-networks of cpAOPs for a reference chemical (carbon tetrachloride, CCl4) and outcome (hepatic steatosis) were extracted using the network topology. Comparison of the cpAOP subnetworks to published mechanistic descriptions for both CCl4 toxicity and hepatic steatosis demonstrates that computational approaches can be used to replicate manually curated AOPs and identify pathway targets that lack genomic mar
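
    The frequent itemset mining step used to infer cpAOP edges can be illustrated with a toy sketch. The treatments, labels and support threshold below are invented; a real analysis would mine TG-GATEs/ToxCast-scale matrices with a dedicated implementation.

        from itertools import combinations
        from collections import Counter

        # Hypothetical per-treatment observations (differential genes + outcomes).
        treatments = {
            "CCl4_high": {"gene:Cyp2b1_up", "gene:Srebf1_up", "outcome:steatosis"},
            "CCl4_low":  {"gene:Cyp2b1_up", "outcome:steatosis"},
            "chem_X":    {"gene:Srebf1_up", "outcome:steatosis"},
            "chem_Y":    {"gene:Cyp2b1_up"},
        }

        min_support = 2  # an itemset must co-occur in at least 2 treatments
        pair_counts = Counter()
        for items in treatments.values():
            pair_counts.update(combinations(sorted(items), 2))

        # Frequent pairs become candidate cpAOP edges.
        for (a, b), n in pair_counts.items():
            if n >= min_support:
                print(f"{a} -- {b}  (support={n})")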

  19. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components and their biochemical reactions, and corresponding parameters. Automating the assembly and use of systems biology models is dependent upon data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that has been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.
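
    The workflows described in this record emit SBML. As a minimal illustration of what programmatic SBML assembly looks like, the sketch below uses the python-libsbml binding; it is not the Taverna workflow code, and the model content is a placeholder.

        import libsbml  # assumes the python-libsbml package is installed

        # Build a minimal SBML Level 3 Version 1 model skeleton.
        document = libsbml.SBMLDocument(3, 1)
        model = document.createModel()
        model.setId("toy_glycolysis")  # placeholder model id

        compartment = model.createCompartment()
        compartment.setId("cytosol")
        compartment.setConstant(True)
        compartment.setSize(1.0)

        species = model.createSpecies()
        species.setId("glucose")
        species.setCompartment("cytosol")
        species.setInitialConcentration(5.0)
        species.setConstant(False)
        species.setBoundaryCondition(False)
        species.setHasOnlySubstanceUnits(False)

        # Serialize the document to an SBML string.
        print(libsbml.writeSBMLToString(document))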

  20. Integrative multicellular biological modeling: a case study of 3D epidermal development using GPU algorithms

    Directory of Open Access Journals (Sweden)

    Christley Scott

    2010-08-01

    Abstract Background Simulation of sophisticated biological models requires considerable computational power. These models typically integrate together numerous biological phenomena such as spatially-explicit heterogeneous cells, cell-cell interactions, cell-environment interactions and intracellular gene networks. The recent advent of programming for graphical processing units (GPUs) opens up the possibility of developing more integrative, detailed and predictive biological models while at the same time decreasing the computational cost to simulate those models. Results We construct a 3D model of epidermal development and provide a set of GPU algorithms that executes significantly faster than sequential central processing unit (CPU) code. We provide a parallel implementation of the subcellular element method for individual cells residing in a lattice-free spatial environment. Each cell in our epidermal model includes an internal gene network, which integrates cellular interaction of Notch signaling together with environmental interaction of basement membrane adhesion, to specify cellular state and behaviors such as growth and division. We take a pedagogical approach to describing how modeling methods are efficiently implemented on the GPU, including memory layout of data structures and functional decomposition. We discuss various programmatic issues and provide a set of design guidelines for GPU programming that are instructive to avoid common pitfalls as well as to extract performance from the GPU architecture. Conclusions We demonstrate that GPU algorithms represent a significant technological advance for the simulation of complex biological models. We further demonstrate with our epidermal model that the integration of multiple complex modeling methods for heterogeneous multicellular biological processes is both feasible and computationally tractable using this new technology. We hope that the provided algorithms and source code will be a

  1. Integration of process computer systems to Cofrentes NPP

    International Nuclear Information System (INIS)

    Saettone Justo, A.; Pindado Andres, R.; Buedo Jimenez, J.L.; Jimenez Fernandez-Sesma, A.; Delgado Muelas, J.A.

    1997-01-01

    The existence of three different process computer systems at Cofrentes NPP and the ageing of two of them have led to the need for their integration into a single real-time computer system, known as the Integrated ERIS-Computer System (SIEC), which covers the functionality of the three systems: Process Computer (PC), Emergency Response Information System (ERIS) and Nuclear Calculation Computer (OCN). The paper describes the integration project developed, which has essentially consisted of the integration of the PC, ERIS and OCN databases into a single database, the migration of programs from the old process computer onto the new SIEC hardware-software platform and the installation of a communications programme to transmit all necessary data for OCN programs from the SIEC computer, which in the new configuration is responsible for managing the databases of the whole system. (Author)

  2. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision

    OpenAIRE

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned from CDBNs on the source tasks for generic purpose to the object tracking tasks using only limited amount of tra...

  3. A framework to establish credibility of computational models in biology.

    Science.gov (United States)

    Patterson, Eann A; Whelan, Maurice P

    2017-10-01

    Computational models in biology and biomedical science are often constructed to aid people's understanding of phenomena or to inform decisions with socioeconomic consequences. Model credibility is the willingness of people to trust a model's predictions and is often difficult to establish for computational biology models. A 3 × 3 matrix has been proposed to allow such models to be categorised with respect to their testability and epistemic foundation in order to guide the selection of an appropriate process of validation to supply evidence to establish credibility. Three approaches to validation are identified that can be deployed depending on whether a model is deemed untestable, testable or lies somewhere in between. In the latter two cases, the validation process involves the quantification of uncertainty which is a key output. The issues arising due to the complexity and inherent variability of biological systems are discussed and the creation of 'digital twins' proposed as a means to alleviate the issues and provide a more robust, transparent and traceable route to model credibility and acceptance. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Computer Integration into the Early Childhood Curriculum

    Science.gov (United States)

    Mohammad, Mona; Mohammad, Heyam

    2012-01-01

    Navin and Mark are playing at the computer in their preschool classroom. Like the rest of their classmates, these four-year-old children fearlessly experiment with the computer as they navigate through the art program they are using. As they draw and paint on the computer screen, Mark and Navin talk about their creation. "Let's try the stamps" insists…

  5. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affects the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  6. West-Life, Tools for Integrative Structural Biology

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Structural biology is the part of molecular biology that focuses on determining the structure of macromolecules inside living cells and cell membranes. As macromolecules determine most of the functions of cells, structural knowledge is very useful for further research in metabolism and physiology, with applications in pharmacology and elsewhere. As macromolecules are too small to be observed directly by light microscope, other methods are used to determine their structure, including nuclear magnetic resonance (NMR), X-ray crystallography, cryo-electron microscopy and others. Each method has its advantages and disadvantages in terms of availability, sample preparation, and resolution. The West-Life project has the ambition to facilitate an integrative approach using the multiple techniques mentioned above. As there are already a lot of software tools to process data produced by these techniques, the challenge is to integrate them in a way that they can be used by experts in one technique who are not experts in another. One product ...

  7. Semiconductor Devices Inspired By and Integrated With Biology

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, John [University of Illinois

    2012-04-25

    Biology is curved, soft and elastic; silicon wafers are not. Semiconductor technologies that can bridge this gap in form and mechanics will create new opportunities in devices that adopt biologically inspired designs or require intimate integration with the human body. This talk describes the development of ideas for electronics that offer the performance of state-of-the-art, wafer-based systems but with the mechanical properties of a rubber band. We explain the underlying materials science and mechanics of these approaches, and illustrate their use in (1) bio-integrated, ‘tissue-like’ electronics with unique capabilities for mapping cardiac and neural electrophysiology, and (2) bio-inspired, ‘eyeball’ cameras with exceptional imaging properties enabled by curvilinear, Petzval designs.

  8. Computer modeling in developmental biology: growing today, essential tomorrow.

    Science.gov (United States)

    Sharpe, James

    2017-12-01

    D'Arcy Thompson was a true pioneer, applying mathematical concepts and analyses to the question of morphogenesis over 100 years ago. The centenary of his famous book, On Growth and Form, is therefore a great occasion on which to review the types of computer modeling now being pursued to understand the development of organs and organisms. Here, I present some of the latest modeling projects in the field, covering a wide range of developmental biology concepts, from molecular patterning to tissue morphogenesis. Rather than classifying them according to scientific question, or scale of problem, I focus instead on the different ways that modeling contributes to the scientific process and discuss the likely future of modeling in developmental biology. © 2017. Published by The Company of Biologists Ltd.

  9. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers, among others.

  10. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, and experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  11. Parallel computing and molecular dynamics of biological membranes

    International Nuclear Information System (INIS)

    La Penna, G.; Letardi, S.; Minicozzi, V.; Morante, S.; Rossi, G.C.; Salina, G.

    1998-01-01

    In this talk I discuss the general question of the portability of molecular dynamics codes for diffusive systems on parallel computers of the APE family. The intrinsic single precision of the platforms available today does not seem to affect the numerical accuracy of the simulations, while the absence of integer addressing from CPU to individual nodes puts strong constraints on possible programming strategies. Liquids can be satisfactorily simulated using the ''systolic'' method. For more complex systems, like the biological ones in which we are ultimately interested, the ''domain decomposition'' approach is best suited to beat the quadratic growth of the inter-molecular computational time with the number of atoms of the system. The promising perspectives of using this strategy for extensive simulations of lipid bilayers are briefly reviewed. (orig.)
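
    The ''domain decomposition'' strategy favoured above rests on binning atoms into spatial cells no smaller than the interaction cutoff, so that each atom's partners lie in its own or a neighbouring cell and the pair search scales linearly before cells are distributed across processors. A schematic serial kernel of the binning step, with invented parameters:

        import numpy as np

        def build_cell_lists(positions, box, cutoff):
            """Assign each particle to a cubic cell of side >= cutoff, so that
            all interaction partners within the cutoff lie in the particle's
            own cell or one of the 26 neighbouring cells: an O(N) pair search
            instead of O(N^2)."""
            n_cells = max(1, int(box // cutoff))
            side = box / n_cells
            cells = {}
            for i, pos in enumerate(positions):
                key = tuple((pos // side).astype(int) % n_cells)  # periodic wrap
                cells.setdefault(key, []).append(i)
            return cells, n_cells

        positions = np.random.default_rng(0).uniform(0, 30.0, size=(5000, 3))
        cells, n = build_cell_lists(positions, box=30.0, cutoff=2.5)
        print(f"{n}^3 cells, mean occupancy {5000 / n**3:.1f}")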

  12. Integrated Optoelectronic Networks for Application-Driven Multicore Computing

    Science.gov (United States)

    2017-05-08

    AFRL-AFOSR-VA-TR-2017-0102. Integrated Optoelectronic Networks for Application-Driven Multicore Computing; Sudeep Pasricha, Colorado State University; grant FA9550-13-1-0110. … and supportive materials with innovative architectural designs that integrate these components according to system-wide application needs.

  13. Computer science in Dutch secondary education: independent or integrated?

    NARCIS (Netherlands)

    van der Sijde, Peter; Doornekamp, B.G.

    1992-01-01

    Nowadays, in Dutch secondary education, computer science is integrated within school subjects. About ten years ago computer science was considered an independent subject, but in the mid-1980s this idea changed. In our study we investigated whether the objectives of teaching computer science as an

  14. Automated computation of one-loop integrals in massless theories

    International Nuclear Information System (INIS)

    Hameren, A. van; Vollinga, J.; Weinzierl, S.

    2005-01-01

    We consider one-loop tensor and scalar integrals, which occur in a massless quantum field theory, and we report on the implementation into a numerical program of an algorithm for the automated computation of these one-loop integrals. The number of external legs of the loop integrals is not restricted. All calculations are done within dimensional regularization. (orig.)
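
    For orientation, a standard closed form in this class of integrals is the massless one-loop bubble with arbitrary propagator powers (a textbook result quoted here for context, not taken from the paper); with D = 4 - 2ε, its divergences appear as poles in ε:

        \int \frac{d^{D}k}{i\pi^{D/2}}\,
             \frac{1}{(-k^{2})^{a}\left(-(k+p)^{2}\right)^{b}}
          \;=\; (-p^{2})^{D/2-a-b}\;
             \frac{\Gamma(D/2-a)\,\Gamma(D/2-b)\,\Gamma(a+b-D/2)}
                  {\Gamma(a)\,\Gamma(b)\,\Gamma(D-a-b)}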

  15. A systems approach to integrative biology: an overview of statistical methods to elucidate association and architecture.

    Science.gov (United States)

    Ciaccio, Mark F; Finkle, Justin D; Xue, Albert Y; Bagheri, Neda

    2014-07-01

    An organism's ability to maintain a desired physiological response relies extensively on how cellular and molecular signaling networks interpret and react to environmental cues. The capacity to quantitatively predict how networks respond to a changing environment by modifying signaling regulation and phenotypic responses will help inform and predict the impact of a changing global environment on organisms and ecosystems. Many computational strategies have been developed to resolve cue-signal-response networks. However, selecting a strategy that answers a specific biological question requires knowledge both of the type of data being collected, and of the strengths and weaknesses of different computational regimes. We broadly explore several computational approaches, and we evaluate their accuracy in predicting a given response. Specifically, we describe how statistical algorithms can be used in the context of integrative and comparative biology to elucidate the genomic, proteomic, and/or cellular networks responsible for robust physiological response. As a case study, we apply this strategy to a dataset of quantitative levels of protein abundance from the mussel, Mytilus galloprovincialis, to uncover the temperature-dependent signaling network. © The Author 2014. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  16. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  17. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  18. Integration of Cloud resources in the LHCb Distributed Computing

    CERN Document Server

    Ubeda Garcia, Mario; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific DIRAC extension (LHCbDirac) as an interware for its Distributed Computing. So far, it has seamlessly integrated Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack); it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keepin...

  19. Graduate Curriculum for Biological Information Specialists: A Key to Integration of Scale in Biology

    Directory of Open Access Journals (Sweden)

    Carole L. Palmer

    2007-12-01

    Scientific data problems do not stand in isolation. They are part of a larger set of challenges associated with the escalation of scientific information and changes in scholarly communication in the digital environment. Biologists in particular are generating enormous sets of data at a high rate, and new discoveries in the biological sciences will increasingly depend on the integration of data across multiple scales. This work will require new kinds of information expertise in key areas. To build this professional capacity we have developed two complementary educational programs: a Biological Information Specialist (BIS) master's degree and a concentration in Data Curation (DC). We believe that BISs will be central in the development of cyberinfrastructure and information services needed to facilitate interdisciplinary and multi-scale science. Here we present three sample cases from our current research projects to illustrate areas in which we expect information specialists to make important contributions to biological research practice.

  20. Community-driven computational biology with Debian Linux.

    Science.gov (United States)

    Möller, Steffen; Krabbenhöft, Hajo Nils; Tille, Andreas; Paleino, David; Williams, Alan; Wolstencroft, Katy; Goble, Carole; Holland, Richard; Belhachemi, Dominique; Plessy, Charles

    2010-12-21

    The Open Source movement and its technologies are popular in the bioinformatics community because they provide freely available tools and resources for research. In order to feed the steady demand for updates on software and associated data, a service infrastructure is required for sharing and providing these tools to heterogeneous computing environments. The Debian Med initiative provides ready and coherent software packages for medical informatics and bioinformatics. These packages can be used together in Taverna workflows via the UseCase plugin to manage execution on local or remote machines. If such packages are available in cloud computing environments, the underlying hardware and the analysis pipelines can be shared along with the software. Debian Med closes the gap between developers and users. It provides a simple method for offering new releases of software and data resources, thus provisioning a local infrastructure for computational biology. For geographically distributed teams it can ensure they are working on the same versions of tools, in the same conditions. This contributes to the world-wide networking of researchers.

  1. Computational brain models: Advances from system biology and future challenges

    Directory of Open Access Journals (Sweden)

    George E. Barreto

    2015-02-01

    Full Text Available Computational brain models focused on the interactions between neurons and astrocytes, modeled via metabolic reconstructions, are reviewed. The large source of experimental data provided by the -omics techniques and the advance/application of computational and data-management tools have been fundamental, for instance, in the understanding of the crosstalk between these cells, the key neuroprotective mechanisms mediated by astrocytes in specific metabolic scenarios (1) and the identification of biomarkers for neurodegenerative diseases (2,3). However, the modeling of these interactions demands a clear view of the metabolic and signaling pathways implicated, but most of them are controversial and still under evaluation (4). Hence, to gain insight into the complexity of these interactions, a current view of the main pathways implicated in neuron-astrocyte communication processes has been assembled from recent experimental reports and reviews. Furthermore, target problems, limitations and main conclusions have been identified from metabolic models of the brain reported since 2010. Finally, key aspects to take into account in the development of a computational model of the brain, and topics that could be approached from a systems biology perspective in future research, are highlighted.

  2. Integrative biology approach identifies cytokine targeting strategies for psoriasis.

    Science.gov (United States)

    Perera, Gayathri K; Ainali, Chrysanthi; Semenova, Ekaterina; Hundhausen, Christian; Barinaga, Guillermo; Kassen, Deepika; Williams, Andrew E; Mirza, Muddassar M; Balazs, Mercedesz; Wang, Xiaoting; Rodriguez, Robert Sanchez; Alendar, Andrej; Barker, Jonathan; Tsoka, Sophia; Ouyang, Wenjun; Nestle, Frank O

    2014-02-12

    Cytokines are critical checkpoints of inflammation. The treatment of human autoimmune disease has been revolutionized by targeting inflammatory cytokines as key drivers of disease pathogenesis. Despite this, there exist numerous pitfalls when translating preclinical data into the clinic. We developed an integrative biology approach combining human disease transcriptome data sets with clinically relevant in vivo models in an attempt to bridge this translational gap. We chose interleukin-22 (IL-22) as a model cytokine because of its potentially important proinflammatory role in epithelial tissues. Injection of IL-22 into normal human skin grafts produced marked inflammatory skin changes resembling human psoriasis. Injection of anti-IL-22 monoclonal antibody in a human xenotransplant model of psoriasis, developed specifically to test potential therapeutic candidates, efficiently blocked skin inflammation. Bioinformatic analysis integrating both the IL-22 and anti-IL-22 cytokine transcriptomes and mapping them onto a psoriasis disease gene coexpression network identified key cytokine-dependent hub genes. Using knockout mice and small-molecule blockade, we show that one of these hub genes, the so far unexplored serine/threonine kinase PIM1, is a critical checkpoint for human skin inflammation and a potential future therapeutic target in psoriasis. Using in silico integration of human data sets and biological models, we were able to identify a new target in the treatment of psoriasis.

  3. Lean Big Data integration in systems biology and systems pharmacology.

    Science.gov (United States)

    Ma'ayan, Avi; Rouillard, Andrew D; Clark, Neil R; Wang, Zichen; Duan, Qiaonan; Kou, Yan

    2014-09-01

    Data sets from recent large-scale projects can be integrated into one unified puzzle that can provide new insights into how drugs and genetic perturbations applied to human cells are linked to whole-organism phenotypes. Data that report how drugs affect the phenotype of human cell lines and how drugs induce changes in gene and protein expression in human cell lines can be combined with knowledge about human disease, side effects induced by drugs, and mouse phenotypes. Such data integration efforts can be achieved through the conversion of data from the various resources into single-node-type networks, gene-set libraries, or multipartite graphs. This approach can lead us to the identification of more relationships between genes, drugs, and phenotypes, as well as to benchmark computational and experimental methods. Overall, this lean 'Big Data' integration strategy will bring us closer toward the goal of realizing personalized medicine. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Convolutional Deep Belief Networks for Single-Cell/Object Tracking in Computational Biology and Computer Vision.

    Science.gov (United States)

    Zhong, Bineng; Pan, Shengnan; Zhang, Hongbo; Wang, Tian; Du, Jixiang; Chen, Duansheng; Cao, Liujuan

    2016-01-01

    In this paper, we propose a deep architecture to dynamically learn the most discriminative features from data for both single-cell and object tracking in computational biology and computer vision. Firstly, the discriminative features are automatically learned via a convolutional deep belief network (CDBN). Secondly, we design a simple yet effective method to transfer features learned by CDBNs on generic source tasks to object tracking tasks, using only a limited amount of training data. Finally, to alleviate the tracker drifting problem caused by model updating, we jointly consider three different types of positive samples. Extensive experiments validate the robustness and effectiveness of the proposed method.

  5. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption; others have a low level, and most have no EMR at all. Cloud computing is an emerging technology that has been used in other industries with great success. Despite its attractive features, cloud computing has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating Electronic Health Records (EHR). The proposed system applies cloud computing technology to the EHR system to provide a comprehensive, integrated EHR environment.

  6. Fundamentals of power integrity for computer platforms and systems

    CERN Document Server

    DiBene, Joseph T

    2014-01-01

    An all-encompassing text that focuses on the fundamentals of power integrity Power integrity is the study of power distribution from the source to the load and the system level issues that can occur across it. For computer systems, these issues can range from inside the silicon to across the board and may egress into other parts of the platform, including thermal, EMI, and mechanical. With a focus on computer systems and silicon level power delivery, this book sheds light on the fundamentals of power integrity, utilizing the author's extensive background in the power integrity industry and un...

  7. Scientific computing vol III - approximation and integration

    CERN Document Server

    Trangenstein, John A

    2017-01-01

    This is the third of three volumes providing a comprehensive presentation of the fundamentals of scientific computing. This volume discusses topics that depend more on calculus than linear algebra, in order to prepare the reader for solving differential equations. This book and its companions show how to determine the quality of computational results, and how to measure the relative efficiency of competing methods. Readers learn how to determine the maximum attainable accuracy of algorithms, and how to select the best method for computing problems. This book also discusses programming in several languages, including C++, Fortran and MATLAB. There are 90 examples, 200 exercises, 36 algorithms, 40 interactive JavaScript programs, 91 references to software programs and 1 case study. Topics are introduced with goals, literature references and links to public software. There are descriptions of the current algorithms in GSLIB and MATLAB. This book could be used for a second course in numerical methods, for either ...

  8. Broadening Participation in the Society for Integrative and Comparative Biology.

    Science.gov (United States)

    Wilga, Cheryl A D; Nishiguchi, Michele; Tsukimura, Brian

    2017-07-01

    The goal of the Society for Integrative and Comparative Biology's Broadening Participation Committee (SICB BPC) is to increase the number of underrepresented group (URG) members within the society and to expand their capabilities as future researchers and leaders within SICB. Our short-term 10-year goal was to increase the recruitment and retention of URG members in the society by 10%. Our long-term 25-year goal is to increase the membership of URGs in the society through recruitment and retention until the membership demographic mirrors that of the US Census. Our plans to accomplish this included establishment of a formal standing committee, establishment of a moderate budget to support BPC activities, hosting professional development workshops, hosting diversity and mentor socials, and obtaining grant funds to supplement our budget. This paper documents broadening participation activities in the society, discusses the effectiveness of these activities, and evaluates BPC goals after 5 years of targeted funded activities. Over the past 5 years, the number of URG members rose by 5.2% to a total of 16.2%, members who report ethnicity and gender increased by 25.2% and 18%, respectively, and the number of members attending BPC activities increased to 33% by 2016. SICB has made significant advances in broadening participation, not only through increased expenditures, but also with a commitment by its members and leadership to increase diversity. Most members realize that increasing diversity will both improve the Society's ability to develop different approaches to tackling problems within integrative biology, and help solve larger global issues that are evident throughout science and technology fields. In addition, having URG members as part of the executive committee would provide other URG members with role models within the society, as well as give them a voice in the leadership that represents diversity and inclusion for all scientists. © The Author 2017. Published by

  9. Impact of Interdisciplinary Undergraduate Research in mathematics and biology on the development of a new course integrating five STEM disciplines.

    Science.gov (United States)

    Caudill, Lester; Hill, April; Hoke, Kathy; Lipan, Ovidiu

    2010-01-01

    Funded by innovative programs at the National Science Foundation and the Howard Hughes Medical Institute, University of Richmond faculty in biology, chemistry, mathematics, physics, and computer science teamed up to offer first- and second-year students the opportunity to contribute to vibrant, interdisciplinary research projects. The result was not only good science but also good science that motivated and informed course development. Here, we describe four recent undergraduate research projects involving students and faculty in biology, physics, mathematics, and computer science and how each contributed in significant ways to the conception and implementation of our new Integrated Quantitative Science course, a course for first-year students that integrates the material in the first course of the major in each of biology, chemistry, mathematics, computer science, and physics.

  10. Hardware for computing the integral image

    OpenAIRE

    Fernández-Berni, J.; Rodríguez-Vázquez, Ángel; Río, Rocío del; Carmona-Galán, R.

    2015-01-01

    The present invention, as expressed in this descriptive memory, consists of mixed-signal hardware for computing the integral image at the focal plane by means of an array of basic sensing-processing cells, whose interconnection can be reconfigured through peripheral circuitry. This makes possible a very efficient implementation of a processing task that is very useful in computer vision, namely the computation of the integral image, in scenarios such as monit...

  11. Paradox of integration-A computational model

    Science.gov (United States)

    Krawczyk, Małgorzata J.; Kułakowski, Krzysztof

    2017-02-01

    The paradoxical aspect of integration of a social group has been highlighted by Blau (1964). During the integration process, the group members simultaneously compete for social status and play the role of the audience. Here we show that when the competition prevails over the desire of approval, a sharp transition breaks all friendly relations. However, as was described by Blau, people with high status are inclined to bother more with acceptance of others; this is achieved by praising others and revealing her/his own weak points. In our model, this action smooths the transition and improves interpersonal relations.

  12. Integrated MEMS/NEMS Resonant Cantilevers for Ultrasensitive Biological Detection

    Directory of Open Access Journals (Sweden)

    Xinxin Li

    2009-01-01

    Full Text Available The paper reviews recent research carried out at the Chinese Academy of Sciences, with achievements on integrated resonant microcantilever sensors. In the resonant cantilevers, the self-sensing elements and resonance-exciting elements are both top-down integrated with silicon micromachining techniques. Considerable effort is focused on optimization of the resonance mode and sensing structure for improvement of sensitivity. On the other hand, to make the micro-cantilevers specifically sensitive to bio/chemical molecules, sensing materials are developed and modified on the cantilever surface with a self-assembled monolayer (SAM) based bottom-up construction and surface functionalization. To improve the selectivity of the sensors and suppress environmental noise, multiple and localized surface modifications are developed. The achieved volume production capability and satisfactory detection resolution for the trace-level biological antigen alpha-fetoprotein (AFP) give the micro-cantilever sensors great promise for rapid and high-resolution detection.

  13. Integrating Computer Concepts into Principles of Accounting.

    Science.gov (United States)

    Beck, Henry J.; Parrish, Roy James, Jr.

    A package of instructional materials for an undergraduate principles of accounting course at Danville Community College was developed based upon the following assumptions: (1) the principles of accounting student does not need to be able to write computer programs; (2) computerized accounting concepts should be presented in this course; (3)…

  14. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)

    Integration of case study approach, project design and computer modeling in managerial accounting education ... Journal of Fundamental and Applied Sciences ... in the Laboratory of Management Accounting and Controlling Systems at the ...

  15. Microwave integrated circuit mask design, using computer aided microfilm techniques

    Energy Technology Data Exchange (ETDEWEB)

    Reymond, J.M.; Batliwala, E.R.; Ajose, S.O.

    1977-01-01

    This paper examines the possibility of using a computer interfaced with a precision film C.R.T. information retrieval system, to produce photomasks suitable for the production of microwave integrated circuits.

  16. Integrated Computational Material Engineering Technologies for Additive Manufacturing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — QuesTek Innovations, a pioneer in Integrated Computational Materials Engineering (ICME) and a Tibbetts Award recipient, is teaming with University of Pittsburgh,...

  17. Complex network problems in physics, computer science and biology

    Science.gov (United States)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that some problems are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well-known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set, and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe

  18. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals
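
    To make the structure of such an iterated scheme concrete, the following Python sketch (a toy illustration with an invented smooth integrand, not the QUADPACK/PARINT code) parallelizes the outer level of a two-dimensional iterated quadrature with a process pool, so that each outer node triggers an independent inner adaptive integration:

        # Iterated adaptive quadrature with a parallel outer level (toy sketch).
        from functools import partial
        from multiprocessing import Pool

        from scipy.integrate import quad


        def f(x, y):
            # Hypothetical smooth stand-in for a non-singular loop integrand.
            return 1.0 / (1.0 + x * y) ** 2


        def inner(x, a, b):
            # Inner adaptive integration over y for a fixed outer node x.
            value, _err = quad(partial(f, x), a, b)
            return value


        def outer(a, b, n_outer=201, workers=4):
            # Outer level: independent inner integrals evaluated in parallel,
            # combined here with a simple trapezoidal rule over the outer axis.
            h = (b - a) / (n_outer - 1)
            xs = [a + i * h for i in range(n_outer)]
            with Pool(workers) as pool:
                vals = pool.map(partial(inner, a=a, b=b), xs)
            return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))


        if __name__ == "__main__":
            print(outer(0.0, 1.0))  # exact value is ln 2 ~ 0.6931 for this integrand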

  19. Computer-integrated electric-arc melting process control system

    OpenAIRE

    Дёмин, Дмитрий Александрович

    2014-01-01

    Developing common principles for completing melting-process automation systems with hardware, and creating on their basis rational variants of computer-integrated electric-arc melting control systems, is a pressing task, since it allows a comprehensive approach to the issue of modernizing the melting sections of workshops. This approach makes it possible to form the computer-integrated electric-arc furnace control system as part of a queuing system "electric-arc furnace - foundry conveyor" and to consider, when taking ...

  20. Integrating Network Management for Cloud Computing Services

    Science.gov (United States)

    2015-06-01


  1. An integrated introduction to computer graphics and geometric modeling

    CERN Document Server

    Goldman, Ronald

    2009-01-01

    … this book may be the first book on geometric modelling that also covers computer graphics. In addition, it may be the first book on computer graphics that integrates a thorough introduction to freeform curves and surfaces and to the mathematical foundations for computer graphics. … the book is well suited for an undergraduate course. … The entire book is very well presented and obviously written by a distinguished and creative researcher and educator. It certainly is a textbook I would recommend. …-Computer-Aided Design, 42, 2010… Many books concentrate on computer programming and soon beco

  2. Integrating Cloud-Computing-Specific Model into Aircraft Design

    Science.gov (United States)

    Zhimin, Tian; Qi, Lin; Guangwen, Yang

    Cloud Computing is becoming increasingly relevant, as it will enable companies involved in spreading this technology to open the door to Web 3.0. The new categories of services introduced will slowly replace many types of computational resources currently used. In this perspective, grid computing, the basic element for the large-scale supply of cloud services, will play a fundamental role in defining how those services will be provided. This paper tries to integrate a cloud-computing-specific model into aircraft design. The work has achieved good results in sharing licenses of large-scale and expensive software, such as CFD (Computational Fluid Dynamics), UG, CATIA, and so on.

  3. Integration of ecological-biological thresholds in conservation decision making.

    Science.gov (United States)

    Mavrommati, Georgia; Bithas, Kostas; Borsuk, Mark E; Howarth, Richard B

    2016-12-01

    In the Anthropocene, coupled human and natural systems dominate and only a few natural systems remain relatively unaffected by human influence. On the one hand, conservation criteria based on areas of minimal human impact are not relevant to much of the biosphere. On the other hand, conservation criteria based on economic factors are problematic with respect to their ability to arrive at operational indicators of well-being that can be applied in practice over multiple generations. Coupled human and natural systems are subject to economic development which, under current management structures, tends to affect natural systems and cross planetary boundaries. Hence, designing and applying conservation criteria applicable in real-world systems where human and natural systems need to interact and sustainably coexist is essential. By recognizing the criticality of satisfying basic needs as well as the great uncertainty over the needs and preferences of future generations, we sought to incorporate conservation criteria based on minimal human impact into economic evaluation. These criteria require the conservation of environmental conditions such that the opportunity for intergenerational welfare optimization is maintained. Toward this end, we propose the integration of ecological-biological thresholds into decision making and use as an example the planetary-boundaries approach. Both conservation scientists and economists must be involved in defining operational ecological-biological thresholds that can be incorporated into economic thinking and reflect the objectives of conservation, sustainability, and intergenerational welfare optimization. © 2016 Society for Conservation Biology.

  4. Computer integration in the curriculum: promises and problems

    NARCIS (Netherlands)

    Plomp, T.; van den Akker, Jan

    1988-01-01

    This discussion of the integration of computers into the curriculum begins by reviewing the results of several surveys conducted in the Netherlands and the United States which provide insight into the problems encountered by schools and teachers when introducing computers in education. Case studies

  5. Computation of Surface Integrals of Curl Vector Fields

    Science.gov (United States)

    Hu, Chenglie

    2007-01-01

    This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
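
    For reference, the identity the technique rests on is Stokes' theorem, which trades the flux of a curl field through a surface S for the circulation of F around the boundary curve of S; since any surface sharing that boundary yields the same flux, the surface may also be swapped for a simpler one:

        \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} = \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}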

  6. Integrating Computational Chemistry into a Course in Classical Thermodynamics

    Science.gov (United States)

    Martini, Sheridan R.; Hartzell, Cynthia J.

    2015-01-01

    Computational chemistry is commonly addressed in the quantum mechanics course of undergraduate physical chemistry curricula. Since quantum mechanics traditionally follows the thermodynamics course, there is a lack of curricula relating computational chemistry to thermodynamics. A method integrating molecular modeling software into a semester long…

  7. A specialized ODE integrator for the efficient computation of parameter sensitivities

    Directory of Open Access Journals (Sweden)

    Gonnet Pedro

    2012-05-01

    Full Text Available Abstract Background Dynamic mathematical models in the form of systems of ordinary differential equations (ODEs play an important role in systems biology. For any sufficiently complex model, the speed and accuracy of solving the ODEs by numerical integration is critical. This applies especially to systems identification problems where the parameter sensitivities must be integrated alongside the system variables. Although several very good general purpose ODE solvers exist, few of them compute the parameter sensitivities automatically. Results We present a novel integration algorithm that is based on second derivatives and contains other unique features such as improved error estimates. These features allow the integrator to take larger time steps than other methods. In practical applications, i.e. systems biology models of different sizes and behaviors, the method competes well with established integrators in solving the system equations, and it outperforms them significantly when local parameter sensitivities are evaluated. For ease-of-use, the solver is embedded in a framework that automatically generates the integrator input from an SBML description of the system of interest. Conclusions For future applications, comparatively ‘cheap’ parameter sensitivities will enable advances in solving large, otherwise computationally expensive parameter estimation and optimization problems. More generally, we argue that substantially better computational performance can be achieved by exploiting characteristics specific to the problem domain; elements of our methods such as the error estimation could find broader use in other, more general numerical algorithms.
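
    The underlying idea, forward sensitivity analysis, is to integrate the sensitivity s(t) = dx/dp alongside the state x(t). As a minimal sketch chosen here for illustration (a generic first-order scheme, not the second-derivative integrator presented in the paper): for logistic growth dx/dt = p*x*(1-x), differentiating the right-hand side with respect to p gives ds/dt = p*(1-2x)*s + x*(1-x), and both equations can be handed to any standard solver:

        # Forward sensitivity analysis for dx/dt = p*x*(1-x) (illustrative sketch).
        from scipy.integrate import solve_ivp


        def rhs(t, z, p):
            x, s = z  # state x and its parameter sensitivity s = dx/dp
            dx = p * x * (1.0 - x)
            # ds/dt = (d rhs / dx) * s + (d rhs / dp)
            ds = p * (1.0 - 2.0 * x) * s + x * (1.0 - x)
            return [dx, ds]


        p = 1.5
        sol = solve_ivp(rhs, (0.0, 5.0), [0.1, 0.0], args=(p,), rtol=1e-8, atol=1e-10)
        print(sol.y[0, -1], sol.y[1, -1])  # x(5) and dx(5)/dp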

  8. An algorithm of computing inhomogeneous differential equations for definite integrals

    OpenAIRE

    Nakayama, Hiromasa; Nishiyama, Kenta

    2010-01-01

    We give an algorithm to compute inhomogeneous differential equations for definite integrals with parameters. The algorithm is based on the integration algorithm for $D$-modules by Oaku. The main tool in the algorithm is the Gröbner basis method in the ring of differential operators.
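
    As a small worked illustration of the kind of output such an algorithm produces (this example is ours, chosen for simplicity, and happens to be homogeneous): the parametric definite integral

        I(t) = \int_0^\infty e^{-t x^2}\, dx = \frac{1}{2}\sqrt{\frac{\pi}{t}}

    satisfies the differential equation

        2t\, I'(t) + I(t) = 0,

    as one checks by differentiating under the integral sign; the algorithm of the paper derives such equations systematically, including the inhomogeneous right-hand sides that can arise from boundary terms of finite integration domains.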

  9. Integrating Computational Thinking into Technology and Engineering Education

    Science.gov (United States)

    Hacker, Michael

    2018-01-01

    Computational Thinking (CT) is being promoted as "a fundamental skill used by everyone in the world by the middle of the 21st Century" (Wing, 2006). CT has been effectively integrated into history, ELA, mathematics, art, and science courses (Settle, et al., 2012). However, there has been no analogous effort to integrate CT into…

  10. Integrating Computational Science Tools into a Thermodynamics Course

    Science.gov (United States)

    Vieira, Camilo; Magana, Alejandra J.; García, R. Edwin; Jana, Aniruddha; Krafcik, Matthew

    2018-01-01

    Computational tools and methods have permeated multiple science and engineering disciplines, because they enable scientists and engineers to process large amounts of data, represent abstract phenomena, and model and simulate complex concepts. In order to prepare future engineers with the ability to use computational tools in the context of their disciplines, some universities have started to integrate these tools within core courses. This paper evaluates the effect of introducing three computational modules within a thermodynamics course on student disciplinary learning and self-beliefs about computation. The results suggest that using worked examples paired with computer simulations to implement these modules has a positive effect on (1) student disciplinary learning, (2) student perceived ability to do scientific computing, and (3) student perceived ability to do computer programming. These effects were identified regardless of the students' prior experiences with computer programming.

  11. An Integrated Computer-Aided Approach for Environmental Studies

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Chen, Fei; Jaksland, Cecilia

    1997-01-01

    A general framework for an integrated computer-aided approach to solve process design, control, and environmental problems simultaneously is presented. Physicochemical properties and their relationships to the molecular structure play an important role in the proposed integrated approach. The scope and applicability of the integrated approach is highlighted through examples involving estimation of properties and environmental pollution prevention. The importance of mixture effects on some environmentally important properties is also demonstrated.

  12. Computational Design Tools for Integrated Design

    DEFF Research Database (Denmark)

    Holst, Malene Kirstine; Kirkegaard, Poul Henning

    2010-01-01

    In an architectural conceptual sketching process, where an architect is working with the initial ideas for a design, the process is characterized by three phases: sketching, evaluation and modification. Basically, the architect needs to address three areas in the conceptual sketching phase: aesthetical, functional and technical requirements. The aim of the present paper is to address the problem of a vague or non-existing link between the digital conceptual design tools used by architects and designers and engineering analysis and simulation tools. Based on an analysis of the architectural design process, different digital design methods are related to tasks in an integrated design process.

  13. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)

  14. Computer aided probabilistic assessment of containment integrity

    International Nuclear Information System (INIS)

    Tsai, J.C.; Touchton, R.A.

    1984-01-01

    In the probabilistic risk assessment (PRA) of a nuclear power plant, there are three probability-based techniques which are widely used for event sequence frequency quantification (including nodal probability estimation). These three techniques are the event tree analysis, the fault tree analysis and the Bayesian approach for database development. In the barrier analysis for assessing radionuclide release to the environment in a PRA study, these techniques are employed to a greater extent in estimating conditions which could lead to failure of the fuel cladding and the reactor coolant system (RCS) pressure boundary, but to a lesser degree in the containment pressure boundary failure analysis. The main reason is that containment issues are currently still in a state of flux. In this paper, the authors describe briefly the computer programs currently used by the nuclear industry to do event tree analyses, fault tree analyses and the Bayesian update. The authors discuss how these computer aided probabilistic techniques might be adopted for failure analysis of the containment pressure boundary

  15. Integrated Computer Controlled Glow Discharge Tube

    Science.gov (United States)

    Kaiser, Erik; Post-Zwicker, Andrew

    2002-11-01

    An "Interactive Plasma Display" was created for the Princeton Plasma Physics Laboratory to demonstrate the characteristics of plasma to various science education outreach programs. From high school students and teachers, to undergraduate students and visitors to the lab, the plasma device will be a key component in advancing the public's basic knowledge of plasma physics. The device is fully computer controlled using LabVIEW, a touchscreen Graphical User Interface [GUI], and a GPIB interface. Utilizing a feedback loop, the display is fully autonomous in controlling pressure, as well as in monitoring the safety aspects of the apparatus. With a digital convectron gauge continuously monitoring pressure, the computer interface analyzes the input signals, while making changes to a digital flow controller. This function works independently of the GUI, allowing the user to simply input and receive a desired pressure; quickly, easily, and intuitively. The discharge tube is a 36" x 4"id glass cylinder with 3" side port. A 3000 volt, 10mA power supply, is used to breakdown the plasma. A 300 turn solenoid was created to demonstrate the magnetic pinching of a plasma. All primary functions of the device are controlled through the GUI digital controllers. This configuration allows for operators to safely control the pressure (100mTorr-1Torr), magnetic field (0-90Gauss, 7amps, 10volts), and finally, the voltage applied across the electrodes (0-3000v, 10mA).

  16. Development of integrated platform for computational material design

    Energy Technology Data Exchange (ETDEWEB)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato [Center for Computational Science and Engineering, Fuji Research Institute Corporation (Japan); Hideaki, Koike [Advance Soft Corporation (Japan)

    2003-07-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, which is designed for PSE in the Japanese national project of Frontier Simulation Software for Industrial Science, is defined by supporting the entire range of problem solving activity from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is based on a new architecture called TASK FLOW. It integrates the computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover this system will provide the best solution for developing large and complicated software and simulating complex and large-scaled phenomena in computational science and engineering. A prototype has already been developed and the validation and verification of an integrated platform will be scheduled by using the prototype in 2003. In the validation and verification, fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As other examples of validation and verification, integrated platform for quantum chemistry and bio-mechanical system are planned.

  17. Development of integrated platform for computational material design

    International Nuclear Information System (INIS)

    Kiyoshi, Matsubara; Kumi, Itai; Nobutaka, Nishikawa; Akifumi, Kato; Hideaki, Koike

    2003-01-01

    The goal of our project is to design and develop a problem-solving environment (PSE) that will help computational scientists and engineers develop large complicated application software and simulate complex phenomena by using networking and parallel computing. The integrated platform, which is designed for PSE in the Japanese national project of Frontier Simulation Software for Industrial Science, is defined by supporting the entire range of problem solving activity from program formulation and data setup to numerical simulation, data management, and visualization. A special feature of our integrated platform is based on a new architecture called TASK FLOW. It integrates the computational resources such as hardware and software on the network and supports complex and large-scale simulation. This concept is applied to computational material design and the project 'comprehensive research for modeling, analysis, control, and design of large-scale complex system considering properties of human being'. Moreover this system will provide the best solution for developing large and complicated software and simulating complex and large-scaled phenomena in computational science and engineering. A prototype has already been developed and the validation and verification of an integrated platform will be scheduled by using the prototype in 2003. In the validation and verification, fluid-structure coupling analysis system for designing an industrial machine will be developed on the integrated platform. As other examples of validation and verification, integrated platform for quantum chemistry and bio-mechanical system are planned

  18. Integration and macroevolutionary patterns in the pollination biology of conifers.

    Science.gov (United States)

    Leslie, Andrew B; Beaulieu, Jeremy M; Crane, Peter R; Knopf, Patrick; Donoghue, Michael J

    2015-06-01

    Integration influences patterns of trait evolution, but the relationship between these patterns and the degree of trait integration is not well understood. To explore this further, we study a specialized pollination mechanism in conifers whose traits are linked through function but not development. This mechanism depends on interactions among three characters: pollen that is buoyant, ovules that face downward at pollination, and the production of a liquid droplet that buoyant grains float through to enter the ovule. We use a well-sampled phylogeny of conifers to test correlated evolution among these characters and specific sequences of character change. Using likelihood models of character evolution, we find that pollen morphology and ovule characters evolve in a concerted manner, where the flotation mechanism breaks down irreversibly following changes in orientation or drop production. The breakdown of this functional constraint, which may be facilitated by the lack of developmental integration among the constituent traits, is associated with increased trait variation and more diverse pollination strategies. Although this functional "release" increases diversity in some ways, the irreversible way in which the flotation mechanism is lost may eventually result in its complete disappearance from seed plant reproductive biology. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.

  19. How computational models can help unlock biological systems.

    Science.gov (United States)

    Brodland, G Wayne

    2015-12-01

    With computational models playing an ever-increasing role in the advancement of science, it is important that researchers understand what it means to model something; recognize the implications of the conceptual, mathematical and algorithmic steps of model construction; and comprehend what models can and cannot do. Here, we use examples to show that models can serve a wide variety of roles, including hypothesis testing, generating new insights, deepening understanding, suggesting and interpreting experiments, tracing chains of causation, doing sensitivity analyses, integrating knowledge, and inspiring new approaches. We show that models can bring together information of different kinds and do so across a range of length scales, as they do in multi-scale, multi-faceted embryogenesis models, some of which connect gene expression, the cytoskeleton, cell properties, tissue mechanics, morphogenetic movements and phenotypes. Models cannot replace experiments, nor can they prove that particular mechanisms are at work in a given situation. But they can demonstrate whether or not a proposed mechanism is sufficient to produce an observed phenomenon. Although the examples in this article are taken primarily from the field of embryo mechanics, most of the arguments and discussion are applicable to any form of computational modelling. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  20. GenFlow: generic flow for integration, management and analysis of molecular biology data

    Directory of Open Access Journals (Sweden)

    Marcio Katsumi Oikawa

    2004-01-01

    Full Text Available A large number of DNA sequencing projects all over the world have yielded a fantastic amount of data, whose analysis is, currently, a big challenge for computational biology. The limiting step in this task is the integration of large volumes of data stored in highly heterogeneous repositories of genomic and cDNA sequences, as well as gene expression results. Solving this problem requires automated analytical tools to optimize operations and efficiently generate knowledge. This paper presents an information flow model, called GenFlow, that can tackle this analytical task.

  1. Biomedical data integration in computational drug design and bioinformatics.

    Science.gov (United States)

    Seoane, Jose A; Aguiar-Pulido, Vanessa; Munteanu, Cristian R; Rivero, Daniel; Rabunal, Juan R; Dorado, Julian; Pazos, Alejandro

    2013-03-01

    In recent years, in the post-genomic era, more and more data is being generated by biological high-throughput technologies, such as proteomics and transcriptomics. This omics data can be very useful, but the real challenge is to analyze all this data, as a whole, after integrating it. Biomedical data integration enables making queries to different, heterogeneous and distributed biomedical data sources. Data integration solutions can be very useful not only in the context of drug design, but also in biomedical information retrieval, clinical diagnosis, systems biology, etc. In this review, we analyze the most common approaches to biomedical data integration, such as federated databases, data warehousing, multi-agent systems and semantic technology, as well as the solutions developed using these approaches in the past few years.

  2. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00291854; The ATLAS collaboration; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like the central BDII, GOCDB, and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. In production since LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computin...

  3. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about the resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like the central BDII, GOCDB, and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. In production since LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving and tend to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of technologies that have recently become widely used in ATLAS Computing, such as the flexible use of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified storage protocol declarations required by the PanDA Pilot site movers, and others. Improvements of the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  4. Integrated Ecological River Health Assessments, Based on Water Chemistry, Physical Habitat Quality and Biological Integrity

    Directory of Open Access Journals (Sweden)

    Ji Yoon Kim

    2015-11-01

    Full Text Available This study evaluated integrative river ecosystem health using stressor-based models of physical habitat health, chemical water health, and the biological health of fish, and identified multiple-stressor indicators influencing ecosystem health. Integrated health responses (IHRs), based on a star-plot approach, were calculated from the qualitative habitat evaluation index (QHEI), the nutrient pollution index (NPI), and the index of biological integrity (IBI) in four different longitudinal regions (Groups I–IV). For the calculation of IHR values, multi-metric QHEI, NPI, and IBI models were developed and their criteria for the diagnosis of health were determined. The longitudinal patterns of the river were analyzed by a self-organizing map (SOM) model, and the key major stressors in the river were identified by principal component analysis (PCA). Our model scores of integrated health responses (IHRs) suggested that the mid-stream and downstream regions were impaired, and the key stressors were closely associated with nutrient enrichment (N and P) and organic matter pollution from domestic wastewater disposal plants and urban sewage. This modeling approach of IHRs may be used as an effective tool for evaluations of integrative ecological river health.
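
    The star-plot aggregation can be made concrete: if each index is first normalized to [0, 1] (with NPI inverted so that 1 means clean water, an assumption made here for illustration) and plotted on its own axis, the area of the resulting polygon yields a single integrated score. The Python sketch below shows only this generic construction with made-up scores; the authors' normalizations and diagnostic criteria are not reproduced:

        # Star-plot (radar) area as an integrated health response (sketch).
        import math


        def star_plot_area(scores):
            # Area of the polygon spanned by normalized scores on k equally
            # spaced axes: the sum of the k triangles between neighboring axes.
            k = len(scores)
            angle = 2.0 * math.pi / k
            return 0.5 * math.sin(angle) * sum(
                scores[i] * scores[(i + 1) % k] for i in range(k)
            )


        # Hypothetical normalized scores for QHEI, inverted NPI, and IBI.
        print(star_plot_area([0.8, 0.4, 0.6]))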

  5. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii

    Directory of Open Access Journals (Sweden)

    Kempa Stefan

    2009-05-01

    Full Text Available Abstract Background The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. Results In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. Conclusion ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  6. ChlamyCyc: an integrative systems biology database and web-portal for Chlamydomonas reinhardtii.

    Science.gov (United States)

    May, Patrick; Christian, Jan-Ole; Kempa, Stefan; Walther, Dirk

    2009-05-04

    The unicellular green alga Chlamydomonas reinhardtii is an important eukaryotic model organism for the study of photosynthesis and plant growth. In the era of modern high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the molecular and cellular organization of a single organism. In the framework of the German Systems Biology initiative GoFORSYS, a pathway database and web-portal for Chlamydomonas (ChlamyCyc) was established, which currently features about 250 metabolic pathways with associated genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification. ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in Chlamydomonas. The ChlamyCyc database and web-portal is freely available under http://chlamycyc.mpimp-golm.mpg.de.

  7. Integrating Biological Perspectives:. a Quantum Leap for Microarray Expression Analysis

    Science.gov (United States)

    Wanke, Dierk; Kilian, Joachim; Bloss, Ulrich; Mangelsen, Elke; Supper, Jochen; Harter, Klaus; Berendzen, Kenneth W.

    2009-02-01

    Biologists and bioinformatic scientists cope with the analysis of transcript abundance and the extraction of meaningful information from microarray expression data. By exploiting biological information accessible in public databases, we try to extend our current knowledge of the plant model organism Arabidopsis thaliana. Here, we give two examples of increasing the quality of information gained from large-scale expression experiments by integrating microarray-unrelated biological information. First, we utilize Arabidopsis microarray data to demonstrate that expression profiles are usually conserved between orthologous genes of different organisms. In an initial step of the analysis, orthology has to be inferred unambiguously, which then allows comparison of expression profiles between orthologs. We make use of the publicly available microarray expression data of Arabidopsis and barley, Hordeum vulgare. We found a generally positive correlation in expression trajectories between true orthologs, although the two organisms are only distantly related on an evolutionary time scale. Second, extracting clusters of co-regulated genes implies similarities in transcriptional regulation via similar cis-regulatory elements (CREs). Vice-versa approaches, in which co-regulated gene clusters are sought by investigating CREs, have generally not been successful. Nonetheless, in some cases the presence of CREs in a defined position, orientation, or CRE combination is positively correlated with co-regulated gene clusters. Here, we make use of genes involved in the phenylpropanoid biosynthetic pathway to give one positive example for this approach.
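
    The first analysis reduces to correlating the expression trajectories of ortholog pairs across matched conditions; a minimal sketch of that comparison (with invented expression vectors, not the actual Arabidopsis/barley data) could look like this:

        # Correlation of expression profiles for a putative ortholog pair (sketch).
        from scipy.stats import spearmanr

        # Hypothetical log-expression values across the same ordered conditions
        # (e.g. tissues or stress treatments) in the two species.
        arabidopsis_gene = [2.1, 3.4, 1.2, 5.6, 4.3, 0.9]
        barley_ortholog = [1.8, 3.9, 1.5, 5.1, 3.8, 1.1]

        rho, pval = spearmanr(arabidopsis_gene, barley_ortholog)
        print(f"Spearman rho = {rho:.2f}, p = {pval:.3f}")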

  8. Elastic Multi-scale Mechanisms: Computation and Biological Evolution.

    Science.gov (United States)

    Diaz Ochoa, Juan G

    2018-01-01

    Explanations based on low-level interacting elements are valuable and powerful, since they contribute to identifying the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information about initial states generate future states that cannot be predicted, implying an increase of complexity and open-ended evolution. Such systems are like Turing machines, overlapping with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for a constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predator and predated cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. We finally explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with a small elasticity modulus.

  9. Computer simulations for biological aging and sexual reproduction

    Directory of Open Access Journals (Sweden)

    DIETRICH STAUFFER

    2001-03-01

    Full Text Available The sexual version of the Penna model of biological aging, simulated since 1996, is compared here with alternative forms of reproduction, as well as with models not involving aging. In particular, we want to check how sexual forms of life could have evolved and won over earlier asexual forms hundreds of millions of years ago. This computer model is based on the mutation-accumulation theory of aging, using bit-strings to represent the genome. Its population dynamics is studied by Monte Carlo methods.
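
    For readers unfamiliar with the model, a stripped-down asexual Penna step fits in a few lines: each genome is a bit-string, a set bit at position i is a deleterious mutation that becomes active at age i, and an individual dies once its active mutations reach a threshold T (or at random, via the Verhulst crowding factor). The sketch below uses assumed parameter values and asexual reproduction only; the sexual version adds diploid genomes, recombination, and mating:

        # Minimal asexual Penna model of mutation accumulation (illustrative
        # sketch; all parameter values are assumptions for demonstration).
        import random

        GENOME_BITS = 32   # bit-string length = maximum possible age
        T = 3              # death threshold for active deleterious mutations
        M = 1              # new deleterious mutations per birth
        R = 8              # minimum reproduction age
        NMAX = 20000       # carrying capacity for the Verhulst factor


        def step(population):
            n = len(population)
            next_gen = []
            for age, genome in population:
                # Random Verhulst death keeps the population near NMAX.
                if random.random() < n / NMAX:
                    continue
                age += 1
                if age >= GENOME_BITS:
                    continue
                # A set bit at position i is a mutation acting from age i onward.
                if bin(genome & ((1 << age) - 1)).count("1") >= T:
                    continue  # death by mutation accumulation
                next_gen.append((age, genome))
                if age >= R:
                    child = genome
                    for _ in range(M):
                        child |= 1 << random.randrange(GENOME_BITS)
                    next_gen.append((0, child))
            return next_gen


        population = [(0, 0) for _ in range(1000)]
        for _ in range(200):
            population = step(population)
        print(len(population), "individuals alive after 200 Monte Carlo steps")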

  10. Computational intelligence techniques for biological data mining: An overview

    Science.gov (United States)

    Faye, Ibrahima; Iqbal, Muhammad Javed; Said, Abas Md; Samir, Brahim Belhaouari

    2014-10-01

    Computational techniques have been successfully utilized for highly accurate analysis and modeling of the multifaceted and raw biological data gathered from various genome sequencing projects. These techniques are proving much more effective in overcoming the limitations of traditional in-vitro experiments on constantly increasing sequence data. The most critical problems that have caught the attention of researchers include, but are not limited to: accurate structure and function prediction of unknown proteins, protein subcellular localization prediction, finding protein-protein interactions, protein fold recognition, analysis of microarray gene expression data, etc. To solve these problems, various classification and clustering techniques using machine learning have been extensively used in the published literature. These techniques include neural network algorithms, genetic algorithms, fuzzy ARTMAP, K-Means, K-NN, SVM, rough set classifiers, decision trees and HMM-based algorithms. Major difficulties in applying the above algorithms include the limitations of previous feature encoding and selection methods in extracting the best features, increasing classification accuracy, and decreasing the running-time overheads of the learning algorithms. This research is potentially useful in drug design and in the diagnosis of some diseases. This paper presents a concise overview of the well-known protein classification techniques.
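
    To give a flavor of how such classifiers are assembled, a toy protein-family classifier (invented sequences and labels; scikit-learn assumed available) can use k-mer composition as the feature encoding and an SVM as the learner:

        # Toy protein classification with k-mer features and an SVM (sketch).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import SVC

        seqs = ["MKVLAA", "MKVLGA", "GGHHEE", "GGHHED"]  # hypothetical sequences
        labels = [0, 0, 1, 1]                            # hypothetical families

        # 3-mer composition as a simple feature encoding.
        vec = CountVectorizer(analyzer="char", ngram_range=(3, 3))
        X = vec.fit_transform(seqs)

        clf = SVC(kernel="linear").fit(X, labels)
        print(clf.predict(vec.transform(["MKVLGG"])))  # expected: family 0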

  11. Computational Biology Methods for Characterization of Pluripotent Cells.

    Science.gov (United States)

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol for pluripotency assessment remains to be established. High-throughput techniques can help here; in particular, gene expression microarrays have become a complementary technique for cellular characterization. Research has shown that transcriptomics comparison with a reference Embryonic Stem Cell (ESC) is a good approach to assess pluripotency. Under the premise that the best protocol is a computer software source code, here I propose and explain, line by line, a software protocol coded in R/Bioconductor for pluripotency assessment based on comparing the transcriptomics data of pluripotent cells with a reference ESC. I provide advice on experimental design, warn about possible pitfalls, and offer guidance on the interpretation of results.
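
    The record's protocol is coded in R/Bioconductor; purely as an illustration of the core idea it describes (scoring pluripotency by transcriptome-wide similarity to a reference ESC), here is a minimal Python sketch with invented expression data and an assumed 0.9 similarity cutoff.

    ```python
    # Score pluripotency as transcriptome-wide similarity to a reference
    # ESC profile; expression values and the 0.9 cutoff are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes = 2000
    esc_reference = rng.normal(8.0, 2.0, n_genes)             # log2 ESC expression
    candidate = esc_reference + rng.normal(0, 0.8, n_genes)   # ESC-like sample
    fibroblast = rng.normal(8.0, 2.0, n_genes)                # unrelated control

    def pluripotency_score(sample, reference):
        """Pearson correlation of a sample's transcriptome with the reference."""
        return np.corrcoef(sample, reference)[0, 1]

    for name, profile in [("candidate iPSC", candidate), ("fibroblast", fibroblast)]:
        r = pluripotency_score(profile, esc_reference)
        verdict = "pluripotent-like" if r > 0.9 else "not pluripotent-like"
        print(f"{name}: r = {r:.2f} -> {verdict}")
    ```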

  12. CIPSS [computer-integrated process and safeguards system]: The integration of computer-integrated manufacturing and robotics with safeguards, security, and process operations

    International Nuclear Information System (INIS)

    Leonard, R.S.; Evans, J.C.

    1987-01-01

    This poster session describes the computer-integrated process and safeguards system (CIPSS). The CIPSS combines systems developed for factory automation and automated mechanical functions (robots) with varying degrees of intelligence (expert systems) to create an integrated system that would satisfy current and emerging security and safeguards requirements. Specifically, CIPSS is an extension of the automated physical security functions concept. The CIPSS also incorporates the concepts of computer-integrated manufacturing (CIM) with integrated safeguards concepts, and draws upon the Defense Advanced Research Projects Agency's (DARPA's) strategic computing program.

  13. The Air Force "In Silico" -- Computational Biology in 2025

    National Research Council Canada - National Science Library

    Coates, Christopher

    2007-01-01

    The biological sciences have recently experienced remarkable advances, and there are now frequent claims that "we are on the advent of being able to model or simulate biological systems to the smallest, molecular detail...

  14. Integrated anaerobic/aerobic biological treatment for intensive swine production.

    Science.gov (United States)

    Bortone, Giuseppe

    2009-11-01

    Manure processing could help farmers to effectively manage the nitrogen (N) surplus load. Many pig farms have to treat wastewater. Piggery wastewater treatment is a complex challenge, due to the high COD and N concentrations and the low C/N ratio. Anaerobic digestion (AD) could be a convenient pre-treatment, particularly from the energy viewpoint and for farm income, but it causes a further reduction of the C/N ratio and makes denitrification difficult. N removal can only be achieved by integrating anaerobic/aerobic treatment and taking into account the best use of electron donors. Experiences gained in Italy during the development of integrated biological treatment approaches for swine manure, from bench to full scale, are reported in this paper. Solid/liquid separation as a pre-treatment of raw manure is an efficient strategy to facilitate treatment of the liquid fraction without significantly lowering the C/N ratio. In Italy, two full-scale SBRs showed excellent efficiency and reliability. Current renewable energy policy and incentives make the application of AD to the separated solid fraction economically attractive, using high-solids anaerobic digester (HSAD) technology. Economic evaluation showed that energy production can reduce costs by up to 60%, making the overall treatment sustainable.

  15. Strategic Integration of Multiple Bioinformatics Resources for System Level Analysis of Biological Networks.

    Science.gov (United States)

    D'Souza, Mark; Sulakhe, Dinanath; Wang, Sheng; Xie, Bing; Hashemifar, Somaye; Taylor, Andrew; Dubchak, Inna; Conrad Gilliam, T; Maltsev, Natalia

    2017-01-01

    Recent technological advances in genomics allow the production of biological data at unprecedented tera- and petabyte scales. Efficient mining of these vast and complex datasets for the needs of biomedical research critically depends on a seamless integration of the clinical, genomic, and experimental information with prior knowledge about genotype-phenotype relationships. Such experimental data accumulated in publicly available databases should be accessible to a variety of algorithms and analytical pipelines that drive computational analysis and data mining. We present the integrated computational platform Lynx (Sulakhe et al., Nucleic Acids Res 44:D882-D887, 2016) ( http://lynx.cri.uchicago.edu ), a web-based database and knowledge extraction engine. It provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization. It gives public access to the Lynx integrated knowledge base (LynxKB) and its analytical tools via user-friendly web services and interfaces. The Lynx service-oriented architecture supports annotation and analysis of high-throughput experimental data. Lynx tools assist the user in extracting meaningful knowledge from LynxKB and experimental data, and in generating weighted hypotheses regarding the genes and molecular mechanisms contributing to human phenotypes or conditions of interest. The goal of this integrated platform is to support the end-to-end analytical needs of various translational projects.

  16. 9th International Conference on Practical Applications of Computational Biology and Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan

    2015-01-01

    These proceedings present recent practical applications of Computational Biology and Bioinformatics. The volume contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research is increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and ever-evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis o...

  17. Computational local stiffness analysis of biological cell: High aspect ratio single wall carbon nanotube tip

    Energy Technology Data Exchange (ETDEWEB)

    TermehYousefi, Amin, E-mail: at.tyousefi@gmail.com [Department of Human Intelligence Systems, Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology (Kyutech) (Japan); Bagheri, Samira; Shahnazar, Sheida [Nanotechnology & Catalysis Research Centre (NANOCAT), IPS Building, University Malaya, 50603 Kuala Lumpur (Malaysia); Rahman, Md. Habibur [Department of Computer Science and Engineering, University of Asia Pacific, Green Road, Dhaka-1215 (Bangladesh); Kadri, Nahrizul Adib [Department of Biomedical Engineering, Faculty of Engineering, University Malaya, 50603 Kuala Lumpur (Malaysia)

    2016-02-01

    Carbon nanotubes (CNTs) are potentially ideal tips for atomic force microscopy (AFM) due to their robust mechanical properties, nanoscale diameter, and ability to be functionalized with chemical and biological components at the tip ends. This contribution develops the idea of using CNTs as an AFM tip in the computational analysis of biological cells. The software used was ABAQUS 6.13 CAE/CEL, provided by Dassault Systèmes, a powerful finite element (FE) tool for performing the numerical analysis and visualizing the interactions between the proposed tip and the cell membrane. Finite element analysis was employed for each section, and the displacement of the nodes located in the contact area was monitored using an output database (ODB). A Mooney-Rivlin hyperelastic model of the cell allows the simulation to provide a new method for estimating the stiffness and spring constant of the cell. The stress-strain curve indicates the yield stress point, defined in terms of vertical stress and plane stress. The spring constant and local stiffness of the cell were measured, as well as the force applied by the CNT-AFM tip on the contact area of the cell. This reliable integration of the CNT-AFM tip process provides a new class of high-performance nanoprobes for single biological cell analysis.
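
    As a hedged sketch of the stiffness-extraction step mentioned above, the following Python snippet estimates an effective spring constant from force-indentation pairs (the kind of data the ODB monitoring would yield), assuming a linear contact regime F = k*x; all numbers are synthetic.

    ```python
    # Fit an effective spring constant k from synthetic force-indentation
    # data in the linear contact regime, F = k * x.
    import numpy as np

    rng = np.random.default_rng(0)
    depth = np.linspace(0.0, 50e-9, 20)                  # indentation depth (m)
    force = 0.002 * depth + rng.normal(0, 1e-12, 20)     # force (N); true k = 2 mN/m

    # least-squares slope through the origin: k = sum(F*x) / sum(x^2)
    k = np.sum(force * depth) / np.sum(depth ** 2)
    print(f"estimated spring constant: {k * 1e3:.2f} mN/m")
    ```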

  18. Gauss-Kronrod-Trapezoidal Integration Scheme for Modeling Biological Tissues with Continuous Fiber Distributions

    Science.gov (United States)

    Hou, Chieh; Ateshian, Gerard A.

    2015-01-01

    Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation. PMID:26291492
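
    The structure of the scheme can be sketched numerically: adaptive Gauss-Kronrod quadrature (SciPy's QUADPACK-based quad) across the polar angle, the trapezoidal rule across the azimuth, and an integrand masked to zero wherever fibers are not in tension. The toy uniaxial-stretch strain field and fiber density below are assumptions for illustration, not the paper's constitutive model.

    ```python
    # Gauss-Kronrod (SciPy quad) across the polar angle, trapezoidal rule
    # across the azimuth, integrand masked to tension-bearing fibers only.
    # Strain field and fiber density are toy assumptions.
    import numpy as np
    from scipy.integrate import quad, trapezoid

    lam = 1.2  # uniaxial stretch along z; incompressible toy kinematics

    def fiber_strain(theta):
        """Toy fiber strain for a fiber at polar angle theta."""
        n_z2 = np.cos(theta) ** 2
        return lam ** 2 * n_z2 + (1.0 / lam) * (1.0 - n_z2) - 1.0

    def longitude_integral(theta, n_phi=72):
        """Trapezoidal integral over phi of the tension-masked response."""
        phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
        density = 1.0 + 0.5 * np.cos(2.0 * phi)   # toy anisotropic fiber density
        stress = max(fiber_strain(theta), 0.0)    # fibers load only in tension
        return trapezoid(density * stress, phi)

    # adaptive Gauss-Kronrod across latitudes, sin(theta) for the sphere measure
    total, err = quad(lambda th: longitude_integral(th) * np.sin(th), 0.0, np.pi)
    print(f"integrated fiber response: {total:.4f} (quadrature error ~ {err:.1e})")
    ```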

  19. A Gauss-Kronrod-Trapezoidal integration scheme for modeling biological tissues with continuous fiber distributions.

    Science.gov (United States)

    Hou, Chieh; Ateshian, Gerard A

    2016-01-01

    Fibrous biological tissues may be modeled using a continuous fiber distribution (CFD) to capture tension-compression nonlinearity, anisotropic fiber distributions, and load-induced anisotropy. The CFD framework requires spherical integration of weighted individual fiber responses, with fibers contributing to the stress response only when they are in tension. The common method for performing this integration employs the discretization of the unit sphere into a polyhedron with nearly uniform triangular faces (finite element integration or FEI scheme). Although FEI has proven to be more accurate and efficient than integration using spherical coordinates, it presents three major drawbacks: First, the number of elements on the unit sphere needed to achieve satisfactory accuracy becomes a significant computational cost in a finite element (FE) analysis. Second, fibers may not be in tension in some regions on the unit sphere, where the integration becomes a waste. Third, if tensed fiber bundles span a small region compared to the area of the elements on the sphere, a significant discretization error arises. This study presents an integration scheme specialized to the CFD framework, which significantly mitigates the first drawback of the FEI scheme, while eliminating the second and third completely. Here, integration is performed only over the regions of the unit sphere where fibers are in tension. Gauss-Kronrod quadrature is used across latitudes and the trapezoidal scheme across longitudes. Over a wide range of strain states, fiber material properties, and fiber angular distributions, results demonstrate that this new scheme always outperforms FEI, sometimes by orders of magnitude in the number of computational steps and relative accuracy of the stress calculation.

  20. Computer-aided engineering of semiconductor integrated circuits

    Science.gov (United States)

    Meindl, J. D.; Dutton, R. W.; Gibbons, J. F.; Helms, C. R.; Plummer, J. D.; Tiller, W. A.; Ho, C. P.; Saraswat, K. C.; Deal, B. E.; Kamins, T. I.

    1980-07-01

    Economical procurement of small quantities of high performance custom integrated circuits for military systems is impeded by inadequate process, device and circuit models that handicap low cost computer aided design. The principal objective of this program is to formulate physical models of fabrication processes, devices and circuits to allow total computer-aided design of custom large-scale integrated circuits. The basic areas under investigation are (1) thermal oxidation, (2) ion implantation and diffusion, (3) chemical vapor deposition of silicon and refractory metal silicides, (4) device simulation and analytic measurements. This report discusses the fourth year of the program.

  1. Interactomes to Biological Phase Space: a call to begin thinking at a new level in computational biology.

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, George S.; Brown, William Michael

    2007-09-01

    Techniques for high-throughput determination of interactomes, together with high-resolution protein colocalization maps within organelles and across membranes, will soon create a vast resource. With these data, biological descriptions, akin to the high-dimensional phase spaces familiar to physicists, will become possible. These descriptions will capture sufficient information to make possible realistic, system-level models of cells. The descriptions and the computational models they enable will require powerful computing techniques. This report is offered as a call to the computational biology community to begin thinking at this scale and as a challenge to develop the required algorithms and codes to make use of the new data.

  2. DNA-Enabled Integrated Molecular Systems for Computation and Sensing

    Science.gov (United States)

    2014-05-21

    Computational devices can be chemically conjugated to different strands of DNA that are then self-assembled according to strict Watson-Crick binding rules... The guided folding of DNA, inspired by nature, allows designs to manipulate molecular-scale processes unlike any other material system. Thus, DNA can be

  3. Complexity estimates based on integral transforms induced by computational units

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra

    2012-01-01

    Vol. 33, September (2012), pp. 160-167 ISSN 0893-6080 R&D Projects: GA ČR GAP202/11/1368 Institutional research plan: CEZ:AV0Z10300504 Institutional support: RVO:67985807 Keywords: neural networks * estimates of model complexity * approximation from a dictionary * integral transforms * norms induced by computational units Subject RIV: IN - Informatics, Computer Science Impact factor: 1.927, year: 2012

  4. Competitiveness in organizational integrated computer system project management

    Directory of Open Access Journals (Sweden)

    Zenovic GHERASIM

    2010-06-01

    Full Text Available The management of organizational integrated computer system projects aims at achieving competitiveness through the unitary, connected and personalised treatment of the requirements for this type of project, along with the adequate application of all the basic principles of management, administration and project planning, as well as of the basic concepts of organizational information management development. The paper presents some aspects of competitiveness in organizational computer system project management, with specific reference to the projects of some Romanian companies.

  5. Integration of cloud resources in the LHCb distributed computing

    International Nuclear Information System (INIS)

    García, Mario Úbeda; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel; Muñoz, Víctor Méndez

    2014-01-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Software as a Service (SaaS) model.
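
    The following sketch is not VMDIRAC code; it only illustrates the same provider-abstraction idea using Apache libcloud, where one call shape can target EC2, OpenStack, and other interfaces. The credentials, image, and size names are placeholders, and real providers typically require extra driver arguments.

    ```python
    # Provider-abstraction sketch with Apache libcloud (not VMDIRAC code);
    # credentials, image, and size names below are placeholders.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    def start_worker(provider, credentials, image_name, size_name, node_name):
        """Instantiate one VM on any libcloud-supported cloud."""
        driver = get_driver(provider)(*credentials)
        image = next(i for i in driver.list_images() if image_name in (i.name or ""))
        size = next(s for s in driver.list_sizes() if s.name == size_name)
        return driver.create_node(name=node_name, image=image, size=size)

    # hypothetical usage: the same call shape targets different clouds
    # start_worker(Provider.EC2, ("KEY", "SECRET"), "ubuntu", "t2.micro", "worker-1")
    # start_worker(Provider.OPENSTACK, ("user", "pass"), "worker-img", "m1.small", "worker-2")
    ```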

  6. Integration of Cloud resources in the LHCb Distributed Computing

    Science.gov (United States)

    Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel

    2014-06-01

    This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and Computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack): it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment have already deployed IaaS in production during 2013. Keeping this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Software as a Service (SaaS) model.

  7. Status of integration of small computers into NDE systems

    International Nuclear Information System (INIS)

    Dau, G.J.; Behravesh, M.M.

    1988-01-01

    The introduction of computers in nondestructive evaluation (NDE) has enabled data acquisition devices to provide more thorough and complete coverage in the scanning process, and has aided human inspectors in their data analysis and decision-making efforts. The price and size/weight of small computers, coupled with recent increases in processing and storage capacity, have made small personal computers (PCs) the most viable platform for NDE equipment. Several NDE systems using minicomputers, and newer PC-based systems capable of automatic data acquisition and knowledge-based analysis of the test data, have been field-tested in the nuclear power plant environment and are currently available from commercial sources. While computers have been in common use for several NDE methods during the last few years, their greatest impact has been on ultrasonic testing. This paper discusses the evolution of small computers and their integration into the ultrasonic testing process.

  8. Soft computing integrating evolutionary, neural, and fuzzy systems

    CERN Document Server

    Tettamanzi, Andrea

    2001-01-01

    Soft computing encompasses various computational methodologies, which, unlike conventional algorithms, are tolerant of imprecision, uncertainty, and partial truth. Soft computing technologies offer adaptability as a characteristic feature and thus permit the tracking of a problem through a changing environment. Besides some recent developments in areas like rough sets and probabilistic networks, fuzzy logic, evolutionary algorithms, and artificial neural networks are core ingredients of soft computing, which are all bio-inspired and can easily be combined synergetically. This book presents a well-balanced integration of fuzzy logic, evolutionary computing, and neural information processing. The three constituents are introduced to the reader systematically and brought together in differentiated combinations step by step. The text was developed from courses given by the authors and offers numerous illustrations as

  9. Multiple-Swarm Ensembles: Improving the Predictive Power and Robustness of Predictive Models and Its Use in Computational Biology.

    Science.gov (United States)

    Alves, Pedro; Liu, Shuang; Wang, Daifeng; Gerstein, Mark

    2018-01-01

    Machine learning is an integral part of computational biology and has already shown its use in various applications, such as prognostic tests. In the last few years, ensembling techniques in the non-biological machine learning community have shown their power in data mining competitions such as the Netflix challenge; however, such methods have not found wide use in computational biology. In this work, we endeavor to show how ensembling techniques can be applied to practical problems, including problems in the field of bioinformatics, and how they often outperform other machine learning techniques in both predictive power and robustness. Furthermore, we develop an ensembling methodology, the Multi-Swarm Ensemble (MSWE), which uses multiple particle swarm optimizations, and we demonstrate its ability to further enhance the performance of ensembles.
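
    MSWE itself ensembles multiple particle swarm optimizations; as a simpler stand-in for the general claim that ensembles often beat their individual members, the following scikit-learn sketch compares a soft-voting ensemble of heterogeneous learners with its components on synthetic data.

    ```python
    # Soft-voting ensemble of heterogeneous learners vs. its members on
    # synthetic data (a stand-in for MSWE's swarm-based ensembling).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=600, n_features=25,
                               n_informative=6, random_state=1)

    members = [("lr", LogisticRegression(max_iter=1000)),
               ("svm", SVC(probability=True)),
               ("rf", RandomForestClassifier(n_estimators=200, random_state=1))]
    ensemble = VotingClassifier(estimators=members, voting="soft")

    for name, model in members + [("ensemble", ensemble)]:
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: {acc:.3f}")
    ```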

  10. Computer integrated manufacturing in the chemical industry : Theory & practice

    NARCIS (Netherlands)

    Ashayeri, J.; Teelen, A.; Selen, W.J.

    1995-01-01

    This paper addresses the possibilities of implementing Computer Integrated Manufacturing in the process industry, and in the chemical industry in particular. After presenting some distinct differences between the process industry and discrete manufacturing, a number of focal points are discussed.

  11. An integrated compact airborne multispectral imaging system using embedded computer

    Science.gov (United States)

    Zhang, Yuedong; Wang, Li; Zhang, Xuguo

    2015-08-01

    An integrated compact airborne multispectral imaging system with an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer offers excellent universality and expandability, and its advantages in volume and weight suit an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls the camera parameter settings, the filter wheel, and the stabilized platform; acquires the image and POS data; and stores the image and data. Peripheral devices can be connected through the ports of the embedded computer, so system operation and management of the stored image data are easy. This airborne multispectral imaging system has the advantages of small volume, multi-functionality, and good expandability. Imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.

  12. Models for integrated pest control and their biological implications.

    Science.gov (United States)

    Tang, Sanyi; Cheke, Robert A

    2008-09-01

    Successful integrated pest management (IPM) control programmes depend on many factors, including host-parasitoid ratios, starting densities, timings of parasitoid releases, dosages and timings of insecticide applications, and levels of host-feeding and parasitism. Mathematical models can help us to clarify and predict the effects of such factors on the stability of host-parasitoid systems, which we illustrate here by extending the classical continuous and discrete host-parasitoid models to include an IPM control programme. The results indicate that one of three control methods can maintain the host level below the economic threshold (ET) in relation to different ET levels, initial densities of host and parasitoid populations, and host-parasitoid ratios. The effects of the host's intrinsic growth rate and the parasitoid's searching efficiency on the mean host outbreak period can be calculated numerically from the models presented. The instantaneous pest-killing rate of an insecticide application is also estimated from the models. The results imply that the modelling methods described can help in the design of appropriate control strategies and assist management decision-making. The results also indicate that a high initial density of parasitoids (such as in inundative releases) and high parasitoid inter-generational survival rates will lead to more frequent host outbreaks and, therefore, greater economic damage. The biological implications of this counterintuitive result are discussed.
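
    A toy version of such a model can make the ET mechanism concrete. The sketch below uses the classical discrete Nicholson-Bailey host-parasitoid form (one standard choice, not necessarily the paper's exact model) with a simple IPM rule: spray whenever the host would exceed the economic threshold. All parameter values are illustrative.

    ```python
    # Nicholson-Bailey host-parasitoid dynamics with a threshold-triggered
    # insecticide rule; all parameter values are illustrative.
    import numpy as np

    r, a, c = 2.0, 0.05, 1.0      # host growth, search efficiency, conversion
    ET, kill = 60.0, 0.7          # economic threshold, spray kill fraction
    H, P = 20.0, 5.0              # initial host and parasitoid densities

    for t in range(51):
        if t % 10 == 0:
            print(f"t={t:2d}  host={H:7.2f}  parasitoid={P:7.2f}")
        escaped = np.exp(-a * P)              # fraction of hosts escaping parasitism
        H_next = r * H * escaped
        P_next = c * H * (1.0 - escaped)
        if H_next > ET:                       # IPM intervention: spray
            H_next *= (1.0 - kill)
            P_next *= (1.0 - 0.3 * kill)      # spraying also harms parasitoids
        H, P = H_next, P_next
    ```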

  13. Integrated self-powered microchip biosensor for endogenous biological cyanide.

    Science.gov (United States)

    Deng, Liu; Chen, Chaogui; Zhou, Ming; Guo, Shaojun; Wang, Erkang; Dong, Shaojun

    2010-05-15

    In this work, we developed a fully integrated biofuel cell on a microchip, which consisted of a glucose dehydrogenase-supported (carbon nanotubes/thionine/gold nanoparticles)(8) multilayer as the anode and a (carbon nanotubes/polylysine/laccase)(15) multilayer as the cathode. The as-obtained biofuel cell produced an open-circuit potential of 620 mV and a power density of 302 microW cm(-2), showing great potential as a small power source for portable electronics. Most importantly, we demonstrated for the first time the feasibility of developing a self-powered biosensor based on the inhibitive effect on a microchip enzyme biofuel cell. With cyanide employed as the model analyte, this method showed a linear range of 3.0 x 10(-7) to 5.0 x 10(-4) M and a detection limit of 1.0 x 10(-7) M under optimal conditions. The detection limit was lower than the acceptable cyanide concentration in drinking water (1.9 x 10(-6) M) according to the World Health Organization (WHO). This self-powered sensor was successfully used to detect the cyanide concentration in a real sample, cassava, which is a main carbohydrate resource in South America and Africa. The presented biosensor, combined with a resistor and a multimeter, demonstrated general applicability as a fast and simple method for the determination of endogenous biological cyanide.
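
    The record reports a linear range and a detection limit; one standard way such figures are derived from calibration data is a linear fit plus a 3-sigma/slope criterion, sketched below with synthetic calibration points (the sensitivity and blank noise are assumptions, not the paper's measurements).

    ```python
    # Calibration-line fit and a 3*sigma/slope detection limit; the
    # sensitivity and blank noise below are assumed, not measured values.
    import numpy as np

    conc = np.array([3e-7, 1e-6, 1e-5, 1e-4, 5e-4])   # cyanide standards (M)
    rng = np.random.default_rng(1)
    signal = 2.0e5 * conc + rng.normal(0, 0.004, conc.size)  # sensor response

    slope, intercept = np.polyfit(conc, signal, 1)
    sigma_blank = 0.004                     # assumed std dev of blank signal
    lod = 3.0 * sigma_blank / slope         # 3-sigma detection limit
    print(f"sensitivity: {slope:.3g} units/M, detection limit: {lod:.2g} M")
    ```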

  14. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integrated computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  15. Integrating Xgrid into the HENP distributed computing model

    International Nuclear Information System (INIS)

    Hajdu, L; Lauret, J; Kocoloski, A; Miller, M

    2008-01-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  16. Integrating Xgrid into the HENP distributed computing model

    Science.gov (United States)

    Hajdu, L.; Kocoloski, A.; Lauret, J.; Miller, M.

    2008-07-01

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by the Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users with a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortlessly within reach for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  17. Parallel metaheuristics in computational biology: an asynchronous cooperative enhanced scatter search method

    OpenAIRE

    Penas, David R.; González, Patricia; Egea, José A.; Banga, Julio R.; Doallo, Ramón

    2015-01-01

    Metaheuristics are gaining increased attention as efficient solvers for hard global optimization problems arising in bioinformatics and computational systems biology. Scatter Search (SS) is one of the recent outstanding algorithms in that class. However, its application to very hard problems, like those considering parameter estimation in dynamic models of systems biology, still results in excessive computation times. In order to reduce the computational cost of the SS and improve its success...
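
    Scatter Search itself is not shipped with SciPy; as a hedged stand-in, the sketch below applies another population-based metaheuristic, differential evolution, to the kind of task the record targets: estimating the parameters of a dynamic model from noisy observations. The logistic growth model and data are invented for illustration.

    ```python
    # Differential evolution (stand-in metaheuristic) fitting parameters
    # of a toy logistic growth model to noisy synthetic observations.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import differential_evolution

    t_obs = np.linspace(0.0, 10.0, 25)
    true_r, true_K, x0 = 0.9, 50.0, 2.0

    def logistic(x, t, r, K):
        return r * x * (1.0 - x / K)

    data = odeint(logistic, x0, t_obs, args=(true_r, true_K)).ravel()
    data += np.random.default_rng(0).normal(0, 1.0, data.size)  # add noise

    def sse(params):
        """Sum of squared errors between model prediction and data."""
        r, K = params
        pred = odeint(logistic, x0, t_obs, args=(r, K)).ravel()
        return np.sum((pred - data) ** 2)

    result = differential_evolution(sse, bounds=[(0.1, 3.0), (10.0, 100.0)], seed=0)
    print("estimated (r, K):", np.round(result.x, 2), "true:", (true_r, true_K))
    ```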

  18. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    OpenAIRE

    Shuo Gu; Jianfeng Pei

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has recently been made in computational research on Chinese herbal medicine, with an in-depth understanding of pharmacognosy. This paper summarizes these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regula...

  19. Computing with networks of spiking neurons on a biophysically motivated floating-gate based neuromorphic integrated circuit.

    Science.gov (United States)

    Brink, S; Nease, S; Hasler, P

    2013-09-01

    Results are presented from several spiking network experiments performed on a novel neuromorphic integrated circuit. The networks are discussed in terms of their computational significance, which includes applications such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory. Analogies to the behavior of real biological neural systems are also noted. The alternatives for implementing the same computations are discussed and compared from a computational efficiency standpoint, with the conclusion that implementing neural networks on neuromorphic hardware is significantly more power efficient than numerical integration of model equations on traditional digital hardware. Copyright © 2013 Elsevier Ltd. All rights reserved.
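
    For contrast with the neuromorphic approach, the "traditional digital hardware" baseline the record mentions amounts to numerically integrating model equations; a minimal example is forward-Euler integration of a leaky integrate-and-fire neuron with generic textbook parameters (not values from the paper's circuit).

    ```python
    # Forward-Euler integration of a leaky integrate-and-fire neuron,
    # i.e. the digital-numerical baseline; generic textbook parameters.
    dt, T_sim = 1e-4, 0.5                      # time step, duration (s)
    tau, v_rest = 0.02, -0.065                 # membrane time constant (s), rest (V)
    v_th, v_reset = -0.050, -0.065             # threshold and reset (V)
    R_m, I_in = 1e7, 2.0e-9                    # resistance (ohm), input current (A)

    v, spikes = v_rest, 0
    for _ in range(int(T_sim / dt)):
        v += dt * (-(v - v_rest) + R_m * I_in) / tau   # membrane equation
        if v >= v_th:                                  # spike and reset
            spikes += 1
            v = v_reset
    print(f"firing rate: {spikes / T_sim:.1f} Hz")
    ```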

  20. Integrated evolutionary computation neural network quality controller for automated systems

    Energy Technology Data Exchange (ETDEWEB)

    Patro, S.; Kolarik, W.J. [Texas Tech Univ., Lubbock, TX (United States). Dept. of Industrial Engineering

    1999-06-01

    With increasing competition in the global market, more and more stringent quality standards and specifications are being demanded at lower costs. Manufacturing applications of computing power are becoming more common. The application of neural networks to the identification and control of dynamic processes is discussed. The limitations of using neural networks for control purposes are pointed out, and a different technique, evolutionary computation, is discussed. The results of identifying and controlling an unstable, dynamic process using evolutionary computation methods are presented. A framework for an integrated system, using both neural networks and evolutionary computation, is proposed to identify the process and then control the product quality in a dynamic, multivariable system in real time.

  1. Computer graphics application in the engineering design integration system

    Science.gov (United States)

    Glatt, C. R.; Abel, R. W.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Stewart, W. A.

    1975-01-01

    The computer graphics aspect of the Engineering Design Integration (EDIN) system and its application to design problems are discussed. Three basic types of computer graphics may be used with the EDIN system for the evaluation of preliminary aerospace vehicle designs: offline graphics systems using vellum-inking or photographic processes; online graphics systems characterized by directly coupled, low-cost storage tube terminals with limited interactive capabilities; and a minicomputer-based refresh terminal offering highly interactive capabilities. The offline systems are characterized by high quality (resolution better than 0.254 mm) and slow turnaround (one to four days). The online systems are characterized by low cost, instant visualization of computer results, slow line speed (300 baud), poor hard copy, and early limitations on vector graphic input capabilities. The recent acquisition of the Adage 330 Graphic Display system has greatly enhanced the potential for interactive computer-aided design.

  2. CMS Distributed Computing Integration in the LHC sustained operations era

    International Nuclear Information System (INIS)

    Grandi, C; Bonacorsi, D; Bockelman, B; Fisk, I

    2011-01-01

    After many years of preparation, the CMS computing system has reached a situation where stability in operations limits the possibility of introducing innovative features. Nevertheless, it is the same need for stability and smooth operations that requires the introduction of features that were considered non-strategic in earlier phases. Examples are: adequate authorization to control and prioritize access to storage and computing resources; improved monitoring to investigate problems and identify bottlenecks in the infrastructure; increased automation to reduce the manpower needed for operations; and an effective process for deploying new releases of the software tools in production. We present the work of the CMS Distributed Computing Integration Activity, which is responsible for providing a liaison between the CMS distributed computing infrastructure and the software providers, both internal and external to CMS. In particular, we describe the introduction of new middleware features during the last 18 months, as well as the requirements to Grid and Cloud software developers for the future.

  3. Computational tools for high-throughput discovery in biology

    OpenAIRE

    Jones, Neil Christopher

    2007-01-01

    High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...

  4. 6th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Luscombe, Nicholas; Fdez-Riverola, Florentino; Rodríguez, Juan; Practical Applications of Computational Biology & Bioinformatics

    2012-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable. The analysis of the datasets produced by Next Generation Sequencing needs new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in recent decades. This book presents the results of the 6th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, 28-30th March, 2012, which brought together interdisciplinary scientists with strong backgrounds in the biological and computational sciences.

  5. Redefining Perineural Invasion: Integration of Biology With Clinical Outcome.

    Science.gov (United States)

    Schmitd, Ligia B; Beesley, Lauren J; Russo, Nickole; Bellile, Emily L; Inglehart, Ronald C; Liu, Min; Romanowicz, Genevieve; Wolf, Gregory T; Taylor, Jeremy M G; D'Silva, Nisha J

    2018-05-22

    A diagnosis of perineural invasion (PNI), defined as cancer within or surrounding at least 33% of the nerve, leads to selection of aggressive treatment in squamous cell carcinoma (SCC). Recent mechanistic studies show that cancer and nerves interact prior to physical contact. The purpose of this study was to explore cancer-nerve interactions relative to clinical outcome. Biopsy specimens from 71 patients with oral cavity SCC were stained with hematoxylin and eosin and immunohistochemical (IHC; cytokeratin, S100, GAP43, Tuj1) stains. Using current criteria, PNI detection was increased with IHC. Overall survival (OS) tended to be poor for patients with PNI (P = .098). OS was significantly lower for patients with a minimum tumor-nerve distance smaller than 5 μm (P = .011). The estimated relative death rate decreased as the nerve-tumor distance increased; there was a gradual drop-off in death rate from a distance of zero that stabilized around 500 μm. In PNI-negative patients, nerve diameter was significantly related to OS (HR 2.88, 95% CI [1.11, 7.49]). Among PNI-negative nerves, larger nerve-tumor distance and smaller nerve diameter were significantly related to better OS, even when adjusting for T-stage and age (HR 0.82, 95% CI [0.72, 0.92]; HR 1.27, 95% CI [1.00, 1.62], respectively). GAP43, a marker for neuronal outgrowth, stained less than Tuj1 in nerves at greater distances from tumor (OR 0.76, 95% CI [0.73, 0.79]); more GAP43 staining was associated with PNI. Findings from a small group of patients suggest that nerve parameters other than the presence of PNI can influence outcome and that current criteria for PNI need to be re-evaluated to integrate recent biological discoveries. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  6. Computer-integrated design and information management for nuclear projects

    International Nuclear Information System (INIS)

    Gonzalez, A.; Martin-Guirado, L.; Nebrera, F.

    1987-01-01

    Over the past seven years, Empresarios Agrupados has been developing a comprehensive, computer-integrated system to perform the majority of the engineering, design, procurement and construction management activities in nuclear, fossil-fired as well as hydro power plant projects. This system, which is already in a production environment, comprises a large number of computer programs and data bases designed using a modular approach. Each software module, dedicated to meeting the needs of a particular design group or project discipline, facilitates the performance of functional tasks characteristic of the power plant engineering process

  7. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  8. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper, another integral, I(a,b), which is related to H(a,b), is solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method, the integral I(a,b) is expressed in the form of a polynomial in a rational parameter: whereas a function f(x) is ordinarily expressed in terms of x, in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x compared with the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results for H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively.
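
    The key idea is easy to demonstrate numerically: fitting a sixth-degree polynomial in the rational parameter t = x/(1+x) rather than in x keeps the approximation accurate over a wide range of x. The target function below is a stand-in, not the paper's H(a,b) or I(a,b) integrand.

    ```python
    # Degree-6 polynomial fits in x versus in t = x/(1+x) over a wide
    # range; arctan is a stand-in target, not the paper's integrand.
    import numpy as np

    f = np.arctan                          # stand-in target function
    x = np.linspace(0.0, 20.0, 400)        # wide range of x, as in the paper

    p_x = np.polynomial.Polynomial.fit(x, f(x), 6)       # polynomial in x
    t = x / (1.0 + x)                                    # rational parameter
    p_t = np.polynomial.Polynomial.fit(t, f(x), 6)       # polynomial in t

    print(f"max error, poly in x:        {np.max(np.abs(p_x(x) - f(x))):.2e}")
    print(f"max error, poly in x/(1+x):  {np.max(np.abs(p_t(t) - f(x))):.2e}")
    ```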

  9. Future of Chemical Engineering: Integrating Biology into the Undergraduate ChE Curriculum

    Science.gov (United States)

    Mosto, Patricia; Savelski, Mariano; Farrell, Stephanie H.; Hecht, Gregory B.

    2007-01-01

    Integrating biology into the chemical engineering curriculum seems to be the future for chemical engineering programs nationwide and worldwide. Rowan University's efforts to address this need include a unique chemical engineering curriculum with an intensive biology component integrated throughout, from the freshman to the senior year. Freshman and Sophomore…

  10. Integration of cardiac proteome biology and medicine by a specialized knowledgebase.

    Science.gov (United States)

    Zong, Nobel C; Li, Haomin; Li, Hua; Lam, Maggie P Y; Jimenez, Rafael C; Kim, Christina S; Deng, Ning; Kim, Allen K; Choi, Jeong Ho; Zelaya, Ivette; Liem, David; Meyer, David; Odeberg, Jacob; Fang, Caiyun; Lu, Hao-Jie; Xu, Tao; Weiss, James; Duan, Huilong; Uhlen, Mathias; Yates, John R; Apweiler, Rolf; Ge, Junbo; Hermjakob, Henning; Ping, Peipei

    2013-10-12

    Omics sciences enable a systems-level perspective in characterizing cardiovascular biology. Integration of diverse proteomics data via a computational strategy will catalyze the assembly of contextualized knowledge, foster discoveries through multidisciplinary investigations, and minimize unnecessary redundancy in research efforts. The goal of this project is to develop a consolidated cardiac proteome knowledgebase with novel bioinformatics pipeline and Web portals, thereby serving as a new resource to advance cardiovascular biology and medicine. We created Cardiac Organellar Protein Atlas Knowledgebase (COPaKB; www.HeartProteome.org), a centralized platform of high-quality cardiac proteomic data, bioinformatics tools, and relevant cardiovascular phenotypes. Currently, COPaKB features 8 organellar modules, comprising 4203 LC-MS/MS experiments from human, mouse, drosophila, and Caenorhabditis elegans, as well as expression images of 10,924 proteins in human myocardium. In addition, the Java-coded bioinformatics tools provided by COPaKB enable cardiovascular investigators in all disciplines to retrieve and analyze pertinent organellar protein properties of interest. COPaKB provides an innovative and interactive resource that connects research interests with the new biological discoveries in protein sciences. With an array of intuitive tools in this unified Web server, nonproteomics investigators can conveniently collaborate with proteomics specialists to dissect the molecular signatures of cardiovascular phenotypes.

  11. Statistical Methodologies to Integrate Experimental and Computational Research

    Science.gov (United States)

    Parker, P. A.; Johnson, R. T.; Montgomery, D. C.

    2008-01-01

    Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods for strategically and efficiently conducting experiments and refining computational models. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with the statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
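
    As a brief illustration of two of the methods discussed, the sketch below runs a three-level full factorial design for two factors and fits a quadratic response surface by least squares; the "experiment" is a synthetic function standing in for measured data.

    ```python
    # Three-level full factorial design (two factors) plus a quadratic
    # response surface fitted by least squares; the response is synthetic.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    levels = (-1.0, 0.0, 1.0)                       # coded factor levels
    design = np.array(list(itertools.product(levels, levels)))

    def run_experiment(x1, x2):
        """Synthetic response with curvature, interaction, and noise."""
        return 10 + 2*x1 - 3*x2 + 1.5*x1*x2 - 2*x1**2 + rng.normal(0, 0.1)

    y = np.array([run_experiment(x1, x2) for x1, x2 in design])

    # model: y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    X = np.column_stack([np.ones(len(design)),
                         design[:, 0], design[:, 1],
                         design[:, 0] * design[:, 1],
                         design[:, 0] ** 2, design[:, 1] ** 2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted coefficients:", np.round(coef, 2))
    ```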

  12. Computer integration of engineering design and production: A national opportunity

    Science.gov (United States)

    1984-01-01

    The National Aeronautics and Space Administration (NASA), as a purchaser of a variety of manufactured products, including complex space vehicles and systems, clearly has a stake in the advantages of computer-integrated manufacturing (CIM). Two major NASA objectives are to launch a Manned Space Station by 1992 with a budget of $8 billion, and to be a leader in the development and application of productivity-enhancing technology. At the request of NASA, a National Research Council committee visited five companies that have been leaders in using CIM. Based on these case studies, technical, organizational, and financial issues that influence computer integration are described, guidelines for its implementation in industry are offered, and the use of CIM to manage the space station program is recommended.

  13. Applying Integrated Computer Assisted Media (ICAM in Teaching Vocabulary

    Directory of Open Access Journals (Sweden)

    Opick Dwi Indah

    2015-02-01

    Full Text Available The objective of this research was to find out whether the use of integrated computer assisted media (ICAM) is effective in improving the vocabulary achievement of second-semester students at Cokroaminoto Palopo University. The population of this research was the second-semester students of the English department of Cokroaminoto Palopo University in academic year 2013/2014. The samples were 60 students placed into two groups, experimental and control, with each group consisting of 30 students. This research used the cluster random sampling technique. The data were collected with a vocabulary test and analyzed using descriptive and inferential statistics. The result of this research was that integrated computer assisted media (ICAM) can improve the vocabulary achievement of students of the English department of Cokroaminoto Palopo University. It can be concluded that the use of ICAM in teaching vocabulary is effective in improving the students' vocabulary achievement.

  14. Integrating computer programs for engineering analysis and design

    Science.gov (United States)

    Wilhite, A. W.; Crisp, V. K.; Johnson, S. C.

    1983-01-01

    The design of a third-generation system for integrating computer programs for engineering analysis and design has been developed for the Aerospace Vehicle Interactive Design (AVID) system. This system consists of an engineering data management system, program interface software, a user interface, and a geometry system. A relational information system (ARIS) was developed specifically for this computer-aided engineering system. It serves as a repository for the design data communicated between analysis programs, as a dictionary that describes these design data, as a directory that describes the analysis programs, and for other system functions. A method is described for interfacing independent analysis programs into a loosely coupled design system. This method emphasizes an interactive extension of analysis techniques and manipulation of design data. Integrity mechanisms also exist to maintain database correctness for multidisciplinary design tasks performed by an individual or a team of specialists. Finally, a prototype user interface program has been developed to aid in system utilization.

  15. On the Edge of Mathematics and Biology Integration: Improving Quantitative Skills in Undergraduate Biology Education

    Science.gov (United States)

    Feser, Jason; Vasaly, Helen; Herrera, Jose

    2013-01-01

    In this paper, the authors describe how two institutions are helping their undergraduate biology students build quantitative competencies. The incorporation of quantitative skills and reasoning in biology is framed through a discussion of two cases that both concern introductory biology courses but differ in the complexity of the mathematics and the…

  16. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. Given the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand computation's role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term, unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  17. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    Science.gov (United States)

    Filipčič, A.; ATLAS Collaboration

    2017-10-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte Carlo Simulation in SCEAPI and have been providing CPU power since fall 2015.

  18. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160; The ATLAS collaboration

    2016-01-01

    Fifteen Chinese High Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  19. Integration of the Chinese HPC Grid in ATLAS Distributed Computing

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00081160

    2017-01-01

    Fifteen Chinese High-Performance Computing sites, many of them on the TOP500 list of most powerful supercomputers, are integrated into a common infrastructure providing users coherent access through a RESTful interface called SCEAPI. These resources have been integrated into the ATLAS Grid production system using a bridge between ATLAS and SCEAPI which translates the authorization and job submission protocols between the two environments. The ARC Computing Element (ARC-CE) forms the bridge, using an extended batch system interface to allow job submission to SCEAPI. The ARC-CE was set up at the Institute for High Energy Physics, Beijing, in order to be as close as possible to the SCEAPI front-end interface at the Computing Network Information Center, also in Beijing. This paper describes the technical details of the integration between ARC-CE and SCEAPI and presents results so far with two supercomputer centers, Tianhe-IA and ERA. These two centers have been the pilots for ATLAS Monte C...

  20. An integrated computer aided system for integrated design of chemical processes

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Hytoft, Glen; Jaksland, Cecilia

    1997-01-01

    In this paper, an Integrated Computer Aided System (ICAS), which is particularly suitable for solving problems related to integrated design of chemical processes, is presented. ICAS features include a model generator (generation of problem specific models including model simplification and model ...... form the basis for the toolboxes. The available features of ICAS are highlighted through a case study involving the separation of binary azeotropic mixtures. (C) 1997 Elsevier Science Ltd....

  1. The Human Genome Project: Biology, Computers, and Privacy.

    Science.gov (United States)

    Cutter, Mary Ann G.; Drexler, Edward; Gottesman, Kay S.; Goulding, Philip G.; McCullough, Laurence B.; McInerney, Joseph D.; Micikas, Lynda B.; Mural, Richard J.; Murray, Jeffrey C.; Zola, John

    This module, for high school teachers, is the second of two modules about the Human Genome Project (HGP) produced by the Biological Sciences Curriculum Study (BSCS). The first section of this module provides background information for teachers about the structure and objectives of the HGP, aspects of the science and technology that underlie the…

  2. Fast computation of complete elliptic integrals and Jacobian elliptic functions

    Science.gov (United States)

    Fukushima, Toshio

    2009-12-01

    As a preparation step to compute Jacobian elliptic functions efficiently, we created a fast method to calculate the complete elliptic integral of the first and second kinds, K(m) and E(m), for the standard domain of the elliptic parameter, 0 ≤ m < 1. Also, we developed a procedure to compute simultaneously three Jacobian elliptic functions, sn(u|m), cn(u|m), and dn(u|m), by repeated usage of the double argument formulae, starting from the Maclaurin series expansions with respect to the elliptic argument, u, after its domain is reduced to the standard range, 0 ≤ u < K(m). The procedure is 25-70% faster than methods based on the Gauss transformation, such as Bulirsch's algorithm, sncndn, quoted in Numerical Recipes, even if the acceleration of the computation of K(m) is not taken into account.
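    As a compact illustration of computing K(m) and E(m), the sketch below uses the classical arithmetic-geometric mean (AGM) iteration (Abramowitz & Stegun 17.6); note this is not the series-and-double-argument method of the record above, which is designed to be faster still.

```python
import math

def ellip_KE(m, tol=1e-15):
    """Complete elliptic integrals K(m) and E(m) for 0 <= m < 1 via the
    classical AGM iteration: K = pi / (2 a_N) and
    E = K * (1 - sum_n 2**(n-1) * c_n**2), with c_0 = sqrt(m)."""
    a, b = 1.0, math.sqrt(1.0 - m)
    c = math.sqrt(m)
    csum = 0.5 * c * c          # n = 0 term of the running sum
    n = 0
    while abs(c) > tol:
        # AGM step: new (a, b, c) computed from the previous pair
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        n += 1
        csum += 2.0 ** (n - 1) * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - csum)

print(ellip_KE(0.5))   # ~ (1.854074677, 1.350643881)
```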

  3. The role of computer modelling in participatory integrated assessments

    International Nuclear Information System (INIS)

    Siebenhuener, Bernd; Barth, Volker

    2005-01-01

    In a number of recent research projects, computer models have been included in participatory procedures to assess global environmental change. The intention was to support knowledge production and to help the involved non-scientists develop a deeper understanding of the interactions between natural and social systems. This paper analyses the experiences gained in three projects with the use of computer models, from a participatory and a risk-management perspective. Our cross-cutting analysis of the objectives, the employed project designs and moderation schemes, and the observed learning processes in participatory processes with model use shows that models play a mixed role in informing participants and stimulating discussions. However, no deeper reflection on values and belief systems could be achieved. In terms of the risk management phases, computer models best serve the purposes of problem definition and option assessment within participatory integrated assessment (PIA) processes.

  4. Toward an interactive article: integrating journals and biological databases

    Directory of Open Access Journals (Sweden)

    Marygold Steven J

    2011-05-01

    Full Text Available Abstract Background Journal articles and databases are two major modes of communication in the biological sciences, and thus integrating these critical resources is of urgent importance to increase the pace of discovery. Projects focused on bridging the gap between journals and databases have been on the rise over the last five years and have resulted in the development of automated tools that can recognize entities within a document and link those entities to a relevant database. Unfortunately, automated tools cannot resolve ambiguities that arise from one term being used to signify entities that are quite distinct from one another. Instead, resolving these ambiguities requires some manual oversight. Finding the right balance between the speed and portability of automation and the accuracy and flexibility of manual effort is a crucial goal in making text markup a successful venture. Results We have established a journal article mark-up pipeline that links GENETICS journal articles and the model organism database (MOD) WormBase. This pipeline uses a lexicon built with entities from the database as a first step. The entity markup pipeline results in links from over nine classes of objects, including genes, proteins, alleles, phenotypes and anatomical terms. New entities and ambiguities are discovered and resolved by a database curator through a manual quality control (QC) step, along with help from authors via a web form that is provided to them by the journal. New entities discovered through this pipeline are immediately sent to an appropriate curator at the database. Ambiguous entities that do not automatically resolve to one link are resolved by hand, ensuring an accurate link. This pipeline has been extended to other databases, namely the Saccharomyces Genome Database (SGD) and FlyBase, and has been implemented in marking up a paper with links to multiple databases. Conclusions Our semi-automated pipeline hyperlinks articles published in GENETICS to WormBase.
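    As a rough sketch of the lexicon-based first pass described above (the real pipeline draws its lexicon from WormBase and routes ambiguous or novel hits to curators), the following toy matcher wraps exact lexicon hits in hyperlinks; the entity IDs and base URL are illustrative assumptions.

```python
import re

# Hypothetical mini-lexicon mapping entity names to (class, accession);
# the accessions below are illustrative only.
lexicon = {
    "unc-22": ("gene", "WBGene00006759"),
    "daf-16": ("gene", "WBGene00000912"),
    "dauer":  ("phenotype", "WBPhenotype:0000012"),
}

def mark_up(text, base_url="https://example.org/entity/"):
    """First-pass automated markup: wrap every exact lexicon hit in a link.
    Longest terms are tried first so longer names shadow their substrings;
    ambiguous or unknown entities would be queued for manual QC."""
    terms = sorted(lexicon, key=len, reverse=True)
    pattern = re.compile("|".join(re.escape(t) for t in terms))
    def link(m):
        cls, acc = lexicon[m.group(0)]
        return f'<a class="{cls}" href="{base_url}{acc}">{m.group(0)}</a>'
    return pattern.sub(link, text)

print(mark_up("Mutations in unc-22 modify the dauer phenotype."))
```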

  5. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective

    Directory of Open Access Journals (Sweden)

    Shuo Gu

    2017-01-01

    Full Text Available With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with in-depth understanding towards pharmacognosy. This paper summarizes these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM studies, derived from our previous successful applications, is proposed.

  6. Chinese Herbal Medicine Meets Biological Networks of Complex Diseases: A Computational Perspective.

    Science.gov (United States)

    Gu, Shuo; Pei, Jianfeng

    2017-01-01

    With the rapid development of cheminformatics, computational biology, and systems biology, great progress has been made recently in the computational research of Chinese herbal medicine, with in-depth understanding towards pharmacognosy. This paper summarizes these studies in the aspects of computational methods, traditional Chinese medicine (TCM) compound databases, and TCM network pharmacology. Furthermore, we chose the arachidonic acid metabolic network as a case study to demonstrate the regulatory function of herbal medicine in the treatment of inflammation at the network level. Finally, a computational workflow for network-based TCM studies, derived from our previous successful applications, is proposed.

  7. Integrating Xgrid into the HENP distributed computing model

    Energy Technology Data Exchange (ETDEWEB)

    Hajdu, L; Lauret, J [Brookhaven National Laboratory, Upton, NY 11973 (United States); Kocoloski, A; Miller, M [Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)], E-mail: kocolosk@mit.edu

    2008-07-15

    Modern Macintosh computers feature Xgrid, a distributed computing architecture built directly into Apple's OS X operating system. While the approach is radically different from those generally expected by Unix-based Grid infrastructures (Open Science Grid, TeraGrid, EGEE), opportunistic computing on Xgrid is nonetheless a tempting and novel way to assemble a computing cluster with a minimum of additional configuration. In fact, it requires only the default operating system and authentication to a central controller from each node. OS X also implements arbitrarily extensible metadata, allowing an instantly updated file catalog to be stored as part of the filesystem itself. The low barrier to entry allows an Xgrid cluster to grow quickly and organically. This paper and presentation will detail the steps that can be taken to make such a cluster a viable resource for HENP research computing. We will further show how to provide users a unified job submission framework by integrating Xgrid through the STAR Unified Meta-Scheduler (SUMS), making task and job submission effortless for those users already using the tool for traditional Grid or local cluster job submission. We will discuss additional steps that can be taken to make an Xgrid cluster a full partner in grid computing initiatives, focusing on Open Science Grid integration. MIT's Xgrid system currently supports the work of multiple research groups in the Laboratory for Nuclear Science, and has become an important tool for generating simulations and conducting data analyses at the Massachusetts Institute of Technology.

  8. Medicinal electrochemistry: integration of electrochemistry, medicinal chemistry and computational chemistry.

    Science.gov (United States)

    Almeida, M O; Maltarollo, V G; de Toledo, R A; Shim, H; Santos, M C; Honorio, K M

    2014-01-01

    Over the last centuries, there were many important discoveries in medicine that were crucial for gaining a better understanding of several physiological processes. Molecular modelling techniques are powerful tools that have been successfully used to analyse and interface medicinal chemistry studies with electrochemical experimental results. This special combination can help to comprehend medicinal chemistry problems, such as predicting biological activity and understanding drug action mechanisms. Electrochemistry has provided better comprehension of biological reactions and, as a result of many technological improvements, the combination of electrochemical techniques and biosensors has become an appealing choice for pharmaceutical and biomedical analyses. Therefore, this review will briefly outline the present scope and future advances related to the integration of electrochemical and medicinal chemistry approaches based on various applications from recent studies.

  9. Effective use of latent semantic indexing and computational linguistics in biological and biomedical applications.

    Science.gov (United States)

    Chen, Hongyu; Martin, Bronwen; Daimon, Caitlin M; Maudsley, Stuart

    2013-01-01

    Text mining is rapidly becoming an essential technique for the annotation and analysis of large biological data sets. Biomedical literature currently increases at a rate of several thousand papers per week, making automated information retrieval methods the only feasible method of managing this expanding corpus. With the increasing prevalence of open-access journals and constant growth of publicly-available repositories of biomedical literature, literature mining has become much more effective with respect to the extraction of biomedically-relevant data. In recent years, text mining of popular databases such as MEDLINE has evolved from basic term-searches to more sophisticated natural language processing techniques, indexing and retrieval methods, structural analysis and integration of literature with associated metadata. In this review, we will focus on Latent Semantic Indexing (LSI), a computational linguistics technique increasingly used for a variety of biological purposes. It is noted for its ability to consistently outperform benchmark Boolean text searches and co-occurrence models at information retrieval and its power to extract indirect relationships within a data set. LSI has been used successfully to formulate new hypotheses, generate novel connections from existing data, and validate empirical data.
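    As a minimal, self-contained illustration of the LSI technique surveyed here (not any specific implementation from the review), the sketch below builds a toy term-document matrix, truncates its SVD to a k-dimensional latent space, folds a query into that space, and ranks documents by cosine similarity; all documents, terms, and the choice k = 2 are invented for the example.

```python
import numpy as np

docs = [
    "gene expression regulation in yeast",
    "protein interaction networks in yeast",
    "drug treatment of cardiac disease",
    "cardiac gene expression after drug treatment",
]
vocab = sorted({w for d in docs for w in d.split()})
# Term-document count matrix (rows: terms, columns: documents)
A = np.array([[d.split().count(w) for d in docs] for w in vocab], float)

# Rank-k truncation of the SVD: A ~ U_k S_k V_k^T
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T          # documents in latent space

def query_vec(q):
    """Fold a query into the latent space: q_k = S_k^{-1} U_k^T q."""
    qt = np.array([q.split().count(w) for w in vocab], float)
    return np.linalg.inv(np.diag(s[:k])) @ U[:, :k].T @ qt

q = query_vec("cardiac drug")
sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
print(sims.argsort()[::-1])   # documents ranked by latent-semantic similarity
```

    The fold-in step is what lets LSI retrieve documents related to a query even when they share few or no literal terms, which is the property the review credits for outperforming Boolean searches.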

  10. Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology (Final Report)

    Science.gov (United States)

    EPA announced the release of the final report, Next Generation Risk Assessment: Incorporation of Recent Advances in Molecular, Computational, and Systems Biology. This report describes new approaches that are faster, less resource intensive, and more robust, and that can help ...

  11. Where mathematics, computer science, linguistics and biology meet essays in honour of Gheorghe Păun

    CERN Document Server

    Mitrana, Victor

    2001-01-01

    In recent years, an increasing interest of computer scientists in the structure of biological molecules and the ways they can be manipulated in vitro has been observed, with the goal of defining theoretical models of computation based on genetic engineering tools. Along the same lines, a parallel interest is growing regarding the process of evolution of living organisms. Much of the current data for genomes is expressed in the form of maps, which are now becoming available and permit the study of the evolution of organisms at the scale of the genome for the first time. On the other hand, there is an active trend nowadays throughout the field of computational biology toward abstracted, hierarchical views of biological sequences, which is very much in the spirit of computational linguistics. In the last decades, results and methods in the field of formal language theory that might be applied to the description of biological sequences have been pointed out.

  12. Ontology-supported research on vaccine efficacy, safety and integrative biological networks.

    Science.gov (United States)

    He, Yongqun

    2014-07-01

    While vaccine efficacy and safety research has dramatically progressed with the methods of in silico prediction and data mining, many challenges still exist. A formal ontology is a human- and computer-interpretable set of terms and relations that represent entities in a specific domain and how these terms relate to each other. Several community-based ontologies (including Vaccine Ontology, Ontology of Adverse Events and Ontology of Vaccine Adverse Events) have been developed to support vaccine and adverse event representation, classification, data integration, literature mining of host-vaccine interaction networks, and analysis of vaccine adverse events. The author further proposes minimal vaccine information standards and their ontology representations, ontology-based linked open vaccine data and meta-analysis, an integrative One Network ('OneNet') Theory of Life, and ontology-based approaches to study and apply the OneNet theory. In the Big Data era, these proposed strategies provide a novel framework for advanced data integration and analysis of fundamental biological networks including vaccine immune mechanisms.

  13. Complex fluids in biological systems experiment, theory, and computation

    CERN Document Server

    2015-01-01

    This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history dependent responses to forces and displacements. With ever-increasing capabilities in the visualization and manipulation of biological systems, research on the fundamental phenomena, models, measurements, and analysis of complex fluids has taken a number of exciting directions. In this book, many of the world’s foremost experts explore key topics such as: Macro- and micro-rheological techniques for measuring the material properties of complex biofluids and the subtleties of data interpretation Experimental observations and rheology of complex biological materials, including mucus, cell membranes, the cytoskeleton, and blood The motility of microorganisms in complex fluids and the dynamics of active suspensions Challenges and solut...

  14. Fundamentals of bioinformatics and computational biology methods and exercises in matlab

    CERN Document Server

    Singh, Gautam B

    2015-01-01

    This book offers comprehensive coverage of all the core topics of bioinformatics, and includes practical examples completed using the MATLAB bioinformatics toolbox™. It is primarily intended as a textbook for engineering and computer science students attending advanced undergraduate and graduate courses in bioinformatics and computational biology. The book develops bioinformatics concepts from the ground up, starting with an introductory chapter on molecular biology and genetics. This chapter will enable physical science students to fully understand and appreciate the ultimate goals of applying the principles of information technology to challenges in biological data management, sequence analysis, and systems biology. The first part of the book also includes a survey of existing biological databases, tools that have become essential in today’s biotechnology research. The second part of the book covers methodologies for retrieving biological information, including fundamental algorithms for sequence compar...

  15. IntegromeDB: an integrated system and biological search engine.

    Science.gov (United States)

    Baitaluk, Michael; Kozhenkov, Sergey; Dubinina, Yulia; Ponomarenko, Julia

    2012-01-19

    With the growth of biological data in volume and heterogeneity, web search engines become key tools for researchers. However, general-purpose search engines are not specialized for the search of biological data. Here, we present an approach to developing a biological web search engine based on Semantic Web technologies and demonstrate its implementation for retrieving gene- and protein-centered knowledge. The engine is available at http://www.integromedb.org. The IntegromeDB search engine allows scanning data on gene regulation, gene expression, protein-protein interactions, pathways, metagenomics, mutations, diseases, and other gene- and protein-related data that are automatically retrieved from publicly available databases and web pages using biological ontologies. To perfect the resource design and usability, we welcome and encourage community feedback.

  16. Atomic force microscope with integrated optical microscope for biological applications

    OpenAIRE

    Putman, Constant A.J.; Putman, C.A.J.; van der Werf, Kees; de Grooth, B.G.; van Hulst, N.F.; Segerink, Franciscus B.; Greve, Jan

    1992-01-01

    Since atomic force microscopy (AFM) is capable of imaging nonconducting surfaces, the technique holds great promises for high‐resolution imaging of biological specimens. A disadvantage of most AFMs is the fact that the relatively large sample surface has to be scanned multiple times to pinpoint a specific biological object of interest. Here an AFM is presented which has an incorporated inverted optical microscope. The optical image from the optical microscope is not obscured by the cantilever...

  17. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments, enormous amounts of data are analyzed and simulated. Traditionally, dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers that provide regular cloud services to users, as these can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost-efficient operation and for sharing resources with other communities. For this purpose, the EKP developed the on-demand cloud manager ROCED for the dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution reports on the concept of our cloud manager and an implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster, located in Freiburg).
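    The scaling logic such an on-demand cloud manager implements can be sketched as a reconcile loop that matches booted virtual machines to the batch queue; every class, function, and threshold below is a hypothetical stand-in, not the ROCED API.

```python
import math
import random

class FakeSite:
    """Stand-ins for the batch system and cloud endpoint; a real manager
    would query HTCondor/Torque and an OpenStack API instead."""
    def __init__(self):
        self.vms = 0
    def idle_jobs(self):
        return random.randint(0, 200)    # queued jobs waiting for a worker
    def boot_vm(self):
        self.vms += 1                    # instantiate a virtualized worker
    def drain_vm(self):
        self.vms = max(0, self.vms - 1)  # retire an idle worker

JOBS_PER_VM = 4    # job slots offered by one virtual machine
MAX_VMS = 50       # quota / budget cap

def reconcile(site):
    """One scheduling round: match the number of booted VMs to demand."""
    demand = min(MAX_VMS, math.ceil(site.idle_jobs() / JOBS_PER_VM))
    while site.vms < demand:
        site.boot_vm()
    while site.vms > demand:
        site.drain_vm()
    return demand

site = FakeSite()
for cycle in range(5):                   # a real manager loops indefinitely
    print(cycle, reconcile(site))
```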

  18. ISCB Ebola Award for Important Future Research on the Computational Biology of Ebola Virus.

    Directory of Open Access Journals (Sweden)

    Peter D. Karp

    2015-01-01

    Full Text Available Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains as well as 3-D protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computational modeling of the spread of the virus, computational mining of the Ebola literature, and creation of a curated Ebola database. Taken together, such computational efforts could significantly accelerate traditional scientific approaches. In recognition of the need for important and immediate solutions from the field of computational biology against Ebola, the International Society for Computational Biology (ISCB) announces a prize for an important computational advance in fighting the Ebola virus. ISCB will confer the ISCB Fight against Ebola Award, along with a prize of US$2,000, at its July 2016 annual meeting (ISCB Intelligent Systems for Molecular Biology [ISMB] 2016, Orlando, Florida).

  19. Computer simulation of heating of biological tissue during laser radiation

    International Nuclear Information System (INIS)

    Bojanic, S.; Sreckovic, M.

    1995-01-01

    The computer model is based on an implicit finite difference scheme to solve the diffusion equation for light distribution and the bio-heat equation. A practical application of the model is to calculate the temperature distributions during thermal coagulation of the prostate by radiative heating. (author)

  20. The Effects of 3D Computer Simulation on Biology Students' Achievement and Memory Retention

    Science.gov (United States)

    Elangovan, Tavasuria; Ismail, Zurida

    2014-01-01

    A quasi-experimental study was conducted for six weeks to determine the effectiveness of two different 3D computer simulation-based teaching methods, that is, realistic simulation and non-realistic simulation, on Form Four Biology students' achievement and memory retention in Perak, Malaysia. A sample of 136 Form Four Biology students in Perak,…

  1. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real-time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine-mediated cell recruitment. These model-derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that the model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and to predict the time-point when a switch in the influence of biological mechanisms might occur.
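    A sketch of the temporal comparison described above, assuming the Vargha-Delaney A-test as the effect-size measure (the statistic spartan's published analyses use); the simulation outputs below are synthetic stand-ins sampled at twelve-hour intervals.

```python
import numpy as np

def a_test(x, y):
    """Vargha-Delaney A statistic: P(X > Y) + 0.5 * P(X == Y).
    Values near 0.5 mean no effect; values near 0 or 1 mean a large one."""
    x, y = np.asarray(x), np.asarray(y)
    gt = (x[:, None] > y[None, :]).sum()
    eq = (x[:, None] == y[None, :]).sum()
    return (gt + 0.5 * eq) / (len(x) * len(y))

rng = np.random.default_rng(1)
# Hypothetical simulation responses: baseline vs. perturbed parameter,
# re-sampled at each simulated time-point; here the parameter's influence
# is constructed to grow with time, mimicking a biphasic switch.
for t in (12, 24, 36, 48):
    baseline = rng.normal(0.0, 1.0, 100)
    perturbed = rng.normal(0.1 * t / 12, 1.0, 100)
    print(f"t = {t} h, A = {a_test(perturbed, baseline):.3f}")
```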

  2. The computational design of Geological Disposal Technology Integration System

    International Nuclear Information System (INIS)

    Ishihara, Yoshinao; Iwamoto, Hiroshi; Kobayashi, Shigeki; Neyama, Atsushi; Endo, Shuji; Shindo, Tomonori

    2002-03-01

    In order to develop the 'Geological Disposal Technology Integration System', which is intended to serve as a knowledge base for fundamental studies, the computational design of the database and image-processing functions indispensable to the system was carried out, a prototype was built for trial purposes, and its functions were confirmed. (1) A database for the integration system was constructed that systematizes and manages the necessary information, and related information, for examining the repository composition as a whole; the system was composed of image processing, analytical information management, repository component management, and system security functions. (2) The range of data and information treated by the system was examined, and the design of the database structure and of the image-processing function for data preserved in the integrated database was carried out. (3) Based on the results of this design work, a prototype covering the basic functions, the system operation interface, and the image-processing function was manufactured to verify the feasibility of the 'Geological Disposal Technology Integration System', and its functions were confirmed. (author)

  3. An Integrative and Collaborative Approach to Creating a Diverse and Computationally Competent Geoscience Workforce

    Science.gov (United States)

    Moore, S. L.; Kar, A.; Gomez, R.

    2015-12-01

    A partnership between Fort Valley State University (FVSU), the Jackson School of Geosciences at The University of Texas (UT) at Austin, and the Texas Advanced Computing Center (TACC) is engaging computational geoscience faculty and researchers with academically talented underrepresented minority (URM) students, training them to solve some of the world's most challenging geoscience grand challenges, which require data-intensive, large-scale modeling and simulation on high-performance computers. UT Austin's geoscience outreach program GeoFORCE, recently awarded the Presidential Award for Excellence in Science, Mathematics and Engineering Mentoring, contributes to the collaborative best practices in engaging researchers with URM students. Collaborative efforts over the past decade are providing data demonstrating that integrative pipeline programs with mentoring and paid internship opportunities, multi-year scholarships, computational training, and communication skills development are having an impact on URMs developing middle skills for geoscience careers. Since 1997, the Cooperative Developmental Energy Program at FVSU and its collaborating universities have graduated 87 engineers, 33 geoscientists, and eight health physicists. Recruited as early as high school, students enroll for three years at FVSU majoring in mathematics, chemistry or biology, and then transfer to UT Austin or other partner institutions to complete a second STEM degree, including geosciences. A partnership with the Integrative Computational Education and Research Traineeship (ICERT), a National Science Foundation (NSF) Research Experience for Undergraduates (REU) Site at TACC, provides students with a 10-week summer research experience at UT Austin. Mentored by TACC researchers, students with no previous background in computational science learn to use some of the world's most powerful high performance

  4. Economic value of biological control in integrated pest management of managed plant systems.

    Science.gov (United States)

    Naranjo, Steven E; Ellsworth, Peter C; Frisvold, George B

    2015-01-07

    Biological control is an underlying pillar of integrated pest management, yet little focus has been placed on assigning economic value to this key ecosystem service. Setting biological control on a firm economic foundation would help to broaden its utility and adoption for sustainable crop protection. Here we discuss approaches and methods available for valuation of biological control of arthropod pests by arthropod natural enemies and summarize economic evaluations in classical, augmentative, and conservation biological control. Emphasis is placed on valuation of conservation biological control, which has received little attention. We identify some of the challenges of and opportunities for applying economics to biological control to advance integrated pest management. Interaction among diverse scientists and stakeholders will be required to measure the direct and indirect costs and benefits of biological control that will allow farmers and others to internalize the benefits that incentivize and accelerate adoption for private and public good.

  5. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed; Meier, Stuart Kurt

    2013-01-01

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  6. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed

    2013-09-03

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
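    The co-expression analysis described in these two records reduces, at its core, to correlating a query gene's expression profile against all other genes and ranking the results; a minimal sketch on random placeholder data follows (the matrix and gene names are invented, not the Arabidopsis data of the chapter).

```python
import numpy as np

# expr: genes x conditions expression matrix; row 0 plays the role of the
# query gene (a WAKL10-like gene of interest). Values are random stand-ins.
rng = np.random.default_rng(0)
expr = rng.normal(size=(500, 40))
gene_names = [f"gene_{i}" for i in range(expr.shape[0])]

# Pearson correlation of the query against every gene: standardize rows,
# then the dot product with the query row divided by n gives r.
z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
r = z @ z[0] / expr.shape[1]

top = np.argsort(-r)[1:11]       # top 10 co-expressed genes, skipping self
for i in top:
    print(gene_names[i], round(float(r[i]), 3))
```

    In a real analysis the highly ranked genes would then be inspected for shared annotations (e.g., defense-response terms) to link the query gene to a cellular response.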

  7. FOREWORD: Third Nordic Symposium on Computer Simulation in Physics, Chemistry, Biology and Mathematics

    Science.gov (United States)

    Kaski, K.; Salomaa, M.

    1990-01-01

    ), physics (fluid-dynamical and quantum-mechanical calculations; extensive numerical simulations of various condensed-matter systems; the development of stellar constellations, even the early Universe), chemistry (quantum-chemical calculations on the structures of new chemical compounds; chemical reactions and reaction dynamics), and biology (various models, for example, in population dynamics). We succeeded in our effort to assemble several internationally recognized researchers of Computational Science to deliver invited talks on a couple of exceptionally beautiful late-summer days in the modern premises of the Adult Education Center at Lahti. Among the plenary speakers, Per Bak described his highly original work on self-organized criticality. David Ceperley discussed pioneering numerical simulations of superfluid helium in which, for the first time, Feynman's path-integral formulation of quantum mechanics has been implemented on a computer. Jim Gunton presented his comprehensive studies of the Cahn-Hilliard equation for the dynamics of ordering in a condensed-matter system far from equilibrium, while Alex Hansen explained those on nonlinear breakdown in disordered materials. Representing the important field of computational chemistry, Bo Jönsson dealt with attractive forces between polyelectrolytes. Kurt Kremer gave an interesting account on computer-simulation studies of complex polymer systems, while Ole Mouritsen reviewed studies of interfacial fluctuations in lipid membranes. Pekka Pyykkö introduced his pioneering work which has led to predictions of completely novel chemical species. Annette Zippelius gave an expert introduction to the highly active field of neural networks. It is evident from each of these intriguing plenary contributions that, indeed, the computational approach is a frontier field of science, possibly providing the most versatile research method available today. We also arranged a competition for the best Posters presented at the Symposium; the

  8. Optic Glomeruli: Biological Circuits that Compute Target Identity

    Science.gov (United States)

    2013-11-01

    [Report fragment: the abstract breaks off mid-sentence, describing higher-order target information relayed by local interneurons to neurons reaching various targets in the brain, including the mushroom bodies. The remainder is residue from a list of related presentations, among them a poster by Laiyong Mu and Nicholas J. Strausfeld (673.18/KK11) and Janelia Farm Conference contributions on vision in flies and on visual and mechanical sensory integration of descending neurons in Drosophila melanogaster.]

  9. Decomposing dendrophilia. Comment on “Toward a computational framework for cognitive biology: Unifying approaches from cognitive neuroscience and comparative cognition” by W. Tecumseh Fitch

    Science.gov (United States)

    Honing, Henkjan; Zuidema, Willem

    2014-09-01

    The future of cognitive science will be about bridging neuroscience and behavioral studies, with essential roles played by comparative biology, formal modeling, and the theory of computation. Nowhere will this integration be more strongly needed than in understanding the biological basis of language and music. We thus strongly sympathize with the general framework that Fitch [1] proposes, and welcome the remarkably broad and readable review he presents to support it.

  10. Computer Literacy for Life Sciences: Helping the Digital-Era Biology Undergraduates Face Today's Research

    Science.gov (United States)

    Smolinski, Tomasz G.

    2010-01-01

    Computer literacy plays a critical role in today's life sciences research. Without the ability to use computers to efficiently manipulate and analyze large amounts of data resulting from biological experiments and simulations, many of the pressing questions in the life sciences could not be answered. Today's undergraduates, despite the ubiquity of…

  11. Quality controls in integrative approaches to detect errors and inconsistencies in biological databases

    Directory of Open Access Journals (Sweden)

    Ghisalberti Giorgio

    2010-12-01

    Full Text Available Numerous biomolecular data are available, but they are scattered in many databases and only some of them are curated by experts. Most available data are computationally derived and include errors and inconsistencies. Effective use of available data in order to derive new knowledge hence requires data integration and quality improvement. Many approaches for data integration have been proposed. Data warehousing seems to be the most adequate when comprehensive analysis of the integrated data is required. This also makes it the most suitable approach for implementing comprehensive quality controls on integrated data. We previously developed GFINDer (http://www.bioinformatics.polimi.it/GFINDer/), a web system that supports scientists in effectively using available information. It allows comprehensive statistical analysis and mining of functional and phenotypic annotations of gene lists, such as those identified by high-throughput biomolecular experiments. The GFINDer backend is composed of a multi-organism genomic and proteomic data warehouse (GPDW). Within the GPDW, several controlled terminologies and ontologies, which describe gene and gene product related biomolecular processes, functions and phenotypes, are imported and integrated, together with their associations with genes and proteins of several organisms. In order to ease keeping the GPDW updated and to ensure the best possible quality of the data integrated in subsequent updates of the data warehouse, we developed several automatic procedures. Within these, we implemented numerous data quality control techniques to test the integrated data for a variety of possible errors and inconsistencies. Among other features, the implemented controls check data structure and completeness, ontological data consistency, ID format and evolution, unexpected data quantification values, and consistency of data from single and multiple sources. We use the implemented controls to analyze the quality of data available from several
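    The record-level checks described above (completeness, ID format, consistency) can be sketched as simple validators; the field names and the deliberately simplified ID patterns below are illustrative assumptions, not GPDW's actual rules.

```python
import re

# Simplified identifier patterns for a few sources (illustrative only;
# e.g., real UniProt accessions have more accepted forms than this).
ID_PATTERNS = {
    "entrez_gene": re.compile(r"^\d+$"),
    "uniprot": re.compile(r"^[A-NR-Z][0-9][A-Z0-9]{3}[0-9]$"),
    "go": re.compile(r"^GO:\d{7}$"),
}

def check_record(rec):
    """Return a list of quality-control errors for one imported record."""
    errors = []
    # Completeness: required fields present and non-empty
    for field in ("source", "id_type", "id", "symbol"):
        if not rec.get(field):
            errors.append(f"missing field: {field}")
    # ID format: identifier must match its source's expected pattern
    pat = ID_PATTERNS.get(rec.get("id_type"))
    if pat and not pat.match(rec.get("id", "")):
        errors.append(f"bad {rec['id_type']} ID: {rec.get('id')!r}")
    return errors

print(check_record({"source": "NCBI", "id_type": "go",
                    "id": "GO:8150", "symbol": "BP"}))
# -> ["bad go ID: 'GO:8150'"]  (GO IDs are zero-padded to 7 digits)
```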

  12. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  13. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  14. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
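    For reference, the thermal Wigner density sampled in these two records is the standard phase-space transform of the canonical density operator (this is the textbook definition, not anything specific to PIM):

```latex
W(q,p) \;=\; \frac{1}{2\pi\hbar\,Z}\int_{-\infty}^{\infty}\mathrm{d}\Delta\;
e^{-\mathrm{i}p\Delta/\hbar}\,
\left\langle q+\tfrac{\Delta}{2}\right| e^{-\beta\hat{H}} \left| q-\tfrac{\Delta}{2}\right\rangle,
\qquad Z \;=\; \operatorname{Tr}\, e^{-\beta\hat{H}}.
```

    As the abstracts state, PIM represents the matrix element as an imaginary-time path integral and treats the remaining structure with a cumulant expansion, yielding a (possibly noisy) probability density that standard Monte Carlo algorithms can sample.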

  15. Operational experience with the Sizewell B integrated plant computer system

    International Nuclear Information System (INIS)

    Ladner, J.E.J.; Alexander, N.C.; Fitzpatrick, J.A.

    1997-01-01

    The Westinghouse Integrated System for Centralised Operation (WISCO) is the primary plant control system at the Sizewell B Power Station. It comprises three subsystems: the High Integrity Control System (HICS), the Process Control System (PCS) and the Distributed Computer System (DCS). The HICS performs the control and data acquisition of nuclear-safety-significant plant systems. The PCS uses redundant data processing unit pairs. The workstations and servers of the DCS communicate with each other over standard Ethernet. The maintenance requirements for every plant system are covered by a Maintenance Strategy Report; the breakdown of these reports is listed. The WISCO system has performed exceptionally well. Due to the diagnostic information presented by the HICS, problems could normally be resolved within 24 hours. There have been some 200 outstanding modifications to the system. The procedure of modification is briefly described. (A.K.)

  16. Computational Approaches for Integrative Analysis of the Metabolome and Microbiome

    Directory of Open Access Journals (Sweden)

    Jasmine Chong

    2017-11-01

    Full Text Available The study of the microbiome, the totality of all microbes inhabiting the host or an environmental niche, has experienced exponential growth over the past few years. The microbiome contributes functional genes and metabolites, and is an important factor for maintaining health. In this context, metabolomics is increasingly applied to complement sequencing-based approaches (marker genes or shotgun metagenomics) to enable resolution of microbiome-conferred functionalities associated with health. However, analyzing the resulting multi-omics data remains a significant challenge in current microbiome studies. In this review, we provide an overview of different computational approaches that have been used in recent years for integrative analysis of metabolome and microbiome data, ranging from statistical correlation analysis to metabolic network-based modeling approaches. Throughout the process, we strive to present a unified conceptual framework for multi-omics integration and interpretation, as well as point out potential future directions.
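    The simplest of the integration strategies mentioned, all-against-all correlation between microbial abundances and metabolite levels, can be sketched as follows; the data are random placeholders, and compositionality corrections (e.g., centered log-ratio transforms) that a careful analysis would apply are deliberately omitted.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_samples, n_taxa, n_mets = 30, 8, 5

# Relative microbial abundances (rows: samples) and metabolite levels
taxa = rng.lognormal(size=(n_samples, n_taxa))
taxa /= taxa.sum(axis=1, keepdims=True)
mets = rng.normal(size=(n_samples, n_mets))

# All-against-all Spearman correlation over the stacked columns,
# then keep only the taxa-by-metabolites block of the matrix
rho, p = spearmanr(np.hstack([taxa, mets]))
cross = rho[:n_taxa, n_taxa:]
print(np.round(cross, 2))   # candidate taxon-metabolite associations
```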

  17. Total quality through computer integrated manufacturing in the pharmaceutical industry.

    Science.gov (United States)

    Ufret, C M

    1995-01-01

    The role of Computer Integrated Manufacturing (CIM) in the pursuit of total quality in pharmaceutical manufacturing is assessed. CIM key objectives, design criteria, and performance measurements, in addition to its scope and implementation in a hierarchical structure, are explored in detail. Key elements for the success of each phase in a CIM project and a brief status of current CIM implementations in the pharmaceutical industry are presented. The role of World Class Manufacturing performance standards and other key issues to achieve full CIM benefits are also addressed.

  18. Computer integrated construction at AB building in reprocessing plant

    International Nuclear Information System (INIS)

    Takami, Masahiro; Azuchi, Takehiro; Sekiguchi, Kenji

    1999-01-01

    JNFL (Japan Nuclear Fuel Limited) is now proceeding with construction of the spent nuclear fuel reprocessing plant at Rokkasho Village in Aomori Prefecture, which is approaching the busiest period of construction. We are trying to complete the civil work of the AB Building and KA Building in a very short construction term by applying the CIC (Computer Integrated Construction) concept, in spite of hard construction conditions such as the massive and complicated building structure, interference with M and E (Mechanical and Electrical) work, severe winter weather, and the remote site location. The key technologies of CIC are three-dimensional CAD, information networks, and the prefabrication and mechanization of site work. (author)

  19. Integration of Openstack cloud resources in BES III computing cluster

    Science.gov (United States)

    Li, Haibo; Cheng, Yaodong; Huang, Qiulan; Cheng, Zhenjing; Shi, Jingyan

    2017-10-01

    Cloud computing provides a new technical means for the data processing of high energy physics experiments. However, in traditional job management systems the resource of each queue is fixed and resource usage is static. In order to make it simple and transparent for physicists to use, we developed a virtual cluster system (vpmanager) to integrate IHEPCloud and different batch systems such as Torque and HTCondor. Vpmanager provides dynamic virtual machine scheduling according to the job queue. The BES III use case results show that resource efficiency is greatly improved.

  20. Integration of rocket turbine design and analysis through computer graphics

    Science.gov (United States)

    Hsu, Wayne; Boynton, Jim

    1988-01-01

    An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.

  1. Computed tomographic evaluation of dinosaur egg shell integrity

    International Nuclear Information System (INIS)

    Jones, J.C.; Greenberg, W.; Ayers, S.

    1998-01-01

    The purpose of this study was to determine whether computed tomography (CT) could be used to identify hatching holes in partially embedded dinosaur eggs. One Faveololithus and two Dendroolithus eggs were examined using a fourth generation CT scanner. The eggs were partially embedded in a fossilized sediment matrix, with the exposed portion of the shell appearing intact. In CT images of all three eggs, the shells appeared hyperdense relative to the matrix. Hatching holes were visible as large gaps in the embedded portion of the shell, with inwardly displaced shell fragments. It was concluded that CT is an effective technique for nondestructively assessing dinosaur egg shell integrity

  2. 3-D computer graphics based on integral photography.

    Science.gov (United States)

    Naemura, T; Yoshida, T; Harashima, H

    2001-02-12

    Integral photography (IP), which is one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering system, founded on a 4-D data-space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.

  3. KinomeXplorer: an integrated platform for kinome biology studies

    DEFF Research Database (Denmark)

    Horn, Heiko; Schoof, Erwin; Kim, Jinho

    2014-01-01

    A letter to the editor is presented related to the KinomeXplorer, an integrated platform providing workflows to efficiently analyze phosphorylation dependent interaction networks or kinase signaling networks....

  4. Monitoring Biological Modes in a Bioreactor Process by Computer Simulation

    Directory of Open Access Journals (Sweden)

    Samia Semcheddine

    2015-12-01

    Full Text Available This paper deals with the general framework of fermentation system modeling and monitoring, focusing on the fermentation of Escherichia coli. Our main objective is to develop an algorithm for the online detection of acetate production during the culture of recombinant proteins. The analysis of the fermentation process shows that it behaves like a hybrid dynamic system with commutation (since it can be represented by 5 nonlinear models). We present a fault-detection strategy based on residual generation for detecting the current biological mode. The residual generation is based on nonlinear analytical redundancy relations. The simulation results show that the several modes that occur during the bacterial cultivation can be detected by residuals using a nonlinear dynamic model and reduced instrumentation.
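    Residual-based mode detection of the kind described can be sketched as comparing the measurement against each candidate mode model and accepting the smallest residual when it falls below a threshold; the three one-output models below are invented placeholders, not the paper's five E. coli growth models.

```python
import numpy as np

def residuals(y_meas, models, x, u):
    """Residual generation: distance between the measurement and each
    candidate mode model's prediction of the same output."""
    return np.array([abs(y_meas - m(x, u)) for m in models])

# Hypothetical one-output mode models (e.g., an acetate-related rate)
models = [
    lambda x, u: 0.5 * x,              # mode 1: oxidative growth on glucose
    lambda x, u: 0.5 * x + 0.2 * u,    # mode 2: overflow, acetate produced
    lambda x, u: 0.3 * x,              # mode 3: growth on acetate
]

THRESHOLD = 0.05
y, x, u = 0.62, 1.0, 0.6               # measurement, state, input (invented)
r = residuals(y, models, x, u)
best = int(np.argmin(r))
print(r, "active mode:", best if r[best] < THRESHOLD else "unknown")
```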

  5. Theoretical discussion for quantum computation in biological systems

    Science.gov (United States)

    Baer, Wolfgang

    2010-04-01

    Analysis of the brain as a physical system that has the capacity to generate a display of everyday observed experiences, and that contains some knowledge of the physical reality stimulating those experiences, suggests that the brain executes a self-measurement process described by quantum theory. Assuming physical reality is a universe of interacting self-measurement loops, we present a model of space as a field of cells executing such self-measurement activities. Empty space is the observable associated with the measurement of this field when the mass and charge density defining the material aspect of the cells satisfy the least-action principle. Content is the observable associated with the measurement of the quantum wave function ψ interpreted as mass-charge displacements. The illusion of space and its content incorporated into cognitive biological systems is evidence of self-measurement activity that can be associated with quantum operations.

  6. Wiring Together Synthetic Bacterial Consortia to Create a Biological Integrated Circuit.

    Science.gov (United States)

    Perry, Nicolas; Nelson, Edward M; Timp, Gregory

    2016-12-16

    The promise of adapting biology to information processing will not be realized until engineered gene circuits, operating in different cell populations, can be wired together to express a predictable function. Here, elementary biological integrated circuits (BICs), consisting of two sets of transmitter and receiver gene circuit modules with embedded memory placed in separate cell populations, were meticulously assembled using live cell lithography and wired together by the mass transport of quorum-sensing (QS) signal molecules to form two isolated communication links (comlinks). The comlink dynamics were tested by broadcasting "clock" pulses of inducers into the networks and measuring the responses of functionally linked fluorescent reporters, and then modeled through simulations that realistically captured the protein production and molecular transport. These results show that the comlinks were isolated and each mimicked aspects of the synchronous, sequential networks used in digital computing. The observations about the flow conditions, derived from numerical simulations, and the biofilm architectures that foster or silence cell-to-cell communications have implications for everything from decontamination of drinking water to bacterial virulence.

  7. ISCB Ebola Award for Important Future Research on the Computational Biology of Ebola Virus

    OpenAIRE

    Karp, P.D.; Berger, B.; Kovats, D.; Lengauer, T.; Linial, M.; Sabeti, P.; Hide, W.; Rost, B.

    2015-01-01

    Speed is of the essence in combating Ebola; thus, computational approaches should form a significant component of Ebola research. As for the development of any modern drug, computational biology is uniquely positioned to contribute through comparative analysis of the genome sequences of Ebola strains as well as 3-D protein modeling. Other computational approaches to Ebola may include large-scale docking studies of Ebola proteins with human proteins and with small-molecule libraries, computati...

  8. Computation of fields in an arbitrarily shaped heterogeneous dielectric or biological body by an iterative conjugate gradient method

    International Nuclear Information System (INIS)

    Wang, J.J.H.; Dubberley, J.R.

    1989-01-01

    Electromagnetic (EM) fields in a three-dimensional, arbitrarily shaped heterogeneous dielectric or biological body illuminated by a plane wave are computed by an iterative conjugate gradient method. The method is a generalized method of moments applied to the volume integral equation. Because no matrix is explicitly involved or stored, the present iterative method is capable of computing EM fields in objects an order of magnitude larger than those that can be handled by the conventional method of moments. Excellent numerical convergence is achieved. Perfect convergence to the result of the conventional moment method using the same basis and weighted with delta functions is consistently achieved in all the cases computed, indicating that these two algorithms (direct and iterative) are equivalent.
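
    The heart of the method, solving A x = b while coding only the action x -> A x, can be sketched as follows; the operator below is a simple symmetric stand-in, not the volume-integral-equation kernel of the paper.

        import numpy as np

        def cg(apply_A, b, tol=1e-10, max_iter=500):
            """Conjugate gradient where the matrix is given only as a function x -> A x."""
            x = np.zeros_like(b)
            r = b - apply_A(x)
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # Stand-in symmetric positive-definite operator (a 1-D Laplacian-like map),
        # applied without ever forming or storing the matrix, which is the point.
        n = 1000
        def apply_A(x):
            y = 2.0 * x
            y[:-1] -= x[1:]
            y[1:] -= x[:-1]
            return y

        b = np.ones(n)
        x = cg(apply_A, b)
        print("residual norm:", np.linalg.norm(b - apply_A(x)))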

  9. Biological removal of algae in an integrated pond system

    CSIR Research Space (South Africa)

    Meiring, PGJ

    1995-01-01

    Full Text Available A system of oxidation ponds in series with a biological trickling filter is described. It was known that this arrangement was incapable of effectively reducing the levels of algae present in the pond liquid even though nitrification was effected...

  10. Atomic force microscope with integrated optical microscope for biological applications

    NARCIS (Netherlands)

    Putman, Constant A.J.; Putman, C.A.J.; van der Werf, Kees; de Grooth, B.G.; van Hulst, N.F.; Segerink, Franciscus B.; Greve, Jan

    1992-01-01

    Since atomic force microscopy (AFM) is capable of imaging nonconducting surfaces, the technique holds great promise for high-resolution imaging of biological specimens. A disadvantage of most AFMs is the fact that the relatively large sample surface has to be scanned multiple times to pinpoint a

  11. An Integrative Systems Biology Approach to Understanding Pulmonary Diseases

    NARCIS (Netherlands)

    Auffray, Charles; Adcock, Ian M.; Chung, Kian Fan; Djukanovic, Ratko; Pison, Christophe; Sterk, Peter J.

    2010-01-01

    Chronic inflammatory pulmonary diseases such as COPD and asthma are highly prevalent and associated with a major health burden worldwide. Despite a wealth of biologic and clinical information on normal and pathologic airway structure and function, the primary causes and mechanisms of disease remain

  12. DEMONSTRATION OF AN INTEGRATED, PASSIVE BIOLOGICAL TREATMENT PROCESS FOR AMD

    Science.gov (United States)

    An innovative, cost-effective, biological treatment process has been designed by MSE Technology Applications, Inc. to treat acid mine drainage (AMD). A pilot-scale demonstration is being conducted under the Mine Waste Technology Program using water flowing from an abandoned mine ...

  13. Biological soil crusts as an integral component of desert environments

    Science.gov (United States)

    Belnap, Jayne; Weber, Bettina

    2013-01-01

    The biology and ecology of biological soil crusts, a soil surface community of mosses, lichens, cyanobacteria, green algae, fungi, and bacteria, have only recently been a topic of research. Most efforts began in the western U.S. (Cameron, Harper, Rushforth, and St. Clair), Australia (Rogers), and Israel (Friedmann, Evenari, and Lange) in the late 1960s and 1970s (e.g., Friedmann et al. 1967; Evenari 1985; reviewed in Harper and Marble 1988). However, these groups worked independently of each other and, in fact, were often not aware of each other’s work. In addition, biological soil crust communities were seen as more a novelty than a critical component of dryland ecosystems. Since then, researchers have investigated many different aspects of these communities and have shown that although small to microscopic, biological soil crusts are critical in many ecological processes of deserts. They often cover most of desert soil surfaces and substantially mediate inputs and outputs from desert soils (Belnap et al. 2003). They can be a large source of biodiversity for deserts, as they can contain more species than the surrounding vascular plant community (Rosentreter 1986). These communities are important in reducing soil erosion and increasing soil fertility through the capture of dust and the fixation of atmospheric nitrogen and carbon into forms available to other life forms (Elbert et al. 2012). Because of their many effects on soil characteristics, such as external and internal morphological characteristics, aggregate stability, soil moisture, and permeability, they also affect seed germination and establishment and local hydrological cycles. Covering up to 70% of the surface area in many arid and semi-arid regions around the world (Belnap and Lange 2003), biological soil crusts are a key component within desert environments.

  14. Analog Integrated Circuit Design for Spike Time Dependent Encoder and Reservoir in Reservoir Computing Processors

    Science.gov (United States)

    2018-01-01

    Report-form boilerplate omitted. Recoverable abstract fragments: the project bridged high-performance computing, nanotechnology, and integrated circuits & systems; this multidisciplinary effort encompassed high-performance computing, nanotechnology, integrated circuits, and integrated systems. Subject terms: neuromorphic computing, neuron design, spike...

  15. DNA barcodes for assessment of the biological integrity of aquatic ecosystems

    Science.gov (United States)

    Water quality regulations and aquatic ecosystem monitoring increasingly rely on direct assessments of biological integrity. Because these aquatic “bioassessments” evaluate the incidence and abundance of sensitive aquatic species, they are able to measure cumulative ecosystem eff...

  16. Biological data - Integrated acoustic and trawl survey of Pacific hake off the Pacific Coast

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Integrated acoustic and trawl surveys are used to assess the distribution, biomass, and biology of Pacific hake along the Pacific coasts of the United States and...

  17. Development of a Prototype System for Archiving Integrative/Hybrid Structure Models of Biological Macromolecules.

    Science.gov (United States)

    Vallat, Brinda; Webb, Benjamin; Westbrook, John D; Sali, Andrej; Berman, Helen M

    2018-04-09

    Essential processes in biology are carried out by large macromolecular assemblies, whose structures are often difficult to determine by traditional methods. Increasingly, researchers combine measured data and computed information from several complementary methods to obtain "hybrid" or "integrative" structural models of macromolecules and their assemblies. These integrative/hybrid (I/H) models are not archived in the PDB because of the absence of standard data representations and processing mechanisms. Here we present the development of data standards and a prototype system for archiving I/H models. The data standards provide the definitions required for representing I/H models that span multiple spatiotemporal scales and conformational states, as well as spatial restraints derived from different experimental techniques. Based on these data definitions, we have built a prototype system called PDB-Dev, which provides the infrastructure necessary to archive I/H structural models. PDB-Dev is now accepting structures and is open to the community for new submissions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Integrated Computational Solution for Predicting Skin Sensitization Potential of Molecules.

    Directory of Open Access Journals (Sweden)

    Konda Leela Sarath Kumar

    Full Text Available Skin sensitization forms a major toxicological endpoint for dermatology and cosmetic products. The recent ban on animal testing for cosmetics demands alternative methods. We developed an integrated computational solution (SkinSense) that offers a robust solution and addresses the limitations of existing computational tools, i.e., high false-positive rates and/or limited coverage. The key components of our solution include: QSAR models selected from a combinatorial set, similarity information and literature-derived sub-structure patterns of known skin protein reactive groups. Its prediction performance on a challenge set of molecules showed accuracy = 75.32%, CCR = 74.36%, sensitivity = 70.00% and specificity = 78.72%, which is better than several existing tools including VEGA (accuracy = 45.00% and CCR = 54.17% with 'High' reliability scoring), DEREK (accuracy = 72.73% and CCR = 71.44%) and TOPKAT (accuracy = 60.00% and CCR = 61.67%). Although TIMES-SS showed higher predictive power (accuracy = 90.00% and CCR = 92.86%), the coverage was very low (only 10 out of 77 molecules were predicted reliably). Owing to improved prediction performance and coverage, our solution can serve as a useful expert system towards Integrated Approaches to Testing and Assessment for skin sensitization. It would be invaluable to the cosmetic/dermatology industry for pre-screening their molecules, and reducing time, cost and animal testing.
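
    The reported figures follow directly from a 2x2 confusion matrix; a minimal sketch of how accuracy, CCR (balanced accuracy), sensitivity and specificity relate (the counts below are invented for illustration):

        def classification_metrics(tp, fn, tn, fp):
            """Binary-classification metrics from confusion-matrix counts."""
            sensitivity = tp / (tp + fn)            # true positive rate
            specificity = tn / (tn + fp)            # true negative rate
            accuracy = (tp + tn) / (tp + fn + tn + fp)
            ccr = (sensitivity + specificity) / 2   # correct classification rate
            return accuracy, ccr, sensitivity, specificity

        # Invented counts, chosen only to exercise the formulas.
        acc, ccr, sens, spec = classification_metrics(tp=35, fn=15, tn=37, fp=10)
        print(f"accuracy={acc:.2%}  CCR={ccr:.2%}  sensitivity={sens:.2%}  specificity={spec:.2%}")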

  19. Computational Acoustics: Computational PDEs, Pseudodifferential Equations, Path Integrals, and All That Jazz

    Science.gov (United States)

    Fishman, Louis

    2000-11-01

    The role of mathematical modeling in the physical sciences will be briefly addressed. Examples will focus on computational acoustics, with applications to underwater sound propagation, electromagnetic modeling, optics, and seismic inversion. Direct and inverse wave propagation problems in both the time and frequency domains will be considered. Focusing on fixed-frequency (elliptic) wave propagation problems, the usual, two-way, partial differential equation formulation will be exactly reformulated, in a well-posed manner, as a one-way (marching) problem. This is advantageous for both direct and inverse considerations, as well as stochastic modeling problems. The reformulation will require the introduction of pseudodifferential operators and their accompanying phase space analysis (calculus), in addition to path integral representations for the fundamental solutions and their subsequent computational algorithms. Unlike the more traditional, purely numerical applications of, for example, finite-difference and finite-element methods, this approach, in effect, writes the exact, or, more generally, the asymptotically correct, answer as a functional integral and, subsequently, computes it directly. The overall computational philosophy is to combine analysis, asymptotics, and numerical methods to attack complicated, real-world problems. Exact and asymptotic analysis will stress the complementary nature of the direct and inverse formulations, as well as indicating the explicit structural connections between the time- and frequency-domain solutions.
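
    As a flavor of the one-way (marching) idea, here is a minimal split-step Fourier march for the paraxial one-way equation 2ik du/dz = d²u/dx² in a homogeneous medium; this is a textbook parabolic-equation scheme standing in for the exact square-root (pseudodifferential) operator and path-integral machinery of the talk.

        import numpy as np

        k = 2 * np.pi / 0.5                  # wavenumber for an assumed 0.5 m wavelength
        n, L, dz, steps = 1024, 50.0, 0.5, 200
        x = np.linspace(-L / 2, L / 2, n, endpoint=False)
        kx = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

        u0 = np.exp(-x**2)                   # Gaussian starting field at z = 0
        u = u0.astype(complex)
        step = np.exp(-1j * kx**2 * dz / (2 * k))   # exact one-way step, homogeneous medium

        for _ in range(steps):
            u = np.fft.ifft(step * np.fft.fft(u))   # march the field one dz in range

        power_ratio = np.sum(np.abs(u)**2) / np.sum(np.abs(u0)**2)
        print("power conserved to within:", abs(power_ratio - 1))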

  20. Application of integrative genomics and systems biology to conventional and in vitro reproductive traits in cattle

    DEFF Research Database (Denmark)

    Mazzoni, Gianluca; Pedersen, Hanne S.; de Oliveira Junior, Gerson A.

    2017-01-01

    by both conventional and ARTs such as OPU-IVP. The integration of systems biology information across different biological layers generates a complete view of the different molecular networks that control complex traits and can provide a strong contribution to the understanding of traits related to ARTs....

  1. Campus Eco Tours: An Integrative & Interactive Field Project for Undergraduate Biology Students

    Science.gov (United States)

    Boes, Katie E.

    2013-01-01

    Outdoor areas within or near college campuses offer an opportunity for biology students to observe the natural world and apply concepts from class. Here, I describe an engaging and integrative project where undergraduate non-major biology students work in teams to develop and present professional "eco tours." This project takes place over multiple…

  2. Tavaxy: integrating Taverna and Galaxy workflows with cloud computing support.

    Science.gov (United States)

    Abouelhoda, Mohamed; Issa, Shadi Alaa; Ghanem, Moustafa

    2012-05-04

    Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis.The system can be accessed either through a

  3. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Directory of Open Access Journals (Sweden)

    Abouelhoda Mohamed

    2012-05-01

    Full Text Available Abstract Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and

  4. Tavaxy: Integrating Taverna and Galaxy workflows with cloud computing support

    Science.gov (United States)

    2012-01-01

    Background Over the past decade the workflow system paradigm has evolved as an efficient and user-friendly approach for developing complex bioinformatics applications. Two popular workflow systems that have gained acceptance by the bioinformatics community are Taverna and Galaxy. Each system has a large user-base and supports an ever-growing repository of application workflows. However, workflows developed for one system cannot be imported and executed easily on the other. The lack of interoperability is due to differences in the models of computation, workflow languages, and architectures of both systems. This lack of interoperability limits sharing of workflows between the user communities and leads to duplication of development efforts. Results In this paper, we present Tavaxy, a stand-alone system for creating and executing workflows based on using an extensible set of re-usable workflow patterns. Tavaxy offers a set of new features that simplify and enhance the development of sequence analysis applications: It allows the integration of existing Taverna and Galaxy workflows in a single environment, and supports the use of cloud computing capabilities. The integration of existing Taverna and Galaxy workflows is supported seamlessly at both run-time and design-time levels, based on the concepts of hierarchical workflows and workflow patterns. The use of cloud computing in Tavaxy is flexible, where the users can either instantiate the whole system on the cloud, or delegate the execution of certain sub-workflows to the cloud infrastructure. Conclusions Tavaxy reduces the workflow development cycle by introducing the use of workflow patterns to simplify workflow creation. It enables the re-use and integration of existing (sub-) workflows from Taverna and Galaxy, and allows the creation of hybrid workflows. Its additional features exploit recent advances in high performance cloud computing to cope with the increasing data size and complexity of analysis. The system
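
    The "workflow patterns" idea is easy to illustrate: nodes are callables, and hierarchical composition means a whole sub-workflow is itself just another node. The sketch below is purely illustrative; Tavaxy's actual engine, pattern set and tool wrappers are not shown.

        def sequence(*steps):
            """Sequential pattern: feed each step's output to the next."""
            def run(data):
                for step in steps:
                    data = step(data)
                return data
            return run

        def parallel_split(*branches):
            """Parallel-split pattern: run every branch on the same input."""
            return lambda data: [branch(data) for branch in branches]

        # Toy "sequence analysis" steps standing in for real Taverna/Galaxy tools.
        clean = lambda seq: seq.strip().upper()
        gc_count = lambda seq: seq.count("G") + seq.count("C")

        sub_workflow = sequence(clean)                        # reusable sub-workflow
        workflow = sequence(sub_workflow, parallel_split(gc_count, len))

        print(workflow("  acgtgcgt  "))   # [5, 8]: GC count and length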

  5. Integr8: enhanced inter-operability of European molecular biology databases.

    Science.gov (United States)

    Kersey, P J; Morris, L; Hermjakob, H; Apweiler, R

    2003-01-01

    The increasing production of molecular biology data in the post-genomic era, and the proliferation of databases that store it, require the development of an integrative layer in database services to facilitate the synthesis of related information. The solution of this problem is made more difficult by the absence of universal identifiers for biological entities, and the breadth and variety of available data. Integr8 was modelled using UML (Unified Modelling Language). Integr8 is being implemented as an n-tier system using a modern object-oriented programming language (Java). An object-relational mapping tool, OJB, is being used to specify the interface between the upper layers and an underlying relational database. The European Bioinformatics Institute is launching the Integr8 project. Integr8 will be an automatically populated database in which we will maintain stable identifiers for biological entities, describe their relationships with each other (in accordance with the central dogma of biology), and store equivalences between identified entities in the source databases. Only core data will be stored in Integr8, with web links to the source databases providing further information. Integr8 will provide the integrative layer of the next generation of bioinformatics services from the EBI. Web-based interfaces will be developed to offer gene-centric views of the integrated data, presenting (where known) the links between genome, proteome and phenotype.

  6. The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.

    Science.gov (United States)

    Anupama, Jigisha; Francescatto, Margherita; Rahman, Farzana; Fatima, Nazeefa; DeBlasio, Dan; Shanmugam, Avinash Kumar; Satagopam, Venkata; Santos, Alberto; Kolekar, Pandurang; Michaut, Magali; Guney, Emre

    2018-01-01

    Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.

  7. Computational protein design-the next generation tool to expand synthetic biology applications.

    Science.gov (United States)

    Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel

    2018-05-02

    One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.

  8. The ISCB Student Council Internship Program: Expanding computational biology capacity worldwide.

    Directory of Open Access Journals (Sweden)

    Jigisha Anupama

    2018-01-01

    Full Text Available Education and training are two essential ingredients for a successful career. On one hand, universities provide students a curriculum for specializing in one's field of study, and on the other, internships complement coursework and provide invaluable training experience for a fruitful career. Consequently, undergraduates and graduates are encouraged to undertake an internship during the course of their degree. The opportunity to explore one's research interests in the early stages of their education is important for students because it improves their skill set and gives their career a boost. In the long term, this helps to close the gap between skills and employability among students across the globe and balance the research capacity in the field of computational biology. However, training opportunities are often scarce for computational biology students, particularly for those who reside in less-privileged regions. Aimed at helping students develop research and academic skills in computational biology and alleviating the divide across countries, the Student Council of the International Society for Computational Biology introduced its Internship Program in 2009. The Internship Program is committed to providing access to computational biology training, especially for students from developing regions, and improving competencies in the field. Here, we present how the Internship Program works and the impact of the internship opportunities so far, along with the challenges associated with this program.

  9. ‘Integrative Physiology 2.0’: integration of systems biology into physiology and its application to cardiovascular homeostasis

    Science.gov (United States)

    Kuster, Diederik W D; Merkus, Daphne; van der Velden, Jolanda; Verhoeven, Adrie J M; Duncker, Dirk J

    2011-01-01

    Since the completion of the Human Genome Project and the advent of the large-scale unbiased ‘-omics’ techniques, the field of systems biology has emerged. Systems biology aims to move away from the traditional reductionist molecular approach, which focused on understanding the role of single genes or proteins, towards a more holistic approach by studying networks and interactions between individual components of networks. From a conceptual standpoint, systems biology elicits a ‘back to the future’ experience for any integrative physiologist. However, many of the new techniques and modalities employed by systems biologists yield tremendous potential for integrative physiologists to expand their tool arsenal to (quantitatively) study complex biological processes, such as cardiac remodelling and heart failure, in a truly holistic fashion. We therefore advocate that systems biology should not become/stay a separate discipline with ‘-omics’ as its playing field, but should be integrated into physiology to create ‘Integrative Physiology 2.0’. PMID:21224228

  10. Integrated Geo Hazard Management System in Cloud Computing Technology

    Science.gov (United States)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geo hazards can cause reduced environmental health and huge economic losses, especially in mountainous areas. In order to mitigate geo hazards effectively, cloud computing technology is introduced for managing the geo hazard database. Cloud computing technology and its services are capable of providing stakeholders with geo hazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System consists of the network management and operation needed to monitor geo hazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system will provide an easily managed, flexible measuring system whose data management operates autonomously and can be controlled remotely by commands to collect data using "cloud" computing. This paper aims to document the above relationship by identifying the special features and needs associated with effective geo hazard database management using a "cloud system". This system will later be used as part of the development activities, helping to minimize the frequency of geo hazards and the risk in the research area.

  11. Integrating multiple scientific computing needs via a Private Cloud infrastructure

    International Nuclear Information System (INIS)

    Bagnasco, S; Berzano, D; Brunetti, R; Lusso, S; Vallero, S

    2014-01-01

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines according to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily while minimizing downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site and a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, as well as several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.

  12. WE-DE-202-00: Connecting Radiation Physics with Computational Biology

    International Nuclear Information System (INIS)

    2016-01-01

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular and tissue reactions to treatment outcome such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow, and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single strand breaks (SSBs) and double strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss if and how such calculations are relevant to advance our understanding of radiation damage and its repair, or, if the underlying biological

  13. WE-DE-202-00: Connecting Radiation Physics with Computational Biology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Radiation therapy for the treatment of cancer has been established as a highly precise and effective way to eradicate a localized region of diseased tissue. To achieve further significant gains in the therapeutic ratio, we need to move towards biologically optimized treatment planning. To achieve this goal, we need to understand how the radiation-type dependent patterns of induced energy depositions within the cell (physics) connect via molecular, cellular and tissue reactions to treatment outcome such as tumor control and undesirable effects on normal tissue. Several computational biology approaches have been developed connecting physics to biology. Monte Carlo simulations are the most accurate method to calculate physical dose distributions at the nanometer scale; however, simulations at the DNA scale are slow, and repair processes are generally not simulated. Alternative models that rely on the random formation of individual DNA lesions within one or two turns of the DNA have been shown to reproduce the clusters of DNA lesions, including single strand breaks (SSBs) and double strand breaks (DSBs), without the need for detailed track structure simulations. Efficient computational simulations of initial DNA damage induction facilitate computational modeling of DNA repair and other molecular and cellular processes. Mechanistic, multiscale models provide a useful conceptual framework to test biological hypotheses and help connect fundamental information about track structure and dosimetry at the sub-cellular level to dose-response effects on larger scales. In this symposium we will learn about the current state of the art of computational approaches estimating radiation damage at the cellular and sub-cellular scale. How can understanding the physics interactions at the DNA level be used to predict biological outcome? We will discuss if and how such calculations are relevant to advance our understanding of radiation damage and its repair, or, if the underlying biological
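
    A cartoon of the "random lesion formation" class of models described above, with every number invented for illustration: strand breaks are scattered randomly along a DNA segment, and breaks on opposite strands falling within roughly two helical turns (about 20 bp) are scored as a double strand break.

        import numpy as np

        rng = np.random.default_rng(1)
        genome_bp = 1_000_000        # toy DNA segment length (assumption)
        n_breaks = 400               # toy number of radiation-induced strand breaks
        dsb_window_bp = 20           # ~2 helical turns; opposed breaks this close -> DSB

        pos = np.sort(rng.integers(0, genome_bp, n_breaks))   # break positions
        strand = rng.integers(0, 2, n_breaks)                 # which strand is broken

        dsbs = 0
        for i in range(n_breaks - 1):
            j = i + 1
            while j < n_breaks and pos[j] - pos[i] <= dsb_window_bp:
                if strand[j] != strand[i]:
                    dsbs += 1
                    break
                j += 1

        print(f"{dsbs} clustered pairs scored as DSBs out of {n_breaks} strand breaks")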

  14. Functionalization and microfluidic integration of silicon nanowire biologically gated field effect transistors

    DEFF Research Database (Denmark)

    Pfreundt, Andrea

    This thesis deals with the development of a novel biosensor for the detection of biomolecules based on a silicon nanowire biologically gated field-effect transistor and its integration into a point-of-care device. The sensor and electrical on-chip integration were developed in a different project

  15. Functionalization and microfluidic integration of silicon nanowire biologically gated field effect transistors

    DEFF Research Database (Denmark)

    Pfreundt, Andrea; Svendsen, Winnie Edith; Dimaki, Maria

    2016-01-01

    This thesis deals with the development of a novel biosensor for the detection of biomolecules based on a silicon nanowire biologically gated field-effect transistor and its integration into a point-of-care device. The sensor and electrical on-chip integration were developed in a different project

  16. INTEGRATED MANAGEMENT OF CHROMOLAENA ODORATA EMPHASIZING THE CLASSICAL BIOLOGICAL CONTROL

    Directory of Open Access Journals (Sweden)

    SOEKISMAN TJITROSEMITO

    1998-01-01

    Full Text Available Chromolaena odorata, Siam weed, a very important weed of Java Island (Indonesia), is native to Central and South America. In the laboratory it showed rapid growth (1.15 g/g/week) in the first 8 weeks of its growth. The biomass was mainly allocated to leaves (LAR: 317.50 cm²/g total weight). Growth slowed down in the following month as the biomass was utilized for stem and branch formation. This behavior supported the growth of C. odorata into a very dense stand. It flowered and fruited during the dry season, and senesced following maturation of seeds from inflorescence branches. These branches dried out, but soon the stem resumed aggressive growth following the wet season. Leaf biomass was affected by the size of the stem in its early phase of regrowth, but later on it was more affected by the number of branches. The introduction of Pareuchaetes pseudoinsulata to Indonesia was successful only in North Sumatera. In Java it has not been reported to establish successfully. The introduction of another biological control agent, Procecidochares connexa, to Indonesia was shown to be specific, and upon release in West Java it established immediately. It spread exponentially in the first 6 months after its release. Field monitoring continues to evaluate the impact of the agents. Other biocontrol agents (Actinote anteas and Conotrachelus) will be introduced to Indonesia in 1997 through the ACIAR Project on the Biological Control of Chromolaena odorata in Indonesia and Papua New Guinea.

  17. Ion transport through biological membranes an integrated theoretical approach

    CERN Document Server

    Mackey, Michael C

    1975-01-01

    This book illustrates some of the ways physics and mathematics have been, and are being, used to elucidate the underlying mechanisms of passive ion movement through biological membranes in general, and the membranes of excitable cells in particular. I have made no effort to be comprehensive in my introduction of biological material, and the reader interested in a brief account of single cell electrophysiology from a physically-oriented biologist's viewpoint will find the chapters by Woodbury (1965) an excellent introduction. Part I is introductory in nature, exploring the basic electrical properties of inexcitable and excitable cell plasma membranes. Cable theory is utilized to illustrate the function of the non-decrementing action potential as a signaling mechanism for the long range transmission of information in the nervous system, and to gain some insight into the gross behaviour of neurons. The detailed analysis of Hodgkin and Huxley on the squid giant axon membrane ionic conductance properties...

  18. Converting differential-equation models of biological systems to membrane computing.

    Science.gov (United States)

    Muniyandi, Ravie Chandren; Zin, Abdullah Mohd; Sanders, J W

    2013-12-01

    This paper presents a method to convert the deterministic, continuous representation of a biological system by ordinary differential equations into a non-deterministic, discrete membrane computation. The dynamics of the membrane computation is governed by rewrite rules operating at certain rates. That has the advantage of applying accurately to small systems, and of expressing rates of change that are determined locally, by region, but not necessarily globally. Such spatial information augments the standard differentiable approach to provide a more realistic model. A biological case study of the ligand-receptor network of protein TGF-β is used to validate the effectiveness of the conversion method. It demonstrates the sense in which the behaviours and properties of the system are better preserved in the membrane computing model, suggesting that the proposed conversion method may prove useful for biological systems in particular. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
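
    The deterministic-to-stochastic move can be shown in miniature with Gillespie's direct method on a single degradation reaction (the TGF-β ligand-receptor network of the case study is far larger; the single rule and its rate below are invented): the ODE dx/dt = -k*x becomes a rewrite rule "x -> nothing" applied at rate k per molecule.

        import math
        import random

        k = 0.1           # per-molecule rate; ODE analogue is dx/dt = -k * x (assumed value)
        x = 500           # initial molecule count in one membrane region
        t, t_end = 0.0, 30.0
        random.seed(0)

        while x > 0 and t < t_end:
            propensity = k * x                     # total rate of the single rewrite rule
            t += random.expovariate(propensity)    # waiting time to the next application
            x -= 1                                 # apply the rule: remove one x

        ode = 500 * math.exp(-k * min(t, t_end))
        print(f"t = {min(t, t_end):.2f}: {x} molecules left (ODE predicts {ode:.1f})")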

  19. Modeling biological problems in computer science: a case study in genome assembly.

    Science.gov (United States)

    Medvedev, Paul

    2018-01-30

    As computer scientists working in bioinformatics/computational biology, we often face the challenge of coming up with an algorithm to answer a biological question. This occurs in many areas, such as variant calling, alignment and assembly. In this tutorial, we use the example of the genome assembly problem to demonstrate how to go from a question in the biological realm to a solution in the computer science realm. We show the modeling process step-by-step, including all the intermediate failed attempts. Please note this is not an introduction to how genome assembly algorithms work and, if treated as such, would be incomplete and unnecessarily long-winded. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
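
    To make the modeling target concrete, here is the textbook toy formulation that the assembly literature builds on, a k-mer de Bruijn graph traversed greedily; this generic sketch is not the specific models developed in the tutorial.

        from collections import defaultdict

        def de_bruijn(reads, k):
            """Edges run from each k-mer's (k-1)-prefix to its (k-1)-suffix."""
            graph = defaultdict(list)
            for read in reads:
                for i in range(len(read) - k + 1):
                    kmer = read[i:i + k]
                    graph[kmer[:-1]].append(kmer[1:])
            return graph

        def walk(graph, start):
            """Greedy first-in-first-out walk; a stand-in for a proper Eulerian path."""
            contig, node = start, start
            while graph[node]:
                node = graph[node].pop(0)
                contig += node[-1]
            return contig

        reads = ["ACGTAC", "GTACGT", "TACGTT"]     # toy error-free reads (assumption)
        print(walk(de_bruijn(reads, k=4), "ACG"))  # prints ACGTACGTT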

  20. Integration of gene expression and methylation to unravel biological networks in glioblastoma patients.

    Science.gov (United States)

    Gadaleta, Francesco; Bessonov, Kyrylo; Van Steen, Kristel

    2017-02-01

    The vast amount of heterogeneous omics data, encompassing a broad range of biomolecular information, requires novel methods of analysis, including those that integrate the available levels of information. In this work, we describe Regression2Net, a computational approach that is able to integrate gene expression and genomic or methylation data in two steps. First, penalized regressions are used to build Expression-Expression (EEnet) and Expression-Genomic or Expression-Methylation (EMnet) networks. Second, network theory is used to highlight important communities of genes. When applying our approach, Regression2Net, to gene expression and methylation profiles for individuals with glioblastoma multiforme, we identified, respectively, 284 and 447 potentially interesting genes in relation to glioblastoma pathology. These genes showed at least one connection in the integrated networks ANDnet and XORnet derived from the aforementioned EEnet and EMnet networks. Whereas the edges in ANDnet occur in both EEnet and EMnet, the edges in XORnet occur in EMnet but not in EEnet. In-depth biological analysis of connected genes in ANDnet and XORnet revealed genes that are related to energy metabolism, cell cycle control (AATF), immune system response, and several cancer types. Importantly, we observed significant overrepresentation of cancer-related pathways including glioma, especially in the XORnet network, suggesting a nonignorable role of methylation in glioblastoma multiforme. In the ANDnet, we furthermore identified potential glioma suppressor genes ACCN3 and ACCN4 linked to the NBPF1 neuroblastoma breakpoint family, as well as numerous ABC transporter genes (ABCA1, ABCB1), suggesting drug resistance of glioblastoma tumors. © 2016 WILEY PERIODICALS, INC.
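
    The first step, building a network from penalized regressions, can be sketched as follows: regress each gene's expression on all other genes with the lasso and connect genes with nonzero coefficients. Synthetic data and scikit-learn stand in for the paper's actual pipeline.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n_samples, n_genes = 100, 20
        X = rng.normal(size=(n_samples, n_genes))
        X[:, 1] += 0.8 * X[:, 0]          # plant one true dependency: gene 1 tracks gene 0

        edges = set()
        for g in range(n_genes):
            others = np.delete(np.arange(n_genes), g)
            fit = Lasso(alpha=0.1).fit(X[:, others], X[:, g])   # penalized regression
            for o, coef in zip(others, fit.coef_):
                if abs(coef) > 1e-6:
                    edges.add(tuple(sorted((int(g), int(o)))))  # undirected edge

        print("recovered edges:", sorted(edges))   # should include (0, 1)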

  1. Mathematical computer simulation of the process of ultrasound interaction with biological medium

    International Nuclear Information System (INIS)

    Yakovleva, T.; Nassiri, D.; Ciantar, D.

    1996-01-01

    The aim of the paper is to study theoretically the interaction of ultrasound irradiation with a biological medium and the peculiarities of ultrasound scattering by inhomogeneities of biological tissue, which can be represented by fractal structures. This investigation has been used to construct a computer model of a three-dimensional ultrasonic imaging system, which makes it possible to define more accurately the pathological changes in such tissue by means of image analysis. Poster 180. (author)

  2. Bayesian integration of position and orientation cues in perception of biological and non-biological dynamic forms

    Directory of Open Access Journals (Sweden)

    Steven Matthew Thurman

    2014-02-01

    Full Text Available Visual form analysis is fundamental to shape perception and likely plays a central role in perception of more complex dynamic shapes, such as moving objects or biological motion. Two primary form-based cues serve to represent the overall shape of an object: the spatial position and the orientation of locations along the boundary of the object. However, it is unclear how the visual system integrates these two sources of information in dynamic form analysis, and in particular how the brain resolves ambiguities due to sensory uncertainty and/or cue conflict. In the current study, we created animations of sparsely-sampled dynamic objects (human walkers or rotating squares) comprised of oriented Gabor patches in which orientation could either coincide or conflict with information provided by position cues. When the cues were incongruent, we found a characteristic trade-off between position and orientation information whereby position cues increasingly dominated perception as the relative uncertainty of orientation increased, and vice versa. Furthermore, we found no evidence for differences in the visual processing of biological and non-biological objects, casting doubt on the claim that biological motion may be specialized in the human brain, at least in specific terms of form analysis. To explain these behavioral results quantitatively, we adopt a probabilistic template-matching model that uses Bayesian inference within local modules to estimate object shape separately from either spatial position or orientation signals. The outputs of the two modules are integrated with weights that reflect individual estimates of subjective cue reliability, and integrated over time to produce a decision about the perceived dynamics of the input data. Results of this model provided a close fit to the behavioral data, suggesting a mechanism in the human visual system that approximates rational Bayesian inference to integrate position and orientation signals in dynamic
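
    The Bayesian combination rule at the heart of such models is inverse-variance weighting of the two cue estimates; a minimal sketch with invented numbers:

        def fuse(mu_pos, var_pos, mu_ori, var_ori):
            """Minimum-variance combination of two Gaussian cue estimates."""
            w_pos = (1 / var_pos) / (1 / var_pos + 1 / var_ori)   # reliability weight
            mu = w_pos * mu_pos + (1 - w_pos) * mu_ori
            var = 1 / (1 / var_pos + 1 / var_ori)                 # fused estimate is tighter
            return mu, var

        # A reliable position cue at 10.0 and a noisy orientation cue at 12.0:
        # the fused estimate sits near the reliable cue, matching the observed trade-off.
        mu, var = fuse(mu_pos=10.0, var_pos=0.5, mu_ori=12.0, var_ori=2.0)
        print(f"fused location {mu:.2f}, variance {var:.2f}")   # 10.40, 0.40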

  3. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  4. BioInt: an integrative biological object-oriented application framework and interpreter.

    Science.gov (United States)

    Desai, Sanket; Burra, Prasad

    2015-01-01

    BioInt, a biological programming application framework and interpreter, is an attempt to equip researchers with seamless integration, efficient extraction and effortless analysis of data from various biological databases and algorithms. Based on the types of biological data, algorithms and related functionalities, a biology-specific framework with nine modules was developed. The modules are a compilation of numerous reusable BioADTs. This software ecosystem, containing more than 450 biological objects underneath the interpreter, makes it flexible, integrative and comprehensive. Similar to Python, BioInt eliminates the compilation and linking steps, cutting development time significantly. Researchers can write scripts using the available BioADTs (following C++ syntax) and execute them interactively or use them as command line applications. It has features that enable automation, extension of the framework with new/external BioADTs/libraries and deployment of complex workflows.

  5. DOE EPSCoR Initiative in Structural and computational Biology/Bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, Susan S.

    2008-02-21

    The overall goal of the DOE EPSCoR Initiative in Structural and Computational Biology was to enhance the competitiveness of Vermont research in these scientific areas. To develop self-sustaining infrastructure, we increased the critical mass of faculty, developed shared resources that made junior researchers more competitive for federal research grants, implemented programs to train graduate and undergraduate students who participated in these research areas, and provided seed money for research projects. During the time period funded by this DOE initiative: (1) four new faculty were recruited to the University of Vermont using DOE resources, three in Computational Biology and one in Structural Biology; (2) technical support was provided for the Computational and Structural Biology facilities; (3) twenty-two graduate students were directly funded by fellowships; (4) fifteen undergraduate students were supported during the summer; and (5) twenty-eight pilot projects were supported. Taken together, these dollars resulted in a plethora of published papers, many in high-profile journals in the fields, and directly impacted competitive extramural funding based on structural or computational biology, resulting in 49 million dollars awarded in grants (Appendix I), a 600% return on investment by DOE, the State and University.

  6. Secure encapsulation and publication of biological services in the cloud computing environment.

    Science.gov (United States)

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication of bioinformatics software products based on web services are presented, and the basic functions of biological information processing are realized in the cloud computing environment. In the encapsulation phase, the workflow and functions of the bioinformatics software are analyzed, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. Functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are thus published to remote users. Finally, a basic prototype system of the biological cloud is achieved.

  7. Integrated modelling of physical, chemical and biological weather

    DEFF Research Database (Denmark)

    Kurganskiy, Alexander

    This is an online-coupled meteorology-chemistry model where chemical constituents and different types of aerosols are an integrated part of the dynamical model, i.e., these constituents are transported in the same way as, e.g., water vapor and cloud water, and, at the same time, the aerosols can interactively impact radiation and cloud micro-physics. The birch pollen modelling study has been performed for domains covering Europe and western Russia. Verification of the simulated birch pollen concentrations against in-situ observations showed good agreement, obtaining the best score for two Danish sites...

  8. Integration of biological networks and gene expression data using Cytoscape

    DEFF Research Database (Denmark)

    Cline, M.S.; Smoot, M.; Cerami, E.

    2007-01-01

    Cytoscape is a free software package for visualizing, modeling and analyzing molecular and genetic interaction networks. This protocol explains how to use Cytoscape to analyze the results of mRNA expression profiling, and other functional genomics and proteomics experiments, in the context of an interaction network obtained for genes of interest. Five major steps are described: (i) obtaining a gene or protein network, (ii) displaying the network using layout algorithms, (iii) integrating with gene expression and other functional attributes, (iv) identifying putative complexes and functional modules and (v) identifying enriched Gene Ontology annotations in the network. These steps provide a broad sample of the types of analyses performed by Cytoscape.

  9. Several problems of algorithmization in integrated computation programs on third generation computers for short circuit currents in complex power networks

    Energy Technology Data Exchange (ETDEWEB)

    Krylov, V.A.; Pisarenko, V.P.

    1982-01-01

    Methods of modeling complex power networks with short circuits in the networks are described. The methods are implemented in integrated computation programs for short circuit currents and equivalents in electrical networks with a large number of branch points (up to 1000) on a computer with a limited online memory capacity (M equals 4030 for the computer).

  10. Probabilistic Inference of Biological Networks via Data Integration

    Directory of Open Access Journals (Sweden)

    Mark F. Rogers

    2015-01-01

    Full Text Available There is significant interest in inferring the structure of subcellular networks of interaction. Here we consider supervised interactive network inference in which a reference set of known network links and nonlinks is used to train a classifier for predicting new links. Many types of data are relevant to inferring functional links between genes, motivating the use of data integration. We use pairwise kernels to predict novel links, along with multiple kernel learning to integrate distinct sources of data into a decision function. We evaluate various pairwise kernels to establish which are most informative and compare individual kernel accuracies with accuracies for weighted combinations. By associating a probability measure with classifier predictions, we enable cautious classification, which can increase accuracy by restricting predictions to high-confidence instances, and data cleaning that can mitigate the influence of mislabeled training instances. Although one pairwise kernel (the tensor product pairwise kernel) appears to work best, different kernels may contribute complementary information about interactions: experiments in S. cerevisiae (yeast) reveal that a weighted combination of pairwise kernels applied to different types of data yields the highest predictive accuracy. Combined with cautious classification and data cleaning, we can achieve predictive accuracies of up to 99.6%.
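
    The tensor product pairwise kernel mentioned above has a one-line definition: for gene pairs (a, b) and (c, d), K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c), where k is any gene-level kernel. A small sketch with random features standing in for real genomic data:

        import numpy as np

        def rbf(u, v, gamma=0.5):
            """Gene-level RBF kernel on feature vectors."""
            return np.exp(-gamma * np.sum((u - v) ** 2))

        def tppk(pair1, pair2, k=rbf):
            """Tensor product pairwise kernel: symmetrized product of gene-level kernels."""
            (a, b), (c, d) = pair1, pair2
            return k(a, c) * k(b, d) + k(a, d) * k(b, c)

        rng = np.random.default_rng(0)
        genes = rng.normal(size=(4, 8))    # 4 genes with 8 toy features each
        pair_ab = (genes[0], genes[1])
        pair_cd = (genes[2], genes[3])

        print("K =", tppk(pair_ab, pair_cd))
        print("symmetric:", np.isclose(tppk(pair_ab, pair_cd), tppk(pair_cd, pair_ab)))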

  11. Seeds integrate biological information about conspecific and allospecific neighbours.

    Science.gov (United States)

    Yamawo, Akira; Mukai, Hiromi

    2017-06-28

    Numerous organisms integrate information from multiple sources and express adaptive behaviours, but how they do so at different developmental stages remains to be identified. Seeds, which are the embryonic stage of plants, need to make decisions about the timing of emergence in response to environmental cues related to survival. We investigated the timing of emergence of Plantago asiatica (Plantaginaceae) seed while manipulating the presence of Trifolium repens seed and the relatedness of neighbouring P. asiatica seed. The relatedness of neighbouring P. asiatica seed and the presence of seeds of T. repens did not on their own influence the timing of P. asiatica emergence. However, when encountering a T. repens seed, a P. asiatica seed emerged faster in the presence of a sibling seed than in the presence of a non-sibling seed. Water extracts of seeds gave the same result. We show that P. asiatica seeds integrate information about the relatedness of neighbouring P. asiatica seeds and the presence of seeds of a different species via water-soluble chemicals and adjust their emergence behaviour in response. These findings suggest the presence of kin-dependent interspecific interactions. © 2017 The Author(s).

  12. An interdepartmental Ph.D. program in computational biology and bioinformatics: the Yale perspective.

    Science.gov (United States)

    Gerstein, Mark; Greenbaum, Dov; Cheung, Kei; Miller, Perry L

    2007-02-01

    Computational biology and bioinformatics (CBB), terms that are often used interchangeably, represent a rapidly evolving biological discipline. With the clear potential for discovery and innovation, and the need to deal with the deluge of biological data, many academic institutions are committing significant resources to develop CBB research and training programs. Yale formally established an interdepartmental Ph.D. program in CBB in May 2003. This paper describes Yale's program, discussing the scope of the field, the program's goals and curriculum, as well as a number of issues that arose in implementing the program. (Further updated information is available from the program's website, www.cbb.yale.edu.)

  13. Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project.

    Science.gov (United States)

    Hucka, M; Finney, A; Bornstein, B J; Keating, S M; Shapiro, B E; Matthews, J; Kovitz, B L; Schilstra, M J; Funahashi, A; Doyle, J C; Kitano, H

    2004-06-01

    Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.

  14. Fully integrated digital GAMMA camera-computer system

    International Nuclear Information System (INIS)

    Berger, H.J.; Eisner, R.L.; Gober, A.; Plankey, M.; Fajman, W.

    1985-01-01

    Although most of the new non-nuclear imaging techniques are fully digital, there has been a reluctance in nuclear medicine to abandon traditional analog planar imaging in favor of digital acquisition and display. The authors evaluated a prototype digital camera system (GE STARCAM) in which all of the analog acquisition components are replaced by microprocessor controls and digital circuitry. To compare the relative effects of acquisition matrix size on image quality and to ascertain whether digital techniques could be used in place of analog imaging, Tc-99m bone scans were obtained on this digital system and on a comparable analog camera in 10 patients. The dedicated computer is used for camera setup, including definition of the energy window, spatial energy correction, and spatial distortion correction. The display monitor, which is used for patient positioning and image analysis, is 512², non-interlaced, allowing high-resolution imaging. Data acquisition and processing can be performed simultaneously. Thus, the development of a fully integrated digital camera-computer system with optimized display should allow routine utilization of non-analog studies in nuclear medicine and the ultimate establishment of fully digital nuclear imaging laboratories.

  15. An integrated computer design environment for the development of micro-computer critical software

    International Nuclear Information System (INIS)

    De Agostino, E.; Massari, V.

    1986-01-01

    The paper deals with the development of micro-computer software for nuclear safety systems. More specifically, it describes experimental work in the field of software development methodologies to be used for the implementation of micro-computer based safety systems. An investigation of the technological improvements provided by state-of-the-art integrated packages for micro-based systems development has been carried out. The work has aimed to assess a suitable automated tools environment for the whole software life-cycle. The main safety functions of a nuclear power reactor, such as DNBR and KW/FT, have been implemented in a host-target approach. A prototype test-bed microsystem has been implemented to run the safety functions in order to derive a concrete evaluation of the feasibility of critical software according to the new technological trends of "Software Factories". (author)

  16. Integrated severe accident containment analysis with the CONTAIN computer code

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Williams, D.C.; Rexroth, P.E.; Tills, J.L.

    1985-12-01

    Analysis of physical and radiological conditions inside the containment building during a severe (core-melt) nuclear reactor accident requires quantitative evaluation of numerous highly disparate yet coupled phenomenologies. These include two-phase thermodynamics and thermal-hydraulics, aerosol physics, fission product phenomena, core-concrete interactions, the formation and combustion of flammable gases, and the performance of engineered safety features. In the past, this complexity has meant that a complete containment analysis would require application of suites of separate computer codes, each of which would treat only a narrow subset of these phenomena, e.g., a thermal-hydraulics code, an aerosol code, a core-concrete interaction code, etc. In this paper, we describe the development and some recent applications of the CONTAIN code, which offers an integrated treatment of the dominant containment phenomena and the interactions among them. We describe the results of a series of containment phenomenology studies, based upon realistic accident sequence analyses in actual plants. These calculations highlight various phenomenological effects that have potentially important implications for source term and/or containment loading issues, and which are difficult or impossible to treat using a less integrated code suite.

  17. Safety integrity requirements for computer-based I&C systems

    International Nuclear Information System (INIS)

    Thuy, N.N.Q.; Ficheux-Vapne, F.

    1997-01-01

    In order to take into account increasingly demanding functional requirements, many instrumentation and control (I&C) systems in nuclear power plants are implemented with computers. In order to ensure the required safety integrity of such equipment, i.e., to ensure that it satisfactorily performs the required safety functions under all stated conditions and within stated periods of time, requirements applicable to this equipment and to its life cycle need to be expressed and followed. On the other hand, the experience of recent years has led EDF (Electricite de France) and its partners to consider three classes of systems and equipment, according to their importance to safety. In the EPR project (European Pressurized water Reactor), these classes are labeled E1A, E1B and E2. The objective of this paper is to present the outline of the work currently done in the framework of the ETC-I (EPR Technical Code for I&C) regarding the safety integrity requirements applicable to each of the three classes. 4 refs., 2 figs

  18. Mixed Waste Treatment Project: Computer simulations of integrated flowsheets

    International Nuclear Information System (INIS)

    Dietsche, L.J.

    1993-12-01

    The disposal of mixed waste, that is, waste containing both hazardous and radioactive components, is a challenging waste management problem of particular concern to DOE sites throughout the United States. Traditional technologies used for the destruction of hazardous wastes need to be re-evaluated for their ability to handle mixed wastes, and in some cases new technologies need to be developed. The Mixed Waste Treatment Project (MWTP) was set up by DOE's Waste Operations Program (EM30) to provide guidance on mixed waste treatment options. One of MWTP's charters is to develop flowsheets for prototype integrated mixed waste treatment facilities which can serve as models for sites developing their own treatment strategies. Evaluation of these flowsheets is being facilitated through the use of computer modelling. The objective of the flowsheet simulations is to provide mass and energy balances, product compositions, and equipment sizing (leading to cost) information. The modelled flowsheets need to be easily modified to examine how alternative technologies and varying feed streams affect the overall integrated process. One such commercially available simulation program is ASPEN PLUS. This report contains details of the ASPEN PLUS program.

  19. A Tractable Disequilibrium Framework for Integrating Computational Thermodynamics and Geodynamics

    Science.gov (United States)

    Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.

    2017-12-01

    The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes, from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔG_r) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return chemical potentials of all components and phases (as well as thermodynamic quantities for each phase, e.g., densities, heat capacities, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well understood Fo-Fa binary phase loop. Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as

  20. Carbon sequestration, biological diversity, and sustainable development: Integrated forest management

    Energy Technology Data Exchange (ETDEWEB)

    Cairns, M.A. (Environmental Research Lab., Corvallis, OR (United States)); Meganck, R.A. (United Nations Environment Programme for the Wider Caribbean, Kingston (Jamaica))

    Tropical deforestation provides a significant contribution to anthropogenic increases in atmospheric CO₂ concentration that may lead to global warming. Forestation and other forest management options to sequester CO₂ in the tropical latitudes may fail unless they address local economic, social, environmental, and political needs of people in the developing world. Forest management is discussed in terms of three objectives: Carbon sequestration, sustainable development, and biodiversity conservation. An integrated forest management strategy of land-use planning is proposed to achieve these objectives and is centered around: Preservation of primary forest, intensified use of nontimber resources, agroforestry, and selective use of plantation forestry. 89 refs., 1 fig., 1 tab.

  1. Treatment of atrazine by integrating photocatalytic and biological processes

    International Nuclear Information System (INIS)

    Chan, C.Y.; Tao, S.; Dawson, R.; Wong, P.K.

    2004-01-01

    This research examines the degradation of atrazine by photocatalytic oxidation (PCO) under different experimental conditions. Deisopropylatrazine, deethylatrazine and deethyldeisopropylatrazine were formed as major intermediates based on gas chromatography-mass spectrometry. The reaction mixture was found to be toxic in two bioassays, i.e. the Microtox® and amphipod survival tests, even when atrazine was completely degraded by PCO within 2 h under optimized conditions. The results indicate that adding H₂O₂ could significantly enhance the degradation of atrazine by PCO. Ammeline, ammelide and cyanuric acid (CA) became the major intermediates/products as detected by high performance liquid chromatography from the 6th to the 40th hour of PCO treatment. After 72 h of PCO treatment, only CA was detectable in the reaction mixture. Further degradation of CA was carried out by a newly isolated CA-degrading bacterium, Sphingomonas capsulata. The photochemical pretreatment integrated with microbial degradation led to the complete degradation and detoxification of atrazine.

  2. CaliBayes and BASIS: integrated tools for the calibration, simulation and storage of biological simulation models.

    Science.gov (United States)

    Chen, Yuhui; Lawless, Conor; Gillespie, Colin S; Wu, Jake; Boys, Richard J; Wilkinson, Darren J

    2010-05-01

    Dynamic simulation modelling of complex biological processes forms the backbone of systems biology. Discrete stochastic models are particularly appropriate for describing sub-cellular molecular interactions, especially when critical molecular species are thought to be present at low copy-numbers. For example, these stochastic effects play an important role in models of human ageing, where ageing results from the long-term accumulation of random damage at various biological scales. Unfortunately, realistic stochastic simulation of discrete biological processes is highly computationally intensive, requiring specialist hardware, and can benefit greatly from parallel and distributed approaches to computation and analysis. For these reasons, we have developed the BASIS system for the simulation and storage of stochastic SBML models together with associated simulation results. This system is exposed as a set of web services to allow users to incorporate its simulation tools into their workflows. Parameter inference for stochastic models is also difficult and computationally expensive. The CaliBayes system provides a set of web services (together with an R package for consuming these and formatting data) which addresses this problem for SBML models. It uses a sequential Bayesian MCMC method, which is powerful and flexible, providing very rich information. However this approach is exceptionally computationally intensive and requires the use of a carefully designed architecture. Again, these tools are exposed as web services to allow users to take advantage of this system. In this article, we describe these two systems and demonstrate their integrated use with an example workflow to estimate the parameters of a simple model of Saccharomyces cerevisiae growth on agar plates.

  3. Value-Based Medicine and Integration of Tumor Biology.

    Science.gov (United States)

    Brooks, Gabriel A; Bosserman, Linda D; Mambetsariev, Isa; Salgia, Ravi

    2017-01-01

    Clinical oncology is in the midst of a genomic revolution, as molecular insights redefine our understanding of cancer biology. Greater awareness of the distinct aberrations that drive carcinogenesis is also contributing to a growing armamentarium of genomically targeted therapies. Although much work remains to better understand how to combine and sequence these therapies, improved outcomes for patients are becoming manifest. As we welcome this genomic revolution in cancer care, oncologists also must grapple with a number of practical problems. Costs of cancer care continue to grow, with targeted therapies responsible for an increasing proportion of spending. Rising costs are bringing the concept of value into sharper focus and challenging the oncology community with implementation of value-based cancer care. This article explores the ways that the genomic revolution is transforming cancer care, describes various frameworks for considering the value of genomically targeted therapies, and outlines key challenges for delivering on the promise of personalized cancer care. It highlights practical solutions for the implementation of value-based care, including investment in biomarker development and clinical trials to improve the efficacy of targeted therapy, the use of evidence-based clinical pathways, team-based care, computerized clinical decision support, and value-based payment approaches.

  4. Integrating fluid dynamic and biologic effects on staphylococci bacteria biofilms

    Science.gov (United States)

    Sherman, Erica; Endres, Jennifer; Bayles, Kenneth; Wei, Timothy

    2017-11-01

    Staphylococcus aureus bacteria are able to form biofilms and distinctive tower structures that facilitate their ability to tolerate treatment and to spread within the human body. Towers that break off, get carried downstream, and serve to initiate biofilms in other parts of the body are of particular interest here. In previous work on biofilm growth and evolution in steady, laminar microchannel flows, it was established that tower formation occurs around a very limited range of applied shear stresses centered on 0.6 dynes/cm². Quantifying cell density characteristics as a function of time during biofilm formation reveals indicators of tower development hours before towers actually form and become visible. The next step in this research is to explore biological factors that might explain why this specific shear is so important. Additional studies with mutants, e.g. ica-A, that have been tied to tower formation have been conducted. The shear dependence of these mutants and their correlation to the behavior of wild-type S. aureus is examined.

  5. Computational analysis of battery optimized reactor integral system

    International Nuclear Information System (INIS)

    Hwang, J. S.; Son, H. M.; Jeong, W. S.; Kim, T. W.; Suh, K. Y.

    2007-01-01

    Battery Optimized Reactor Integral System (BORIS) is being developed as a multi-purpose fast spectrum reactor cooled by lead (Pb). BORIS is an integral optimized reactor with an ultra-long life core. BORIS aims to satisfy various energy demands while maintaining inherent safety with the primary coolant Pb and improving economics. BORIS is being designed to generate 23 MWth with 10 MWe for at least twenty consecutive years without refueling and to meet the Generation IV Nuclear Energy System goals of sustainability, safety, reliability, and economics. BORIS is conceptualized to be used as the main power and heat source for remote areas and barren lands, and is also considered for deployment for desalination purposes. BORIS, based on modular components to be viable for rapid construction and easy maintenance, adopts an integrated heat exchanger system operated by natural circulation of Pb without pumps to realize a small-sized reactor. The BORIS primary system is designed through an optimization study. Thermal-hydraulic characteristics during a reactor steady state, with heat source and sink provided by the core and heat exchanger, respectively, have been evaluated by utilizing a computational fluid dynamics code and hand calculations based on first principles. This paper analyzes transient conditions of the BORIS primary system. The Pb coolant was selected for its lower chemical activity with air or water than sodium (Na) and its good thermal characteristics. Reactor transient conditions such as core blockage, heat exchanger failure, and loss of heat sink were selected for this study. Blockage in the core or its inlet structure causes localized flow starvation in one or several fuel assemblies. Coolant loop blockages cause a more or less uniform flow reduction across the core, which may trigger a coolant temperature transient. General conservation equations were applied to model the primary system transients. Numerical approaches were adopted to discretize the governing equations.
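
    The transient analyses described rest on lumped conservation equations; for orientation, here is a minimal single-node energy balance for a loss-of-heat-sink transient. Only the 23 MWth core power is taken from the abstract; every other number is an illustrative assumption, not a BORIS design value.

    ```python
    # Lumped energy balance:  m_cp * dT/dt = Q_core - UA * (T - T_sink)
    m_cp = 5.0e7      # coolant heat capacity m*cp, J/K (assumed)
    Q_core = 23.0e6   # core thermal power, W (23 MWth)
    UA = 4.0e5        # heat exchanger conductance, W/K (assumed)
    T_sink = 400.0    # secondary-side temperature, deg C (assumed)

    T, dt = 450.0, 1.0                      # initial temperature, time step (s)
    for step in range(900):                 # 15 minutes of simulated time
        UA_eff = UA if step < 600 else 0.0  # heat sink lost at t = 600 s
        T += dt * (Q_core - UA_eff * (T - T_sink)) / m_cp
    print(round(T, 1))  # coolant heats up unchecked once the sink is lost
    ```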

  6. Making United States Integrated Ocean Observing System (U.S. IOOS) inclusive of marine biological resources

    Science.gov (United States)

    Moustahfid, H.; Potemra, J.; Goldstein, P.; Mendelssohn, R.; Desrochers, A.

    2011-01-01

    An important Data Management and Communication (DMAC) goal is to enable a multi-disciplinary view of the ocean environment by facilitating discovery and integration of data from various sources, projects and scientific domains. United States Integrated Ocean Observing System (U.S. IOOS) DMAC functional requirements are based upon guidelines for standardized data access services, data formats, metadata, controlled vocabularies, and other conventions. So far, the data integration effort has focused on geophysical U.S. IOOS core variables such as temperature, salinity, ocean currents, etc. The IOOS Biological Observations Project is addressing the DMAC requirements that pertain to biological observations standards and interoperability applicable to U.S. IOOS and to various observing systems. Biological observations are highly heterogeneous and the variety of formats, logical structures, and sampling methods create significant challenges. Here we describe an informatics framework for biological observing data (e.g. species presence/absence and abundance data) that will expand information content and reconcile standards for the representation and integration of these biological observations for users to maximize the value of these observing data. We further propose that the approach described can be applied to other datasets generated in scientific observing surveys and will provide a vehicle for wider dissemination of biological observing data. We propose to employ data definition conventions that are well understood in U.S. IOOS and to combine these with ratified terminologies, policies and guidelines. © 2011 MTS.

  7. Integrative analysis of many weighted co-expression networks using tensor computation.

    Directory of Open Access Journals (Sweden)

    Wenyuan Li

    2011-06-01

    Full Text Available The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominantly under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks.
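
    For orientation, the "recurrent heavy subgraph" objective can be made concrete by stacking the weighted adjacency matrices into a genes x genes x networks tensor and scoring a candidate gene set over a set of networks by its average edge weight. The sketch below shows only this scoring step on toy data; the paper's multi-stage convex relaxation and non-uniform edge sampling are not reproduced.

    ```python
    import numpy as np

    def heaviness(T, genes, nets):
        """Average edge weight of the subtensor induced by a gene set
        across a set of networks; heavy subtensors score high."""
        sub = T[np.ix_(np.asarray(genes), np.asarray(genes), np.asarray(nets))]
        mask = ~np.eye(len(genes), dtype=bool)   # ignore self-edges
        return sub[mask].mean()

    rng = np.random.default_rng(0)
    T = rng.random((6, 6, 3))                # 3 toy networks over 6 genes
    T = (T + T.transpose(1, 0, 2)) / 2       # symmetrize each network slice
    print(heaviness(T, genes=[0, 1, 2], nets=[0, 2]))
    ```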

  8. COMPUTING THERAPY FOR PRECISION MEDICINE: COLLABORATIVE FILTERING INTEGRATES AND PREDICTS MULTI-ENTITY INTERACTIONS.

    Science.gov (United States)

    Regenbogen, Sam; Wilkins, Angela D; Lichtarge, Olivier

    2016-01-01

    Biomedicine produces copious information it cannot fully exploit. Specifically, there is considerable need to integrate knowledge from disparate studies to discover connections across domains. Here, we used a Collaborative Filtering approach, inspired by online recommendation algorithms, in which non-negative matrix factorization (NMF) predicts interactions among chemicals, genes, and diseases only from pairwise information about their interactions. Our approach, applied to matrices derived from the Comparative Toxicogenomics Database, successfully recovered Chemical-Disease, Chemical-Gene, and Disease-Gene networks in 10-fold cross-validation experiments. Additionally, we could predict each of these interaction matrices from the other two. Integrating all three CTD interaction matrices with NMF led to good predictions of STRING, an independent, external network of protein-protein interactions. Finally, this approach could integrate the CTD and STRING interaction data to improve Chemical-Gene cross-validation performance significantly, and, in a time-stamped study, it predicted information added to CTD after a given date, using only data prior to that date. We conclude that collaborative filtering can integrate information across multiple types of biological entities, and that as a first step towards precision medicine it can compute drug repurposing hypotheses.
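
    A minimal sketch of the core mechanism, non-negative matrix factorization used to score unobserved interaction pairs, is shown below with scikit-learn and a random toy matrix; the CTD-derived matrices, cross-validation design, and multi-matrix integration from the paper are not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    # toy binary chemical-by-gene matrix (1 = known interaction)
    X = (rng.random((30, 40)) < 0.15).astype(float)

    model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(X)          # low-rank factors: X ~ W @ H
    H = model.components_
    scores = W @ H                      # reconstructed interaction affinities

    # rank unobserved pairs by predicted score: candidate new interactions
    cand = np.argwhere(X == 0)
    print(cand[np.argsort(-scores[X == 0])][:5])
    ```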

  9. Lignin valorization through integrated biological funneling and chemical catalysis

    Science.gov (United States)

    Linger, Jeffrey G.; Vardon, Derek R.; Guarnieri, Michael T.; Karp, Eric M.; Hunsinger, Glendon B.; Franden, Mary Ann; Johnson, Christopher W.; Chupka, Gina; Strathmann, Timothy J.; Pienkos, Philip T.; Beckham, Gregg T.

    2014-01-01

    Lignin is an energy-dense, heterogeneous polymer comprised of phenylpropanoid monomers used by plants for structure, water transport, and defense, and it is the second most abundant biopolymer on Earth after cellulose. In production of fuels and chemicals from biomass, lignin is typically underused as a feedstock and burned for process heat because its inherent heterogeneity and recalcitrance make it difficult to selectively valorize. In nature, however, some organisms have evolved metabolic pathways that enable the utilization of lignin-derived aromatic molecules as carbon sources. Aromatic catabolism typically occurs via upper pathways that act as a “biological funnel” to convert heterogeneous substrates to central intermediates, such as protocatechuate or catechol. These intermediates undergo ring cleavage and are further converted via the β-ketoadipate pathway to central carbon metabolism. Here, we use a natural aromatic-catabolizing organism, Pseudomonas putida KT2440, to demonstrate that these aromatic metabolic pathways can be used to convert both aromatic model compounds and heterogeneous, lignin-enriched streams derived from pilot-scale biomass pretreatment into medium chain-length polyhydroxyalkanoates (mcl-PHAs). mcl-PHAs were then isolated from the cells and demonstrated to be similar in physicochemical properties to conventional carbohydrate-derived mcl-PHAs, which have applications as bioplastics. In a further demonstration of their utility, mcl-PHAs were catalytically converted to both chemical precursors and fuel-range hydrocarbons. Overall, this work demonstrates that the use of aromatic catabolic pathways enables an approach to valorize lignin by overcoming its inherent heterogeneity to produce fuels, chemicals, and materials. PMID:25092344

  10. Human evolution. Evolution of early Homo: an integrated biological perspective.

    Science.gov (United States)

    Antón, Susan C; Potts, Richard; Aiello, Leslie C

    2014-07-04

    Integration of evidence over the past decade has revised understandings about the major adaptations underlying the origin and early evolution of the genus Homo. Many features associated with Homo sapiens, including our large linear bodies, elongated hind limbs, large energy-expensive brains, reduced sexual dimorphism, increased carnivory, and unique life history traits, were once thought to have evolved near the origin of the genus in response to heightened aridity and open habitats in Africa. However, recent analyses of fossil, archaeological, and environmental data indicate that such traits did not arise as a single package. Instead, some arose substantially earlier and some later than previously thought. From ~2.5 to 1.5 million years ago, three lineages of early Homo evolved in a context of habitat instability and fragmentation on seasonal, intergenerational, and evolutionary time scales. These contexts gave a selective advantage to traits, such as dietary flexibility and larger body size, that facilitated survival in shifting environments. Copyright © 2014, American Association for the Advancement of Science.

  11. An integrated platform for assessing biologics (Conference Presentation)

    Science.gov (United States)

    Schein, Perry; O'Dell, Dakota; Erickson, David

    2016-04-01

    Protein therapeutics are a rapidly growing portion of the pharmaceuticals market and have many significant advantages over traditional small molecule drugs. As this market expands, however, critical regulatory and quality control issues remain, most notably the problem of protein aggregation. Individual target proteins often aggregate into larger masses which trigger an immune response in the body, which can reduce the efficacy of the drug for its intended purpose, or cause serious anaphylactic side-effects. Although detecting and minimizing aggregate formation is critical to ensure an effective product, aggregation dynamics are often highly complicated and there is little hope of reliable prediction and prevention from first principles. This problem is compounded for aggregates in the subvisible range of 100 nm to 10 micrometers where traditional techniques for detecting aggregates have significant limitations. Here, we present an integrated optofluidic platform for detecting nanoscale protein aggregates and characterizing interactions between these aggregates and a reference surface. By delivering light to a solution of proteins with an optical waveguide, scattered light from individual protein aggregates can be detected and analyzed to determine the force profile between each particle and the waveguide surface. Unlike existing methods which only determine size or charge, our label-free screening technique can directly measure the surface interaction forces between single aggregates and the glass substrate. This direct measurement capability may allow for better empirical predictions of the stability of protein aggregates during drug manufacturing and storage.

  12. Gradient matching methods for computational inference in mechanistic models for systems biology: a review and comparative analysis

    Directory of Open Access Journals (Sweden)

    Benn Macdonald

    2015-11-01

    Full Text Available Parameter inference in mathematical models of biological pathways, expressed as coupled ordinary differential equations (ODEs), is a challenging problem in contemporary systems biology. Conventional methods involve repeatedly solving the ODEs by numerical integration, which is computationally onerous and does not scale up to complex systems. Aimed at reducing the computational costs, new concepts based on gradient matching have recently been proposed in the computational statistics and machine learning literature. In a preliminary smoothing step, the time series data are interpolated; then, in a second step, the parameters of the ODEs are optimised so as to minimise some metric measuring the difference between the slopes of the tangents to the interpolants and the time derivatives from the ODEs. In this way, the ODEs never have to be solved explicitly. This review provides a concise methodological overview of the current state-of-the-art methods for gradient matching in ODEs, followed by an empirical comparative evaluation based on a set of widely used and representative benchmark data.
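
    A minimal gradient-matching sketch for a single equation (logistic growth, with simulated toy data) is given below: the data are interpolated with a cubic spline, and the parameters are chosen to minimize the squared mismatch between the interpolant's slopes and the ODE right-hand side, so the ODE is never solved numerically.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize

    # noisy observations of logistic growth  x' = r*x*(1 - x/K)
    t = np.linspace(0, 10, 25)
    rng = np.random.default_rng(2)
    y = 10 / (1 + 9 * np.exp(-0.8 * t)) + rng.normal(0, 0.2, t.size)

    spline = CubicSpline(t, y)     # step 1: smooth/interpolate the data
    x_hat, dx_hat = spline(t), spline(t, 1)

    def mismatch(theta):           # step 2: match slopes to the ODE RHS
        r, K = theta
        return np.sum((dx_hat - r * x_hat * (1 - x_hat / K)) ** 2)

    fit = minimize(mismatch, x0=[0.5, 5.0], method="Nelder-Mead")
    print(fit.x)                   # estimates of (r, K); ODE never integrated
    ```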

  13. Derivation and computation of discrete-delay and continuous-delay SDEs in mathematical biology.

    Science.gov (United States)

    Allen, Edward J

    2014-06-01

    Stochastic versions of several discrete-delay and continuous-delay differential equations, useful in mathematical biology, are derived from basic principles carefully taking into account the demographic, environmental, or physiological randomness in the dynamic processes. In particular, stochastic delay differential equation (SDDE) models are derived and studied for Nicholson's blowflies equation, Hutchinson's equation, an SIS epidemic model with delay, bacteria/phage dynamics, and glucose/insulin levels. Computational methods for approximating the SDDE models are described. Comparisons between computational solutions of the SDDEs and independently formulated Monte Carlo calculations support the accuracy of the derivations and of the computational methods.
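
    As a concrete instance of the model class, here is a minimal Euler-Maruyama simulation of a stochastic Hutchinson (delayed logistic) equation with a constant pre-history; the parameter values are illustrative assumptions, and the paper's derivations are not reproduced.

    ```python
    import numpy as np

    # dx = r*x(t)*(1 - x(t - tau)/K) dt + sigma*x(t) dW
    r, K, tau, sigma = 1.0, 10.0, 1.0, 0.1
    dt, T_end = 0.01, 20.0
    n, lag = int(T_end / dt), int(tau / dt)

    rng = np.random.default_rng(3)
    x = np.empty(n + 1)
    x[0] = 2.0
    for i in range(n):
        x_lag = x[i - lag] if i >= lag else x[0]  # constant history for t < 0
        drift = r * x[i] * (1 - x_lag / K)
        x[i + 1] = x[i] + drift * dt + sigma * x[i] * np.sqrt(dt) * rng.standard_normal()

    print(x[-5:])  # end of one sample path, fluctuating around K
    ```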

  14. Final report for Conference Support Grant "From Computational Biophysics to Systems Biology - CBSB12"

    Energy Technology Data Exchange (ETDEWEB)

    Hansmann, Ulrich H.E.

    2012-07-02

    This report summarizes the outcome of the international workshop From Computational Biophysics to Systems Biology (CBSB12) which was held June 3-5, 2012, at the University of Tennessee Conference Center in Knoxville, TN, and supported by DOE through the Conference Support Grant 120174. The purpose of CBSB12 was to provide a forum for the interaction between a data-mining interested systems biology community and a simulation and first-principle oriented computational biophysics/biochemistry community. CBSB12 was the sixth in a series of workshops of the same name organized in recent years, and the second that has been held in the USA. As in previous years, it gave researchers from physics, biology, and computer science an opportunity to acquaint each other with current trends in computational biophysics and systems biology, to explore venues of cooperation, and to establish together a detailed understanding of cells at a molecular level. The conference grant of $10,000 was used to cover registration fees and provide travel fellowships to selected students and postdoctoral scientists. By educating graduate students and providing a forum for young scientists to perform research into the working of cells at a molecular level, the workshop adds to DOE's mission of paving the way to exploit the abilities of living systems to capture, store and utilize energy.

  15. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    Science.gov (United States)

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
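
    For orientation, once Gumbel parameters are in hand the significance of an observed alignment score is a tail probability. The sketch below fits the two parameters numerically to simulated stand-in scores with SciPy; this is not the paper's direct mixture-distribution computation, only the surrounding workflow.

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    rng = np.random.default_rng(4)
    # stand-in for optimal gapped-alignment scores of unrelated sequence pairs
    scores = rng.gumbel(loc=30.0, scale=4.0, size=2000)

    mu, beta = gumbel_r.fit(scores)                  # location and scale
    p_value = gumbel_r.sf(55.0, loc=mu, scale=beta)  # P(S >= 55) under the fit
    print(mu, beta, p_value)
    ```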

  16. 10 years for the Journal of Bioinformatics and Computational Biology (2003-2013) -- a retrospective.

    Science.gov (United States)

    Eisenhaber, Frank; Sherman, Westley Arthur

    2014-06-01

    The Journal of Bioinformatics and Computational Biology (JBCB) started publishing scientific articles in 2003. It has established itself as a home for solid research articles in the field (~60 per year) that are surprisingly well cited. JBCB has an important function as an alternative publishing channel in addition to other, bigger journals.

  17. Computer-aided design of biological circuits using TinkerCell.

    Science.gov (United States)

    Chandran, Deepak; Bergmann, Frank T; Sauro, Herbert M

    2010-01-01

    Synthetic biology is an engineering discipline that builds on modeling practices from systems biology and wet-lab techniques from genetic engineering. As synthetic biology advances, efficient procedures will be developed that will allow a synthetic biologist to design, analyze, and build biological networks. In this idealized pipeline, computer-aided design (CAD) is a necessary component. The role of a CAD application would be to allow efficient transition from a general design to a final product. TinkerCell is a design tool for serving this purpose in synthetic biology. In TinkerCell, users build biological networks using biological parts and modules. The network can be analyzed using one of several functions provided by TinkerCell or custom programs from third-party sources. Since best practices for modeling and constructing synthetic biology networks have not yet been established, TinkerCell is designed as a flexible and extensible application that can adjust itself to changes in the field. © 2010 Landes Bioscience

  18. Integration of Principles of Systems Biology and Radiation Biology: Toward Development of in silico Models to Optimize IUdR-Mediated Radiosensitization of DNA Mismatch Repair Deficient (Damage Tolerant) Human Cancers

    International Nuclear Information System (INIS)

    Kinsella, Timothy J.; Gurkan-Cavusoglu, Evren; Du, Weinan; Loparo, Kenneth A.

    2011-01-01

    Over the last 7 years, we have focused our experimental and computational research efforts on improving our understanding of the biochemical, molecular, and cellular processing of iododeoxyuridine (IUdR) and ionizing radiation (IR) induced DNA base damage by DNA mismatch repair (MMR). These coordinated research efforts, sponsored by the National Cancer Institute Integrative Cancer Biology Program (ICBP), brought together system scientists with expertise in engineering, mathematics, and complex systems theory and translational cancer researchers with expertise in radiation biology. Our overall goal was to begin to develop computational models of IUdR- and/or IR-induced base damage processing by MMR that may provide new clinical strategies to optimize IUdR-mediated radiosensitization in MMR-deficient (MMR⁻) “damage tolerant” human cancers. Using multiple scales of experimental testing, ranging from purified protein systems to in vitro (cellular) and to in vivo (human tumor xenografts in athymic mice) models, we have begun to integrate and interpolate these experimental data with hybrid stochastic biochemical models of MMR damage processing and probabilistic cell cycle regulation models through a systems biology approach. In this article, we highlight the results and current status of our integration of radiation biology approaches and computational modeling to enhance IUdR-mediated radiosensitization in MMR⁻ damage-tolerant cancers.

  19. Selfishness, warfare and economics; or integration, cooperation and biology

    Directory of Open Access Journals (Sweden)

    Emiliano Jesus Salvucci

    2012-05-01

    Full Text Available The acceptance of Darwin's theory of evolution by natural selection is not complete, and its limitations in explaining the complex processes that constitute the transformation of species have been pointed out. The Darwinian paradigm had its origin in the free-market theories and concepts of Malthus and Spencer: nature was explained on the basis of market theories, moving away from an accurate explanation of natural phenomena. It is common for new discoveries to bring about contradictions that are then overcome by adjusting results to the dominant reductionist paradigm, using all sorts of gradations and combinations that are admitted for each case. Modern findings represent a challenge to interpretations based on the Darwinian view of competition and struggle for life. New holistic interpretations are emerging related to the Net of Life, in which the interconnection of ecosystems constitutes a dynamic and self-regulating biosphere. Viruses are recognized as a macroorganism with a huge collection of genes, most of them unknown, that constitutes the planet's major gene pool and plays a fundamental role in evolution. The hologenome theory considers an organism and all of its associated symbiotic microbes to be a result of symbiopoiesis. Microbes and helminths that are normally understood as parasites are in fact cohabitants; they have cohabited with their hosts and drive the evolution and existence of the partners. Each organism is a result of the integration of complex systems. The eukaryotic organism is the result of the combination of bacterial, viral and eukaryotic DNA and of the interaction of its own genome with the genomes of its microbiota, resulting in an intertwined metabolism (a superorganism) along evolution. These new interpretations are remarkable points to be considered in order to construct a solid theory adjusted to the facts, with fewer speculations and tortuous semantic traps.

  20. The biology of lysine acetylation integrates transcriptional programming and metabolism

    Directory of Open Access Journals (Sweden)

    Mujtaba Shiraz

    2011-03-01

    Full Text Available Abstract The biochemical landscape of lysine acetylation has expanded from a small number of proteins in the nucleus to a multitude of proteins in the cytoplasm. Since the first report confirming acetylation of the tumor suppressor protein p53 by a lysine acetyltransferase (KAT, there has been a surge in the identification of new, non-histone targets of KATs. Added to the known substrates of KATs are metabolic enzymes, cytoskeletal proteins, molecular chaperones, ribosomal proteins and nuclear import factors. Emerging studies demonstrate that no fewer than 2000 proteins in any particular cell type may undergo lysine acetylation. As described in this review, our analyses of cellular acetylated proteins using DAVID 6.7 bioinformatics resources have facilitated organization of acetylated proteins into functional clusters integral to cell signaling, the stress response, proteolysis, apoptosis, metabolism, and neuronal development. In addition, these clusters also depict association of acetylated proteins with human diseases. These findings not only support lysine acetylation as a widespread cellular phenomenon, but also impel questions to clarify the underlying molecular and cellular mechanisms governing target selectivity by KATs. Present challenges are to understand the molecular basis for the overlapping roles of KAT-containing co-activators, to differentiate between global versus dynamic acetylation marks, and to elucidate the physiological roles of acetylated proteins in biochemical pathways. In addition to discussing the cellular 'acetylome', a focus of this work is to present the widespread and dynamic nature of lysine acetylation and highlight the nexus that exists between epigenetic-directed transcriptional regulation and metabolism.

  1. The observation of biology learning implemented with integrated religious values in an integrated Islamic school (Descriptive Study in X Integrated Senior High School, Tasikmalaya)

    Science.gov (United States)

    Nurjanah, E.; Adisendjaja, Y. H.; Kusumastuti, M. N.

    2018-05-01

    Learning integrated with religious values is one effort to increase learning motivation and build student character. This study aims to describe the application of biology learning integrated with religious values in an integrated Islamic school. The research method used was descriptive. Participants in this study included the headmaster, the head of curriculum, biology teachers, boarding school teachers, the head of the boarding school, and students. The instruments used were interviews, observation, and a student questionnaire about biology learning. The results showed that learning in school X follows two curricula: the national education curriculum and the boarding school curriculum. The national education curriculum follows the 2013 curriculum, and the boarding school curriculum follows the curriculum of a Salafi boarding school (Kitab Kuning). However, the two were not delivered in an integrated way. The main obstacles to implementing learning integrated with religious values were that 1) teachers with general education backgrounds were not aware of connections between the biology subject and the subjects studied in the boarding school; 2) the school did not form a teaching team; and 3) materials integrating religious values were unavailable.

  2. Novel approaches to the integration and analysis of systems biology data

    OpenAIRE

    Ramírez, Fidel

    2011-01-01

    The opportunity to investigate whole cellular systems using experimental and computational high-throughput methods leads to the generation of unprecedented amounts of data. Processing of these data often results in large lists of genes or proteins that need to be analyzed and interpreted in the context of all other biological information that is already available. To support such analyses, repositories aggregating and merging the biological information contained in different databases are req...

  3. Systems biology of bacterial nitrogen fixation: High-throughput technology and its integrative description with constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Resendis-Antonio Osbaldo

    2011-07-01

    Full Text Available Abstract Background Bacterial nitrogen fixation is the biological process by which atmospheric nitrogen is taken up by bacteroids located in plant root nodules and converted into ammonium through the enzymatic activity of nitrogenase. In practice, this biological process serves as a natural form of fertilization and its optimization has significant implications in sustainable agricultural programs. Currently, the advent of high-throughput technology supplies valuable data that contribute to understanding the metabolic activity during bacterial nitrogen fixation. This undertaking is not trivial, and the development of computational methods useful in accomplishing an integrative, descriptive and predictive framework is a crucial issue in decoding the principles that regulate the metabolic activity of this biological process. Results In this work we present a systems biology description of the metabolic activity in bacterial nitrogen fixation. This was accomplished by an integrative analysis involving high-throughput data and constraint-based modeling to characterize the metabolic activity in Rhizobium etli bacteroids located at the root nodules of Phaseolus vulgaris (bean plant). Proteome and transcriptome technologies led us to identify 415 proteins and 689 up-regulated genes that orchestrate this biological process. Taking into account these data, we: 1) extended the metabolic reconstruction reported for R. etli; 2) simulated the metabolic activity during symbiotic nitrogen fixation; and 3) evaluated the in silico results in terms of bacterial phenotype. Notably, constraint-based modeling simulated nitrogen fixation activity in such a way that 76.83% of the enzymes and 69.48% of the genes were experimentally justified. Finally, to further assess the predictive scope of the computational model, gene deletion analysis was carried out on nine metabolic enzymes. Our model concluded that an altered metabolic activity on these enzymes induced
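
    At its core, the constraint-based simulation referred to is flux balance analysis: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. A minimal sketch on a toy three-reaction network (not the R. etli reconstruction) follows.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy stoichiometric matrix S (metabolites x reactions), steady state S v = 0.
    # Reactions: uptake -> A;  A -> B;  B -> biomass export
    S = np.array([
        [1, -1,  0],   # metabolite A
        [0,  1, -1],   # metabolite B
    ])
    bounds = [(0, 10), (0, 8), (0, None)]  # flux bounds (capped uptake, capped A->B)
    c = np.array([0, 0, -1.0])             # maximize v3 (biomass) = minimize -v3

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    print(res.x)   # optimal flux distribution; here limited by the A->B cap
    ```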

  4. Modular multiple sensors information management for computer-integrated surgery.

    Science.gov (United States)

    Vaccarella, Alberto; Enquobahrie, Andinet; Ferrigno, Giancarlo; Momi, Elena De

    2012-09-01

    In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensor (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of the surgical intervention, was used as the benchmark. An experimental protocol was designed in order to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. The proposed centralized approach proved to be adequate for neurosurgery requirements. Latency introduced by the proposed architecture does not affect tracking system performance in terms of frame rate and limits US images frame rate at 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR. Copyright © 2012 John Wiley & Sons, Ltd.
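
    One simple way to enforce the temporal consistency discussed, aligning asynchronous sensor streams on a common server clock by nearest-timestamp matching, is sketched below. The rates and skew tolerance are illustrative assumptions; this is not the ROBOCAST or IGSTK implementation.

    ```python
    import numpy as np

    def align(t_ref, t_other, max_skew=0.02):
        """For each reference timestamp, index of the closest sample in the
        other (sorted) stream, or -1 if nothing lies within max_skew seconds."""
        idx = np.clip(np.searchsorted(t_other, t_ref), 1, len(t_other) - 1)
        left, right = t_other[idx - 1], t_other[idx]
        pick = np.where(t_ref - left <= right - t_ref, idx - 1, idx)
        ok = np.abs(t_other[pick] - t_ref) <= max_skew
        return np.where(ok, pick, -1)

    t_track = np.arange(0.0, 1.0, 1 / 60)   # 60 Hz optical tracker samples
    t_us = np.arange(0.003, 1.0, 1 / 25)    # 25 fps ultrasound frames, offset
    print(align(t_us, t_track)[:10])        # tracker sample paired to each frame
    ```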

  5. Stochastic processes, multiscale modeling, and numerical methods for computational cellular biology

    CERN Document Server

    2017-01-01

    This book focuses on the modeling and mathematical analysis of stochastic dynamical systems along with their simulations. The collected chapters will review fundamental and current topics and approaches to dynamical systems in cellular biology. This text aims to develop improved mathematical and computational methods with which to study biological processes. At the scale of a single cell, stochasticity becomes important due to low copy numbers of biological molecules, such as mRNA and proteins that take part in biochemical reactions driving cellular processes. When trying to describe such biological processes, the traditional deterministic models are often inadequate, precisely because of these low copy numbers. This book presents stochastic models, which are necessary to account for small particle numbers and extrinsic noise sources. The complexity of these models depends upon whether the biochemical reactions are diffusion-limited or reaction-limited. In the former case, one needs to adopt the framework of s...

  6. Engineering integrated digital circuits with allosteric ribozymes for scaling up molecular computation and diagnostics.

    Science.gov (United States)

    Penchovsky, Robert

    2012-10-19

    Here we describe molecular implementations of integrated digital circuits, including a three-input AND logic gate, a two-input multiplexer, and 1-to-2 decoder using allosteric ribozymes. Furthermore, we demonstrate a multiplexer-decoder circuit. The ribozymes are designed to seek-and-destroy specific RNAs with a certain length by a fully computerized procedure. The algorithm can accurately predict one base substitution that alters the ribozyme's logic function. The ability to sense the length of RNA molecules enables single ribozymes to be used as platforms for multiple interactions. These ribozymes can work as integrated circuits with the functionality of up to five logic gates. The ribozyme design is universal since the allosteric and substrate domains can be altered to sense different RNAs. In addition, the ribozymes can specifically cleave RNA molecules with triplet-repeat expansions observed in genetic disorders such as oculopharyngeal muscular dystrophy. Therefore, the designer ribozymes can be employed for scaling up computing and diagnostic networks in the fields of molecular computing and diagnostics and RNA synthetic biology.
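
    The Boolean behavior of the circuits described (a three-input AND gate, a two-input multiplexer, a 1-to-2 decoder, and the multiplexer-decoder cascade) can be checked in a few lines, with 1/0 standing for presence/absence of the effector RNAs and the outputs standing for cleavage activity; the molecular design itself is of course not captured here.

    ```python
    from itertools import product

    def and3(a, b, c):          return int(a and b and c)
    def mux2(a, b, sel):        return b if sel else a       # 2-input multiplexer
    def dec1to2(x):             return (int(not x), int(x))  # 1-to-2 decoder

    def mux_decoder(a, b, sel):  # multiplexer feeding the decoder
        return dec1to2(mux2(a, b, sel))

    assert and3(1, 1, 1) == 1 and and3(1, 1, 0) == 0
    for a, b, sel in product((0, 1), repeat=3):
        print(a, b, sel, "->", mux_decoder(a, b, sel))
    ```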

  7. Integration of smart wearable mobile devices and cloud computing in South African healthcare

    CSIR Research Space (South Africa)

    Mvelase, PS

    2015-11-01

    Full Text Available The acceptance of cloud computing is increasing at a fast pace in distributed...

  8. Uncharted waters: Bivalves of Midway Atoll and integrating mathematics into biology education

    Science.gov (United States)

    McCully, Kristin M.

    To protect and conserve the Earth's biodiversity and ecosystem services, it is important not only to understand and conserve species and ecosystems, but also to instill an understanding and appreciation for biodiversity and ecosystem services in the next generations of both scientists and citizens. Thus, this dissertation combines research into the ecology and identity of large bivalves at Midway Atoll in the Northwestern Hawaiian Islands (NWHI) with research on pedagogical strategies for integrating mathematics into undergraduate biology education. The NWHI is one of the few remaining large, mainly intact, predator-dominated coral reef ecosystems and one of the world's largest marine protected areas. Previous bivalve studies focused on the black-lipped pearl oyster, Pinctada margaritifera, which was heavily harvested in the late 1920s, has not recovered, and is now a candidate species for restoration. First, I combined remote sensing, geographic information systems, SCUBA, and mathematical modeling to quantify the abundance, spatial distributions, and filtration capacity of large epifaunal bivalves at Midway Atoll. These bivalves are most abundant on the forereef outside the atoll, but densities are much lower than reported on other reefs, and Midway's bivalves are unlikely to affect plankton abundance and productivity inside the lagoon. Second, I used molecular techniques and phylogenetic reconstructions to identify pearl oysters (Pinctada) from Midway Atoll as P. maculata , a species not previously reported in Hawaii. As a small morphologically cryptic species, P. maculata may be a native species that has not been collected previously, a native species that has been identified incorrectly as the morphologically similar P. radiata, or it may be a recent introduction or natural range extension from the western Pacific. Finally, I review science education literature integrating mathematics into undergraduate biology curricula, and then present and evaluate a

  9. An Introductory "How-to" Guide for Incorporating Microbiome Research into Integrative and Comparative Biology.

    Science.gov (United States)

    Kohl, Kevin D

    2017-10-01

    Research on host-associated microbial communities has grown rapidly. Despite the great body of work, inclusion of microbiota-related questions into integrative and comparative biology is still lagging behind other disciplines. The purpose of this paper is to offer an introduction into the basic tools and techniques of host-microbe research. Specifically, what considerations should be made before embarking on such projects (types of samples, types of controls)? How is microbiome data analyzed and integrated with data measured from the hosts? How can researchers experimentally manipulate the microbiome? With this information, integrative and comparative biologists should be able to include host-microbe studies into their research and push the boundaries of both fields. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  10. ePlant and the 3D data display initiative: integrative systems biology on the world wide web.

    Science.gov (United States)

    Fucile, Geoffrey; Di Biase, David; Nahal, Hardeep; La, Garon; Khodabandeh, Shokoufeh; Chen, Yani; Easley, Kante; Christendat, Dinesh; Kelley, Lawrence; Provart, Nicholas J

    2011-01-10

    Visualization tools for biological data are often limited in their ability to interactively integrate data at multiple scales. These computational tools are also typically limited by two-dimensional displays and programmatic implementations that require separate configurations for each of the user's computing devices and recompilation for functional expansion. Towards overcoming these limitations we have developed "ePlant" (http://bar.utoronto.ca/eplant) - a suite of open-source world wide web-based tools for the visualization of large-scale data sets from the model organism Arabidopsis thaliana. These tools display data spanning multiple biological scales on interactive three-dimensional models. Currently, ePlant consists of the following modules: a sequence conservation explorer that includes homology relationships and single nucleotide polymorphism data, a protein structure model explorer, a molecular interaction network explorer, a gene product subcellular localization explorer, and a gene expression pattern explorer. The ePlant's protein structure explorer module represents experimentally determined and theoretical structures covering >70% of the Arabidopsis proteome. The ePlant framework is accessed entirely through a web browser, and is therefore platform-independent. It can be applied to any model organism. To facilitate the development of three-dimensional displays of biological data on the world wide web we have established the "3D Data Display Initiative" (http://3ddi.org).

  11. Integrating biological and social values when prioritizing places for biodiversity conservation.

    Science.gov (United States)

    Whitehead, Amy L; Kujala, Heini; Ives, Christopher D; Gordon, Ascelin; Lentini, Pia E; Wintle, Brendan A; Nicholson, Emily; Raymond, Christopher M

    2014-08-01

    The consideration of information on social values in conjunction with biological data is critical for achieving both socially acceptable and scientifically defensible conservation planning outcomes. However, the influence of social values on spatial conservation priorities has received limited attention and is poorly understood. We present an approach that incorporates quantitative data on social values for conservation and social preferences for development into spatial conservation planning. We undertook a public participation GIS survey to spatially represent social values and development preferences and used species distribution models for 7 threatened fauna species to represent biological values. These spatially explicit data were simultaneously included in the conservation planning software Zonation to examine how conservation priorities changed with the inclusion of social data. Integrating spatially explicit information about social values and development preferences with biological data produced prioritizations that differed spatially from the solution based on only biological data. However, the integrated solutions protected a similar proportion of the species' distributions, indicating that Zonation effectively combined the biological and social data to produce socially feasible conservation solutions of approximately equivalent biological value. We were able to identify areas of the landscape where synergies and conflicts between different value sets are likely to occur. Identification of these synergies and conflicts will allow decision makers to target communication strategies to specific areas and ensure effective community engagement and positive conservation outcomes. © 2014 Society for Conservation Biology.
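
    The core step here, ranking landscape cells by a weighted combination of biological and social value layers, can be illustrated outside of Zonation. Below is a minimal NumPy sketch; the raster arrays, the weight w_social, and the additive scoring rule are all invented for illustration (Zonation itself uses an iterative cell-removal algorithm on real species and survey layers).

```python
import numpy as np

# Hypothetical 100x100 rasters: 7 species-distribution layers and one
# social-value layer, all scaled to [0, 1].
rng = np.random.default_rng(0)
species = rng.random((7, 100, 100))   # biological value per species
social = rng.random((100, 100))       # mapped social value for conservation

# Simple additive priority score: mean biological value plus a weighted
# social-value term. This shows only the weighting concept, not Zonation.
w_social = 0.5
priority = species.mean(axis=0) + w_social * social

# Flag the top 10% of cells as candidate conservation priorities.
threshold = np.quantile(priority, 0.9)
candidate_mask = priority >= threshold
print(candidate_mask.sum(), "of", priority.size, "cells selected")
```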

  12. The Integrated Computational Environment for Airbreathing Hypersonic Flight Vehicle Modeling and Design Evaluation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — An integrated computational environment for multidisciplinary, physics-based simulation and analyses of airbreathing hypersonic flight vehicles will be developed....

  13. The Virtual Brain Integrates Computational Modeling and Multimodal Neuroimaging

    Science.gov (United States)

    Schirner, Michael; McIntosh, Anthony R.; Jirsa, Viktor K.

    2013-01-01

    Abstract Brain function is thought to emerge from the interactions among neuronal populations. Apart from traditional efforts to reproduce brain dynamics from the micro- to macroscopic scales, complementary approaches develop phenomenological models of lower complexity. Such macroscopic models typically generate only a few selected—ideally functionally relevant—aspects of the brain dynamics. Importantly, they often allow an understanding of the underlying mechanisms beyond computational reproduction. Adding detail to these models will widen their ability to reproduce a broader range of dynamic features of the brain. For instance, such models allow for the exploration of consequences of focal and distributed pathological changes in the system, enabling us to identify and develop approaches to counteract those unfavorable processes. Toward this end, The Virtual Brain (TVB) (www.thevirtualbrain.org), a neuroinformatics platform with a brain simulator that incorporates a range of neuronal models and dynamics at its core, has been developed. This integrated framework allows the model-based simulation, analysis, and inference of neurophysiological mechanisms over several brain scales that underlie the generation of macroscopic neuroimaging signals. In this article, we describe how TVB works, and we present the first proof of concept. PMID:23442172
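
    The modeling style TVB embodies, local neuronal population dynamics coupled through a structural connectome, can be caricatured in a few lines. The sketch below is not TVB's actual API; the node count, connectivity matrix, and firing-rate equations are generic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                # hypothetical number of brain regions
W = rng.random((n, n))               # toy structural connectivity weights
np.fill_diagonal(W, 0.0)             # no self-connections

def dxdt(x, g=0.6, tau=10.0):
    """Generic firing-rate dynamics: leaky nodes with sigmoidal coupling."""
    return (-x + np.tanh(g * W @ x)) / tau

dt, steps = 0.1, 5000
x = rng.random(n)
for _ in range(steps):
    x = x + dt * dxdt(x)             # forward Euler integration step
print("final region activities:", np.round(x, 3))
```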

  14. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  15. An Integrated Review of Emoticons in Computer-Mediated Communication.

    Science.gov (United States)

    Aldunate, Nerea; González-Ibáñez, Roberto

    2016-01-01

    Facial expressions constitute a rich source of non-verbal cues in face-to-face communication. They provide interlocutors with resources to express and interpret verbal messages, which may affect their cognitive and emotional processing. Contrarily, computer-mediated communication (CMC), particularly text-based communication, is limited to the use of symbols to convey a message, where facial expressions cannot be transmitted naturally. In this scenario, people use emoticons as paralinguistic cues to convey emotional meaning. Research has shown that emoticons contribute to a greater social presence as a result of the enrichment of text-based communication channels. Additionally, emoticons constitute a valuable resource for language comprehension by providing expressivity to text messages. The latter findings have been supported by studies in neuroscience showing that particular brain regions involved in emotional processing are also activated when people are exposed to emoticons. To reach an integrated understanding of the influence of emoticons in human communication on both socio-cognitive and neural levels, we review the literature on emoticons in three different areas. First, we present relevant literature on emoticons in CMC. Second, we study the influence of emoticons in language comprehension. Finally, we show the incipient research in neuroscience on this topic. This mini review reveals that, while there are plenty of studies on the influence of emoticons in communication from a social psychology perspective, little is known about the neurocognitive basis of the effects of emoticons on communication dynamics.

  16. Towards the prediction of essential genes by integration of network topology, cellular localization and biological process information

    Directory of Open Access Journals (Sweden)

    Lemke Ney

    2009-09-01

    Full Text Available Abstract Background The identification of essential genes is important for the understanding of the minimal requirements for cellular life and for practical purposes, such as drug design. However, the experimental techniques for essential genes discovery are labor-intensive and time-consuming. Considering these experimental constraints, a computational approach capable of accurately predicting essential genes would be of great value. We therefore present here a machine learning-based computational approach relying on network topological features, cellular localization and biological process information for prediction of essential genes. Results We constructed a decision tree-based meta-classifier and trained it on datasets with individual and grouped attributes (network topological features, cellular compartments and biological processes) to generate various predictors of essential genes. We showed that the predictors with better performances are those generated by datasets with integrated attributes. Using the predictor with all attributes, i.e., network topological features, cellular compartments and biological processes, we obtained the best predictor of essential genes that was then used to classify yeast genes with unknown essentiality status. Finally, we generated decision trees by training the J48 algorithm on datasets with all network topological features, cellular localization and biological process information to discover cellular rules for essentiality. We found that the number of protein physical interactions, the nuclear localization of proteins and the number of regulating transcription factors are the most important factors determining gene essentiality. Conclusion We were able to demonstrate that network topological features, cellular localization and biological process information are reliable predictors of essential genes. Moreover, by constructing decision trees based on these data, we could discover cellular rules governing
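
    The classification step can be sketched with a decision tree. The three features below mirror the factors the study found most informative (physical-interaction degree, nuclear localization, and number of regulating transcription factors), but the training data and label rule are invented, and the authors used the J48 algorithm rather than scikit-learn.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# Invented per-gene features: [physical-interaction degree,
# nuclear localization (0/1), number of regulating TFs].
X = np.column_stack([
    rng.poisson(8, 200),
    rng.integers(0, 2, 200),
    rng.poisson(3, 200),
])
# Toy label rule standing in for curated essentiality annotations.
y = ((X[:, 0] > 10) & (X[:, 1] == 1)).astype(int)

clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(clf.predict([[15, 1, 4], [2, 0, 1]]))  # predicted essentiality status
```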

  17. Computational biomechanics

    International Nuclear Information System (INIS)

    Ethier, C.R.

    2004-01-01

    Computational biomechanics is a fast-growing field that integrates modern biological techniques and computer modelling to solve problems of medical and biological interest. Modelling of blood flow in the large arteries is the best-known application of computational biomechanics, but there are many others. Described here is work being carried out in the laboratory on the modelling of blood flow in the coronary arteries and on the transport of viral particles in the eye. (author)

  18. Network motif-based identification of transcription factor-target gene relationships by integrating multi-source biological data

    Directory of Open Access Journals (Sweden)

    de los Reyes Benildo G

    2008-04-01

    Full Text Available Abstract Background Integrating data from multiple global assays and curated databases is essential to understand the spatio-temporal interactions within cells. Different experiments measure cellular processes at various widths and depths, while databases contain biological information based on established facts or published data. Integrating these complementary datasets helps infer a mutually consistent transcriptional regulatory network (TRN) with strong similarity to the structure of the underlying genetic regulatory modules. Decomposing the TRN into a small set of recurring regulatory patterns, called network motifs (NM), facilitates the inference. Identifying NMs defined by specific transcription factors (TF) establishes the framework structure of a TRN and allows the inference of TF-target gene relationship. This paper introduces a computational framework for utilizing data from multiple sources to infer TF-target gene relationships on the basis of NMs. The data include time course gene expression profiles, genome-wide location analysis data, binding sequence data, and gene ontology (GO) information. Results The proposed computational framework was tested using gene expression data associated with cell cycle progression in yeast. Among 800 cell cycle related genes, 85 were identified as candidate TFs and classified into four previously defined NMs. The NMs for a subset of TFs are obtained from literature. Support vector machine (SVM) classifiers were used to estimate NMs for the remaining TFs. The potential downstream target genes for the TFs were clustered into 34 biologically significant groups. The relationships between TFs and potential target gene clusters were examined by training recurrent neural networks whose topologies mimic the NMs to which the TFs are classified. The identified relationships between TFs and gene clusters were evaluated using the following biological validation and statistical analyses: (1) gene set enrichment
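
    The SVM step, assigning each transcription factor to one of the four network-motif classes from data-derived features, might look like the following scikit-learn sketch; the feature matrix, labels, and kernel choice are placeholders, not the paper's actual training setup.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_train = rng.random((60, 5))      # invented per-TF feature vectors
y_train = rng.integers(0, 4, 60)   # four network-motif classes (0..3)

# RBF-kernel SVM with feature scaling as a stand-in classifier.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print(model.predict(rng.random((3, 5))))  # motif class for three unseen TFs
```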

  19. Promoting Student Learning through the Integration of Lab and Lecture: The Seamless Biology Curriculum

    Science.gov (United States)

    Burrowes, Patricia; Nazario, Gladys

    2008-01-01

    The authors engaged in an education experiment to determine if the integration of lab and lecture activities in zoology and botany proved beneficial to student learning and motivation toward biology. Their results revealed that this strategy positively influenced students' academic achievement, conceptual understanding, and ability to apply…

  20. Biological and Psychosocial Predictors of Postpartum Depression: Systematic Review and Call for Integration

    Science.gov (United States)

    Tanner Stapleton, Lynlee R.; Guardino, Christine M.; Hahn-Holbrook, Jennifer; Schetter, Christine Dunkel

    2017-01-01

    Postpartum depression (PPD) adversely affects the health and well being of many new mothers, their infants, and their families. A comprehensive understanding of biopsychosocial precursors to PPD is needed to solidify the current evidence base for best practices in translation. We conducted a systematic review of research published from 2000 through 2013 on biological and psychosocial factors associated with PPD and postpartum depressive symptoms. Two hundred fourteen publications based on 199 investigations of 151,651 women in the first postpartum year met inclusion criteria. The biological and psychosocial literatures are largely distinct, and few studies provide integrative analyses. The strongest PPD risk predictors among biological processes are hypothalamic-pituitary-adrenal dysregulation, inflammatory processes, and genetic vulnerabilities. Among psychosocial factors, the strongest predictors are severe life events, some forms of chronic strain, relationship quality, and support from partner and mother. Fully integrated biopsychosocial investigations with large samples are needed to advance our knowledge of PPD etiology. PMID:25822344

  1. Camels, Cormorants, and Kangaroo Rats: Integration and Synthesis in Organismal Biology After World War II.

    Science.gov (United States)

    Hagen, Joel B

    2015-01-01

    During the decades following World War II diverse groups of American biologists established a variety of distinctive approaches to organismal biology. Rhetorically, organismal biology could be used defensively to distinguish established research traditions from perceived threats from newly emerging fields such as molecular biology. But, organismal biologists were also interested in integrating biological disciplines and using a focus on organisms to synthesize levels of organization from molecules and cells to populations and communities. Part of this broad movement was the development of an area of research variously referred to as physiological ecology, environmental physiology, or ecophysiology. This area of research was distinctive in its self-conscious blend of field and laboratory practices and its explicit integration with other areas of biology such as ecology, animal behavior, and evolution in order to study adaptation. Comparing the intersecting careers of Knut Schmidt-Nielsen and George Bartholomew highlights two strikingly different approaches to physiological ecology. These alternative approaches to studying the interactions of organisms and environments also differed in important ways from the organismal biology championed by leading figures in the modern synthesis.

  2. Defining Biological Networks for Noise Buffering and Signaling Sensitivity Using Approximate Bayesian Computation

    Directory of Open Access Journals (Sweden)

    Shuqiang Wang

    2014-01-01

    Full Text Available Reliable information processing in cells requires high sensitivity to changes in the input signal but low sensitivity to random fluctuations in the transmitted signal. There are often many alternative biological circuits qualifying for this biological function. Distinguishing these biological models and finding the most suitable one are essential, as such model ranking, by experimental evidence, will help to judge the support of the working hypotheses forming each model. Here, we employ the approximate Bayesian computation (ABC) method based on sequential Monte Carlo (SMC) to search for biological circuits that can maintain signaling sensitivity while minimizing noise propagation, focusing on cases where the noise is characterized by rapid fluctuations. By systematically analyzing three-component circuits, we rank these biological circuits and identify three basic biological motifs that buffer noise while maintaining sensitivity to long-term changes in input signals. We discuss in detail a particular implementation in control of nutrient homeostasis in yeast. The principal component analysis of the posterior provides insight into the nature of the reaction between nodes.
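
    The essence of ABC, accepting parameter draws whose simulated output lies within a tolerance of the observed data, fits in a few lines. The rejection-sampling sketch below uses an invented one-parameter toy model; the paper itself uses the more efficient sequential Monte Carlo (ABC-SMC) variant.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate(theta, n=50):
    """Toy stochastic 'circuit': noisy output whose mean is the gain theta."""
    return theta + rng.normal(0.0, 0.5, n)

observed = simulate(2.0)       # pretend this is the experimental data
obs_summary = observed.mean()  # summary statistic

# ABC rejection: draw from the prior, keep draws whose simulated summary
# statistic falls within tolerance eps of the observed one.
eps, accepted = 0.1, []
for _ in range(20000):
    theta = rng.uniform(0.0, 5.0)              # prior draw
    if abs(simulate(theta).mean() - obs_summary) < eps:
        accepted.append(theta)

print("posterior mean ~", round(float(np.mean(accepted)), 2),
      "from", len(accepted), "accepted draws")
```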

  3. The transhumanism of Ray Kurzweil. Is biological ontology reducible to computation?

    Directory of Open Access Journals (Sweden)

    Javier Monserrat

    2016-02-01

    Full Text Available Computer programs, primarily in machine vision engineering and the programming of somatic sensors, have already made it possible to build highly sophisticated androids and cyborgs, and will do so ever more effectively in the future. These machines will collaborate with humans and prompt new moral reflection on respecting the ontological dignity of humanoid machines. In addition, both humans and the new androids will be connected to vast external computer networks that will raise efficiency in the mastery of body and nature to extraordinary levels. However, our current scientific knowledge allows us to establish that there is no scientific basis for asserting an ontological identity between computational machines and human beings: on the one hand, knowledge of the hardware and software that will support both humanoid machines and external computer networks, built with existing engineering (and with foreseeable medium- and even long-term engineering); on the other hand, knowledge of animal and human behavior as it arises from the neural-biological structures that produce a psychic system. Accordingly, different ontologies (computational machines and biological entities) will produce different functional systems. There may be simulation, but never ontological identity. These ideas are essential for assessing the transhumanism of Ray Kurzweil.

  4. A data integration approach for cell cycle analysis oriented to model simulation in systems biology

    Directory of Open Access Journals (Sweden)

    Mosca Ettore

    2007-08-01

    Full Text Available Abstract Background The cell cycle is one of the biological processes most frequently investigated in systems biology studies and it involves the knowledge of a large number of genes and networks of protein interactions. A deep knowledge of the molecular aspect of this biological process can contribute to making cancer research more accurate and innovative. In this context the mathematical modelling of the cell cycle has a relevant role to quantify the behaviour of each component of the systems. The mathematical modelling of a biological process such as the cell cycle allows a systemic description that helps to highlight some features such as emergent properties which could be hidden when the analysis is performed only from a reductionist point of view. Moreover, in modelling complex systems, a complete annotation of all the components is equally important to understand the interaction mechanism inside the network: for this reason data integration of the model components has high relevance in systems biology studies. Description In this work, we present a resource, the Cell Cycle Database, intended to support systems biology analysis on the cell cycle process, based on two organism types, yeast and mammalian. The database integrates information about genes and proteins involved in the cell cycle process, stores complete models of the interaction networks and allows the mathematical simulation over time of the quantitative behaviour of each component. To accomplish this task, we developed a web interface for browsing information related to cell cycle genes, proteins and mathematical models. In this framework, we have implemented a pipeline which allows users to deal with the mathematical part of the models, in order to solve, using different variables, the ordinary differential equation systems that describe the biological process. Conclusion This integrated system is freely available in order to support systems biology research on the cell cycle and
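
    The simulation layer such a resource exposes, solving an ODE system for the model components over time, can be illustrated with SciPy. The two-variable negative-feedback toy model below is a generic sketch, not one of the database's curated cell-cycle models.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k1=0.04, k2=0.04, k3=1.0):
    """Toy two-component loop: species x is produced at rate k1 and removed
    by inhibitor w; w relaxes toward x at rate k3."""
    x, w = y
    return [k1 - k2 * x * w, k3 * (x - w)]

sol = solve_ivp(model, (0.0, 500.0), [0.1, 0.1],
                t_eval=np.linspace(0.0, 500.0, 1000))
print("x(500) =", round(sol.y[0, -1], 4), " w(500) =", round(sol.y[1, -1], 4))
```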

  5. Periodicity computation of generalized mathematical biology problems involving delay differential equations.

    Science.gov (United States)

    Jasim Mohammed, M; Ibrahim, Rabha W; Ahmad, M Z

    2017-03-01

    In this paper, we consider a low-initial-population model. Our aim is to study the computation of periodicity for this model using neutral differential equations, which appear in various fields of study, including biology. We generalize the neutral Rayleigh equation to the third order by exploiting the machinery of fractional calculus, in particular the Riemann-Liouville differential operator. We establish the existence and uniqueness of a periodic computational outcome. The technique depends on the continuation theorem of coincidence degree theory. In addition, an example is presented to demonstrate the finding.
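
    For reference, the Riemann-Liouville operator on which this generalization rests is the standard fractional derivative of order alpha, with n - 1 < alpha <= n:

```latex
% Riemann-Liouville fractional derivative of order \alpha, n-1 < \alpha \le n
D^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)}
    \frac{d^{n}}{dt^{n}} \int_{0}^{t} (t-s)^{n-\alpha-1} f(s)\, ds
```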

  6. The role of bacillus-based biological control agents in integrated pest management systems: plant diseases.

    Science.gov (United States)

    Jacobsen, B J; Zidack, N K; Larson, B J

    2004-11-01

    ABSTRACT Bacillus-based biological control agents (BCAs) have great potential in integrated pest management (IPM) systems; however, relatively little work has been published on integration with other IPM management tools. Unfortunately, most research has focused on BCAs as alternatives to synthetic chemical fungicides or bactericides and not as part of an integrated management system. IPM has had many definitions and this review will use the national coalition for IPM definition: "A sustainable approach to managing pests by combining biological, cultural, physical and chemical tools in a way that minimizes economic, health and environmental risks." This review will examine the integrated use of Bacillus-based BCAs with disease management tools, including resistant cultivars, fungicides or bactericides, or other BCAs. This integration is important because the consistency and degree of disease control by Bacillus-based BCAs is rarely equal to the control afforded by the best fungicides or bactericides. In theory, integration of several tools brings stability to disease management programs. Integration of BCAs with other disease management tools often provides broader crop adaptation and both more efficacious and consistent levels of disease control. This review will also discuss the use of Bacillus-based BCAs in fungicide resistance management. Work with Bacillus thuringiensis and insect pest management is the exception to the relative paucity of reports but will not be the focus of this review.

  7. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    Science.gov (United States)

    Zhang, Weizhe; Wang, Xuehui; Lu, Bo; Kim, Tai-hoon

    2013-01-01

    Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved. PMID:24078906

  8. Secure Encapsulation and Publication of Biological Services in the Cloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Weizhe Zhang

    2013-01-01

    Full Text Available Secure encapsulation and publication for bioinformatics software products based on web service are presented, and the basic function of biological information is realized in the cloud computing environment. In the encapsulation phase, the workflow and function of bioinformatics software are conducted, the encapsulation interfaces are designed, and the runtime interaction between users and computers is simulated. In the publication phase, the execution and management mechanisms and principles of the GRAM components are analyzed. The functions such as remote user job submission and job status query are implemented by using the GRAM components. The services of bioinformatics software are published to remote users. Finally the basic prototype system of the biological cloud is achieved.

  9. Computer Simulation and Data Analysis in Molecular Biology and Biophysics An Introduction Using R

    CERN Document Server

    Bloomfield, Victor

    2009-01-01

    This book provides an introduction, suitable for advanced undergraduates and beginning graduate students, to two important aspects of molecular biology and biophysics: computer simulation and data analysis. It introduces tools to enable readers to learn and use fundamental methods for constructing quantitative models of biological mechanisms, both deterministic and with some elements of randomness, including complex reaction equilibria and kinetics, population models, and regulation of metabolism and development; to understand how concepts of probability can help in explaining important features of DNA sequences; and to apply a useful set of statistical methods to analysis of experimental data from spectroscopic, genomic, and proteomic sources. These quantitative tools are implemented using the free, open source software program R. R provides an excellent environment for general numerical and statistical computing and graphics, with capabilities similar to Matlab®. Since R is increasingly used in bioinformat...

  10. Human Computation An Integrated Approach to Learning from the Crowd

    CERN Document Server

    Law, Edith

    2011-01-01

    Human computation is a new and evolving research area that centers around harnessing human intelligence to solve computational problems that are beyond the scope of existing Artificial Intelligence (AI) algorithms. With the growth of the Web, human computation systems can now leverage the abilities of an unprecedented number of people via the Web to perform complex computation. There are various genres of human computation applications that exist today. Games with a purpose (e.g., the ESP Game) specifically target online gamers who generate useful data (e.g., image tags) while playing an enjoy

  11. Airborne gravimetry used in precise geoid computations by ring integration

    DEFF Research Database (Denmark)

    Kearsley, A.H.W.; Forsberg, René; Olesen, Arne Vestergaard

    1998-01-01

    Two detailed geoids have been computed in the region of North Jutland. The first computation used marine data in the offshore areas. For the second computation the marine data set was replaced by the sparser airborne gravity data resulting from the AG-MASCO campaign of September 1996. The results of comparisons of the geoid heights at on-shore geometric control showed that the geoid heights computed from the airborne gravity data matched in precision those computed using the marine data, supporting the view that airborne techniques have enormous potential for mapping those unsurveyed areas between...
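
    For context, gravimetric geoid determination of this kind rests on Stokes' integral, which ring integration evaluates by summing mean gravity anomalies over concentric rings around each computation point. Here N is the geoid height, R the mean Earth radius, gamma normal gravity, Delta g the gravity anomaly, and S(psi) Stokes' function of the spherical distance psi:

```latex
% Stokes' formula for the geoid height N
N = \frac{R}{4\pi\gamma} \iint_{\sigma} \Delta g \; S(\psi) \, d\sigma
```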

  12. ADAM: analysis of discrete models of biological systems using computer algebra.

    Science.gov (United States)

    Hinkelmann, Franziska; Brandon, Madison; Guang, Bonny; McNeill, Rustin; Blekherman, Grigoriy; Veliz-Cuba, Alan; Laubenbacher, Reinhard

    2011-07-20

    Many biological systems are modeled qualitatively with discrete models, such as probabilistic Boolean networks, logical models, Petri nets, and agent-based models, to gain a better understanding of them. The computational complexity to analyze the complete dynamics of these models grows exponentially in the number of variables, which impedes working with complex models. There exist software tools to analyze discrete models, but they either lack the algorithmic functionality to analyze complex models deterministically or they are inaccessible to many users as they require understanding the underlying algorithm and implementation, do not have a graphical user interface, or are hard to install. Efficient analysis methods that are accessible to modelers and easy to use are needed. We propose a method for efficiently identifying attractors and introduce the web-based tool Analysis of Dynamic Algebraic Models (ADAM), which provides this and other analysis methods for discrete models. ADAM converts several discrete model types automatically into polynomial dynamical systems and analyzes their dynamics using tools from computer algebra. Specifically, we propose a method to identify attractors of a discrete model that is equivalent to solving a system of polynomial equations, a long-studied problem in computer algebra. Based on extensive experimentation with both discrete models arising in systems biology and randomly generated networks, we found that the algebraic algorithms presented in this manuscript are fast for systems with the structure maintained by most biological systems, namely sparseness and robustness. For a large set of published complex discrete models, ADAM identified the attractors in less than one second. Discrete modeling techniques are a useful tool for analyzing complex biological systems and there is a need in the biological community for accessible efficient analysis tools. ADAM provides analysis methods based on mathematical algorithms as a web
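
    For small Boolean networks, ADAM's core task, identifying attractors, reduces to following deterministic state transitions until a state repeats. The brute-force sketch below uses an invented 3-node network; ADAM itself converts models to polynomial dynamical systems and finds attractors with computer algebra rather than enumeration.

```python
from itertools import product

# Invented 3-node Boolean network: one update rule per node.
rules = [
    lambda s: s[1] and not s[2],   # node 0
    lambda s: s[0],                # node 1
    lambda s: s[0] or s[1],        # node 2
]

def step(state):
    return tuple(int(r(state)) for r in rules)

# Follow every initial state until it revisits a state; the repeated
# suffix of the trajectory is the attractor (fixed point or limit cycle).
attractors = set()
for state in product((0, 1), repeat=3):
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state)
    cycle = tuple(sorted(s for s, i in seen.items() if i >= seen[state]))
    attractors.add(cycle)

print(attractors)
```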

  13. Integrated analysis of multiple data sources reveals modular structure of biological networks

    International Nuclear Information System (INIS)

    Lu Hongchao; Shi Baochen; Wu Gaowei; Zhang Yong; Zhu Xiaopeng; Zhang Zhihua; Liu Changning; Zhao, Yi; Wu Tao; Wang Jie; Chen Runsheng

    2006-01-01

    It has been a challenging task to integrate high-throughput data into investigations of the systematic and dynamic organization of biological networks. Here, we presented a simple hierarchical clustering algorithm that goes a long way to achieve this aim. Our method effectively reveals the modular structure of the yeast protein-protein interaction network and distinguishes protein complexes from functional modules by integrating high-throughput protein-protein interaction data with the added subcellular localization and expression profile data. Furthermore, we take advantage of the detected modules to provide a reliably functional context for the uncharacterized components within modules. On the other hand, the integration of various protein-protein association information makes our method robust to false-positives, especially for derived protein complexes. More importantly, this simple method can be extended naturally to other types of data fusion and provides a framework for the study of more comprehensive properties of the biological network and other forms of complex networks
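
    The flavor of such modular decomposition can be shown with hierarchical clustering on a toy interaction network, using a distance based on shared neighbors. The adjacency matrix below is invented, and the paper's algorithm additionally folds in subcellular localization and expression-profile data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# Invented adjacency matrix for a 6-protein interaction network
# (two triangles bridged by a single edge).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Distance between proteins: 1 - Jaccard similarity of neighbor sets.
n = len(A)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        ni, nj = set(np.flatnonzero(A[i])), set(np.flatnonzero(A[j]))
        D[i, j] = D[j, i] = 1.0 - len(ni & nj) / len(ni | nj)

Z = linkage(squareform(D), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # expect two modules
```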

  14. Large Scale Computing and Storage Requirements for Biological and Environmental Research

    Energy Technology Data Exchange (ETDEWEB)

    DOE Office of Science, Biological and Environmental Research Program Office (BER),

    2009-09-30

    In May 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of Biological and Environmental Research (BER) held a workshop to characterize HPC requirements for BER-funded research over the subsequent three to five years. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. Chief among them: scientific progress in BER-funded research is limited by current allocations of computational resources. Additionally, growth in mission-critical computing -- combined with new requirements for collaborative data manipulation and analysis -- will demand ever increasing computing, storage, network, visualization, reliability and service richness from NERSC. This report expands upon these key points and adds others. It also presents a number of "case studies" as significant representative samples of the needs of science teams within BER. Workshop participants were asked to codify their requirements in this "case study" format, summarizing their science goals, methods of solution, current and 3-5 year computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, "multi-core" environment that is expected to dominate HPC architectures over the next few years.

  15. National Ignition Facility system design requirements NIF integrated computer controls SDR004

    International Nuclear Information System (INIS)

    Bliss, E.

    1996-01-01

    This System Design Requirement document establishes the performance, design, development, and test requirements for the NIF Integrated Computer Control System. The Integrated Computer Control System (ICCS) is covered in NIF WBS element 1.5. This document responds directly to the requirements detailed in the NIF Functional Requirements/Primary Criteria, and is supported by subsystem design requirements documents for each major ICCS Subsystem

  16. Synthetic Biology Outside the Cell: Linking Computational Tools to Cell-Free Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Daniel D. [Integrative Genetics and Genomics, University of California Davis, Davis, CA (United States); Department of Biomedical Engineering, University of California Davis, Davis, CA (United States); Villarreal, Fernando D.; Wu, Fan; Tan, Cheemeng, E-mail: cmtan@ucdavis.edu [Department of Biomedical Engineering, University of California Davis, Davis, CA (United States)

    2014-12-09

    As mathematical models become more commonly integrated into the study of biology, a common language for describing biological processes is manifesting. Many tools have emerged for the simulation of in vivo synthetic biological systems, with only a few examples of prominent work done on predicting the dynamics of cell-free synthetic systems. At the same time, experimental biologists have begun to study dynamics of in vitro systems encapsulated by amphiphilic molecules, opening the door for the development of a new generation of biomimetic systems. In this review, we explore both in vivo and in vitro models of biochemical networks with a special focus on tools that could be applied to the construction of cell-free expression systems. We believe that quantitative studies of complex cellular mechanisms and pathways in synthetic systems can yield important insights into what makes cells different from conventional chemical systems.

  17. Synthetic Biology Outside the Cell: Linking Computational Tools to Cell-Free Systems

    International Nuclear Information System (INIS)

    Lewis, Daniel D.; Villarreal, Fernando D.; Wu, Fan; Tan, Cheemeng

    2014-01-01

    As mathematical models become more commonly integrated into the study of biology, a common language for describing biological processes is manifesting. Many tools have emerged for the simulation of in vivo synthetic biological systems, with only a few examples of prominent work done on predicting the dynamics of cell-free synthetic systems. At the same time, experimental biologists have begun to study dynamics of in vitro systems encapsulated by amphiphilic molecules, opening the door for the development of a new generation of biomimetic systems. In this review, we explore both in vivo and in vitro models of biochemical networks with a special focus on tools that could be applied to the construction of cell-free expression systems. We believe that quantitative studies of complex cellular mechanisms and pathways in synthetic systems can yield important insights into what makes cells different from conventional chemical systems.

  18. Biospheric Life Support - integrating biological regeneration into protection of humans in space.

    Science.gov (United States)

    Rocha, Mauricio; Iha, Koshun

    2016-07-01

    retirement (2016). The extension will allow partner agencies to deploy new experiments there, resuming basic research focused on more forward-looking goals. For deep space, where consumables logistics becomes more difficult and habitability an issue as Earth's view diminishes, further research has been recommended. Four major areas have been identified for human protection: (1) radiation mitigation; (2) highly recyclable bio-regenerative (BR) LSS; (3) micro-gravity countermeasures, including artificial gravity (AG); and (4) psychological safety. To contribute to the efforts addressing these issues, a basic lab/virtual iterative research program has been proposed, assuming (in a worst-case scenario) that: I) it will not be possible to send people on long deep-space missions safely with the current (low quality of life) support technology (ISS micro-gravity 'up-gradings'); II) the alternative of implanting a human-supportive biosphere on the Mars surface would also not be possible, due to environmental/evolutionary constraints (life could adapt and survive, but not necessarily in ways that favor humans). From these considerations arises the question: would an intermediate approach be possible in which, by applying the artificial-gravity concept to spacecraft (S/Cs), a fragment of Earth's bio-regenerative environment could be integrated inside reusable manned vehicles, thus enhancing their habitability and autonomy on long deep-space missions? A provisional answer/hypothesis to this research question has been provided, and to test it a small AG+BR bench simulator (plus computer methods) has been devised.

  19. MOLNs: A cloud platform for interactive, reproducible, and scalable spatial stochastic computational experiments in systems biology using PyURDME.

    Science.gov (United States)

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.

  20. Concept of development of integrated computer - based control system for 'Ukryttia' object

    International Nuclear Information System (INIS)

    Buyal'skij, V.M.; Maslov, V.P.

    2003-01-01

    The structural concept for developing an integrated computer-based control system for the Chernobyl NPP 'Ukryttia' Object is presented, based on a general concept of the integrated Computer-based Control System (CCS) design process for organizational and technical management subjects. The concept draws on state-of-the-art architectural design techniques and allows modern computer-aided facilities to be used for developing the functional model, the information (logical and physical) models, and the object model of the system under design.

  1. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.

    2016-06-08

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit effectively contemporary multi-core hardware. We extend study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC simply achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of

  2. Unstructured Computational Aerodynamics on Many Integrated Core Architecture

    KAUST Repository

    Al Farhan, Mohammed A.; Kaushik, Dinesh K.; Keyes, David E.

    2016-01-01

    Shared memory parallelization of the flux kernel of PETSc-FUN3D, an unstructured tetrahedral mesh Euler flow code previously studied for distributed memory and multi-core shared memory, is evaluated on up to 61 cores per node and up to 4 threads per core. We explore several thread-level optimizations to improve flux kernel performance on the state-of-the-art many integrated core (MIC) Intel processor Xeon Phi “Knights Corner,” with a focus on strong thread scaling. While the linear algebraic kernel is bottlenecked by memory bandwidth for even modest numbers of cores sharing a common memory, the flux kernel, which arises in the control volume discretization of the conservation law residuals and in the formation of the preconditioner for the Jacobian by finite-differencing the conservation law residuals, is compute-intensive and is known to exploit effectively contemporary multi-core hardware. We extend study of the performance of the flux kernel to the Xeon Phi in three thread affinity modes, namely scatter, compact, and balanced, in both offload and native mode, with and without various code optimizations to improve alignment and reduce cache coherency penalties. Relative to baseline “out-of-the-box” optimized compilation, code restructuring optimizations provide about 3.8x speedup using the offload mode and about 5x speedup using the native mode. Even with these gains for the flux kernel, with respect to execution time the MIC simply achieves par with optimized compilation on a contemporary multi-core Intel CPU, the 16-core Sandy Bridge E5 2670. Nevertheless, the optimizations employed to reduce the data motion and cache coherency protocol penalties of the MIC are expected to be of value for CFD and many other unstructured applications as many-core architecture evolves. We explore large-scale distributed-shared memory performance on the Cray XC40 supercomputer, to demonstrate that optimizations employed on Phi hybridize to this context, where each of

  3. Mission and Sustainability of Informatics for Integrating Biology and the Bedside (i2b2).

    Science.gov (United States)

    Murphy, Shawn; Wilcox, Adam

    2014-01-01

    A visible example of a successfully disseminated research project in the healthcare space is Informatics for Integrating Biology and the Bedside, or i2b2. The project serves to provide the software that can allow a researcher to do direct, self-serve queries against the electronic healthcare data from a hospital. The goals of these queries are to find cohorts of patients that fit specific profiles, while providing for patient privacy and discretion. Sustaining this resource and keeping its direction has always been a challenge, but ever more so as the ten-year National Centers for Biomedical Computing (NCBCs) sunset their funding. Building on the i2b2 structures has helped the dissemination plans for grants leveraging it because it is a disseminated national resource. While this has not directly increased the support of i2b2 internally, it has increased the ability of institutions to leverage the resource and generally leads to increased institutional support. The successful development, use, and dissemination of i2b2 has been significant in clinical research and informatics. Its evolution has been from a local research data infrastructure to one disseminated more broadly than any other product of the National Centers for Biomedical Computing, and an infrastructure spawning larger investments than were originally used to create it. Throughout this, there were two main lessons about the benefits of dissemination: that people have great creativity in utilizing a resource in different ways and that broader system use can make the system more robust. One option for long-term sustainability of the central authority would be to translate the function to an industry partner. Another option currently being pursued is to create a foundation that would be a central authority for the project. Over the past 10 years, i2b2 has risen to be an important staple in the toolkit of health care researchers. There are now over 110 hospitals that use i2b2 for research. This open

  4. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, supports learning through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges, chiefly a dense curriculum that leaves little room for a new numerical computation course and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students in the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using the spreadsheet Excel combined with another course. These results complement studies on how to integrate numerical computation into physics learning using the spreadsheet Excel.
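
    For concreteness, the kind of computation students build column by column in Excel, for example Euler's method for exponential decay, where each row derives the next value from the previous one, looks like this. It is shown here in Python; in a spreadsheet, the loop body corresponds to a formula such as B3 = B2 + $E$1*(-$E$2*B2) filled down a column, and all constants are assumed values for illustration.

```python
# Euler's method for dN/dt = -k*N, mirroring a spreadsheet column.
k, dt = 0.5, 0.1   # assumed decay constant and time step
N = 1000.0         # assumed initial quantity
for step in range(1, 11):
    N = N + dt * (-k * N)          # next row computed from the previous row
    print(f"t = {step * dt:.1f}   N = {N:.2f}")
```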

  5. Integrative computational approach for genome-based study of microbial lipid-degrading enzymes.

    Science.gov (United States)

    Vorapreeda, Tayvich; Thammarongtham, Chinae; Laoteng, Kobkul

    2016-07-01

    Lipid-degrading or lipolytic enzymes have gained enormous attention in academic and industrial sectors. Several efforts are underway to discover new lipase enzymes from a variety of microorganisms with particular catalytic properties to be used for extensive applications. In addition, various tools and strategies have been implemented to unravel the functional relevance of the versatile lipid-degrading enzymes for special purposes. This review highlights the study of microbial lipid-degrading enzymes through an integrative computational approach. The identification of putative lipase genes from microbial genomes and metagenomic libraries using homology-based mining is discussed, with an emphasis on sequence analysis of conserved motifs and enzyme topology. Molecular modelling of three-dimensional structure on the basis of sequence similarity is shown to be a potential approach for exploring the structural and functional relationships of candidate lipase enzymes. The perspectives on a discriminative framework of cutting-edge tools and technologies, including bioinformatics, computational biology, functional genomics and functional proteomics, intended to facilitate rapid progress in understanding lipolysis mechanism and to discover novel lipid-degrading enzymes of microorganisms are discussed.
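
    One concrete piece of the sequence-analysis step, scanning candidate sequences for the conserved lipase/esterase catalytic motif G-X-S-X-G, can be sketched with a regular expression. The sequences below are invented, and a real pipeline would also check enzyme topology and structural context, as the review describes.

```python
import re

# Classical lipase/esterase catalytic-site motif: G-X-S-X-G.
MOTIF = re.compile(r"G.S.G")

# Invented candidate protein sequences keyed by identifier.
candidates = {
    "orf001": "MKTAYIAGHSLGAALALE",
    "orf002": "MLLPRAANDEKQW",
}

for name, seq in candidates.items():
    m = MOTIF.search(seq)
    if m:
        print(f"{name}: motif {m.group()} at position {m.start() + 1}")
    else:
        print(f"{name}: no G-X-S-X-G motif found")
```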

  6. Cloud Computing: Should It Be Integrated into the Curriculum?

    Science.gov (United States)

    Changchit, Chuleeporn

    2015-01-01

    Cloud computing has become increasingly popular among users and businesses around the world, and education is no exception. Cloud computing can bring an increased number of benefits to an educational setting, not only for its cost effectiveness, but also for the thirst for technology that college students have today, which allows learning and…

  7. Integrating Human and Computer Intelligence. Technical Report No. 32.

    Science.gov (United States)

    Pea, Roy D.

    This paper explores the thesis that advances in computer applications and artificial intelligence have important implications for the study of development and learning in psychology. Current approaches to the use of computers as devices for problem solving, reasoning, and thinking--i.e., expert systems and intelligent tutoring systems--are…

  8. Gesture Recognition by Computer Vision : An Integral Approach

    NARCIS (Netherlands)

    Lichtenauer, J.F.

    2009-01-01

    The fundamental objective of this Ph.D. thesis is to gain more insight into what is involved in the practical application of a computer vision system, when the conditions of use cannot be controlled completely. The basic assumption is that research on isolated aspects of computer vision often leads

  9. Integrative computational and experimental approaches to establish a post-myocardial infarction knowledge map.

    Directory of Open Access Journals (Sweden)

    Nguyen T Nguyen

    2014-03-01

    Full Text Available Vast research efforts have been devoted to providing clinical diagnostic markers of myocardial infarction (MI), leading to over one million abstracts associated with "MI" and "Cardiovascular Diseases" in PubMed. Accumulation of the research results imposed a challenge to integrate and interpret these results. To address this problem and better understand how the left ventricle (LV) remodels post-MI at both the molecular and cellular levels, we propose here an integrative framework that couples computational methods and experimental data. We selected an initial set of MI-related proteins from published human studies and constructed an MI-specific protein-protein-interaction network (MIPIN). Structural and functional analysis of the MIPIN showed that the post-MI LV exhibited increased representation of proteins involved in transcriptional activity, inflammatory response, and extracellular matrix (ECM) remodeling. Known plasma or serum expression changes of the MIPIN proteins in patients with MI were acquired by data mining of the PubMed and UniProt knowledgebase, and served as a training set to predict unlabeled MIPIN protein changes post-MI. The predictions were validated with published results in PubMed, suggesting prognosticative capability of the MIPIN. Further, we established the first knowledge map related to the post-MI response, providing a major step towards enhancing our understanding of molecular interactions specific to MI and linking the molecular interaction, cellular responses, and biological processes to quantify LV remodeling.
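
    A small illustration of the network-construction step, building a protein-protein interaction graph and ranking proteins by connectivity, is given below with NetworkX. The edge list is invented for illustration and is not the actual MIPIN.

```python
import networkx as nx

# Invented post-MI interaction edges (protein pairs).
edges = [
    ("MMP9", "TIMP1"), ("MMP9", "COL1A1"), ("IL6", "STAT3"),
    ("IL6", "MMP9"), ("TNF", "IL6"), ("STAT3", "COL1A1"),
]
G = nx.Graph(edges)

# Degree and betweenness centrality highlight the hub proteins that
# structural analysis of an interactome typically flags first.
degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)
for node in sorted(G, key=betweenness.get, reverse=True):
    print(f"{node:7s} degree={degree[node]}  betweenness={betweenness[node]:.2f}")
```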

  10. Engineering challenges of BioNEMS: the integration of microfluidics, micro- and nanodevices, models and external control for systems biology.

    Science.gov (United States)

    Wikswo, J P; Prokop, A; Baudenbacher, F; Cliffel, D; Csukas, B; Velkovsky, M

    2006-08-01

    Systems biology, i.e. quantitative, postgenomic, postproteomic, dynamic, multiscale physiology, addresses in an integrative, quantitative manner the shockwave of genetic and proteomic information using computer models that may eventually have 10^6 dynamic variables with non-linear interactions. Historically, single biological measurements are made over minutes, suggesting the challenge of specifying 10^6 model parameters. Except for fluorescence and micro-electrode recordings, most cellular measurements have inadequate bandwidth to discern the time course of critical intracellular biochemical events. Micro-array expression profiles of thousands of genes cannot determine quantitative dynamic cellular signalling and metabolic variables. Major gaps must be bridged between the computational vision and experimental reality. The analysis of cellular signalling dynamics and control requires, first, micro- and nano-instruments that measure simultaneously multiple extracellular and intracellular variables with sufficient bandwidth; secondly, the ability to open existing internal control and signalling loops; thirdly, external BioMEMS micro-actuators that provide high bandwidth feedback and externally addressable intracellular nano-actuators; and, fourthly, real-time, closed-loop, single-cell control algorithms. The unravelling of the nested and coupled nature of cellular control loops requires simultaneous recording of multiple single-cell signatures. Externally controlled nano-actuators, needed to effect changes in the biochemical, mechanical and electrical environment both outside and inside the cell, will provide a major impetus for nanoscience.

  11. Integration of distributed computing into the drug discovery process.

    Science.gov (United States)

    von Korff, Modest; Rufener, Christian; Stritt, Manuel; Freyss, Joel; Bär, Roman; Sander, Thomas

    2011-02-01

    Grid computing offers an opportunity to gain massive computing power at low cost. We give a short introduction to the drug discovery process and exemplify the use of grid computing for image processing, docking and 3D pharmacophore descriptor calculations. The principle of a grid and its architecture are briefly explained. More emphasis is laid on the issues related to a company-wide grid installation and embedding the grid into the research process. The future of grid computing in drug discovery is discussed in the expert opinion section. Most needed, besides reliable algorithms to predict compound properties, is embedding the grid seamlessly into the discovery process. User-friendly access to powerful algorithms without restrictions such as a limited number of licenses has to be the goal of grid computing in drug discovery.

  12. Computation of Hopkins' 3-circle integrals using Zernike expansions

    NARCIS (Netherlands)

    Janssen, A.J.E.M.

    2011-01-01

    The integrals occurring in optical diffraction theory under conditions of partial coherence have the form of an incomplete autocorrelation integral of the pupil function of the optical system. The incompleteness is embodied by a spatial coherence function of limited extent. In the case of circular

  13. Integrated Visible Photonics for Trapped-Ion Quantum Computing

    Science.gov (United States)

    2017-06-10

    Trapped ions, with their long coherence times, strong Coulomb interactions, and optical addressability, hold great promise for the implementation of practical quantum information processing. We demonstrate a dual-layered silicon nitride photonic platform for integration with ion traps, with the capability to optically address individual ions at several wavelengths, including an etch to provide a smooth oxide facet and clearance for fiber positioning for edge input coupling.

  14. Measurement of the Ecological Integrity of Cerrado Streams Using Biological Metrics and the Index of Habitat Integrity

    Directory of Open Access Journals (Sweden)

    Deusiano Florêncio dos Reis

    2017-01-01

    Full Text Available Generally, aquatic communities reflect the effects of anthropogenic changes such as deforestation or organic pollution. The Cerrado stands among the ecosystems most threatened by human activities in Brazil. In order to evaluate the ecological integrity of the streams in a preserved watershed in the Northern Cerrado biome, corresponding to a mosaic of ecosystems in transition to the Amazonia biome in Brazil, biological metrics related to the diversity, structure, and sensitivity of aquatic macroinvertebrates were calculated. Sampling included collections along stretches of 200 m of nine streams, measurements of abiotic variables (temperature, electrical conductivity, pH, total dissolved solids, dissolved oxygen, and discharge), and the Index of Habitat Integrity (HII). The values of the abiotic variables and the HII indicated that most of the streams have good ecological integrity, due to high oxygen levels and low concentrations of dissolved solids and low electrical conductivity. Two streams showed altered HII scores, mainly related to small dams for recreational and domestic use, use of Cerrado natural pasture for cattle raising, and spot deforestation in bathing areas. However, this finding is not reflected in the biological metrics that were used. Considering all nine streams, only two showed satisfactory ecological quality (measured by the Biological Monitoring Working Party (BMWP) index, total richness, and EPT (Ephemeroptera, Plecoptera, and Trichoptera) richness), only one of which had a low HII score. These results indicate that punctual measures of abiotic parameters do not reveal the long-term impacts of anthropic activities in these streams, including the related fire management of pasture that annually alters the vegetation matrix and may act as a disturbance for the macroinvertebrate communities. Due to this, biomonitoring of low-order streams in Cerrado ecosystems of Northern Central Brazil by different biotic metrics and also physical attributes of the

  15. The potential of text mining in data integration and network biology for plant research: a case study on Arabidopsis.

    Science.gov (United States)

    Van Landeghem, Sofie; De Bodt, Stefanie; Drebert, Zuzanna J; Inzé, Dirk; Van de Peer, Yves

    2013-03-01

    Despite the availability of various data repositories for plant research, a wealth of information currently remains hidden within the biomolecular literature. Text mining provides the necessary means to retrieve these data through automated processing of texts. However, only recently has advanced text mining methodology been implemented with sufficient computational power to process texts at a large scale. In this study, we assess the potential of large-scale text mining for plant biology research in general, and for network biology in particular, using a state-of-the-art text mining system applied to all PubMed abstracts and PubMed Central full texts. We present an extensive evaluation of the textual data for Arabidopsis thaliana, assessing the overall accuracy of this new resource for usage in plant network analyses. Furthermore, we combine text mining information with both protein-protein and regulatory interactions from experimental databases. Clusters of tightly connected genes are delineated from the resulting network, illustrating how such an integrative approach is essential to grasp the current knowledge available for Arabidopsis and to uncover gene information through guilt by association. All large-scale data sets, as well as the manually curated textual data, are made publicly available, thereby stimulating the application of text mining data in future plant biology studies.
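
    The integration step described above, merging text-mined interactions with experimentally curated ones and delineating tightly connected clusters, can be sketched in a few lines. The gene identifiers and edges below are invented, and networkx's greedy modularity clustering merely stands in for whatever clustering procedure the authors used.

        # Sketch only: merge two interaction sources into one network and
        # extract dense gene clusters for guilt-by-association analysis.
        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        text_mined = [("AT1G01010", "AT1G01020"), ("AT1G01020", "AT1G01030")]
        experimental = [("AT1G01030", "AT1G01040"), ("AT1G01010", "AT1G01030")]

        G = nx.Graph()
        G.add_edges_from(text_mined, source="text")
        G.add_edges_from(experimental, source="experiment")

        # Genes in the same dense cluster are candidate partners in a process.
        for cluster in greedy_modularity_communities(G):
            print(sorted(cluster))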

  16. Methods of integrating Islamic values in teaching biology for shaping attitude and character

    Science.gov (United States)

    Listyono; Supardi, K. I.; Hindarto, N.; Ridlo, S.

    2018-03-01

    Learning is expected to develop the potential of learners so that they possess spiritual attitude: moral strength, self-control, personality, intelligence, noble character, as well as the skills needed by themselves, society, and nation. The implementation of role and morale in learning is an alternative way that is expected to answer this challenge. The solution offered is to infuse Islamic religious material into the teaching of biology. The values content of biology teaching materials includes practical value, religious value, daily-life value, socio-political value, and artistic value. Islamic religious values (Qur'an and Hadith) offer various methods that can touch human feelings and souls and generate motivation. Integrating learning with Islamic values can be done by a deductive or an inductive approach. Appropriate methods of integration are the amtsal (analogy) method, the hiwar (dialogue) method, the targhib and tarhib (encouragement and warning) method, and the example method (giving a noble role model / good example). The right strategy for integrating Islamic values is outlined in the design of the lesson plan. The integration of Islamic values in the lesson plan will help teachers build students' character, because Islamic values can be implemented in every learning step, so that students become accustomed to receiving character values in this integrated learning.

  17. Integrating user studies into computer graphics-related courses.

    Science.gov (United States)

    Santos, B S; Dias, P; Silva, S; Ferreira, C; Madeira, J

    2011-01-01

    Computer graphics and visualization are essentially about producing images for a target audience, be it the millions watching a new CG-animated movie or the small group of researchers trying to gain insight into the large amount of numerical data resulting from a scientific experiment. To ascertain the final images' effectiveness for their intended audience, or the designed visualizations' accuracy and expressiveness, formal user studies are often essential. In human-computer interaction (HCI), such user studies play a similarly fundamental role in evaluating the usability and applicability of interaction methods and metaphors for the various devices and software systems we use. This paper discusses how such user studies can be integrated into computer graphics-related courses.

  18. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  19. A Scheme for Verification on Data Integrity in Mobile Multicloud Computing Environment

    Directory of Open Access Journals (Sweden)

    Laicheng Cao

    2016-01-01

    Full Text Available In order to verify data integrity in a mobile multicloud computing environment, a MMCDIV (mobile multicloud data integrity verification) scheme is proposed. First, the computability and nondegeneracy of the verification are obtained by adopting the BLS (Boneh-Lynn-Shacham) short signature scheme. Second, communication overhead is reduced based on HVR (Homomorphic Verifiable Response) with random masking and sMHT (sequence-enforced Merkle hash tree) construction. Finally, considering the resource constraints of mobile devices, data integrity is verified with lightweight computing and low data transmission. The scheme addresses the limited communication and computing power of mobile devices, supports dynamic data operations in a mobile multicloud environment, and allows data integrity to be verified without using the direct source file block. Experimental results also demonstrate that this scheme achieves a lower cost of computing and communications.
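
    The sMHT component mentioned above builds on the classical Merkle hash tree. The sketch below is not the MMCDIV scheme itself (it omits BLS signatures, random masking, and sequence enforcement); it only shows how a single block is verified against a root hash without transmitting the whole file.

        # Plain Merkle tree over file blocks, using only the standard library.
        import hashlib

        def h(b: bytes) -> bytes:
            return hashlib.sha256(b).digest()

        def merkle_root(blocks):
            level = [h(b) for b in blocks]
            while len(level) > 1:
                if len(level) % 2:               # duplicate last node if odd
                    level.append(level[-1])
                level = [h(level[i] + level[i + 1])
                         for i in range(0, len(level), 2)]
            return level[0]

        blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
        root = merkle_root(blocks)

        # Verifier recomputes the root from block 1 plus its authentication path.
        path = [h(blocks[0]), h(h(blocks[2]) + h(blocks[3]))]
        node = h(blocks[1])
        node = h(path[0] + node)                 # sibling on the left
        node = h(node + path[1])                 # sibling on the right
        assert node == root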

  20. Trait-Dependent Biogeography: (Re)Integrating Biology into Probabilistic Historical Biogeographical Models.

    Science.gov (United States)

    Sukumaran, Jeet; Knowles, L Lacey

    2018-04-20

    The development of process-based probabilistic models for historical biogeography has transformed the field by grounding it in modern statistical hypothesis testing. However, most of these models abstract away biological differences, reducing species to interchangeable lineages. We present here the case for reintegration of biology into probabilistic historical biogeographical models, allowing a broader range of questions about biogeographical processes beyond ancestral range estimation or simple correlation between a trait and a distribution pattern, as well as allowing us to assess how inferences about ancestral ranges themselves might be impacted by differential biological traits. We show how new approaches to inference might cope with the computational challenges resulting from the increased complexity of these trait-based historical biogeographical models. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. PathJam: a new service for integrating biological pathway information

    Directory of Open Access Journals (Sweden)

    Glez-Peña Daniel

    2010-03-01

    Full Text Available Biological pathways are crucial to much of the scientific research today, including the study of specific biological processes related to human diseases. PathJam is a new comprehensive and freely accessible web-server application integrating scattered human pathway annotation from several public sources. The tool has been designed both (i) to be intuitive for wet-lab users, providing statistical enrichment analysis of pathway annotations, and (ii) to support the development of new integrative pathway applications. PathJam's unique features and advantages include interactive graphs linking pathways and genes of interest, downloadable results in fully compatible formats, GSEA-compatible output files and a standardized RESTful API.

  2. Biosocial Conservation: Integrating Biological and Ethnographic Methods to Study Human-Primate Interactions.

    Science.gov (United States)

    Setchell, Joanna M; Fairet, Emilie; Shutt, Kathryn; Waters, Siân; Bell, Sandra

    2017-01-01

    Biodiversity conservation is one of the grand challenges facing society. Many people interested in biodiversity conservation have a background in wildlife biology. However, the diverse social, cultural, political, and historical factors that influence the lives of people and wildlife can be investigated fully only by incorporating social science methods, ideally within an interdisciplinary framework. Cultural hierarchies of knowledge and the hegemony of the natural sciences create a barrier to interdisciplinary understandings. Here, we review three different projects that confront this difficulty, integrating biological and ethnographic methods to study conservation problems. The first project involved wildlife foraging on crops around a newly established national park in Gabon. Biological methods revealed the extent of crop loss, the species responsible, and an effect of field isolation, while ethnography revealed institutional and social vulnerability to foraging wildlife. The second project concerned great ape tourism in the Central African Republic. Biological methods revealed that gorilla tourism poses risks to gorillas, while ethnography revealed why people seek close proximity to gorillas. The third project focused on humans and other primates living alongside one another in Morocco. Incorporating shepherds in the coproduction of ecological knowledge about primates built trust and altered attitudes to the primates. These three case studies demonstrate how the integration of biological and social methods can help us to understand the sustainability of human-wildlife interactions, and thus promote coexistence. In each case, an integrated biosocial approach incorporating ethnographic data produced results that would not otherwise have come to light. Research that transcends conventional academic boundaries requires the openness and flexibility to move beyond one's comfort zone to understand and acknowledge the legitimacy of "other" kinds of knowledge. It is

  3. Root Systems Biology: Integrative Modeling across Scales, from Gene Regulatory Networks to the Rhizosphere1

    Science.gov (United States)

    Hill, Kristine; Porco, Silvana; Lobet, Guillaume; Zappala, Susan; Mooney, Sacha; Draye, Xavier; Bennett, Malcolm J.

    2013-01-01

    Genetic and genomic approaches in model organisms have advanced our understanding of root biology over the last decade. Recently, however, systems biology and modeling have emerged as important approaches, as our understanding of root regulatory pathways has become more complex and interpreting pathway outputs has become less intuitive. To relate root genotype to phenotype, we must move beyond the examination of interactions at the genetic network scale and employ multiscale modeling approaches to predict emergent properties at the tissue, organ, organism, and rhizosphere scales. Understanding the underlying biological mechanisms and the complex interplay between systems at these different scales requires an integrative approach. Here, we describe examples of such approaches and discuss the merits of developing models to span multiple scales, from network to population levels, and to address dynamic interactions between plants and their environment. PMID:24143806

  4. Integrating external biological knowledge in the construction of regulatory networks from time-series expression data

    Directory of Open Access Journals (Sweden)

    Lo Kenneth

    2012-08-01

    Full Text Available Abstract Background Inference about regulatory networks from high-throughput genomics data is of great interest in systems biology. We present a Bayesian approach to infer gene regulatory networks from time series expression data by integrating various types of biological knowledge. Results We formulate network construction as a series of variable selection problems and use linear regression to model the data. We extend the Bayesian model averaging (BMA) variable selection method to select regulators in the regression framework, and we summarize the external biological knowledge by an informative prior probability distribution over the candidate regression models. Conclusions We demonstrate our method on simulated data and a set of time-series microarray experiments measuring the effect of a drug perturbation on gene expression levels, and show that it outperforms leading regression-based methods in the literature.
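
    The flavour of the approach, scoring candidate regulator subsets by model fit plus a knowledge-based prior, can be sketched compactly. The data and prior weights below are invented, and a BIC term approximates the marginal likelihood; this is a sketch of BMA-style variable selection, not the authors' implementation.

        # Prior-weighted scoring over all subsets of three candidate regulators.
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 3))                 # candidate regulator profiles
        y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)   # target gene profile

        prior = {0: 0.8, 1: 0.1, 2: 0.1}             # external-knowledge priors

        def bic(subset):
            if subset:
                Xs = X[:, list(subset)]
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                resid = y - Xs @ beta
            else:
                resid = y
            n, k = len(y), len(subset) + 1
            return n * np.log(resid @ resid / n) + k * np.log(n)

        scores = {}
        for r in range(4):
            for s in itertools.combinations(range(3), r):
                log_prior = sum(np.log(prior[i]) for i in s) + \
                            sum(np.log(1 - prior[i]) for i in set(range(3)) - set(s))
                scores[s] = -0.5 * bic(s) + log_prior    # ~ log posterior weight

        print("selected regulators:", max(scores, key=scores.get))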

  5. Biologic and clinical aspects of integration of different bone substitutes in oral surgery: a literature review.

    Science.gov (United States)

    Zizzari, Vincenzo Luca; Zara, Susi; Tetè, Giulia; Vinci, Raffaele; Gherlone, Enrico; Cataldi, Amelia

    2016-10-01

    Many bone substitutes have been proposed for bone regeneration, and researchers have focused on the interactions occurring between grafts and host tissue, as the biologic response of host tissue is related to the origin of the biomaterial. Bone substitutes used in oral and maxillofacial surgery could be categorized according to their biologic origin and source as autologous bone graft when obtained from the same individual receiving the graft; homologous bone graft, or allograft, when harvested from an individual other than the one receiving the graft; animal-derived heterologous bone graft, or xenograft, when derived from a species other than human; and alloplastic graft, made of bone substitute of synthetic origin. The aim of this review is to describe the most commonly used bone substitutes, according to their origin, and to focus on the biologic events that ultimately lead to the integration of a biomaterial with the host tissue. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Dispensing processes impact apparent biological activity as determined by computational and statistical analyses.

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    Full Text Available Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research.

  7. Advanced computer algebra algorithms for the expansion of Feynman integrals

    International Nuclear Information System (INIS)

    Ablinger, Jakob; Round, Mark; Schneider, Carsten

    2012-10-01

    Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+ε-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products.
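
    The final step, solving a recurrence in n to decide whether the expansion coefficients admit a closed form in nested sums and products, can be illustrated with a toy first-order recurrence in SymPy; this small example is unrelated to the authors' actual computer algebra packages.

        # Solving a toy recurrence (n+1) f(n+1) - (n+2) f(n) = 0 with f(1) = 2.
        import sympy as sp

        n = sp.Symbol("n", integer=True, positive=True)
        f = sp.Function("f")

        sol = sp.rsolve((n + 1) * f(n + 1) - (n + 2) * f(n), f(n), {f(1): 2})
        print(sol)    # closed form: n + 1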

  8. Advanced computer algebra algorithms for the expansion of Feynman integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, Jakob; Round, Mark; Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2012-10-15

    Two-point Feynman parameter integrals, with at most one mass and containing local operator insertions in 4+ε-dimensional Minkowski space, can be transformed to multi-integrals or multi-sums over hyperexponential and/or hypergeometric functions depending on a discrete parameter n. Given such a specific representation, we utilize an enhanced version of the multivariate Almkvist-Zeilberger algorithm (for multi-integrals) and a common summation framework of the holonomic and difference field approach (for multi-sums) to calculate recurrence relations in n. Finally, solving the recurrence we can decide efficiently if the first coefficients of the Laurent series expansion of a given Feynman integral can be expressed in terms of indefinite nested sums and products; if yes, the all n solution is returned in compact representations, i.e., no algebraic relations exist among the occurring sums and products.

  9. Improving integrative searching of systems chemical biology data using semantic annotation.

    Science.gov (United States)

    Chen, Bin; Ding, Ying; Wild, David J

    2012-03-08

    Systems chemical biology and chemogenomics are considered critical, integrative disciplines in modern biomedical research, but require data mining of large, integrated, heterogeneous datasets from chemistry and biology. We previously developed an RDF-based resource called Chem2Bio2RDF that enabled querying of such data using the SPARQL query language. Whilst this work has proved useful in its own right as one of the first major resources in these disciplines, its utility could be greatly improved by the application of an ontology for annotation of the nodes and edges in the RDF graph, enabling a much richer range of semantic queries to be issued. We developed a generalized chemogenomics and systems chemical biology OWL ontology called Chem2Bio2OWL that describes the semantics of chemical compounds, drugs, protein targets, pathways, genes, diseases and side-effects, and the relationships between them. The ontology also includes data provenance. We used it to annotate our Chem2Bio2RDF dataset, making it a rich semantic resource. Through a series of scientific case studies we demonstrate how this (i) simplifies the process of building SPARQL queries, (ii) enables useful new kinds of queries on the data and (iii) makes possible intelligent reasoning and semantic graph mining in chemogenomics and systems chemical biology. Chem2Bio2OWL is available at http://chem2bio2rdf.org/owl. The document is available at http://chem2bio2owl.wikispaces.com.
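
    As a flavour of what querying such an annotated resource looks like, the sketch below runs a SPARQL query over a tiny in-memory RDF graph with rdflib. The ex: vocabulary and the triples are invented stand-ins for Chem2Bio2OWL classes such as compounds and protein targets.

        # Tiny SPARQL example with rdflib; not actual Chem2Bio2RDF data.
        from rdflib import Graph, Namespace, RDF

        EX = Namespace("http://example.org/chem2bio2owl#")
        g = Graph()
        g.add((EX.aspirin, RDF.type, EX.Compound))
        g.add((EX.aspirin, EX.targets, EX.PTGS2))
        g.add((EX.PTGS2, RDF.type, EX.ProteinTarget))

        q = """
        PREFIX ex: <http://example.org/chem2bio2owl#>
        SELECT ?compound ?target WHERE {
            ?compound a ex:Compound ;
                      ex:targets ?target .
            ?target a ex:ProteinTarget .
        }"""
        for row in g.query(q):
            print(row.compound, row.target)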

  10. Biological and psychological rhythms: an integrative approach to rhythm disturbances in autistic disorder.

    Science.gov (United States)

    Botbol, Michel; Cabon, Philippe; Kermarrec, Solenn; Tordjman, Sylvie

    2013-09-01

    Biological rhythms are crucial phenomena that are perfect examples of the adaptation of organisms to their environment. A considerable amount of work has described different types of biological rhythms (from circadian to ultradian), individual differences in their patterns and the complexity of their regulation. In particular, the regulation and maturation of the sleep-wake cycle have been thoroughly studied. Its desynchronization, both endogenous and exogenous, is now well understood, as are its consequences for cognitive impairments and health problems. From a completely different perspective, psychoanalysts have shown a growing interest in the rhythms of psychic life. This interest extends beyond the original focus of psychoanalysis on dreams and the sleep-wake cycle, incorporating central theoretical and practical psychoanalytic issues related to the core functioning of the psychic life: the rhythmic structures of drive dynamics, intersubjective developmental processes and psychic containment functions. Psychopathological and biological approaches to the study of infantile autism reveal the importance of specific biological and psychological rhythmic disturbances in this disorder. Considering data and hypotheses from both perspectives, this paper proposes an integrative approach to the study of these rhythmic disturbances and offers an etiopathogenic hypothesis based on this integrative approach. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Conservation Biology and Traditional Ecological Knowledge: Integrating Academic Disciplines for Better Conservation Practice

    Directory of Open Access Journals (Sweden)

    Joshua A. Drew

    2006-12-01

    Full Text Available Conservation biology and environmental anthropology are disciplines that are both concerned with the identification and preservation of diversity, in one case biological and in the other cultural. Both conservation biology and the study of traditional ecological knowledge function at the nexus of the social and natural worlds, yet historically there have been major impediments to integrating the two. Here we identify linguistic, cultural, and epistemological barriers between the two disciplines. We argue that the two disciplines are uniquely positioned to inform each other and to provide critical insights and new perspectives on the way these sciences are practiced. We conclude by synthesizing common themes found in conservation success stories, and by making several suggestions on integration. These include cross-disciplinary publication, expanding memberships in professional societies, and conducting multidisciplinary research based on shared interests in ecological process, taxonomy, or geography. Finally, we argue that extinction threats, be they biological or cultural/linguistic, are imminent, and that by bringing these disciplines together we may be able to forge synergistic conservation programs capable of protecting the vivid splendor of life on Earth.

  12. Improving integrative searching of systems chemical biology data using semantic annotation

    Directory of Open Access Journals (Sweden)

    Chen Bin

    2012-03-01

    Full Text Available Abstract Background Systems chemical biology and chemogenomics are considered critical, integrative disciplines in modern biomedical research, but require data mining of large, integrated, heterogeneous datasets from chemistry and biology. We previously developed an RDF-based resource called Chem2Bio2RDF that enabled querying of such data using the SPARQL query language. Whilst this work has proved useful in its own right as one of the first major resources in these disciplines, its utility could be greatly improved by the application of an ontology for annotation of the nodes and edges in the RDF graph, enabling a much richer range of semantic queries to be issued. Results We developed a generalized chemogenomics and systems chemical biology OWL ontology called Chem2Bio2OWL that describes the semantics of chemical compounds, drugs, protein targets, pathways, genes, diseases and side-effects, and the relationships between them. The ontology also includes data provenance. We used it to annotate our Chem2Bio2RDF dataset, making it a rich semantic resource. Through a series of scientific case studies we demonstrate how this (i) simplifies the process of building SPARQL queries, (ii) enables useful new kinds of queries on the data and (iii) makes possible intelligent reasoning and semantic graph mining in chemogenomics and systems chemical biology. Availability Chem2Bio2OWL is available at http://chem2bio2rdf.org/owl. The document is available at http://chem2bio2owl.wikispaces.com.

  13. Development and Assessment of Modules to Integrate Quantitative Skills in Introductory Biology Courses.

    Science.gov (United States)

    Hoffman, Kathleen; Leupen, Sarah; Dowell, Kathy; Kephart, Kerrie; Leips, Jeff

    2016-01-01

    Redesigning undergraduate biology courses to integrate quantitative reasoning and skill development is critical to prepare students for careers in modern medicine and scientific research. In this paper, we report on the development, implementation, and assessment of stand-alone modules that integrate quantitative reasoning into introductory biology courses. Modules are designed to improve skills in quantitative numeracy, interpreting data sets using visual tools, and making inferences about biological phenomena using mathematical/statistical models. We also examine demographic/background data that predict student improvement in these skills through exposure to these modules. We carried out pre/postassessment tests across four semesters and used student interviews in one semester to examine how students at different levels approached quantitative problems. We found that students improved in all skills in most semesters, although there was variation in the degree of improvement among skills from semester to semester. One demographic variable, transfer status, stood out as a major predictor of the degree to which students improved (transfer students achieved much lower gains every semester, despite the fact that pretest scores in each focus area were similar between transfer and nontransfer students). We propose that increased exposure to quantitative skill development in biology courses is effective at building competency in quantitative reasoning. © 2016 K. Hoffman, S. Leupen, et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  14. Effectiveness of computer-assisted learning in biology teaching in primary schools in Serbia

    Directory of Open Access Journals (Sweden)

    Županec Vera

    2013-01-01

    Full Text Available The paper analyzes the comparative effectiveness of Computer-Assisted Learning (CAL) and the traditional teaching method in biology for primary school pupils. A stratified random sample consisted of 214 pupils from two primary schools in Novi Sad. The pupils in the experimental group learned the biology content (Chordate) using CAL, whereas the pupils in the control group learned the same content using traditional teaching. The research design was the pretest-posttest equivalent-groups design. All instruments (the pretest, the posttest and the retest) contained questions belonging to three different cognitive domains: knowing, applying, and reasoning. Arithmetic mean, standard deviation, and standard error were computed using the software package SPSS 14.0, and the t-test was used to establish the differences between these statistical indicators. The analysis of the results of the posttest and the retest showed that the pupils from the CAL group achieved significantly higher quantity and quality of knowledge in all three cognitive domains than the pupils from the traditional group. The results accomplished by the pupils from the CAL group suggest that individual CAL should be more present in biology teaching in primary schools, with the aim of raising the quality of biology education in pupils. [Project of the Ministry of Science of the Republic of Serbia, No. 179010: Quality of the Educational System in Serbia in the European Perspective]
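
    The group comparison reported above is a standard independent-samples t-test; a minimal illustration with SciPy (the score arrays are invented, not the study's data) is:

        # Independent-samples t-test between CAL and traditional groups.
        from scipy import stats

        cal_posttest = [78, 85, 90, 72, 88, 81, 79, 93]
        traditional_posttest = [70, 74, 69, 80, 65, 73, 71, 75]

        t, p = stats.ttest_ind(cal_posttest, traditional_posttest)
        print(f"t = {t:.2f}, p = {p:.4f}")    # p < 0.05 -> significant difference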

  15. Proceedings of the 8. Mediterranean Conference on Medical and Biological Engineering and Computing (Medicon `98)

    Energy Technology Data Exchange (ETDEWEB)

    Christofides, Stelios; Pattichis, Constantinos; Schizas, Christos; Keravnou-Papailiou, Elpida; Kaplanis, Prodromos; Spyros, Spyrou; Christodoulides, George; Theodoulou, Yiannis [eds.]

    1999-12-31

    Medicon '98 is the eighth in the series of regional meetings of the International Federation for Medical and Biological Engineering (IFMBE) in the Mediterranean. The goal of Medicon '98 is to provide updated information on the state of the art of medical and biological engineering and computing. Medicon '98 was held in Lemesos, Cyprus, on 14-17 June 1998. The full papers of the proceedings were published on CD and consisted of 190 invited and submitted papers. A book of abstracts was also published in paper form and was available to all the participants. Twenty-seven papers fall within the scope of INIS and deal with Nuclear Medicine, Computerized Tomography, Radiology, Radiotherapy, Magnetic Resonance Imaging and Personnel Dosimetry.

  16. Proceedings of the 8. Mediterranean Conference on Medical and Biological Engineering and Computing (Medicon '98)

    International Nuclear Information System (INIS)

    Christofides, Stelios; Pattichis, Constantinos; Schizas, Christos; Keravnou-Papailiou, Elpida; Kaplanis, Prodromos; Spyros, Spyrou; Christodoulides, George; Theodoulou, Yiannis

    1998-01-01

    Medicon '98 is the eighth in the series of regional meetings of the International Federation for Medical and Biological Engineering (IFMBE) in the Mediterranean. The goal of Medicon '98 is to provide updated information on the state of the art of medical and biological engineering and computing. Medicon '98 was held in Lemesos, Cyprus, on 14-17 June 1998. The full papers of the proceedings were published on CD and consisted of 190 invited and submitted papers. A book of abstracts was also published in paper form and was available to all the participants. Twenty-seven papers fall within the scope of INIS and deal with Nuclear Medicine, Computerized Tomography, Radiology, Radiotherapy, Magnetic Resonance Imaging and Personnel Dosimetry.

  17. Synthetic Biology Outside the Cell: Linking Computational Tools to Cell-Free Systems

    Directory of Open Access Journals (Sweden)

    Daniel Lewis

    2014-12-01

    Full Text Available As mathematical models become more commonly integrated into the study of biology, a common language for describing biological processes is emerging. Many tools have emerged for the simulation of in vivo systems, with only a few examples of prominent work done on predicting the dynamics of cell-free systems. At the same time, experimental biologists have begun to study dynamics of in vitro systems encapsulated by amphiphilic molecules, opening the door for the development of a new generation of biomimetic systems. In this review, we explore both in vivo and in vitro models of biochemical networks with a special focus on tools that could be applied to the construction of cell-free expression systems. We believe that quantitative studies of complex cellular mechanisms and pathways in synthetic systems can yield important insights into what makes cells different from conventional chemical systems.

  18. Computational systems biology and dose-response modeling in relation to new directions in toxicity testing.

    Science.gov (United States)

    Zhang, Qiang; Bhattacharya, Sudin; Andersen, Melvin E; Conolly, Rory B

    2010-02-01

    The new paradigm envisioned for toxicity testing in the 21st century advocates shifting from the current animal-based testing process to a combination of in vitro cell-based studies, high-throughput techniques, and in silico modeling. A strategic component of the vision is the adoption of the systems biology approach to acquire, analyze, and interpret toxicity pathway data. As key toxicity pathways are identified and their wiring details elucidated using traditional and high-throughput techniques, there is a pressing need to understand their qualitative and quantitative behaviors in response to perturbation by both physiological signals and exogenous stressors. The complexity of these molecular networks makes the task of understanding cellular responses merely by human intuition challenging, if not impossible. This process can be aided by mathematical modeling and computer simulation of the networks and their dynamic behaviors. A number of theoretical frameworks were developed in the last century for understanding dynamical systems in science and engineering disciplines. These frameworks, which include metabolic control analysis, biochemical systems theory, nonlinear dynamics, and control theory, can greatly facilitate the process of organizing, analyzing, and understanding toxicity pathways. Such analysis will require a comprehensive examination of the dynamic properties of "network motifs"--the basic building blocks of molecular circuits. Network motifs like feedback and feedforward loops appear repeatedly in various molecular circuits across cell types and enable vital cellular functions like homeostasis, all-or-none response, memory, and biological rhythm. These functional motifs and associated qualitative and quantitative properties are the predominant source of nonlinearities observed in cellular dose response data. Complex response behaviors can arise from toxicity pathways built upon combinations of network motifs. While the field of computational cell
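
    The homeostasis provided by a negative feedback motif, one of the building blocks discussed above, can be demonstrated with a two-variable ODE sketch; the equations and parameters are invented for illustration only.

        # Negative feedback motif: x induces its repressor r, compressing the
        # steady-state response of x to changes in the input "stress".
        import numpy as np
        from scipy.integrate import odeint

        def negative_feedback(y, t, stress):
            x, r = y
            dx = stress / (1.0 + r) - 0.5 * x     # production repressed by r
            dr = 0.4 * x - 0.2 * r                # x induces its own repressor
            return [dx, dr]

        t = np.linspace(0, 50, 500)
        low = odeint(negative_feedback, [0.0, 0.0], t, args=(1.0,))[-1, 0]
        high = odeint(negative_feedback, [0.0, 0.0], t, args=(4.0,))[-1, 0]

        # A 4-fold change in input yields a much smaller fold change in output.
        print(round(low, 2), round(high, 2))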

  19. On the Modelling of Biological Patterns with Mechanochemical Models: Insights from Analysis and Computation

    KAUST Repository

    Moreo, P.; Gaffney, E. A.; García-Aznar, J. M.; Doblaré, M.

    2009-01-01

    The diversity of biological form is generated by a relatively small number of underlying mechanisms. Consequently, mathematical and computational modelling can, and does, provide insight into how cellular level interactions ultimately give rise

  20. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    Science.gov (United States)

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, the challenges of computational biology and bioinformatics education include inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers, discouraging them from undertaking future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of the users. This tool also has the capability of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  1. Proceedings of the 2013 MidSouth Computational Biology and Bioinformatics Society (MCBIOS) Conference.

    Science.gov (United States)

    Wren, Jonathan D; Dozmorov, Mikhail G; Burian, Dennis; Kaundal, Rakesh; Perkins, Andy; Perkins, Ed; Kupfer, Doris M; Springer, Gordon K

    2013-01-01

    The tenth annual conference of the MidSouth Computational Biology and Bioinformatics Society (MCBIOS 2013), "The 10th Anniversary in a Decade of Change: Discovery in a Sea of Data", took place at the Stoney Creek Inn & Conference Center in Columbia, Missouri on April 5-6, 2013. This year's Conference Chairs were Gordon Springer and Chi-Ren Shyu from the University of Missouri and Edward Perkins from the US Army Corps of Engineers Engineering Research and Development Center, who is also the current MCBIOS President (2012-2013). There were 151 registrants and a total of 111 abstracts (51 oral presentations and 60 poster session abstracts).

  2. IS IT POSSIBLE TO INTEGRATE BASIC BIOLOGICAL DISCIPLINES IN A PRIVATE INSTITUTION?

    Directory of Open Access Journals (Sweden)

    L.A. Azzalis

    2008-05-01

    Full Text Available Basic biological disciplines such as biochemistry, genetics and molecular biology have grown faster than any other sciences, and they contribute to the understanding and treatment of a large number of illnesses. On the other hand, teachers cannot assure graduating students that each particular discipline is essential, and these disciplines are often studied separately, without any interdisciplinary integration between them. The new curriculum proposed at Anhembi Morumbi University, a private institution located in the city of São Paulo, incorporates learning blocks that have been designed to integrate basic biological disciplines and clinical contents from the beginning, in order to provide the stimulation and motivation that guide the student through his or her learning. The educational approach has concentrated on the following steps: (1) biochemistry, genetics, and cellular and molecular biology teachers from the institution have elaborated a new discipline, named Biologic Process, whose aim is to integrate the basic biological sciences in a single content; (2) selecting problems that can be discussed in the light of biochemical, genetic and molecular contents, e.g., sickle cell anemia; (3) developing an innovative instructional method that challenges students to "learn to learn", different from problem-based learning, which is not economically feasible at every private university; and (4) assessments that measure knowledge, skills, attitudes and beliefs. We believe that the future pedagogical system in private health universities will be a combination of "classical" presentation of contents, actively involved students in the educational process, and instruction based on either hypothetical or real clinical cases, in order to create the stimulus for the student to continue to integrate basic and clinical investigation.

  3. Use of Graph Database for the Integration of Heterogeneous Biological Data.

    Science.gov (United States)

    Yoon, Byoung-Ha; Kim, Seon-Kyu; Kim, Seon-Young

    2017-03-01

    Understanding complex relationships among heterogeneous biological data is one of the fundamental goals in biology. In most cases, diverse biological data are stored in relational databases, such as MySQL and Oracle, which store data in multiple tables and then infer relationships by multiple-join statements. Recently, a new type of database, called the graph-based database, was developed to natively represent various kinds of complex relationships, and it is widely used among computer science communities and IT industries. Here, we demonstrate the feasibility of using a graph-based database for complex biological relationships by comparing the performance between MySQL and Neo4j, one of the most widely used graph databases. We collected various biological data (protein-protein interaction, drug-target, gene-disease, etc.) from several existing sources, removed duplicate and redundant data, and finally constructed a graph database containing 114,550 nodes and 82,674,321 relationships. When we tested the query execution performance of MySQL versus Neo4j, we found that Neo4j outperformed MySQL in all cases. While Neo4j exhibited a very fast response for various queries, MySQL exhibited latent or unfinished responses for complex queries with multiple-join statements. These results show that using graph-based databases, such as Neo4j, is an efficient way to store complex biological relationships. Moreover, querying a graph database in diverse ways has the potential to reveal novel relationships among heterogeneous biological data.
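
    The contrast drawn above between chained SQL joins and a native graph traversal can be sketched as follows; the node labels, Cypher pattern, and connection details are hypothetical, using the official neo4j Python driver.

        # Multi-hop question: which genes link a drug to diseases?
        from neo4j import GraphDatabase

        # SQL equivalent for contrast (as a comment):
        #   SELECT g.name, s.name FROM drug d
        #     JOIN drug_target dt  ON dt.drug_id = d.id
        #     JOIN gene g          ON g.id = dt.gene_id
        #     JOIN gene_disease gd ON gd.gene_id = g.id
        #     JOIN disease s       ON s.id = gd.disease_id
        #   WHERE d.name = 'aspirin';

        cypher = """
        MATCH (d:Drug {name: $drug})-[:TARGETS]->(g:Gene)
              -[:ASSOCIATED_WITH]->(s:Disease)
        RETURN g.name AS gene, s.name AS disease
        """

        driver = GraphDatabase.driver("bolt://localhost:7687",
                                      auth=("neo4j", "password"))
        with driver.session() as session:
            for record in session.run(cypher, drug="aspirin"):
                print(record["gene"], record["disease"])
        driver.close()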

  4. An integrative computational modelling of music structure apprehension

    DEFF Research Database (Denmark)

    Lartillot, Olivier

    2014-01-01

    An objectivization of music analysis requires a detailed formalization of the underlying principles and methods. The formalization of the most elementary structural processes is hindered by the complexity of music, both in terms of the profusion of entities (such as notes) and of the tight interactions between a large number of dimensions. Computational modelling would enable systematic and exhaustive tests on sizeable pieces of music, yet current research covers particular musical dimensions with limited success. The aim of this research is to conceive a computational modelling of music analysis; the resulting model, by virtue of its generality, extensiveness and operationality, is suggested as a blueprint for the establishment of a cognitively validated model of music structure apprehension. Available as a Matlab module, it can be used for practical musicological purposes.

  5. [Problems of world outlook and methodology of science integration in biological studies].

    Science.gov (United States)

    Khododova, Iu D

    1981-01-01

    Problems of world outlook and the methodology of natural-science knowledge are considered based on an analysis of tendencies in the development of the membrane theory of cell processes and of the use of the principles of biological membrane functioning in solving scientific and applied problems pertaining to different branches of chemistry and biology. The notion of scientific knowledge integration is defined as the interpenetration of approaches, methods and ideas of different branches of knowledge, and the enrichment of their content on this basis, resulting in the augmentation of knowledge in each field taken separately. These processes are accompanied by the appearance of new branches of knowledge, sciences "on the junction", and their subsequent differentiation. The analysis of some gnoseological situations shows that the integration of sciences contributes to the coordination and agreement of the thinking styles of different specialists and demands a keen scientific personality with, in particular, high professional mobility. Problems of the organization of scientific activity are considered, which involve the social sciences in the integration processes. The role of philosophy in the integration processes is emphasized.

  6. SED-ED, a workflow editor for computational biology experiments written in SED-ML.

    Science.gov (United States)

    Adams, Richard R

    2012-04-15

    The simulation experiment description markup language (SED-ML) is a new community data standard to encode computational biology experiments in a computer-readable XML format. Its widespread adoption will require the development of software support to work with SED-ML files. Here, we describe a software tool, SED-ED, to view, edit, validate and annotate SED-ML documents while shielding end-users from the underlying XML representation. SED-ED supports modellers who wish to create, understand and further develop a simulation description provided in SED-ML format. SED-ED is available as a standalone Java application, as an Eclipse plug-in and as an SBSI (www.sbsi.ed.ac.uk) plug-in, all under an MIT open-source license. Source code is at https://sed-ed-sedmleditor.googlecode.com/svn. The application itself is available from https://sourceforge.net/projects/jlibsedml/files/SED-ED/.
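
    Since SED-ML is plain XML, the kind of document SED-ED manipulates can be inspected with standard tooling. The fragment below is made up and far from a complete valid SED-ML file; it only shows the structure that SED-ED shields end-users from.

        # Parsing a minimal, invented SED-ML fragment with the standard library.
        import xml.etree.ElementTree as ET

        sedml = b"""<?xml version="1.0" encoding="UTF-8"?>
        <sedML xmlns="http://sed-ml.org/" level="1" version="1">
          <listOfTasks>
            <task id="task1" modelReference="model1" simulationReference="sim1"/>
          </listOfTasks>
        </sedML>"""

        root = ET.fromstring(sedml)
        ns = {"s": "http://sed-ml.org/"}
        for task in root.findall(".//s:task", ns):
            print(task.get("id"), "->", task.get("modelReference"))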

  7. Computer-Based Support of Decision Making Processes during Biological Incidents

    Directory of Open Access Journals (Sweden)

    Karel Antos

    2010-04-01

    Full Text Available The paper describes a contextual analysis of a general system that should provide computerized support for decision-making processes related to response operations in the case of a biological incident. The analysis focuses on the information systems and information resources perspective and their integration using appropriate tools and technology. In the contextual design, the basic modules of the BioDSS system are suggested and further elaborated. The modules deal with incident description, scenario development and the recommendation of appropriate countermeasures. Proposals for further research are also included.

  8. Operational facility-integrated computer system for safeguards

    International Nuclear Information System (INIS)

    Armento, W.J.; Brooksbank, R.E.; Krichinsky, A.M.

    1980-01-01

    A computer system for safeguards in an active, remotely operated nuclear fuel processing pilot plant has been developed. This system maintains (1) comprehensive records of special nuclear materials, (2) automatically updated book inventory files, (3) material transfer catalogs, (4) timely inventory estimations, (5) sample transactions, (6) automatic, on-line volume balances and alarms, and (7) terminal access and applications software monitoring and logging. Future development will include near-real-time SNM mass balancing, as both a static, in-tank summation and a dynamic, in-line determination. It is planned to incorporate aspects of site security and physical protection into the computer monitoring.

  9. Multi-omic data integration enables discovery of hidden biological regularities

    DEFF Research Database (Denmark)

    Ebrahim, Ali; Brunk, Elizabeth; Tan, Justin

    2016-01-01

    Rapid growth in the size and complexity of biological data sets has led to the 'Big Data to Knowledge' challenge. We develop advanced data integration methods for multi-level analysis of genomic, transcriptomic, ribosomal profiling, proteomic and fluxomic data. First, we show that pairwise integration of primary omics data reveals regularities that tie cellular processes together in Escherichia coli: the number of protein molecules made per mRNA transcript and the number of ribosomes required per translated protein molecule. Second, we show that genome-scale models, based on genomic and bibliomic data, enable quantitative synchronization of disparate data types. Integrating omics data with models enabled the discovery of two novel regularities: condition-invariant in vivo turnover rates of enzymes and the correlation of protein structural motifs and translational pausing. These regularities can
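
    The first regularity named above is, at heart, a ratio computed across paired omics measurements; a toy version with invented numbers:

        # Proteins made per mRNA transcript from paired omics measurements.
        import pandas as pd

        df = pd.DataFrame({
            "gene": ["b0001", "b0002", "b0003"],
            "mrna_copies": [12.0, 40.0, 5.0],              # transcriptomics
            "protein_copies": [6200.0, 21000.0, 2400.0],   # proteomics
        })
        df["proteins_per_transcript"] = df["protein_copies"] / df["mrna_copies"]
        print(df)   # a roughly constant ratio would be the hidden regularity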

  10. Optimizing Computation of Repairs from Active Integrity Constraints

    DEFF Research Database (Denmark)

    Cruz-Filipe, Luís

    2014-01-01

    Active integrity constraints (AICs) are a form of integrity constraints for databases that not only identify inconsistencies, but also suggest how these can be overcome. The semantics for AICs defines different types of repairs, but deciding whether an inconsistent database can be repaired and finding possible repairs is an NP- or Σ2p-complete problem, depending on the type of repairs one has in mind. In this paper, we introduce two different relations on AICs: an equivalence relation of independence, allowing the search to be parallelized among the equivalence classes, and a precedence relation
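
    The core idea of an AIC, a constraint that both detects an inconsistency and names the update actions that repair it, fits in a few lines; the employee/manager rule below is an invented example, not taken from the paper.

        # A database as a set of facts, with one active integrity constraint.
        db = {("employee", "ann"), ("manager", "ann")}

        # AIC (informal): nobody is both employee and manager;
        # on violation, repair by retracting the manager fact.
        def violations(db):
            return [x for (rel, x) in db
                    if rel == "employee" and ("manager", x) in db]

        for x in violations(db):
            db.discard(("manager", x))    # apply the suggested repair action

        print(db)    # -> {('employee', 'ann')}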

  11. Agent-based re-engineering of ErbB signaling: a modeling pipeline for integrative systems biology.

    Science.gov (United States)

    Das, Arya A; Ajayakumar Darsana, T; Jacob, Elizabeth

    2017-03-01

    Experiments in systems biology are generally supported by a computational model which quantitatively estimates the parameters of the system by finding the best fit to the experiment. Mathematical models have proved to be successful in reverse engineering the system. The data generated are interpreted to understand the dynamics of the underlying phenomena. The question we have sought to answer is: is it possible to use an agent-based approach to re-engineer a biological process, making use of the available knowledge from experimental and modelling efforts? Can the bottom-up approach benefit from the top-down exercise so as to create an integrated modelling formalism for systems biology? We propose a modelling pipeline that learns from the data given by reverse engineering and uses it for re-engineering the system, to carry out in-silico experiments. A mathematical model that quantitatively predicts co-expression of EGFR-HER2 receptors in activation and trafficking has been taken for this study. The pipeline architecture takes cues from the population model, which gives the rates of biochemical reactions, to formulate knowledge-based rules for the particle model. Agent-based simulations using these rules support the existing facts on EGFR-HER2 dynamics. We conclude that re-engineering models built using the results of reverse engineering opens up the possibility of harnessing the wealth of data which now lies scattered in the literature. Virtual experiments could then become more realistic when empowered with the findings of empirical cell biology and modelling studies. Implemented on the Agent Modelling Framework developed in-house. C++ code templates are available in the Supplementary material. liz.csir@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  12. Integration of small computers in the low budget facility

    International Nuclear Information System (INIS)

    Miller, G.E.; Crofoot, T.A.

    1988-01-01

    Inexpensive computers (PCs) are well within the reach of low-budget reactor facilities. It is possible to envisage many uses that will both improve the capabilities of existing instrumentation and assist operators and staff with certain routine tasks. Both of these opportunities are important for survival at facilities with severe budget and staffing limitations. (author)

  13. All for One: Integrating Budgetary Methods by Computer.

    Science.gov (United States)

    Herman, Jerry J.

    1994-01-01

    With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)

  14. Integration of case study approach, project design and computer ...

    African Journals Online (AJOL)


    2018-03-05

    computer modeling used as a research method applied in the process ... conclusions discuss the benefits for students who analyzed the ... accounting education process the case study method should not ... providing travel safety information to passengers ... from literature readings with practical problems.

  15. Beyond Computer Literacy: Technology Integration and Curriculum Transformation

    Science.gov (United States)

    Safar, Ammar H.; AlKhezzi, Fahad A.

    2013-01-01

    Personal computers, the Internet, smartphones, and other forms of information and communication technology (ICT) have changed our world, our jobs, and our personal lives, as well as how we manage our knowledge and time effectively and efficiently. Research findings in the past decades have acknowledged and affirmed that the content the ICT medium…

  16. GLOFRIM v1.0-A globally applicable computational framework for integrated hydrological-hydrodynamic modelling

    NARCIS (Netherlands)

    Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; Van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F.P.

    2017-01-01

    We here present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and caters for an ensemble of models to be coupled. It currently encompasses the global
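
    Although the abstract is truncated above, the coupling pattern it describes can be sketched: a driver that exchanges fluxes between a hydrological and a hydrodynamic model at every shared timestep through Basic Model Interface (BMI)-style calls (initialize/update/get_value/set_value). The two toy model classes and all numbers below are placeholder assumptions, not GLOFRIM's actual components.

        import numpy as np

        class ToyHydrology:
            """Stand-in hydrological model: turns rainfall into runoff per cell."""
            def initialize(self):
                self.runoff = np.zeros(4)            # 4 grid cells
            def update(self, rainfall):
                self.runoff = 0.3 * rainfall         # crude runoff coefficient
            def get_value(self, name):
                return self.runoff

        class ToyHydrodynamics:
            """Stand-in hydrodynamic model: stores and drains lateral inflow."""
            def initialize(self):
                self.depth = np.zeros(4)
                self.inflow = np.zeros(4)
            def set_value(self, name, values):
                self.inflow = values
            def update(self):
                # Extremely simplified storage update: add inflow, drain 10%.
                self.depth = 0.9 * (self.depth + self.inflow)

        # Driver loop: the framework's job is this spatially explicit exchange
        # of fluxes between the two models at every shared timestep.
        hydro, hydrodyn = ToyHydrology(), ToyHydrodynamics()
        hydro.initialize(); hydrodyn.initialize()
        rng = np.random.default_rng(0)
        for step in range(5):
            hydro.update(rainfall=rng.uniform(0, 10, size=4))
            hydrodyn.set_value("lateral_inflow", hydro.get_value("runoff"))
            hydrodyn.update()
            print(f"step {step}: water depth = {np.round(hydrodyn.depth, 2)}")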

  17. A Computational Model of the SC Multisensory Neurons: Integrative Capabilities, Maturation, and Plasticity

    Directory of Open Access Journals (Sweden)

    Cristiano Cuppini

    2011-10-01

    Different cortical and subcortical structures present neurons able to integrate stimuli of different sensory modalities. Among the most investigated integrative regions is the Superior Colliculus (SC), a midbrain structure whose role is to guide attentive behaviour and motor responses toward external events. Despite the large amount of experimental data in the literature, the neural mechanisms underlying the SC response are not completely understood. Moreover, recent data indicate that multisensory integration ability is the result of maturation after birth, depending on sensory experience. Mathematical models and computer simulations can be of value to investigate and clarify these phenomena. In the last few years, several models have been implemented to shed light on these mechanisms and to gain a deeper comprehension of the SC capabilities. Here, a neural network model (Cuppini et al., 2010) is extensively discussed. The model considers visual-auditory interaction and is able to reproduce and explain the main physiological features of multisensory integration in SC neurons, and their acquisition during postnatal life. To reproduce a neonatal condition, the model assumes that during early life: (1) cortical-SC synapses are present but not active; (2) in this phase, responses are driven by non-cortical inputs with very large receptive fields (RFs) and little spatial tuning; (3) a slight spatial preference for the visual inputs is present. Sensory experience is modeled by a "training phase" in which the network is repeatedly exposed to modality-specific and cross-modal stimuli at different locations. As a result, cortical-SC synapses are shaped during this period by Hebbian rules of potentiation and depression, RFs are reduced in size, and neurons exhibit integrative capabilities to cross-modal stimuli, such as multisensory enhancement, inverse effectiveness, and multisensory depression. The utility of the modelling
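
    A minimal numpy sketch of the Hebbian potentiation/depression scheme described above, under assumed toy dimensions and learning rates (none of the actual parameters of Cuppini et al., 2010): repeated training stimuli strengthen the synapses whose pre- and post-synaptic activity co-occur and weaken the rest, so the initially broad receptive field shrinks toward the stimulated locations.

        import numpy as np

        rng = np.random.default_rng(1)
        n_inputs, n_steps = 50, 2000
        lr = 0.01                          # assumed learning rate
        w = np.full(n_inputs, 0.05)        # weak initial cortical-SC synapses

        def gaussian_stimulus(center, width=3.0):
            x = np.arange(n_inputs)
            return np.exp(-0.5 * ((x - center) / width) ** 2)

        for _ in range(n_steps):
            pre = gaussian_stimulus(center=rng.integers(20, 30))  # training zone
            post = 1.0 / (1.0 + np.exp(-(w @ pre - 1.0)))         # sigmoidal SC unit
            # Hebbian potentiation (co-active pre/post) plus depression of
            # synapses whose input is silent while the neuron fires.
            w += lr * post * pre - 0.2 * lr * post * (1.0 - pre)
            np.clip(w, 0.0, 1.0, out=w)

        # After training, weights concentrate around the stimulated locations,
        # i.e. the receptive field has shrunk from uniform to spatially tuned.
        print("peak weight index:", int(w.argmax()), "| mean weight:", round(w.mean(), 3))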

  18. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...
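
    As a rough illustration of a hierarchical Legendre basis (a common modal construction, not necessarily the exact basis developed in this work), the sketch below builds the functions from Legendre polynomials P_n, using the differences P_n - P_{n-2} for n >= 2 so that higher-order members vanish at the element endpoints and the order can be raised by simply appending functions.

        import numpy as np
        from numpy.polynomial import legendre as L

        def hierarchical_basis(order, x):
            """Evaluate a hierarchical Legendre-type basis on [-1, 1].

            The first two members span the linear (vertex) functions; members
            n >= 2 are P_n(x) - P_{n-2}(x), which vanish at x = +/-1, so
            increasing the order only adds functions (hierarchical property).
            """
            basis = [0.5 * (1.0 - x), 0.5 * (1.0 + x)]
            for n in range(2, order + 1):
                cn = np.zeros(n + 1); cn[n] = 1.0        # coefficients of P_n
                cm = np.zeros(n + 1); cm[n - 2] = 1.0    # coefficients of P_{n-2}
                basis.append(L.legval(x, cn) - L.legval(x, cm))
            return np.array(basis)

        x = np.linspace(-1.0, 1.0, 5)
        B = hierarchical_basis(order=4, x=x)
        print(B.round(3))   # rows: basis functions, columns: sample points
        # Note that the rows for n >= 2 are zero in the first and last columns.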

  19. Integration of the TNXYZ computer program inside the platform Salome

    International Nuclear Information System (INIS)

    Chaparro V, F. J.

    2014-01-01

    The present work shows the procedure carried out to integrate the TNXYZ code as a calculation tool in the graphical simulation platform Salome. The TNXYZ code provides a numerical solution of the neutron transport equation in several energy groups, in steady state and three-dimensional geometry. To discretize the variables of the transport equation, the code uses the method of discrete ordinates for the angular variable and a nodal method for the spatial dependence. The Salome platform is a graphical environment designed for building, editing and simulating mechanical models, aimed mainly at industry; unlike other software, it can integrate and control an external source code so as to form a complete scheme of pre- and post-processing of information. Before its integration into the Salome platform, the TNXYZ code was upgraded. TNXYZ was programmed in the 90s with a Fortran 77 compiler, so the code was first adapted to the characteristics of current Fortran compilers. In addition, with the intention of extracting partial results along the process sequence, the original structure of the program underwent a modularization process, i.e. the main program was divided into sections where the code performs major operations. This procedure is controlled by the information module (YACS) on the Salome platform, and it could be useful for a subsequent coupling with thermal-hydraulics codes. Finally, with the help of the Monte Carlo code Serpent, several study cases were defined in order to check the integration process; the verification consisted in comparing the results obtained with the code executed stand-alone against those obtained after it was modernized, integrated and controlled by the Salome platform. (Author)
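
    A hedged sketch of the modularization idea: once the monolithic main program is split into stage routines exposed from a shared library, an external orchestrator (here plain Python, standing in for a YACS scheme) can run the stages in order and read partial results between them. The library name libtnxyz.so and all routine names below are hypothetical, not TNXYZ's actual symbols.

        import ctypes

        # Hypothetical shared library built from the modularized Fortran code;
        # the stage routine names are illustrative, not TNXYZ's real symbols.
        lib = ctypes.CDLL("./libtnxyz.so")

        keff = ctypes.c_double(0.0)

        lib.read_input()              # stage 1: parse geometry and cross sections
        lib.sweep_angular_fluxes()    # stage 2: discrete-ordinates angular sweep
        lib.update_nodal_solution()   # stage 3: nodal spatial update
        lib.get_keff(ctypes.byref(keff))   # partial result between stages
        print("current k-eff estimate:", keff.value)
        lib.write_output()            # stage 4: final edits

        # In Salome, each call would be a node in a YACS scheme, so the platform
        # (rather than this script) sequences the stages and collects the data.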

  20. Computational Assessment of Pharmacokinetics and Biological Effects of Some Anabolic and Androgen Steroids.

    Science.gov (United States)

    Roman, Marin; Roman, Diana Larisa; Ostafe, Vasile; Ciorsac, Alecu; Isvoran, Adriana

    2018-02-05

    The aim of this study is to use computational approaches to predict the ADME-Tox profiles, pharmacokinetics, molecular targets, biological activity spectra and side/toxic effects of 31 anabolic and androgen steroids in humans. The following computational tools are used: (i) FAFDrugs4, SwissADME and admetSAR for obtaining the ADME-Tox profiles and for predicting pharmacokinetics; (ii) SwissTargetPrediction and PASS online for predicting the molecular targets and biological activities; (iii) PASS online, Toxtree, admetSAR and Endocrine Disruptome for envisaging the specific toxicities; (iv) SwissDock to assess the interactions of the investigated steroids with cytochromes involved in drug metabolism. The investigated steroids usually reveal high gastrointestinal absorption and good oral bioavailability, may inhibit some of the human cytochromes involved in the metabolism of xenobiotics (CYP2C9 being the most affected) and show a good capacity for skin penetration. Numerous side effects of the investigated steroids in humans are predicted: genotoxic carcinogenicity, hepatotoxicity, cardiovascular, hematotoxic and genitourinary effects, dermal irritations, endocrine disruption and reproductive dysfunction. These results are important to know, since occupational exposure to anabolic and androgenic steroids may occur at workplaces and since there is also deliberate human exposure to steroids for their performance-enhancing and anti-aging properties.
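
    The tools listed above are web services, but a rough local analogue of the first screening step can be sketched with the RDKit toolkit (assuming it is installed): computing the physicochemical descriptors behind typical oral-bioavailability rules for a single steroid. The non-stereo testosterone SMILES and the Lipinski-style cutoffs are illustrative choices, not values taken from the study.

        from rdkit import Chem
        from rdkit.Chem import Descriptors, Lipinski

        # Non-stereo SMILES for testosterone, used purely as an example input.
        smiles = "CC12CCC3C(C1CCC2O)CCC4=CC(=O)CCC34C"
        mol = Chem.MolFromSmiles(smiles)

        props = {
            "MolWt": Descriptors.MolWt(mol),   # molecular weight (g/mol)
            "LogP": Descriptors.MolLogP(mol),  # octanol-water partition estimate
            "TPSA": Descriptors.TPSA(mol),     # topological polar surface area
            "HBD": Lipinski.NumHDonors(mol),
            "HBA": Lipinski.NumHAcceptors(mol),
        }
        for name, value in props.items():
            print(f"{name:6s} {value:7.2f}")

        # Crude Lipinski-style screen: meeting all four conditions suggests good
        # oral bioavailability, consistent with what the abstract reports.
        ok = (props["MolWt"] <= 500 and props["LogP"] <= 5
              and props["HBD"] <= 5 and props["HBA"] <= 10)
        print("passes rule-of-five screen:", ok)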