WorldWideScience

Sample records for bioinformatics consortium pcabc

  1. Skate Genome Project: Cyber-Enabled Bioinformatics Collaboration

    Science.gov (United States)

    Vincent, J.

    2011-01-01

    The Skate Genome Project, a pilot project of the North East Cyberinfrastructure Consortium (NECC), aims to produce a draft genome sequence of Leucoraja erinacea, the Little Skate. The pilot project was also designed to develop expertise in large-scale collaborations across the NECC region. An overview of the bioinformatics and infrastructure challenges faced during the first year of the project will be presented. Results to date and lessons learned from the perspective of a bioinformatics core will be highlighted.

  2. The ocean sampling day consortium

    DEFF Research Database (Denmark)

    Kopf, Anna; Bicak, Mesude; Kottmann, Renzo

    2015-01-01

    Ocean Sampling Day was initiated by the EU-funded Micro B3 (Marine Microbial Biodiversity, Bioinformatics, Biotechnology) project to obtain a snapshot of the marine microbial biodiversity and function of the world’s oceans. It is a simultaneous global mega-sequencing campaign aiming to generate ... the largest standardized microbial data set in a single day. This will be achievable only through the coordinated efforts of an Ocean Sampling Day Consortium, supportive partnerships and networks between sites. This commentary outlines the establishment, function and aims of the Consortium and describes our...

  3. Development of Bioinformatics Infrastructure for Genomics Research.

    Science.gov (United States)

    Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem

    2017-06-01

    Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for

  4. Bioinformatics education dissemination with an evolutionary problem solving perspective.

    Science.gov (United States)

    Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J

    2010-11-01

    Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education.

  5. Mathematics and evolutionary biology make bioinformatics education comprehensible

    Science.gov (United States)

    Weisstein, Anton E.

    2013-01-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621

  6. Mathematics and evolutionary biology make bioinformatics education comprehensible.

    Science.gov (United States)

    Jungck, John R; Weisstein, Anton E

    2013-09-01

    The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.

  7. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  8. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054

  9. A bioinformatics potpourri.

    Science.gov (United States)

    Schönbach, Christian; Li, Jinyan; Ma, Lan; Horton, Paul; Sjaugi, Muhammad Farhan; Ranganathan, Shoba

    2018-01-19

    The 16th International Conference on Bioinformatics (InCoB) was held at Tsinghua University, Shenzhen from September 20 to 22, 2017. The annual conference of the Asia-Pacific Bioinformatics Network featured six keynotes, two invited talks, a panel discussion on big data driven bioinformatics and precision medicine, and 66 oral presentations of accepted research articles or posters. Fifty-seven articles comprising a topic assortment of algorithms, biomolecular networks, cancer and disease informatics, drug-target interactions and drug efficacy, gene regulation and expression, imaging, immunoinformatics, metagenomics, next generation sequencing for genomics and transcriptomics, ontologies, post-translational modification, and structural bioinformatics are the subject of this editorial for the InCoB2017 supplement issues in BMC Genomics, BMC Bioinformatics, BMC Systems Biology and BMC Medical Genomics. New Delhi will be the location of InCoB2018, scheduled for September 26-28, 2018.

  10. Bioinformatics analysis identify novel OB fold protein coding genes in C. elegans.

    Directory of Open Access Journals (Sweden)

    Daryanaz Dargahi

    Full Text Available BACKGROUND: The C. elegans genome has been extensively annotated by the WormBase consortium that uses state-of-the-art bioinformatics pipelines, functional genomics and manual curation approaches. As a result, the identification of novel genes in silico in this model organism is becoming more challenging, requiring new approaches. The Oligonucleotide-oligosaccharide binding (OB) fold is a highly divergent protein family, in which protein sequences, in spite of having the same fold, share very little sequence identity (5-25%). Therefore, evidence from sequence-based annotation may not be sufficient to identify all the members of this family. In C. elegans, the number of OB-fold proteins reported is remarkably low (n=46) compared to other evolutionarily related eukaryotes, such as yeast S. cerevisiae (n=344) or fruit fly D. melanogaster (n=84). Gene loss during evolution or differences in the level of annotation for this protein family may explain these discrepancies. METHODOLOGY/PRINCIPAL FINDINGS: This study examines the possibility that novel OB-fold coding genes exist in the worm. We developed a bioinformatics approach that uses the most sensitive sequence-sequence, sequence-profile and profile-profile similarity search methods followed by 3D-structure prediction as a filtering step to eliminate false positive candidate sequences. We have predicted 18 coding genes containing the OB-fold that have so far been only partially characterized in C. elegans. CONCLUSIONS/SIGNIFICANCE: This study raises the possibility that the annotation of highly divergent protein fold families can be improved in C. elegans. Similar strategies could be implemented for large-scale analysis by the WormBase consortium when new versions of the genome sequence of C. elegans, or of other evolutionarily related species, are released. This approach is of general interest to the scientific community since it can be used to annotate any genome.

  11. Data mining for bioinformatics applications

    CERN Document Server

    Zengyou, He

    2015-01-01

    Data Mining for Bioinformatics Applications provides valuable information on the data mining methods that have been widely used for solving real bioinformatics problems, including problem definition, data collection, data preprocessing, modeling, and validation. The text uses an example-based method to illustrate how to apply data mining techniques to solve real bioinformatics problems, containing 45 bioinformatics problems that have been investigated in recent research. For each example, the entire data mining process is described, ranging from data preprocessing to modeling and result validation. Provides valuable information on the data mining methods that have been widely used for solving real bioinformatics problems. Uses an example-based method to illustrate how to apply data mining techniques to solve real bioinformatics problems. Contains 45 bioinformatics problems that have been investigated in recent research.

  12. The FaceBase Consortium: A comprehensive program to facilitate craniofacial research

    Science.gov (United States)

    Hochheiser, Harry; Aronow, Bruce J.; Artinger, Kristin; Beaty, Terri H.; Brinkley, James F.; Chai, Yang; Clouthier, David; Cunningham, Michael L.; Dixon, Michael; Donahue, Leah Rae; Fraser, Scott E.; Hallgrimsson, Benedikt; Iwata, Junichi; Klein, Ophir; Marazita, Mary L.; Murray, Jeffrey C.; Murray, Stephen; de Villena, Fernando Pardo-Manuel; Postlethwait, John; Potter, Steven; Shapiro, Linda; Spritz, Richard; Visel, Axel; Weinberg, Seth M.; Trainor, Paul A.

    2012-01-01

    The FaceBase Consortium consists of ten interlinked research and technology projects whose goal is to generate craniofacial research data and technology for use by the research community through a central data management and integrated bioinformatics hub. Funded by the National Institute of Dental and Craniofacial Research (NIDCR) and currently focused on studying the development of the middle region of the face, the Consortium will produce comprehensive datasets of global gene expression patterns, regulatory elements and sequencing; will generate anatomical and molecular atlases; will provide human normative facial data and other phenotypes; will conduct follow-up studies of a completed genome-wide association study; will generate independent data on the genetics of craniofacial development; will build repositories of animal models and of human samples and data for community access and analysis; and will develop software tools and animal models for analyzing, functionally testing, and integrating these data. The FaceBase website (http://www.facebase.org) will serve as a web home for these efforts, providing interactive tools for exploring these datasets, together with discussion forums and other services to support and foster collaboration within the craniofacial research community. PMID:21458441

  13. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2017-09-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
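
    As a purely illustrative aside on the kinds of models this review categorizes, a one-dimensional convolutional network applied to one-hot-encoded DNA sequences might be set up as in the sketch below. The architecture, layer sizes and data are invented placeholders for illustration, not a model taken from any of the reviewed studies.

      # Illustrative only: a tiny 1D CNN for one-hot-encoded DNA sequences
      # (4 channels: A, C, G, T). Shapes and layer sizes are arbitrary choices.
      import numpy as np
      import tensorflow as tf

      seq_len = 200
      # Fake one-hot batch of 64 sequences and fake binary labels, for illustration only.
      x = np.eye(4, dtype="float32")[np.random.randint(0, 4, size=(64, seq_len))]
      y = np.random.randint(0, 2, size=(64, 1)).astype("float32")

      model = tf.keras.Sequential([
          tf.keras.layers.Input(shape=(seq_len, 4)),
          tf.keras.layers.Conv1D(32, kernel_size=8, activation="relu"),  # motif-like filters
          tf.keras.layers.GlobalMaxPooling1D(),
          tf.keras.layers.Dense(16, activation="relu"),
          tf.keras.layers.Dense(1, activation="sigmoid"),
      ])
      model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
      model.fit(x, y, epochs=1, verbose=0)  # trains on the fake data; real use needs labeled sequences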

  14. Emerging strengths in Asia Pacific bioinformatics.

    Science.gov (United States)

    Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee

    2008-12-12

    The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.

  15. Biggest challenges in bioinformatics.

    Science.gov (United States)

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-04-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.

  16. Biggest challenges in bioinformatics

    OpenAIRE

    Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen

    2013-01-01

    The third Heidelberg Unseminars in Bioinformatics (HUB) was held in October at Heidelberg University in Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the ‘Biggest Challenges in Bioinformatics' in a ‘World Café' style event.

  17. Establishing bioinformatics research in the Asia Pacific

    Directory of Open Access Journals (Sweden)

    Tammi Martti

    2006-12-01

    Full Text Available Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.

  18. Preface to Introduction to Structural Bioinformatics

    NARCIS (Netherlands)

    Feenstra, K. Anton; Abeln, Sanne

    2018-01-01

    While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction to Structural Bioinformatics, which

  19. COMPARISON OF POPULAR BIOINFORMATICS DATABASES

    OpenAIRE

    Abdulganiyu Abdu Yusuf; Zahraddeen Sufyanu; Kabir Yusuf Mamman; Abubakar Umar Suleiman

    2016-01-01

    Bioinformatics is the application of computational tools to capture and interpret biological data. It has wide applications in drug development, crop improvement, agricultural biotechnology and forensic DNA analysis. There are various databases available to researchers in bioinformatics. These databases are customized for a specific need and range in size, scope, and purpose. The main drawbacks of bioinformatics databases include redundant information, constant change, data spread over m...

  20. Computational biology and bioinformatics in Nigeria.

    Science.gov (United States)

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  1. Computational biology and bioinformatics in Nigeria.

    Directory of Open Access Journals (Sweden)

    Segun A Fatumo

    2014-04-01

    Full Text Available Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  2. Establishing bioinformatics research in the Asia Pacific

    OpenAIRE

    Ranganathan, Shoba; Tammi, Martti; Gribskov, Michael; Tan, Tin Wee

    2006-01-01

    Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-...

  3. Gene Ontology Consortium: going forward.

    Science.gov (United States)

    2015-01-01

    The Gene Ontology (GO; http://www.geneontology.org) is a community-based bioinformatics resource that supplies information about gene product function using ontologies to represent biological knowledge. Here we describe improvements and expansions to several branches of the ontology, as well as updates that have allowed us to more efficiently disseminate the GO and capture feedback from the research community. The Gene Ontology Consortium (GOC) has expanded areas of the ontology such as cilia-related terms, cell-cycle terms and multicellular organism processes. We have also implemented new tools for generating ontology terms based on a set of logical rules making use of templates, and we have made efforts to increase our use of logical definitions. The GOC has a new and improved web site summarizing new developments and documentation, serving as a portal to GO data. Users can perform GO enrichment analysis, and search the GO for terms, annotations to gene products, and associated metadata across multiple species using the all-new AmiGO 2 browser. We encourage and welcome the input of the research community in all biological areas in our continued effort to improve the Gene Ontology. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
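
    As an aside for readers unfamiliar with the GO enrichment analysis mentioned above: such analyses are conventionally framed as over-representation tests under a hypergeometric null. The sketch below is a minimal illustration with invented gene counts; it is not the GOC's own tooling (e.g. AmiGO 2), which also handles annotation retrieval and multiple-testing correction.

      # Minimal sketch of a GO over-representation (enrichment) test.
      # All counts and the term are hypothetical; real analyses use curated
      # annotations and correct for testing many terms.
      from scipy.stats import hypergeom

      population = 20000        # annotated genes in the genome (assumed)
      annotated_to_term = 150   # genes annotated to a term of interest (assumed)
      study_set = 400           # genes in the user's gene list (assumed)
      overlap = 12              # study genes annotated to the term (assumed)

      # P(X >= overlap) when drawing study_set genes without replacement
      p_value = hypergeom.sf(overlap - 1, population, annotated_to_term, study_set)
      print(f"enrichment p-value: {p_value:.3g}")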

  4. Generalized Centroid Estimators in Bioinformatics

    Science.gov (United States)

    Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi

    2011-01-01

    In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework for designing MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
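
    For reference, the accuracy measures named in this abstract are conventionally defined in terms of true/false positives and negatives (TP, FP, TN, FN); the paper's exact formulation may differ, but the standard definitions are:

      \mathrm{Sen} = \frac{TP}{TP + FN}, \qquad \mathrm{PPV} = \frac{TP}{TP + FP}, \qquad F = \frac{2\,\mathrm{PPV}\cdot\mathrm{Sen}}{\mathrm{PPV} + \mathrm{Sen}}

      \mathrm{MCC} = \frac{TP\cdot TN - FP\cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}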

  5. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2010-08-01

    Full Text Available Abstract Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and the Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security are discussed. Consequently, we improved interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.

  6. Introduction to bioinformatics.

    Science.gov (United States)

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.

  7. Bioinformatics clouds for big data manipulation.

    Science.gov (United States)

    Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  8. Bioinformatics and systems biology research update from the 15th International Conference on Bioinformatics (InCoB2016).

    Science.gov (United States)

    Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba

    2016-12-22

    The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.

  9. Designing XML schemas for bioinformatics.

    Science.gov (United States)

    Bruhn, Russel Elton; Burton, Philip John

    2003-06-01

    Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schema.
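
    As a concrete illustration of schema-based validation (not an example from the paper itself), the following minimal Python sketch uses lxml to check a document against an XML Schema; the file names are hypothetical placeholders.

      # Minimal sketch: validating an XML document against an XML Schema (XSD)
      # rather than a DTD. File names are hypothetical placeholders.
      from lxml import etree

      schema = etree.XMLSchema(etree.parse("sequence_record.xsd"))
      document = etree.parse("sequence_record.xml")

      if schema.validate(document):
          print("document conforms to the schema")
      else:
          for error in schema.error_log:
              print(error.message)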

  10. Bioinformatics

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren

    , and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...

  11. Bioinformatics clouds for big data manipulation

    Directory of Open Access Journals (Sweden)

    Dai Lin

    2012-11-01

    Full Text Available Abstract As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers: This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  12. Bioinformatics clouds for big data manipulation

    KAUST Repository

    Dai, Lin

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. 2012 Dai et al.; licensee BioMed Central Ltd.

  13. Interdisciplinary Introductory Course in Bioinformatics

    Science.gov (United States)

    Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.

    2010-01-01

    Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…

  14. International Lymphoma Epidemiology Consortium

    Science.gov (United States)

    The InterLymph Consortium, or formally the International Consortium of Investigators Working on Non-Hodgkin's Lymphoma Epidemiologic Studies, is an open scientific forum for epidemiologic research in non-Hodgkin's lymphoma.

  15. Navigating the changing learning landscape: perspective from bioinformatics.ca.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2013-09-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.

  16. Taking Bioinformatics to Systems Medicine.

    Science.gov (United States)

    van Kampen, Antoine H C; Moerland, Perry D

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.

  17. Crowdsourcing for bioinformatics.

    Science.gov (United States)

    Good, Benjamin M; Su, Andrew I

    2013-08-15

    Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.

  18. Is there room for ethics within bioinformatics education?

    Science.gov (United States)

    Taneri, Bahar

    2011-07-01

    When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.

  19. Rising Strengths Hong Kong SAR in Bioinformatics.

    Science.gov (United States)

    Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy

    2017-06-01

    Hong Kong's bioinformatics sector is attaining new heights in combination with its economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based and free-market economy have contributed towards a prominent position on the world map of bioinformatics. In this review, we have considered the educational measures, landmark research activities, the achievements of bioinformatics companies, and the role of the Hong Kong government in establishing bioinformatics as a strength. However, several hurdles remain. New government policies will assist computational biologists to overcome these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.

  20. EURASIP journal on bioinformatics & systems biology

    National Research Council Canada - National Science Library

    2006-01-01

    "The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...

  1. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  2. The 2016 Bioinformatics Open Source Conference (BOSC).

    Science.gov (United States)

    Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather

    2016-01-01

    Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.

  3. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  4. Navigating the changing learning landscape: perspective from bioinformatics.ca

    OpenAIRE

    Brazas, Michelle D.; Ouellette, B. F. Francis

    2013-01-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...

  5. Bioinformatics for cancer immunotherapy target discovery

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein

    2014-01-01

    therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes...

  6. Bioinformatics for Exploration

    Science.gov (United States)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  7. The GMOD Drupal bioinformatic server framework.

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G

    2010-12-15

    Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com.

  8. The 2015 Bioinformatics Open Source Conference (BOSC 2015).

    Science.gov (United States)

    Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica

    2016-02-01

    The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.

  9. 25 CFR 1000.73 - Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information...

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 2 2010-04-01 2010-04-01 false Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information from a non-BIA bureau? 1000.73 Section 1000.73 Indians OFFICE OF THE... § 1000.73 Once a Tribe/Consortium has been awarded a grant, may the Tribe/Consortium obtain information...

  10. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  11. Phylogenetic trees in bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L [Los Alamos National Laboratory

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of even a modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
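
    The combinatorial explosion mentioned above can be made concrete with a short calculation. The sketch below is an illustration added to this summary (not part of the original record); it counts unrooted binary trees for n labeled OTUs using the standard (2n - 5)!! double-factorial formula.

    ```python
    # Number of distinct unrooted binary trees for n labeled OTUs: (2n - 5)!! = 1 * 3 * ... * (2n - 5).
    def unrooted_binary_tree_count(n_otus: int) -> int:
        """Return the number of distinct unrooted binary tree topologies for n_otus >= 3 tips."""
        if n_otus < 3:
            raise ValueError("need at least 3 OTUs")
        count = 1
        for factor in range(3, 2 * n_otus - 4, 2):  # odd factors 3, 5, ..., 2n - 5
            count *= factor
        return count

    for n in (5, 10, 20):
        print(n, unrooted_binary_tree_count(n))  # 15, ~2.0e6, ~2.2e20 trees
    ```

    Even 20 OTUs already give roughly 2 x 10^20 candidate topologies, which is why heuristic search rather than exhaustive enumeration is used in practice.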

  12. The GMOD Drupal Bioinformatic Server Framework

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G.

    2010-01-01

    Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: alexie@butterflybase.org PMID:20971988

  13. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
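
    As a hedged illustration of the dynamic-programming formulation described in this record (the scoring values and function name below are invented for the example, not taken from the article), a minimal global alignment scorer can be written in a few lines:

    ```python
    # Minimal Needleman-Wunsch global alignment score (illustrative match/mismatch/gap values).
    def global_alignment_score(a: str, b: str, match=1, mismatch=-1, gap=-2) -> int:
        """Fill the dynamic-programming table and return the optimal alignment score."""
        rows, cols = len(a) + 1, len(b) + 1
        dp = [[0] * cols for _ in range(rows)]
        for i in range(1, rows):
            dp[i][0] = i * gap                    # prefix of `a` aligned against gaps
        for j in range(1, cols):
            dp[0][j] = j * gap                    # prefix of `b` aligned against gaps
        for i in range(1, rows):
            for j in range(1, cols):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[-1][-1]

    print(global_alignment_score("GATTACA", "GCATGCU"))
    ```

    The full alignment itself can be recovered by tracing back through the table from the bottom-right cell.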

  14. Bioinformatics and Cancer

    Science.gov (United States)

    Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.

  15. Biology in 'silico': The Bioinformatics Revolution.

    Science.gov (United States)

    Bloom, Mark

    2001-01-01

    Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, with many different uses in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…

  16. BACTERIAL CONSORTIUM

    Directory of Open Access Journals (Sweden)

    Payel Sarkar

    2013-01-01

    Full Text Available Petroleum aromatic hydrocarbons like benzene, toluene, ethyl benzene and xylene, together known as BTEX, have almost the same chemical structure. These aromatic hydrocarbons are released as pollutants in the environment. This work was taken up to develop a solvent-tolerant bacterial consortium that could degrade BTEX compounds, as they all share a common chemical structure. We have isolated almost 60 different types of bacterial strains from different petroleum-contaminated sites. Of these 60 bacterial strains, almost 20 microorganisms were screened on the basis of their capability to tolerate high concentrations of BTEX. Ten different consortia were prepared, and the compatibility of the bacterial strains within the consortia was checked by Gram staining and BTEX tolerance level. Four successful microbial consortia were selected in which all the bacterial strains concomitantly grew in the presence of a high concentration of BTEX (10% toluene, 10% benzene, 5% ethyl benzene and 1% xylene). Consortium #2 showed the highest growth rate in the presence of BTEX. Degradation of BTEX by consortium #2 was monitored for 5 days by the gradual decrease in the volume of the solvents. The maximum reduction observed was 85% in 5 days. Gas chromatography results also reveal that it could completely degrade benzene and ethyl benzene within 48 hours. Almost 90% degradation of toluene and xylene in 48 hours was exhibited by consortium #2. It could also tolerate and degrade many industrial solvents such as chloroform, DMSO and acetonitrile, having a wide range of log P values (0.03–3.1). Degradation of aromatic hydrocarbons like BTEX by a solvent-tolerant bacterial consortium is greatly significant, as it could degrade a high concentration of pollutants compared to a single bacterium and also reduces the time span of degradation.

  17. Bioinformatics research in the Asia Pacific: a 2007 update.

    Science.gov (United States)

    Ranganathan, Shoba; Gribskov, Michael; Tan, Tin Wee

    2008-01-01

    We provide a 2007 update on the bioinformatics research in the Asia-Pacific from the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998. From 2002, APBioNet has organized the first International Conference on Bioinformatics (InCoB) bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2007 Conference was organized as the 6th annual conference of the Asia-Pacific Bioinformatics Network, on Aug. 27-30, 2007 at Hong Kong, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea) and New Delhi (India). Besides a scientific meeting at Hong Kong, satellite events organized are a pre-conference training workshop at Hanoi, Vietnam and a post-conference workshop at Nansha, China. This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. We have organized the papers into thematic areas, highlighting the growing contribution of research excellence from this region, to global bioinformatics endeavours.

  18. Consortium for military LCD display procurement

    Science.gov (United States)

    Echols, Gregg

    2002-08-01

    The International Display Consortium (IDC) is a joining together of display companies to combine their buying power and obtain favorable terms with a major LCD manufacturer. Consolidating buying power and grouping demand enables manufacturers of rugged displays for avionics, ground vehicles, and ship-based applications to have unencumbered access to high-performance AMLCDs while greatly reducing risk and lowering cost. With an unrestricted supply of AMLCD displays, the consortium members have total control of their risk, cost, deliveries and added-value partners. Every display manufacturer desires a very close relationship with a display vendor. With IDC, each consortium member achieves a close relationship. Consortium members enjoy cost-effective access to high-performance, industry-standard-sized LCD panels, and modified commercial displays with 100 degree C clearing points and portrait configurations. Consortium members also enjoy proposal support, technical support and long-term support.

  19. Challenge: A Multidisciplinary Degree Program in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Mudasser Fraz Wyne

    2006-06-01

    Full Text Available Bioinformatics is a new field that is poorly served by any of the traditional science programs in Biology, Computer science or Biochemistry. Known to be a rapidly evolving discipline, Bioinformatics has emerged from experimental molecular biology and biochemistry as well as from the artificial intelligence, database, pattern recognition, and algorithms disciplines of computer science. While institutions are responding to this increased demand by establishing graduate programs in bioinformatics, entrance barriers for these programs are high, largely due to the significant prerequisite knowledge which is required, both in the fields of biochemistry and computer science. Although many schools currently have or are proposing graduate programs in bioinformatics, few are actually developing new undergraduate programs. In this paper I explore the blend of a multidisciplinary approach, discuss the response of academia and highlight challenges faced by this emerging field.

  20. When cloud computing meets bioinformatics: a review.

    Science.gov (United States)

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
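
    As a toy illustration of the MapReduce idea discussed in this review (pure Python with invented read data; a real deployment would run the map and reduce tasks on a cloud framework such as Hadoop or Spark), counting k-mers across sequencing reads looks like this:

    ```python
    # MapReduce-style k-mer counting: map emits (k-mer, 1) pairs, reduce sums them per key.
    from collections import defaultdict
    from itertools import chain

    def map_phase(read: str, k: int = 3):
        """Emit (k-mer, 1) pairs for one sequencing read."""
        return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

    def reduce_phase(pairs):
        """Sum the counts for each k-mer key."""
        counts = defaultdict(int)
        for kmer, n in pairs:
            counts[kmer] += n
        return dict(counts)

    reads = ["GATTACA", "TTACAGG", "ACAGGTT"]  # made-up reads
    print(reduce_phase(chain.from_iterable(map_phase(r) for r in reads)))
    ```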

  1. Application of machine learning methods in bioinformatics

    Science.gov (United States)

    Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen

    2018-05-01

    With the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data [1]. Bioinformatics is an interdisciplinary field, encompassing the acquisition, management, analysis, interpretation and application of biological information; it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets [2]. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.

  2. The International Human Epigenome Consortium

    DEFF Research Database (Denmark)

    Stunnenberg, Hendrik G; Hirst, Martin

    2016-01-01

    The International Human Epigenome Consortium (IHEC) coordinates the generation of a catalog of high-resolution reference epigenomes of major primary human cell types. The studies now presented (see the Cell Press IHEC web portal at http://www.cell.com/consortium/IHEC) highlight the coordinated ac...

  3. Bioinformatics Training: A Review of Challenges, Actions and Support Requirements

    DEFF Research Database (Denmark)

    Schneider, M.V.; Watson, J.; Attwood, T.

    2010-01-01

    As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics...... services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first...

  4. Bioinformatics Training Network (BTN): a community resource for bioinformatics trainers

    DEFF Research Database (Denmark)

    Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude

    2012-01-01

    and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...

  5. Planning bioinformatics workflows using an expert system

    Science.gov (United States)

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928
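
    To make the backward-chaining idea concrete, here is a deliberately tiny planner sketch (the rules, data types and tool names are invented; this is not BETSY's knowledge base or API): starting from a goal data type, it recurses over rules until it reaches data that already exists, then emits the tools in execution order.

    ```python
    # Toy backward-chaining workflow planner. Each rule: to produce `output`, you need `inputs` and run `tool`.
    RULES = [
        {"output": "aligned_reads", "inputs": ["fastq", "reference_genome"], "tool": "aligner"},
        {"output": "variant_calls", "inputs": ["aligned_reads", "reference_genome"], "tool": "variant_caller"},
    ]

    def plan(goal, available, steps=None):
        """Chain backwards from the goal to data that is already available; return tool order or None."""
        steps = [] if steps is None else steps
        if goal in available:
            return steps
        for rule in RULES:
            if rule["output"] == goal:
                if all(plan(needed, available, steps) is not None for needed in rule["inputs"]):
                    steps.append(rule["tool"])
                    return steps
        return None

    print(plan("variant_calls", {"fastq", "reference_genome"}))  # -> ['aligner', 'variant_caller']
    ```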

  6. Continuing Education Workshops in Bioinformatics Positively Impact Research and Careers.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2016-06-01

    Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.

  7. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    Science.gov (United States)

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…

  8. Fuzzy Logic in Medicine and Bioinformatics

    Directory of Open Access Journals (Sweden)

    Angela Torres

    2006-01-01

    Full Text Available The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
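
    The geometrical view mentioned above is easy to state in code: a fuzzy set over n elements is a point in the unit hypercube [0, 1]^n, and the usual fuzzy operations act coordinate-wise. The snippet below is a generic illustration (the membership values are made up), not an implementation from the paper.

    ```python
    # Fuzzy sets as points in [0, 1]^n with coordinate-wise max/min/complement operations.
    def fuzzy_union(a, b):
        return [max(x, y) for x, y in zip(a, b)]

    def fuzzy_intersection(a, b):
        return [min(x, y) for x, y in zip(a, b)]

    def fuzzy_complement(a):
        return [1.0 - x for x in a]

    a = [0.2, 0.9, 0.5]   # membership degrees of three elements (illustrative values)
    b = [0.7, 0.4, 0.5]
    print(fuzzy_union(a, b), fuzzy_intersection(a, b), fuzzy_complement(a))
    ```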

  9. Massachusetts Institute of Technology Consortium Agreement

    National Research Council Canada - National Science Library

    Asada, Haruhiko

    1999-01-01

    ... of Phase 2 of the Home Automation and Healthcare Consortium. This report describes all major research accomplishments within the last six months since we launched the second phase of the consortium...

  10. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Science.gov (United States)

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.

  11. The development and application of bioinformatics core competencies to improve bioinformatics training and education.

    Science.gov (United States)

    Mulder, Nicola; Schwartz, Russell; Brazas, Michelle D; Brooksbank, Cath; Gaeta, Bruno; Morgan, Sarah L; Pauley, Mark A; Rosenwald, Anne; Rustici, Gabriella; Sierk, Michael; Warnow, Tandy; Welch, Lonnie

    2018-02-01

    Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans.

  12. The development and application of bioinformatics core competencies to improve bioinformatics training and education

    Science.gov (United States)

    Brooksbank, Cath; Morgan, Sarah L.; Rosenwald, Anne; Warnow, Tandy; Welch, Lonnie

    2018-01-01

    Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans. PMID:29390004

  13. 4273π: bioinformatics education on low cost ARM hardware.

    Science.gov (United States)

    Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D

    2013-08-12

    Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.

  14. OpenHelix: bioinformatics education outside of a different box.

    Science.gov (United States)

    Williams, Jennifer M; Mangan, Mary E; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C

    2010-11-01

    The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education, but many would benefit from more informal sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Effective informal sources of bioinformatics education are available and will be explored in this review.

  15. Translational Bioinformatics and Clinical Research (Biomedical) Informatics.

    Science.gov (United States)

    Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T

    2015-06-01

    Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Development of a cloud-based Bioinformatics Training Platform.

    Science.gov (United States)

    Revote, Jerico; Watson-Haigh, Nathan S; Quenette, Steve; Bethwaite, Blair; McGrath, Annette; Shang, Catherine A

    2017-05-01

    The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. © The Author 2016. Published by Oxford University Press.

  17. Establishing a Consortium for the Study of Rare Diseases: The Urea Cycle Disorders Consortium

    Science.gov (United States)

    Seminara, Jennifer; Tuchman, Mendel; Krivitzky, Lauren; Krischer, Jeffrey; Lee, Hye-Seung; LeMons, Cynthia; Baumgartner, Matthias; Cederbaum, Stephen; Diaz, George A.; Feigenbaum, Annette; Gallagher, Renata C.; Harding, Cary O.; Kerr, Douglas S.; Lanpher, Brendan; Lee, Brendan; Lichter-Konecki, Uta; McCandless, Shawn E.; Merritt, J. Lawrence; Oster-Granite, Mary Lou; Seashore, Margretta R.; Stricker, Tamar; Summar, Marshall; Waisbren, Susan; Yudkoff, Marc; Batshaw, Mark L.

    2010-01-01

    The Urea Cycle Disorders Consortium (UCDC) was created as part of a larger network established by the National Institutes of Health to study rare diseases. This paper reviews the UCDC’s accomplishments over the first six years, including how the Consortium was developed and organized, clinical research studies initiated, and the importance of creating partnerships with patient advocacy groups, philanthropic foundations and biotech and pharmaceutical companies. PMID:20188616

  18. Lack of Association for Reported Endocrine Pancreatic Cancer Risk Loci in the PANDoRA Consortium.

    Science.gov (United States)

    Campa, Daniele; Obazee, Ofure; Pastore, Manuela; Panzuto, Francesco; Liço, Valbona; Greenhalf, William; Katzke, Verena; Tavano, Francesca; Costello, Eithne; Corbo, Vincenzo; Talar-Wojnarowska, Renata; Strobel, Oliver; Zambon, Carlo Federico; Neoptolemos, John P; Zerboni, Giulia; Kaaks, Rudolf; Key, Timothy J; Lombardo, Carlo; Jamroziak, Krzysztof; Gioffreda, Domenica; Hackert, Thilo; Khaw, Kay-Tee; Landi, Stefano; Milanetto, Anna Caterina; Landoni, Luca; Lawlor, Rita T; Bambi, Franco; Pirozzi, Felice; Basso, Daniela; Pasquali, Claudio; Capurso, Gabriele; Canzian, Federico

    2017-08-01

    Background: Pancreatic neuroendocrine tumors (PNETs) are rare neoplasms for which very little is known about either environmental or genetic risk factors. Only a handful of association studies have been performed so far, suggesting a small number of risk loci. Methods: To replicate the best findings, we have selected 16 SNPs suggested in previous studies to be relevant in PNET etiogenesis. We genotyped the selected SNPs (rs16944, rs1052536, rs1059293, rs1136410, rs1143634, rs2069762, rs2236302, rs2387632, rs3212961, rs3734299, rs3803258, rs4962081, rs7234941, rs7243091, rs12957119, and rs1800629) in 344 PNET sporadic cases and 2,721 controls in the context of the PANcreatic Disease ReseArch (PANDoRA) consortium. Results: After correction for multiple testing, we did not observe any statistically significant association between the SNPs and PNET risk. We also used three online bioinformatic tools (HaploReg, RegulomeDB, and GTEx) to predict a possible functional role of the SNPs, but we did not observe any clear indication. Conclusions: None of the selected SNPs were convincingly associated with PNET risk in the PANDoRA consortium. Impact: We can exclude a major role of the selected polymorphisms in PNET etiology, and this highlights the need for replication of epidemiologic findings in independent populations, especially in rare diseases such as PNETs. Cancer Epidemiol Biomarkers Prev; 26(8); 1349-51. ©2017 AACR . ©2017 American Association for Cancer Research.
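
    For readers unfamiliar with this kind of replication analysis, the sketch below shows the general shape of a single-SNP allelic case-control test with a Bonferroni adjustment for 16 tests. The allele counts are invented and the code is a generic illustration, not the PANDoRA consortium's actual analysis pipeline.

    ```python
    # Pearson chi-square test on a 2x2 table of allele counts, with Bonferroni correction.
    from math import erfc, sqrt

    def chi2_2x2(a, b, c, d):
        """Chi-square statistic for the table [[a, b], [c, d]]."""
        n = a + b + c + d
        margins = (a + b, c + d, a + c, b + d)
        if any(m == 0 for m in margins):
            raise ValueError("degenerate table")
        return n * (a * d - b * c) ** 2 / (margins[0] * margins[1] * margins[2] * margins[3])

    # cases: 150 minor / 538 major alleles; controls: 1100 minor / 4342 major (made-up counts).
    stat = chi2_2x2(150, 538, 1100, 4342)
    p_raw = erfc(sqrt(stat / 2))            # upper tail of chi-square with 1 degree of freedom
    p_adjusted = min(1.0, p_raw * 16)       # Bonferroni over the 16 SNPs examined
    print(f"chi2={stat:.2f}, p={p_raw:.3g}, Bonferroni p={p_adjusted:.3g}")
    ```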

  19. International Radical Cystectomy Consortium: A way forward

    Directory of Open Access Journals (Sweden)

    Syed Johar Raza

    2014-01-01

    Full Text Available Robot-assisted radical cystectomy (RARC) is an emerging operative alternative to open surgery for the management of invasive bladder cancer. Studies from single institutions provide limited data due to the small number of patients. In order to better understand the related outcomes, a world-wide consortium of patients undergoing RARC was established in 2006, called the International Robotic Cystectomy Consortium (IRCC). Thus far, the IRCC has reported its findings on various areas of operative interest and continues to expand its capacity to include other operative modalities and transform itself into the International Radical Cystectomy Consortium. This article summarizes the findings of the IRCC and highlights the future direction of the consortium.

  20. Extending Asia Pacific bioinformatics into new realms in the "-omics" era.

    Science.gov (United States)

    Ranganathan, Shoba; Eisenhaber, Frank; Tong, Joo Chuan; Tan, Tin Wee

    2009-12-03

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation dating back to 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 7-11, 2009 at Biopolis, Singapore. Besides bringing together scientists from the field of bioinformatics in this region, InCoB has actively engaged clinicians and researchers from the area of systems biology, to facilitate greater synergy between these two groups. InCoB2009 followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India), Hong Kong and Taipei (Taiwan), with InCoB2010 scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010. The Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and symposia on Clinical Bioinformatics (CBAS), the Singapore Symposium on Computational Biology (SYMBIO) and training tutorials were scheduled prior to the scientific meeting, and provided ample opportunity for in-depth learning and special interest meetings for educators, clinicians and students. We provide a brief overview of the peer-reviewed bioinformatics manuscripts accepted for publication in this supplement, grouped into thematic areas. In order to facilitate scientific reproducibility and accountability, we have, for the first time, introduced minimum information criteria for our publications, including compliance with a Minimum Information about a Bioinformatics Investigation (MIABi). As the regional research expertise in bioinformatics matures, we have delineated a minimum set of bioinformatics skills required for addressing the computational challenges of the "-omics" era.

  1. Microsoft Biology Initiative: .NET Bioinformatics Platform and Tools

    Science.gov (United States)

    Diaz Acosta, B.

    2011-01-01

    The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.

  2. Concepts and introduction to RNA bioinformatics

    DEFF Research Database (Denmark)

    Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.

    2014-01-01

    RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure preserving) base...... for interactions between RNA and proteins.Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analyses of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology....

  3. The OncoArray Consortium

    DEFF Research Database (Denmark)

    Amos, Christopher I; Dennis, Joe; Wang, Zhaoming

    2017-01-01

    by Illumina to facilitate efficient genotyping. The consortium developed standard approaches for selecting SNPs for study, for quality control of markers, and for ancestry analysis. The array was genotyped at selected sites and with prespecified replicate samples to permit evaluation of genotyping accuracy...... among centers and by ethnic background. RESULTS: The OncoArray consortium genotyped 447,705 samples. A total of 494,763 SNPs passed quality control steps with a sample success rate of 97% of the samples. Participating sites performed ancestry analysis using a common set of markers and a scoring...

  4. Increasing Sales by Developing Production Consortiums.

    Science.gov (United States)

    Smith, Christopher A.; Russo, Robert

    Intended to help rehabilitation facility administrators increase organizational income from manufacturing and/or contracted service sources, this document provides a decision-making model for the development of a production consortium. The document consists of five chapters and two appendices. Chapter 1 defines the consortium concept, explains…

  5. Adapting bioinformatics curricula for big data

    Science.gov (United States)

    Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469

  6. Developing library bioinformatics services in context: the Purdue University Libraries bioinformationist program.

    Science.gov (United States)

    Rein, Diane C

    2006-07-01

    Purdue University is a major agricultural, engineering, biomedical, and applied life science research institution with an increasing focus on bioinformatics research that spans multiple disciplines and campus academic units. The Purdue University Libraries (PUL) hired a molecular biosciences specialist to discover, engage, and support bioinformatics needs across the campus. After an extended period of information needs assessment and environmental scanning, the specialist developed a week of focused bioinformatics instruction (Bioinformatics Week) to launch system-wide, library-based bioinformatics services. The specialist employed a two-tiered approach to assess user information requirements and expectations. The first phase involved careful observation and collection of information needs in-context throughout the campus, attending laboratory meetings, interviewing department chairs and individual researchers, and engaging in strategic planning efforts. Based on the information gathered during the integration phase, several survey instruments were developed to facilitate more critical user assessment and the recovery of quantifiable data prior to planning. Given information gathered while working with clients and through formal needs assessments, as well as the success of instructional approaches used in Bioinformatics Week, the specialist is developing bioinformatics support services for the Purdue community. The specialist is also engaged in training PUL faculty librarians in bioinformatics to provide a sustaining culture of library-based bioinformatics support and understanding of Purdue's bioinformatics-related decision and policy making.

  7. Hickory Consortium 2001 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    2003-02-01

    As with all Building America Program consortia, systems thinking is the key to understanding the processes that Hickory Consortium hopes to improve. The Hickory Consortium applies this thinking to more than the whole-building concept. Their systems thinking embraces the meta process of how housing construction takes place in America. By understanding the larger picture, they are able to identify areas where improvements can be made and how to implement them.

  8. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    Science.gov (United States)

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  9. Concepts Of Bioinformatics And Its Application In Veterinary ...

    African Journals Online (AJOL)

    Bioinformatics has advanced the course of research and the development of future veterinary vaccines because it has provided new tools for identification of vaccine targets from sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...

  10. Tri-District Arts Consortium Summer Program.

    Science.gov (United States)

    Kirby, Charlotte O.

    1990-01-01

    The Tri-District Arts Consortium in South Carolina was formed to serve artistically gifted students in grades six-nine. The consortium developed a summer program offering music, dance, theatre, and visual arts instruction through a curriculum of intense training, performing, and hands-on experiences with faculty members and guest artists. (JDD)

  11. The National Astronomy Consortium (NAC)

    Science.gov (United States)

    Von Schill, Lyndele; Ivory, Joyce

    2017-01-01

    The National Astronomy Consortium (NAC) program is designed to increase the number of underrepresented minority students entering STEM and STEM careers by providing unique summer research experiences followed by long-term mentoring and cohort support. Hallmarks of the NAC program include: research or internship opportunities at one of the NAC partner sites, a framework to continue research over the academic year, peer and faculty mentoring, monthly virtual hangouts, and much more. NAC students also participate in two professional travel opportunities each year: the annual NAC conference at Howard University and a poster presentation at the annual AAS winter meeting following their summer internship. The NAC is a program led by the National Radio Astronomy Observatory (NRAO) and Associated Universities, Inc. (AUI), in partnership with the National Society of Black Physicists (NSBP), along with a number of minority and majority universities.

  12. BioWarehouse: a bioinformatics database warehouse toolkit.

    Science.gov (United States)

    Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D

    2006-03-23

    This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
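
    The kind of cross-database query described above can be sketched with a miniature warehouse; the table layout and data below are invented for illustration and are not BioWarehouse's actual schema.

    ```python
    # Miniature "warehouse" query: which characterized enzyme activities lack any sequence?
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE enzyme_activity (ec_number TEXT PRIMARY KEY, name TEXT);
        CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec_number TEXT, sequence TEXT);
        INSERT INTO enzyme_activity VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                                           ('9.9.9.9', 'hypothetical activity');
        INSERT INTO protein_sequence VALUES (1, '1.1.1.1', 'MSTAGKVIK');
    """)
    orphans = conn.execute("""
        SELECT ea.ec_number, ea.name
        FROM enzyme_activity ea
        LEFT JOIN protein_sequence ps ON ps.ec_number = ea.ec_number
        WHERE ps.id IS NULL
    """).fetchall()
    print(orphans)  # activities with an EC number but no sequence loaded
    ```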

  13. Bioinformatics approaches for identifying new therapeutic bioactive peptides in food

    Directory of Open Access Journals (Sweden)

    Nora Khaldi

    2012-10-01

    Full Text Available ABSTRACT: The traditional methods for mining foods for bioactive peptides are tedious and long. Similar to the drug industry, the length of time to identify and deliver a commercial health ingredient that reduces disease symptoms can be anything between 5 to 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for characteristics of certain food therapeutic ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, therefore achieving higher success rates.
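
    A minimal version of the in silico mining step described above is simply scanning a protein sequence for catalogued bioactive peptides. The sketch below uses exact substring matching with an invented two-entry catalogue and a made-up protein fragment; real mining tools use far richer scoring, but the shape of the computation is the same.

    ```python
    # Scan a protein sequence for peptides listed in a small bioactivity catalogue.
    KNOWN_BIOACTIVE = {"IPP": "ACE-inhibitory", "VPP": "ACE-inhibitory"}  # illustrative entries

    def find_bioactive_peptides(protein_seq: str, catalogue: dict) -> list:
        """Return (position, peptide, annotated activity) for every catalogued hit."""
        hits = []
        for peptide, activity in catalogue.items():
            start = protein_seq.find(peptide)
            while start != -1:
                hits.append((start, peptide, activity))
                start = protein_seq.find(peptide, start + 1)
        return sorted(hits)

    fragment = "MKVLIPPQAEPVPPSLK"  # made-up protein fragment
    print(find_bioactive_peptides(fragment, KNOWN_BIOACTIVE))
    ```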

  14. Vertical and Horizontal Integration of Bioinformatics Education: A Modular, Interdisciplinary Approach

    Science.gov (United States)

    Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.

    2009-01-01

    Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…

  15. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.

  16. Application of Bioinformatics in Chronobiology Research

    Directory of Open Access Journals (Sweden)

    Robson da Silva Lopes

    2013-01-01

    Full Text Available Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research.

  17. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the

  18. 5th HUPO BPP Bioinformatics Meeting at the European Bioinformatics Institute in Hinxton, UK--Setting the analysis frame.

    Science.gov (United States)

    Stephan, Christian; Hamacher, Michael; Blüggel, Martin; Körting, Gerhard; Chamrad, Daniel; Scheer, Christian; Marcus, Katrin; Reidegeld, Kai A; Lohaus, Christiane; Schäfer, Heike; Martens, Lennart; Jones, Philip; Müller, Michael; Auyeung, Kevin; Taylor, Chris; Binz, Pierre-Alain; Thiele, Herbert; Parkinson, David; Meyer, Helmut E; Apweiler, Rolf

    2005-09-01

    The Bioinformatics Committee of the HUPO Brain Proteome Project (HUPO BPP) meets regularly to execute the post-lab analyses of the data produced in the HUPO BPP pilot studies. On July 7, 2005 the members came together for the 5th time at the European Bioinformatics Institute (EBI) in Hinxton, UK, hosted by Rolf Apweiler. As a main result, the parameter set of the semi-automated data re-analysis of MS/MS spectra has been elaborated and the subsequent work steps have been defined.

  19. Evaluating an Inquiry-Based Bioinformatics Course Using Q Methodology

    Science.gov (United States)

    Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.

    2008-01-01

    Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…

  20. Report on the EMBER Project--A European Multimedia Bioinformatics Educational Resource

    Science.gov (United States)

    Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc

    2005-01-01

    EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…

  1. IPD-Work consortium

    DEFF Research Database (Denmark)

    Kivimäki, Mika; Singh-Manoux, Archana; Virtanen, Marianna

    2015-01-01

    Established in 2008 and comprising over 60 researchers, the IPD-Work (individual-participant data meta-analysis in working populations) consortium is a collaborative research project that uses pre-defined meta-analyses of individual-participant data from multiple cohort studies representing a range of countries. The aim of the consortium is to estimate reliably the associations of work-related psychosocial factors with chronic diseases, disability, and mortality. Our findings are highly cited by the occupational health, epidemiology, and clinical medicine research community. However, some of IPD-Work's findings have also generated disagreement, as they challenge the importance of job strain as a major target for coronary heart disease (CHD) prevention; this is reflected in the critical discussion paper by Choi et al (1). In this invited reply to Choi et al, we aim to (i) describe how IPD-Work seeks...

  2. An Overview of Bioinformatics Tools and Resources in Allergy.

    Science.gov (United States)

    Fu, Zhiyan; Lin, Jing

    2017-01-01

    The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.

  3. Recent developments in life sciences research: Role of bioinformatics

    African Journals Online (AJOL)

    Life sciences research and development has opened up new challenges and opportunities for bioinformatics. The contribution of bioinformatics advances made possible the mapping of the entire human genome and genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...

  4. Current status and future perspectives of bioinformatics in Tanzania ...

    African Journals Online (AJOL)

    The main bottleneck in advancing genomics in present times is the lack of expertise in using bioinformatics tools and approaches for data mining in raw DNA sequences generated by modern high throughput technologies such as next generation sequencing. Although bioinformatics has been making major progress and ...

  5. Buying in to bioinformatics: an introduction to commercial sequence analysis software.

    Science.gov (United States)

    Smith, David Roy

    2015-07-01

    Advancements in high-throughput nucleotide sequencing techniques have brought with them state-of-the-art bioinformatics programs and software packages. Given the importance of molecular sequence data in contemporary life science research, these software suites are becoming an essential component of many labs and classrooms, and as such are frequently designed for non-computer specialists and marketed as one-stop bioinformatics toolkits. Although beautifully designed and powerful, user-friendly bioinformatics packages can be expensive and, as more arrive on the market each year, it can be difficult for researchers, teachers and students to choose the right software for their needs, especially if they do not have a bioinformatics background. This review highlights some of the currently available and most popular commercial bioinformatics packages, discussing their prices, usability, features and suitability for teaching. Although several commercial bioinformatics programs are arguably overpriced and overhyped, many are well designed, sophisticated and, in my opinion, worth the investment. Whether you are just beginning your foray into molecular sequence analysis or are an experienced genomicist, I encourage you to explore proprietary software bundles. They have the potential to streamline your research, increase your productivity, energize your classroom and, if anything, add a bit of zest to the often dry, detached world of bioinformatics. © The Author 2014. Published by Oxford University Press.

  6. NASA space radiation transport code development consortium

    International Nuclear Information System (INIS)

    Townsend, L. W.

    2005-01-01

    Recently, NASA established a consortium involving the Univ. of Tennessee (lead institution), the Univ. of Houston, Roanoke College and various government and national laboratories, to accelerate the development of a standard set of radiation transport computer codes for NASA human exploration applications. This effort involves further improvements of the Monte Carlo codes HETC and FLUKA and the deterministic code HZETRN, including developing nuclear reaction databases necessary to extend the Monte Carlo codes to carry out heavy ion transport, and extending HZETRN to three dimensions. The improved codes will be validated by comparing predictions with measured laboratory transport data, provided by an experimental measurements consortium, and measurements in the upper atmosphere on the balloon-borne Deep Space Test Bed (DSTB). In this paper, we present an overview of the consortium members and the current status and future plans of consortium efforts to meet the research goals and objectives of this extensive undertaking. (authors)

  7. The bioleaching potential of a bacterial consortium.

    Science.gov (United States)

    Latorre, Mauricio; Cortés, María Paz; Travisany, Dante; Di Genova, Alex; Budinich, Marko; Reyes-Jara, Angélica; Hödar, Christian; González, Mauricio; Parada, Pilar; Bobadilla-Fazzini, Roberto A; Cambiazo, Verónica; Maass, Alejandro

    2016-10-01

    This work presents the molecular foundation of a consortium of five efficient bacterial strains isolated from copper mines currently used in state-of-the-art industrial-scale biotechnology. The strains Acidithiobacillus thiooxidans Licanantay, Acidiphilium multivorum Yenapatur, Leptospirillum ferriphilum Pañiwe, Acidithiobacillus ferrooxidans Wenelen and Sulfobacillus thermosulfidooxidans Cutipay were selected for genome sequencing based on metal tolerance, oxidation activity and copper bioleaching efficiency. An integrated model of metabolic pathways representing the bioleaching capability of this consortium was generated. Results revealed that greater efficiency in copper recovery may be explained by the higher functional potential of L. ferriphilum Pañiwe and At. thiooxidans Licanantay to oxidize iron and reduced inorganic sulfur compounds. The consortium had a greater capacity to resist copper, arsenic and chloride ions compared to previously described biomining strains. Specialization and particular components in these bacteria provided the consortium with a greater ability to bioleach copper sulfide ores. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Bioinformatics of cardiovascular miRNA biology.

    Science.gov (United States)

    Kunz, Meik; Xiao, Ke; Liang, Chunguang; Viereck, Janika; Pachel, Christina; Frantz, Stefan; Thum, Thomas; Dandekar, Thomas

    2015-12-01

    MicroRNAs (miRNAs) are small ~22 nucleotide non-coding RNAs and are highly conserved among species. Moreover, miRNAs regulate gene expression of a large number of genes associated with important biological functions and signaling pathways. Recently, several miRNAs have been found to be associated with cardiovascular diseases. Thus, investigating the complex regulatory effect of miRNAs may lead to a better understanding of their functional role in the heart. To achieve this, bioinformatics approaches have to be coupled with validation and screening experiments to understand the complex interactions of miRNAs with the genome. This will boost the subsequent development of diagnostic markers and our understanding of the physiological and therapeutic role of miRNAs in cardiac remodeling. In this review, we focus on and explain different bioinformatics strategies and algorithms for the identification and analysis of miRNAs and their regulatory elements to better understand cardiac miRNA biology. Starting with the biogenesis of miRNAs, we present approaches such as LocARNA and miRBase for combining sequence and structure analysis including phylogenetic comparisons as well as detailed analysis of RNA folding patterns, functional target prediction, signaling pathway as well as functional analysis. We also show how far bioinformatics helps to tackle the unprecedented level of complexity and systemic effects by miRNA, underlining the strong therapeutic potential of miRNA and miRNA target structures in cardiovascular disease. In addition, we discuss drawbacks and limitations of bioinformatics algorithms and the necessity of experimental approaches for miRNA target identification. This article is part of a Special Issue entitled 'Non-coding RNAs'. Copyright © 2014 Elsevier Ltd. All rights reserved.
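
    As a toy illustration of the functional target prediction step mentioned above, the sketch below searches a 3'UTR for reverse-complement matches to a miRNA seed (positions 2-8). The sequences are invented for illustration, and real predictors additionally weigh conservation, site context and pairing energetics; this is not the method of any specific tool from the review.

```python
# Toy sketch of canonical miRNA seed matching: find reverse-complement matches
# of the miRNA seed (positions 2-8) in a 3'UTR. Sequences are illustrative only;
# real target predictors also consider conservation, site context and pairing
# energy.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_sites(mirna, utr, seed_start=1, seed_len=7):
    """Return 0-based UTR positions whose sequence pairs with the miRNA seed."""
    seed = mirna[seed_start:seed_start + seed_len]
    # The UTR site must be the reverse complement of the seed (RNA alphabet)
    site = "".join(COMPLEMENT[base] for base in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna = "UAGCAGCACGUAAAUAUUGGCG"   # an illustrative miR-16-like sequence
utr   = "AAAUGCUGCUAUUUGCUGCUAAA"  # invented 3'UTR fragment
print(seed_sites(mirna, utr))      # expected: [3, 13]
```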

  9. An innovative approach for testing bioinformatics programs using metamorphic testing

    Directory of Open Access Journals (Sweden)

    Liu Huai

    2009-01-01

    Full Text Available Abstract Background: Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results: We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion: This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
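
    To make the idea of a metamorphic relation concrete, here is a minimal Python sketch assuming a hypothetical read-mapping function map_reads (a toy stand-in, not the API of GNLab or SeqMap): shuffling the order of the input reads should not change which reads are reported as mapped.

```python
# Minimal sketch of metamorphic testing (MT) for a hypothetical read-mapping
# function `map_reads(reads, reference)`; the function and its behaviour are
# illustrative assumptions, not the API of any real mapper.
import random

def map_reads(reads, reference):
    """Toy stand-in for a real mapper: report reads that occur in the reference."""
    return {name for name, seq in reads.items() if seq in reference}

def check_permutation_mr(reads, reference, trials=10):
    """Metamorphic relation: shuffling the input order must not change the result."""
    baseline = map_reads(reads, reference)
    for _ in range(trials):
        items = list(reads.items())
        random.shuffle(items)
        follow_up = map_reads(dict(items), reference)
        if follow_up != baseline:
            return False  # MR violated -> likely fault in the program under test
    return True

reads = {"r1": "ACGT", "r2": "TTTT", "r3": "GGGA"}
reference = "ACGTACGGGA"
print(check_permutation_mr(reads, reference))  # expected: True
```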

  10. Assessment of a Bioinformatics across Life Science Curricula Initiative

    Science.gov (United States)

    Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.

    2007-01-01

    At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…

  11. Incorporating Genomics and Bioinformatics across the Life Sciences Curriculum

    Energy Technology Data Exchange (ETDEWEB)

    Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.

    2011-08-01

    Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics

  12. Community Hospital Telehealth Consortium

    National Research Council Canada - National Science Library

    Williams, Elton

    2004-01-01

    The Community Hospital Telehealth Consortium is a unique, forward-thinking, community-based healthcare service project organized around 5 not-for-profit community hospitals located throughout Louisiana and Mississippi...

  13. Community Hospital Telehealth Consortium

    National Research Council Canada - National Science Library

    Williams, Elton

    2003-01-01

    The Community Hospital Telehealth Consortium is a unique, forward-thinking, community-based healthcare service project organized around 5 not-for-profit community hospitals located throughout Louisiana and Mississippi...

  14. Community Hospital Telehealth Consortium

    National Research Council Canada - National Science Library

    Williams, Jr, Elton L

    2007-01-01

    The Community Hospital Telehealth Consortium is a unique, forward-thinking, community-based healthcare service project organized around 5 not-for-profit community hospitals located throughout Louisiana and Mississippi...

  15. Bioinformatics-Aided Venomics

    Directory of Open Access Journals (Sweden)

    Quentin Kaas

    2015-06-01

    Full Text Available Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have helped discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotations of toxins. Recognizing toxin transcript sequences among second generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future.
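
    As a toy illustration of the cleavage-site problem mentioned above, the sketch below scans a precursor sequence for simple dibasic K/R-R motifs. The precursor sequence is invented, and real proprotein convertase site prediction is considerably more involved than this pattern match.

```python
# Toy sketch: scan a toxin precursor for candidate dibasic cleavage motifs
# (K or R followed by R), a simplified stand-in for proprotein convertase site
# prediction; the precursor sequence below is made up for illustration.
import re

def candidate_cleavage_sites(precursor):
    """Return (1-based position, motif) pairs for simple dibasic K/R-R motifs."""
    return [(m.start() + 1, m.group()) for m in re.finditer(r"[KR]R", precursor)]

precursor = "MKTLLLTLVVVTIVCLDLGYTRRCCHPACGKNYSC"
print(candidate_cleavage_sites(precursor))  # expected: [(22, 'RR')]
```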

  16. Appalachian clean coal technology consortium

    International Nuclear Information System (INIS)

    Kutz, K.; Yoon, Roe-Hoan

    1995-01-01

    The Appalachian Clean Coal Technology Consortium (ACCTC) has been established to help U.S. coal producers, particularly those in the Appalachian region, increase the production of lower-sulfur coal. The cooperative research conducted as part of the consortium activities will help utilities meet the emissions standards established by the 1990 Clean Air Act Amendments, enhance the competitiveness of U.S. coals in the world market, create jobs in economically-depressed coal producing regions, and reduce U.S. dependence on foreign energy supplies. The research activities will be conducted in cooperation with coal companies, equipment manufacturers, and A&E firms working in the Appalachian coal fields. This approach is consistent with President Clinton's initiative in establishing Regional Technology Alliances to meet regional needs through technology development in cooperation with industry. The consortium activities are complementary to the High-Efficiency Preparation program of the Pittsburgh Energy Technology Center, but are broader in scope as they are inclusive of technology developments for both near-term and long-term applications, technology transfer, and training a highly-skilled work force.

  17. Appalachian clean coal technology consortium

    Energy Technology Data Exchange (ETDEWEB)

    Kutz, K.; Yoon, Roe-Hoan [Virginia Polytechnic Institute and State Univ., Blacksburg, VA (United States)

    1995-11-01

    The Appalachian Clean Coal Technology Consortium (ACCTC) has been established to help U.S. coal producers, particularly those in the Appalachian region, increase the production of lower-sulfur coal. The cooperative research conducted as part of the consortium activities will help utilities meet the emissions standards established by the 1990 Clean Air Act Amendments, enhance the competitiveness of U.S. coals in the world market, create jobs in economically-depressed coal producing regions, and reduce U.S. dependence on foreign energy supplies. The research activities will be conducted in cooperation with coal companies, equipment manufacturers, and A&E firms working in the Appalachian coal fields. This approach is consistent with President Clinton's initiative in establishing Regional Technology Alliances to meet regional needs through technology development in cooperation with industry. The consortium activities are complementary to the High-Efficiency Preparation program of the Pittsburgh Energy Technology Center, but are broader in scope as they are inclusive of technology developments for both near-term and long-term applications, technology transfer, and training a highly-skilled work force.

  18. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  19. Comprehensive decision tree models in bioinformatics.

    Directory of Open Access Journals (Sweden)

    Gregor Stiglic

    Full Text Available PURPOSE: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. METHODS: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model that is constrained exclusively by the dimensions of the produced decision tree. RESULTS: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. CONCLUSIONS: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets

  20. Comprehensive decision tree models in bioinformatics.

    Science.gov (United States)

    Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter

    2012-01-01

    Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model that is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy in less complex visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that the tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly
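
    A minimal sketch of the underlying idea, constraining a decision tree only by its dimensions rather than by a performance measure, using scikit-learn; the dataset and the concrete depth and leaf limits are arbitrary illustrative choices, not those used in the study.

```python
# Minimal sketch: constrain a decision tree only by its dimensions (depth and
# number of leaves), in the spirit of the visually tuned trees described above.
# The dataset and the concrete limits are arbitrary illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

default_tree = DecisionTreeClassifier(random_state=0)
compact_tree = DecisionTreeClassifier(max_depth=3, max_leaf_nodes=8, random_state=0)

for name, tree in [("default", default_tree), ("size-constrained", compact_tree)]:
    scores = cross_val_score(tree, X, y, cv=5)   # accuracy is reported, not tuned on
    tree.fit(X, y)
    print(f"{name}: depth={tree.get_depth()}, leaves={tree.get_n_leaves()}, "
          f"mean CV accuracy={scores.mean():.3f}")
```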

  1. When process mining meets bioinformatics

    NARCIS (Netherlands)

    Jagadeesh Chandra Bose, R.P.; Aalst, van der W.M.P.; Nurcan, S.

    2011-01-01

    Process mining techniques can be used to extract non-trivial process related knowledge and thus generate interesting insights from event logs. Similarly, bioinformatics aims at increasing the understanding of biological processes through the analysis of information associated with biological

  2. A Staff Education Consortium: One Model for Collaboration.

    Science.gov (United States)

    Stetler, Cheryl Beth; And Others

    1983-01-01

    Discusses the development, organization, activities, problems, and future of a staff education consortium of five medical center hospitals in Boston. The purposes of the consortium are mutual sharing, reduction in duplication, and cost containment of educational programing. (JOW)

  3. 9th International Conference on Practical Applications of Computational Biology and Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan

    2015-01-01

    This proceedings volume presents recent practical applications of Computational Biology and Bioinformatics. It contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis o...

  4. Bioconductor: open software development for computational biology and bioinformatics

    DEFF Research Database (Denmark)

    Gentleman, R.C.; Carey, V.J.; Bates, D.M.

    2004-01-01

    The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.

  5. Peer Mentoring for Bioinformatics presentation

    OpenAIRE

    Budd, Aidan

    2014-01-01

    A handout used in a HUB (Heidelberg Unseminars in Bioinformatics) meeting focused on career development for bioinformaticians. It describes an activity used to help introduce the idea of peer mentoring, potentially acting as an opportunity to create peer-mentoring groups.

  6. PubData: search engine for bioinformatics databases worldwide

    OpenAIRE

    Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan

    2016-01-01

    We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...

  7. Corn in consortium with forages

    Directory of Open Access Journals (Sweden)

    Cássia Maria de Paula Garcia

    2013-12-01

    Full Text Available The basic premises for sustainable agricultural development with a focus on rural producers are reducing production costs and adding value through use of the crop-livestock system (CLS) throughout the year. The CLS is based on the consortium of grain crops, especially corn, with tropical forages, mainly of the genera Panicum and Urochloa. The study aimed to evaluate the grain yield of an irrigated corn crop intercropped with forages of the genera Panicum and Urochloa. The experiment was conducted at the Fazenda de Ensino, Pesquisa e Extensão (FEPE) of the Faculdade de Engenharia - UNESP, Ilha Solteira, in an Oxisol under savannah conditions in the autumn-winter of 2009. The experimental area was irrigated by a center pivot and had a history of no-tillage for 8 years. The corn hybrid used was the single-cross DKB 390 YG at a row spacing of 0.90 m. The grass seeds were sown at 0.34 m spacing at a rate of 5 kg ha-1; they were mixed with fertilizer minutes before sowing, placed in the fertilizer compartment of the seeder, and mechanically deposited in the soil at a depth of 0.03 m. The experimental design was a randomized block with four replications and five treatments: Panicum maximum cv. Tanzania sown during the nitrogen fertilization of the corn (CTD); Panicum maximum cv. Mombaça sown during the nitrogen fertilization of the corn (CMD); Urochloa brizantha cv. Xaraés sown during the nitrogen fertilization of the corn (CBD); Urochloa ruziziensis cv. Comum sown during the nitrogen fertilization of the corn (CRD); and single corn (control). The production components of corn: plant population per hectare (PlPo), number of ears per hectare (NE ha-1), number of rows per ear (NRE), number of kernels per row on the cob (NKR), number of grains per ear (NGE) and mass of 100 grains (M100G) were not influenced by the consortium with forage. Comparing grain yield (GY) of single corn and maize intercropped with forage of the genus Panicum

  8. Bioinformatics and its application in animal health: a review | Soetan ...

    African Journals Online (AJOL)

    Bioinformatics is an interdisciplinary subject, which uses computer application, statistics, mathematics and engineering for the analysis and management of biological information. It has become an important tool for basic and applied research in veterinary sciences. Bioinformatics has brought about advancements into ...

  9. Assessment of Data Reliability of Wireless Sensor Network for Bioinformatics

    Directory of Open Access Journals (Sweden)

    Ting Dong

    2017-09-01

    Full Text Available As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information thus acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSNs) to build biosensors and other tools, and applied them to such areas as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is the prerequisite to effective utilization of information. It is greatly influenced by factors such as quality, benefits, service, timeliness and stability; some of these factors are qualitative and some are quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. As a result, this paper introduces the 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics that involve multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of different influencing factors, which can be considered as attributes in the assessment of the WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
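
    A minimal sketch of 2-tuple linguistic aggregation in the spirit described above, assuming an invented five-label scale, invented expert ratings for the factors (quality, benefits, service, timeliness, stability) and invented attribute weights; the paper's actual weighting method is not reproduced here.

```python
# Minimal sketch of 2-tuple linguistic aggregation for rating a WSN on several
# factors; the label set, expert ratings and attribute weights are invented for
# illustration only and do not reproduce the paper's weighting method.
LABELS = ["very poor", "poor", "medium", "good", "very good"]  # indices 0..4

def to_two_tuple(beta):
    """Convert a numeric value beta into a (label, symbolic translation) pair."""
    index = int(round(beta))
    return LABELS[index], round(beta - index, 3)

def weighted_two_tuple(ratings, weights):
    """Aggregate label indices with normalized weights and return a 2-tuple."""
    total = sum(weights)
    beta = sum(r * w for r, w in zip(ratings, weights)) / total
    return to_two_tuple(beta)

# Ratings of one WSN on quality, benefits, service, timeliness, stability
ratings = [3, 4, 2, 3, 4]          # indices into LABELS
weights = [0.30, 0.25, 0.15, 0.15, 0.15]
print(weighted_two_tuple(ratings, weights))  # expected: ('good', 0.25)
```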

  10. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  11. Bioinformatics of genomic association mapping

    NARCIS (Netherlands)

    Vaez Barzani, Ahmad

    2015-01-01

    In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping

  12. Bioinformatic tools for PCR Primer design

    African Journals Online (AJOL)

    ES

    Bioinformatics is an emerging scientific discipline that uses information ... complex biological questions. ... and computer programs for various purposes of primer ..... polymerase chain reaction: Human Immunodeficiency Virus 1 model studies.

  13. PayDIBI: Pay-as-you-go data integration for bioinformatics

    NARCIS (Netherlands)

    Wanders, B.

    2012-01-01

    Background: Scientific research in bio-informatics is often data-driven and supported by biological databases. In a growing number of research projects, researchers like to ask questions that require the combination of information from more than one database. Most bio-informatics papers do not

  14. Kansas Wind Energy Consortium

    Energy Technology Data Exchange (ETDEWEB)

    Gruenbacher, Don [Kansas State Univ., Manhattan, KS (United States)

    2015-12-31

    This project addresses both fundamental and applied research problems that will help with problems defined by the DOE “20% Wind by 2030 Report”. In particular, this work focuses on increasing the capacity of small or community wind generation capabilities that would be operated in a distributed generation approach. A consortium (KWEC – Kansas Wind Energy Consortium) of researchers from Kansas State University and Wichita State University aims to dramatically increase the penetration of wind energy via distributed wind power generation. We believe distributed generation through wind power will play a critical role in the ability to reach and extend the renewable energy production targets set by the Department of Energy. KWEC aims to find technical and economic solutions to enable widespread implementation of distributed renewable energy resources that would apply to wind.

  15. Generative Topic Modeling in Image Data Mining and Bioinformatics Studies

    Science.gov (United States)

    Chen, Xin

    2012-01-01

    Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval and computer vision and bioinformatics domain. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…

  16. Taking Bioinformatics to Systems Medicine

    NARCIS (Netherlands)

    van Kampen, Antoine H. C.; Moerland, Perry D.

    2016-01-01

    Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically

  17. Metagenomics and Bioinformatics in Microbial Ecology: Current Status and Beyond.

    Science.gov (United States)

    Hiraoka, Satoshi; Yang, Ching-Chia; Iwasaki, Wataru

    2016-09-29

    Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives.
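
    As a toy illustration of the straightforward similarity searches mentioned above, the sketch below ranks invented reference sequences against a metagenomic read by k-mer Jaccard similarity; real pipelines use dedicated alignment or profile-based tools rather than this simplification.

```python
# Toy sketch: k-mer Jaccard similarity between a metagenomic read and reference
# sequences, a highly simplified stand-in for the similarity searches against
# reference databases mentioned above; all sequences are invented.
def kmers(seq, k=4):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=4):
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb) if (ka or kb) else 0.0

read = "ACGTACGTGGTT"
references = {
    "taxonA": "ACGTACGTGGTTACGT",
    "taxonB": "TTTTCCCCGGGGAAAA",
}
best = max(references, key=lambda name: jaccard(read, references[name]))
print(best, round(jaccard(read, references[best]), 3))  # expected: taxonA 0.8
```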

  18. Bioinformatics Education in Pathology Training: Current Scope and Future Direction

    Directory of Open Access Journals (Sweden)

    Michael R Clay

    2017-04-01

    Full Text Available Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to incorporate, institutional support, educational resources, and adequate faculty expertise.

  19. Best practices in bioinformatics training for life scientists

    DEFF Research Database (Denmark)

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik

    2013-01-01

    …to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes…

  20. Teaching Bioinformatics and Neuroinformatics by Using Free Web-Based Tools

    Science.gov (United States)

    Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson

    2010-01-01

    This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…

  1. Bioinformatics and the Undergraduate Curriculum

    Science.gov (United States)

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  2. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  3. Migrating from Informal to Formal Consortium — COSTLI Issues

    Science.gov (United States)

    Birdie, C.; Patil, Y. M.

    2010-10-01

    There are many models of library consortia which have come into existence due to various reasons and compulsions. FORSA (Forum for Resource Sharing in Astronomy) is an informal consortium born from the links between academic institutions specializing in astronomy in India. FORSA is a cooperative venture initiated by library professionals. Though this consortium was formed mainly for inter-lending activities and bibliographic access, it has matured over the years to adopt the consortium approach on cooperative acquisitions, due to increased requirements.

  4. Bioinformatics meets user-centred design: a perspective.

    Directory of Open Access Journals (Sweden)

    Katrina Pavelin

    Full Text Available Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at EMBL-European Bioinformatics Institute (EMBL-EBI, and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.

  5. Bioinformatics on the Cloud Computing Platform Azure

    Science.gov (United States)

    Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.

    2014-01-01

    We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811

  6. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITY: Evaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  7. BioShaDock: a community driven bioinformatics shared Docker-based tools registry.

    Science.gov (United States)

    Moreews, François; Sallou, Olivier; Ménager, Hervé; Le Bras, Yvan; Monjeaud, Cyril; Blanchet, Christophe; Collin, Olivier

    2015-01-01

    Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, that enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the registry to create a new description in the Elixir registry, based on the BioShaDock entry metadata. This link will help users get more information on the tool such as its EDAM operations, input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.
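
    As an illustration of how such Docker images are typically consumed programmatically, the sketch below pulls and runs a containerized tool with the Docker SDK for Python; the registry host, image name and command are hypothetical placeholders, not actual BioShaDock entries.

```python
# Sketch of pulling and running a containerized tool with the Docker SDK for
# Python; the registry host, image name and command below are hypothetical
# placeholders, not actual BioShaDock entries.
import docker

client = docker.from_env()

# Pull an image from a (hypothetical) domain-specific registry
client.images.pull("registry.example.org/toolbox/seqtool", tag="1.0")

# Run it on a mounted data directory and capture the tool's standard output
output = client.containers.run(
    "registry.example.org/toolbox/seqtool:1.0",
    command="seqtool --version",
    volumes={"/data/project1": {"bind": "/data", "mode": "ro"}},
    remove=True,
)
print(output.decode())
```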

  8. Northern New Jersey Nursing Education Consortium: a partnership for graduate nursing education.

    Science.gov (United States)

    Quinless, F W; Levin, R F

    1998-01-01

    The purpose of this article is to describe the evolution and implementation of the Northern New Jersey Nursing Education Consortium, a consortium of seven member institutions established in 1992. Details regarding the specific functions of the consortium relative to cross-registration of students in graduate courses, financial disbursement of revenue, faculty development activities, student services, library privileges, and institutional research review board mechanisms are described. The authors also review the administrative organizational structure through which the work conducted by the consortium occurs. Both the advantages and disadvantages of such a graduate consortium are explored, and specific examples of recent potential and real conflicts are fully discussed. The authors detail the governance and structure of the consortium as a potential model for replication in other environments.

  9. Fundamentals of bioinformatics and computational biology methods and exercises in matlab

    CERN Document Server

    Singh, Gautam B

    2015-01-01

    This book offers comprehensive coverage of all the core topics of bioinformatics, and includes practical examples completed using the MATLAB bioinformatics toolbox™. It is primarily intended as a textbook for engineering and computer science students attending advanced undergraduate and graduate courses in bioinformatics and computational biology. The book develops bioinformatics concepts from the ground up, starting with an introductory chapter on molecular biology and genetics. This chapter will enable physical science students to fully understand and appreciate the ultimate goals of applying the principles of information technology to challenges in biological data management, sequence analysis, and systems biology. The first part of the book also includes a survey of existing biological databases, tools that have become essential in today’s biotechnology research. The second part of the book covers methodologies for retrieving biological information, including fundamental algorithms for sequence compar...

  10. Bioinformatics Methods for Interpreting Toxicogenomics Data: The Role of Text-Mining

    NARCIS (Netherlands)

    Hettne, K.M.; Kleinjans, J.; Stierum, R.H.; Boorsma, A.; Kors, J.A.

    2014-01-01

    This chapter concerns the application of bioinformatics methods to the analysis of toxicogenomics data. The chapter starts with an introduction covering how bioinformatics has been applied in toxicogenomics data analysis, and continues with a description of the foundations of a specific

  11. Influenza research database: an integrated bioinformatics resource for influenza virus research

    Science.gov (United States)

    The Influenza Research Database (IRD) is a U.S. National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Bioinformatics Resource Center dedicated to providing bioinformatics support for influenza virus research. IRD facilitates the research and development of vaccines, diagnostics, an...

  12. BioStar: an online question & answer resource for the bioinformatics community

    Science.gov (United States)

    Although the era of big data has produced many bioinformatics tools and databases, using them effectively often requires specialized knowledge. Many groups lack bioinformatics expertise, and frequently find that software documentation is inadequate and local colleagues may be overburdened or unfamil...

  13. A Survey of Scholarly Literature Describing the Field of Bioinformatics Education and Bioinformatics Educational Research

    Science.gov (United States)

    Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari

    2014-01-01

    Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…

  14. Systems Bioinformatics: increasing precision of computational diagnostics and therapeutics through network-based approaches.

    Science.gov (United States)

    Oulas, Anastasis; Minadakis, George; Zachariou, Margarita; Sokratous, Kleitos; Bourdakou, Marilena M; Spyrou, George M

    2017-11-27

    Systems Bioinformatics is a relatively new approach, which lies in the intersection of systems biology and classical bioinformatics. It focuses on integrating information across different levels using a bottom-up approach as in systems biology with a data-driven top-down approach as in bioinformatics. The advent of omics technologies has provided the stepping-stone for the emergence of Systems Bioinformatics. These technologies provide a spectrum of information ranging from genomics, transcriptomics and proteomics to epigenomics, pharmacogenomics, metagenomics and metabolomics. Systems Bioinformatics is the framework in which systems approaches are applied to such data, setting the level of resolution as well as the boundary of the system of interest and studying the emerging properties of the system as a whole rather than the sum of the properties derived from the system's individual components. A key approach in Systems Bioinformatics is the construction of multiple networks representing each level of the omics spectrum and their integration in a layered network that exchanges information within and between layers. Here, we provide evidence on how Systems Bioinformatics enhances computational therapeutics and diagnostics, hence paving the way to precision medicine. The aim of this review is to familiarize the reader with the emerging field of Systems Bioinformatics and to provide a comprehensive overview of its current state-of-the-art methods and technologies. Moreover, we provide examples of success stories and case studies that utilize such methods and tools to significantly advance research in the fields of systems biology and systems medicine. © The Author 2017. Published by Oxford University Press.
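
    A minimal sketch of the layered-network idea described above, with an invented gene co-expression layer, an invented protein interaction layer and inter-layer edges, using the networkx library; real multi-omics integration involves far richer data and methods than this illustration.

```python
# Minimal sketch of a layered network: a gene co-expression layer and a
# protein-protein interaction layer joined by inter-layer edges, in the spirit
# of the multi-layer integration described above. Nodes and edges are invented.
import networkx as nx

layered = nx.Graph()

# Transcriptomics layer (gene co-expression)
layered.add_edges_from([("geneA", "geneB"), ("geneB", "geneC")], layer="transcriptome")
# Proteomics layer (protein-protein interactions)
layered.add_edges_from([("protA", "protB"), ("protB", "protD")], layer="proteome")
# Inter-layer edges: a gene is linked to the protein it encodes
layered.add_edges_from([("geneA", "protA"), ("geneB", "protB")], layer="inter")

# A simple whole-system property: which node bridges the layers the most?
centrality = nx.betweenness_centrality(layered)
print(max(centrality, key=centrality.get))
```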

  15. The growing need for microservices in bioinformatics

    Directory of Open Access Journals (Sweden)

    Christopher L Williams

    2016-01-01

    Full Text Available Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework that can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework

  16. The growing need for microservices in bioinformatics.

    Science.gov (United States)

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and

  17. The growing need for microservices in bioinformatics

    Science.gov (United States)

    Williams, Christopher L.; Sica, Jeffrey C.; Killen, Robert T.; Balis, Ulysses G. J.

    2016-01-01

    Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective
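
    To make the architectural pattern concrete, here is a minimal sketch of a single-purpose bioinformatics microservice, assuming Flask as the web framework (the authors do not prescribe a specific stack): one narrowly scoped endpoint that computes GC content for a submitted sequence.

    ```python
    # Minimal sketch of a single-purpose bioinformatics microservice, assuming Flask.
    # The service does one narrowly scoped job: compute GC content for a nucleotide
    # sequence posted as JSON. Endpoint name and port are illustrative choices.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/gc-content", methods=["POST"])
    def gc_content():
        data = request.get_json(force=True) or {}
        seq = str(data.get("sequence", "")).upper()
        if not seq or any(base not in "ACGTN" for base in seq):
            return jsonify(error="expected a nucleotide sequence (A/C/G/T/N)"), 400
        gc = (seq.count("G") + seq.count("C")) / len(seq)
        return jsonify(sequence_length=len(seq), gc_content=round(gc, 4))

    if __name__ == "__main__":
        # Each microservice runs independently and can be replaced or scaled on its own.
        app.run(port=5000)
    ```

    Such a service could be exercised with, for example, curl -X POST http://localhost:5000/gc-content -H "Content-Type: application/json" -d '{"sequence": "ACGTGC"}', and swapped out or scaled without touching the rest of the pipeline.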

  18. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis.

    Science.gov (United States)

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

    Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.

  19. Atlantic Coast Environmental Indicators Consortium

    Data.gov (United States)

    Federal Laboratory Consortium — In 2000, the US EPA granted authority to establish up to five Estuarine Indicator Research Programs. These Programs were designed to identify, evaluate, recommend and...

  20. A Portable Bioinformatics Course for Upper-Division Undergraduate Curriculum in Sciences

    Science.gov (United States)

    Floraino, Wely B.

    2008-01-01

    This article discusses the challenges that bioinformatics education is facing and describes a bioinformatics course that is successfully taught at the California State Polytechnic University, Pomona, to fourth-year undergraduate students in biological sciences, chemistry, and computer science. Information on lecture and computer practice…

  1. Computer Programming and Biomolecular Structure Studies: A Step beyond Internet Bioinformatics

    Science.gov (United States)

    Likic, Vladimir A.

    2006-01-01

    This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…

  2. The nation's first consortium to address waste management issues

    International Nuclear Information System (INIS)

    Mikel, C.J.

    1991-01-01

    On July 26, 1989, the secretary of the Department of Energy (DOE), Admiral James Watkins, announced approval of the Waste-Management Education and Research Consortium (WERC). The consortium is composed of New Mexico State University (NMSU), the University of New Mexico, the New Mexico Institute of Mining and Technology, Los Alamos National Laboratory, and Sandia National Laboratories. This pilot program is expected to form a model for other regional and national programs. The WERC mission is to expand the national capability to address issues associated with the management of hazardous, radioactive, and solid waste. Research, technology transfer, and education/training are the three areas that have been identified to accomplish the objectives set by the consortium. The members of the consortium will reach out to the DOE facilities, other government agencies and facilities, and private institutions across the country. Their goal is to provide resources for solutions to waste management problems

  3. Chapter 16: text mining for translational bioinformatics.

    Science.gov (United States)

    Cohen, K Bretonnel; Hunter, Lawrence E

    2013-04-01

    Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall into both T1 translational research (translating basic science results into new interventions) and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
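
    As an illustration of the rule-based (knowledge-based) approach mentioned above, the following sketch performs dictionary and pattern matching over a made-up sentence; the gene lexicon, variant pattern and text are invented for the example and are not taken from the chapter.

    ```python
    # Illustrative sketch of rule-based biomedical text mining: a small dictionary
    # of gene symbols plus a regular expression for variant-style mentions.
    # The lexicon and the sentence are made up for the example.
    import re

    gene_lexicon = {"BRCA1", "TP53", "EGFR"}             # hypothetical dictionary
    variant_pattern = re.compile(r"\b[A-Z]\d+[A-Z]\b")   # e.g. protein changes like L858R

    sentence = "Patients with an EGFR L858R mutation responded, unlike TP53-mutant cases."

    tokens = re.findall(r"[A-Za-z0-9]+", sentence)
    genes_found = [t for t in tokens if t in gene_lexicon]
    variants_found = variant_pattern.findall(sentence)

    print("genes:", genes_found)        # ['EGFR', 'TP53']
    print("variants:", variants_found)  # ['L858R']
    ```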

  4. A decade of Web Server updates at the Bioinformatics Links Directory: 2003-2012.

    Science.gov (United States)

    Brazas, Michelle D; Yim, David; Yeung, Winston; Ouellette, B F Francis

    2012-07-01

    The 2012 Bioinformatics Links Directory update marks the 10th special Web Server issue from Nucleic Acids Research. Beginning with content from their 2003 publication, the Bioinformatics Links Directory in collaboration with Nucleic Acids Research has compiled and published a comprehensive list of freely accessible, online tools, databases and resource materials for the bioinformatics and life science research communities. The past decade has exhibited significant growth and change in the types of tools, databases and resources being put forth, reflecting both technology changes and the nature of research over that time. With the addition of 90 web server tools and 12 updates from the July 2012 Web Server issue of Nucleic Acids Research, the Bioinformatics Links Directory at http://bioinformatics.ca/links_directory/ now contains an impressive 134 resources, 455 databases and 1205 web server tools, mirroring the continued activity and efforts of our field.

  5. Applying Instructional Design Theories to Bioinformatics Education in Microarray Analysis and Primer Design Workshops

    Science.gov (United States)

    Shachak, Aviv; Ophir, Ron; Rubin, Eitan

    2005-01-01

    The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…

  6. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools, using an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications in HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with Bioinformatics open web services can be accessed from virtually any programming language through web services, or using standard Java clients. The back-end can run in HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
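
    The sketch below illustrates the general submit-and-poll pattern that such front-end job services expose; the base URL, endpoint paths and JSON fields are hypothetical placeholders and are not the actual BOWS interface.

    ```python
    # Generic submit-and-poll pattern for a job-dispatching web service of the kind
    # BOWS describes. The base URL, endpoints and JSON fields below are hypothetical
    # placeholders, NOT the real BOWS API.
    import time
    import requests

    BASE = "http://example.org/bows-like-service"   # placeholder

    def run_remote_job(tool, params):
        # Submit a job, then poll its status until it finishes or fails.
        job = requests.post(f"{BASE}/jobs", json={"tool": tool, "params": params}).json()
        job_id = job["id"]
        while True:
            status = requests.get(f"{BASE}/jobs/{job_id}").json()
            if status["state"] in ("finished", "failed"):
                return status
            time.sleep(10)   # HPC jobs may sit in a queue for a while

    # Example (hypothetical tool name and parameters):
    # result = run_remote_job("blastp", {"query": "MKV...", "db": "nr"})
    ```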

  7. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    Science.gov (United States)

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  8. BioSmalltalk: a pure object system and library for bioinformatics.

    Science.gov (United States)

    Morales, Hernán F; Giovambattista, Guillermo

    2013-09-15

    We have developed BioSmalltalk, a new environment system for pure object-oriented bioinformatics programming. Adaptive end-user programming systems tend to become more important for discovering biological knowledge, as is demonstrated by the emergence of open-source programming toolkits for bioinformatics in the past years. Our software is intended to bridge the gap between bioscientists and rapid software prototyping while preserving the possibility of scaling to whole-system biology applications. BioSmalltalk performs better in terms of execution time and memory usage than Biopython and BioPerl for some classical situations. BioSmalltalk is cross-platform and freely available (MIT license) through the Google Project Hosting at http://code.google.com/p/biosmalltalk hernan.morales@gmail.com Supplementary data are available at Bioinformatics online.

  9. Bioclipse: an open source workbench for chemo- and bioinformatics

    Directory of Open Access Journals (Sweden)

    Wagener Johannes

    2007-02-01

    Full Text Available Abstract Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.

  10. NCI Pediatric Preclinical Testing Consortium

    Science.gov (United States)

    NCI has awarded grants to five research teams to participate in its Pediatric Preclinical Testing Consortium, which is intended to help to prioritize which agents to pursue in pediatric clinical trials.

  11. Renewable Generators' Consortium: ensuring a market for green electricity

    International Nuclear Information System (INIS)

    1999-03-01

    This project summary focuses on the objectives and key achievements of the Renewable Generators Consortium (RGC) which was established to help renewable energy projects under the Non-Fossil Fuel Obligation (NFFO) to continue to generate in the open liberated post-1998 electricity market. The background to the NFFO is traced, and the development of the Consortium, and the attitudes of generators and suppliers to the Consortium are discussed along with the advantages of collective negotiations through the RGC, the Heads of Terms negotiations, and the success of RGC which has demonstrated the demand for green electricity

  12. Computational Astrophysics Consortium 3 - Supernovae, Gamma-Ray Bursts and Nucleosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Woosley, Stan [Univ. of California, Santa Cruz, CA (United States)

    2014-08-29

    Final project report for UCSC's participation in the Computational Astrophysics Consortium - Supernovae, Gamma-Ray Bursts and Nucleosynthesis. As an appendix, the report of the entire Consortium is also appended.

  13. Cultivation of algae consortium in a dairy farm wastewater for biodiesel production

    Directory of Open Access Journals (Sweden)

    S. Hena

    2015-06-01

    Full Text Available Dairy farm wastewaters are potential resources for the production of microalgae biofuels. A study was conducted to evaluate the capability of producing biodiesel from a consortium of native microalgae cultured in dairy farm treated wastewater. Native algal strains were isolated from the dairy farm wastewater collection tank (untreated wastewater) as well as from the holding tank (treated wastewater). The consortium members were selected on the basis of fluorescence response after treating with Nile red reagent. Preliminary studies of two commercial strains and a consortium of ten native strains of algae showed good growth in the wastewaters. The consortium of native strains was found capable of removing more than 98% of nutrients from the treated wastewater. The biomass production and lipid content of the consortium cultivated in treated wastewater were 153.54 t ha−1 year−1 and 16.89%, respectively. 72.70% of the algal lipid obtained from the consortium could be converted into biodiesel.

  14. The Genomic Standards Consortium

    DEFF Research Database (Denmark)

    Field, Dawn; Amaral-Zettler, Linda; Cochrane, Guy

    2011-01-01

    Standards Consortium (GSC), an open-membership organization that drives community-based standardization activities, Here we provide a short history of the GSC, provide an overview of its range of current activities, and make a call for the scientific community to join forces to improve the quality...

  15. Inland valley research in sub-Saharan Africa; priorities for a regional consortium

    NARCIS (Netherlands)

    Jamin, J.Y.; Andriesse, W.; Thiombiano, L.; Windmeijer, P.N.

    1996-01-01

    These proceedings are an account of an international workshop in support of research strategy development for the Inland Valley Consortium in sub-Saharan Africa. This consortium aims at concerted research planning for rice-based cropping systems in the lower parts of inland valleys. The Consortium

  16. Bioinformatics in New Generation Flavivirus Vaccines

    Directory of Open Access Journals (Sweden)

    Penelope Koraka

    2010-01-01

    Full Text Available Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed.

  17. Bioinformatics programs are 31-fold over-represented among the highest impact scientific papers of the past two decades.

    Science.gov (United States)

    Wren, Jonathan D

    2016-09-01

    To analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts in the top 20 most cited papers annually for the past two decades. When defining bioinformatics papers as encompassing both those that provide software for data analysis and those that describe the methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, which is approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet, the average 5-year JIF of the top 20 bioinformatics papers was 7.7, whereas the average JIF for the top 20 non-bioinformatics papers was 25.8, significantly higher. Bioinformatics journals also tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet there are fewer with intermediate success. jdwren@gmail.com Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Science.gov (United States)

    Fristensky, Brian

    2007-01-01

    Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351

  19. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Directory of Open Access Journals (Sweden)

    Fristensky Brian

    2007-02-01

    Full Text Available Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  20. mORCA: sailing bioinformatics world with mobile devices.

    Science.gov (United States)

    Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo

    2018-03-01

    Nearly 10 years have passed since the first mobile apps appeared. Given the fact that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transition from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. Such a client is able to browse, select, automatically compose interface parameters, invoke services and monitor the execution of Web Services using the service's metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in the App Store by Apple and the Play Store by Google. The software will be available for at least 2 years. ortrelles@uma.es. Source code, final web-app, training material and documentation is available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.

  1. p3d--Python module for structural bioinformatics.

    Science.gov (United States)

    Fufezan, Christian; Specht, Michael

    2009-08-21

    High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this, the Python scripting language is the optimal choice, since its philosophy is to write understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, b) set theory and c) functions that allow a and b to be combined and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
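
    The following is not p3d itself but a sketch of the same idea, fast spatial neighbour queries over a protein structure, using Biopython for PDB parsing and a SciPy k-d tree as the spatial index; the PDB file name is a placeholder.

    ```python
    # Not p3d itself: a sketch of fast spatial queries over a protein structure,
    # using Biopython to parse a PDB file and a SciPy k-d tree (analogous in role
    # to p3d's BSP tree) for neighbour lookups. "1abc.pdb" is a placeholder file.
    import numpy as np
    from Bio.PDB import PDBParser
    from scipy.spatial import cKDTree

    structure = PDBParser(QUIET=True).get_structure("example", "1abc.pdb")
    atoms = list(structure.get_atoms())
    coords = np.array([atom.coord for atom in atoms])

    tree = cKDTree(coords)                        # spatial index over all atom coordinates
    query_point = coords[0]                       # e.g. the first atom in the file
    neighbour_idx = tree.query_ball_point(query_point, r=5.0)   # atoms within 5 Å

    for i in neighbour_idx:
        residue = atoms[i].get_parent()
        print(atoms[i].get_name(), residue.get_resname(), residue.id[1])
    ```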

  2. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra

    2013-06-25

    The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.

  3. A global perspective on evolving bioinformatics and data science training needs.

    Science.gov (United States)

    Attwood, Teresa K; Blackford, Sarah; Brazas, Michelle D; Davies, Angela; Schneider, Maria Victoria

    2017-08-29

    Bioinformatics is now intrinsic to life science research, but the past decade has witnessed a continuing deficiency in this essential expertise. Basic data stewardship is still taught relatively rarely in life science education programmes, creating a chasm between theory and practice, and fuelling demand for bioinformatics training across all educational levels and career roles. Concerned by this, surveys have been conducted in recent years to monitor bioinformatics and computational training needs worldwide. This article briefly reviews the principal findings of a number of these studies. We see that there is still a strong appetite for short courses to improve expertise and confidence in data analysis and interpretation; strikingly, however, the most urgent appeal is for bioinformatics to be woven into the fabric of life science degree programmes. Satisfying the relentless training needs of current and future generations of life scientists will require a concerted response from stakeholders across the globe, who need to deliver sustainable solutions capable of both transforming education curricula and cultivating a new cadre of trainer scientists. © The Author 2017. Published by Oxford University Press.

  4. Urban Consortium Energy Task Force - Year 21 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-04-01

    The Urban Consortium Energy Task Force (UCETF), comprised of representatives of large cities and counties in the United States, is a subgroup of the Urban Consortium, an organization of the nation's largest cities and counties joined together to identify, develop and deploy innovative approaches and technological solutions to pressing urban issues.

  5. Bioinformatic tools for PCR Primer design

    African Journals Online (AJOL)

    ES

    reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is actually one of the most important factors/steps in successful DNA sequencing. Various bioinformatics programs are available for selection of primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the…
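
    As a back-of-the-envelope illustration of the properties such primer-design programs report, the sketch below computes GC content and a Wallace-rule melting temperature for a made-up 20-mer; real primer design additionally checks hairpins, dimers and specificity.

    ```python
    # Back-of-the-envelope primer checks of the kind primer-design programs automate:
    # GC content and the Wallace-rule melting temperature Tm = 2(A+T) + 4(G+C),
    # a rough estimate valid only for short oligos. The primer sequence is made up.
    def primer_stats(primer):
        primer = primer.upper()
        gc = primer.count("G") + primer.count("C")
        at = primer.count("A") + primer.count("T")
        gc_fraction = gc / len(primer)
        tm_wallace = 2 * at + 4 * gc
        return gc_fraction, tm_wallace

    gc, tm = primer_stats("ATGCGTAACGGCTAGCTAGA")   # 20-mer example
    print(f"GC content: {gc:.0%}, approximate Tm: {tm} degrees C")
    ```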

  6. Introductory Bioinformatics Exercises Utilizing Hemoglobin and Chymotrypsin to Reinforce the Protein Sequence-Structure-Function Relationship

    Science.gov (United States)

    Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany

    2007-01-01

    We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…

  7. Advance in structural bioinformatics

    CERN Document Server

    Wei, Dongqing; Zhao, Tangzhen; Dai, Hao

    2014-01-01

    This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform

  8. Bioinformatics in the Netherlands: the value of a nationwide community.

    Science.gov (United States)

    van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap

    2017-09-15

    This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly. © The Author 2017. Published by Oxford University Press.

  9. What is bioinformatics? A proposed definition and overview of the field.

    Science.gov (United States)

    Luscombe, N M; Greenbaum, D; Gerstein, M

    2001-01-01

    The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical chemistry) and then applying "informatics" techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules, on a large scale. Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g. expression data). Additional information includes the text of scientific papers and "relationship data" from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.
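
    As a worked example of one technique listed above, pairwise sequence alignment, the sketch below scores a global alignment by dynamic programming (Needleman-Wunsch) with a toy scoring scheme and made-up sequences; traceback of the actual alignment is omitted for brevity.

    ```python
    # Compact sketch of global pairwise sequence alignment (Needleman-Wunsch) with a
    # toy scoring scheme: match +1, mismatch -1, gap -2. Sequences are made up.
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
        n, m = len(a), len(b)
        # score matrix, initialised with gap penalties along the edges
        score = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            score[i][0] = i * gap
        for j in range(1, m + 1):
            score[0][j] = j * gap
        # fill the matrix: best of diagonal (match/mismatch) or gap in either sequence
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
        return score[n][m]   # optimal global alignment score

    print(needleman_wunsch("GATTACA", "GCATGCA"))
    ```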

  10. An overview of topic modeling and its current applications in bioinformatics.

    Science.gov (United States)

    Liu, Lin; Tang, Lin; Dong, Wen; Yao, Shaowen; Zhou, Wei

    2016-01-01

    With the rapid accumulation of biological datasets, machine learning methods designed to automate data analysis are urgently needed. In recent years, so-called topic models that originated from the field of natural language processing have been receiving much attention in bioinformatics because of their interpretability. Our aim was to review the application and development of topic models for bioinformatics. This paper starts with the description of a topic model, with a focus on the understanding of topic modeling. A general outline is provided on how to build an application in a topic model and how to develop a topic model. Meanwhile, the literature on application of topic models to biological data was searched and analyzed in depth. According to the types of models and the analogy between the concept of document-topic-word and a biological object (as well as the tasks of a topic model), we categorized the related studies and provided an outlook on the use of topic models for the development of bioinformatics applications. Topic modeling is a useful method (in contrast to the traditional means of data reduction in bioinformatics) and enhances researchers' ability to interpret biological information. Nevertheless, due to the lack of topic models optimized for specific biological data, the studies on topic modeling in biological data still have a long and challenging road ahead. We believe that topic models are a promising method for various applications in bioinformatics research.
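
    The document-topic-word analogy can be made concrete with a toy sketch, assuming scikit-learn as the implementation: each sample acts as a "document" whose "words" are gene identifiers. The gene lists below are invented purely to show the mechanics.

    ```python
    # Toy illustration of the document-topic-word analogy in a biological setting:
    # samples play the role of documents, gene identifiers play the role of words,
    # and LDA recovers "topics" (gene programmes). Data are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    samples = [
        "TP53 MDM2 CDKN1A TP53 BAX",    # hypothetical apoptosis-heavy sample
        "EGFR KRAS MAPK1 EGFR RAF1",    # hypothetical signalling-heavy sample
        "TP53 BAX CDKN1A MDM2",
        "KRAS MAPK1 RAF1 EGFR",
    ]

    vectorizer = CountVectorizer(lowercase=False)
    counts = vectorizer.fit_transform(samples)      # sample-by-gene count matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

    genes = vectorizer.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [genes[i] for i in topic.argsort()[::-1][:3]]
        print(f"topic {k}: {top}")
    ```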

  11. "Extreme Programming" in a Bioinformatics Class

    Science.gov (United States)

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  12. Bioinformatics in Undergraduate Education: Practical Examples

    Science.gov (United States)

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  13. Virginia ADS consortium - thorium utilization

    International Nuclear Information System (INIS)

    Myneni, Ganapati

    2015-01-01

    A Virginia ADS consortium, consisting of Virginia Universities (UVa, VCU, VT), Industry (Casting Analysis Corporation, GEM*STAR, MuPlus Inc.), Jefferson Lab and the not-for-profit ISOHIM, has been organizing International Accelerator-Driven Sub-Critical Systems (ADS) and Thorium Utilization (ThU) workshops. The third workshop of this series was hosted by VCU in Richmond, Virginia, USA, in October 2014, with CBMM and IAEA sponsorship, and was endorsed by the International Thorium Energy Committee (IThEC), Geneva, and the Virginia Nuclear Energy Consortium Authority. In this presentation, a brief summary of the successful 3rd International ADS and ThU workshop proceedings is given and the worldwide ADS plans and/or programs are reviewed. Additionally, a report on new start-ups of Molten Salt Reactor (MSR) systems is presented. Further, a discussion of potential simplistic fertile 232Th to fissile 233U conversion is made.

  14. Modern bioinformatics meets traditional Chinese medicine.

    Science.gov (United States)

    Gu, Peiqin; Chen, Huajun

    2014-11-01

    Traditional Chinese medicine (TCM) is gaining increasing attention with the emergence of integrative medicine and personalized medicine, characterized by pattern differentiation on individual variance and treatments based on natural herbal synergism. Investigating the effectiveness and safety of the potential mechanisms of TCM and the combination principles of drug therapies will bridge the cultural gap with Western medicine and improve the development of integrative medicine. Dealing with rapidly growing amounts of biomedical data and their heterogeneous nature are two important tasks among modern biomedical communities. Bioinformatics, as an emerging interdisciplinary field of computer science and biology, has become a useful tool for easing the data deluge pressure by automating the computation processes with informatics methods. Using these methods to retrieve, store and analyze the biomedical data can effectively reveal the associated knowledge hidden in the data, and thus promote the discovery of integrated information. Recently, these techniques of bioinformatics have been used for facilitating the interactional effects of both Western medicine and TCM. The analysis of TCM data using computational technologies provides biological evidence for the basic understanding of TCM mechanisms, safety and efficacy of TCM treatments. At the same time, the carrier and targets associated with TCM remedies can inspire the rethinking of modern drug development. This review summarizes the significant achievements of applying bioinformatics techniques to many aspects of the research in TCM, such as analysis of TCM-related '-omics' data and techniques for analyzing biological processes and pharmaceutical mechanisms of TCM, which have shown certain potential of bringing new thoughts to both sides. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  15. Consortium for Verification Technology Fellowship Report.

    Energy Technology Data Exchange (ETDEWEB)

    Sadler, Lorraine E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-06-01

    As one recipient of the Consortium for Verification Technology (CVT) Fellowship, I spent eight days as a visiting scientist at the University of Michigan, Department of Nuclear Engineering and Radiological Sciences (NERS). During this time, I participated in multiple department and research group meetings and presentations, met with individual faculty and students, toured multiple laboratories, and taught one-half of a one-unit class on Risk Analysis in Nuclear Arms control (six 1.5 hour lectures). The following report describes some of the interactions that I had during my time as well as a brief discussion of the impact of this fellowship on members of the consortium and on me/my laboratory’s technical knowledge and network.

  16. Brain Tumor Epidemiology Consortium (BTEC)

    Science.gov (United States)

    The Brain Tumor Epidemiology Consortium is an open scientific forum organized to foster the development of multi-center, international and inter-disciplinary collaborations that will lead to a better understanding of the etiology, outcomes, and prevention of brain tumors.

  17. The Bioinformatics of Integrative Medical Insights: Proposals for an International Psycho-Social and Cultural Bioinformatics Project

    Directory of Open Access Journals (Sweden)

    Ernest Rossi

    2006-01-01

    Full Text Available We propose the formation of an International Psycho-Social and Cultural Bioinformatics Project (IPCBP) to explore the research foundations of Integrative Medical Insights (IMI) on all levels from the molecular-genomic to the psychological, cultural, social, and spiritual. Just as The Human Genome Project identified the molecular foundations of modern medicine with the new technology of sequencing DNA during the past decade, the IPCBP would extend and integrate this neuroscience knowledge base with the technology of gene expression via DNA/proteomic microarray research and brain imaging in development, stress, healing, rehabilitation, and the psychotherapeutic facilitation of existential wellness. We anticipate that the IPCBP will require a unique international collaboration of academic institutions, researchers, and clinical practitioners for the creation of a new neuroscience of mind-body communication, brain plasticity, memory, learning, and creative processing during optimal experiential states of art, beauty, and truth. We illustrate this emerging integration of bioinformatics with medicine with a videotape of the classical 4-stage creative process in a neuroscience approach to psychotherapy.

  18. The Bioinformatics of Integrative Medical Insights: Proposals for an International PsychoSocial and Cultural Bioinformatics Project

    Directory of Open Access Journals (Sweden)

    Ernest Rossi

    2006-01-01

    Full Text Available We propose the formation of an International PsychoSocial and Cultural Bioinformatics Project (IPCBP) to explore the research foundations of Integrative Medical Insights (IMI) on all levels from the molecular-genomic to the psychological, cultural, social, and spiritual. Just as The Human Genome Project identified the molecular foundations of modern medicine with the new technology of sequencing DNA during the past decade, the IPCBP would extend and integrate this neuroscience knowledge base with the technology of gene expression via DNA/proteomic microarray research and brain imaging in development, stress, healing, rehabilitation, and the psychotherapeutic facilitation of existential wellness. We anticipate that the IPCBP will require a unique international collaboration of academic institutions, researchers, and clinical practitioners for the creation of a new neuroscience of mind-body communication, brain plasticity, memory, learning, and creative processing during optimal experiential states of art, beauty, and truth. We illustrate this emerging integration of bioinformatics with medicine with a videotape of the classical 4-stage creative process in a neuroscience approach to psychotherapy.

  19. Incorporating a Collaborative Web-Based Virtual Laboratory in an Undergraduate Bioinformatics Course

    Science.gov (United States)

    Weisman, David

    2010-01-01

    Face-to-face bioinformatics courses commonly include a weekly, in-person computer lab to facilitate active learning, reinforce conceptual material, and teach practical skills. Similarly, fully-online bioinformatics courses employ hands-on exercises to achieve these outcomes, although students typically perform this work offsite. Combining a…

  20. A Summer Program Designed to Educate College Students for Careers in Bioinformatics

    Science.gov (United States)

    Krilowicz, Beverly; Johnston, Wendie; Sharp, Sandra B.; Warter-Perez, Nancy; Momand, Jamil

    2007-01-01

    A summer program was created for undergraduates and graduate students that teaches bioinformatics concepts, offers skills in professional development, and provides research opportunities in academic and industrial institutions. We estimate that 34 of 38 graduates (89%) are in a career trajectory that will use bioinformatics. Evidence from…

  1. Relax with CouchDB - Into the non-relational DBMS era of Bioinformatics

    Science.gov (United States)

    Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.

    2012-01-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
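
    Because CouchDB exposes a plain HTTP/JSON interface, a schema-free gene-centric document can be stored and retrieved with a few requests calls, as in the hedged sketch below; it assumes a local CouchDB listening on the default port with permission to create databases, and the document content is invented (it is not geneSmash data).

    ```python
    # Minimal sketch of CouchDB's schema-free, HTTP/JSON interface via requests.
    # Assumes a CouchDB server on localhost:5984 with rights to create databases;
    # the database name and the gene-centric document are invented for illustration.
    import requests

    BASE = "http://localhost:5984"
    DB = "gene_annotations"

    requests.put(f"{BASE}/{DB}")                      # create the database (409/412 if it exists)

    doc = {
        "symbol": "TP53",
        "aliases": ["p53", "LFS1"],
        "pathways": ["apoptosis", "cell cycle"],      # nested data needs no fixed schema
    }
    resp = requests.put(f"{BASE}/{DB}/TP53", json=doc)
    print(resp.json())                                # e.g. {'ok': True, 'id': 'TP53', 'rev': ...}

    fetched = requests.get(f"{BASE}/{DB}/TP53").json()
    print(fetched["pathways"])
    ```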

  2. Keemei: cloud-based validation of tabular bioinformatics file formats in Google Sheets.

    Science.gov (United States)

    Rideout, Jai Ram; Chase, John H; Bolyen, Evan; Ackermann, Gail; González, Antonio; Knight, Rob; Caporaso, J Gregory

    2016-06-13

    Bioinformatics software often requires human-generated tabular text files as input and has specific requirements for how those data are formatted. Users frequently manage these data in spreadsheet programs, which is convenient for researchers who are compiling the requisite information because the spreadsheet programs can easily be used on different platforms including laptops and tablets, and because they provide a familiar interface. It is increasingly common for many different researchers to be involved in compiling these data, including study coordinators, clinicians, lab technicians and bioinformaticians. As a result, many research groups are shifting toward using cloud-based spreadsheet programs, such as Google Sheets, which support the concurrent editing of a single spreadsheet by different users working on different platforms. Most of the researchers who enter data are not familiar with the formatting requirements of the bioinformatics programs that will be used, so validating and correcting file formats is often a bottleneck prior to beginning bioinformatics analysis. We present Keemei, a Google Sheets Add-on, for validating tabular files used in bioinformatics analyses. Keemei is available free of charge from Google's Chrome Web Store. Keemei can be installed and run on any web browser supported by Google Sheets. Keemei currently supports the validation of two widely used tabular bioinformatics formats, the Quantitative Insights into Microbial Ecology (QIIME) sample metadata mapping file format and the Spatially Referenced Genetic Data (SRGD) format, but is designed to easily support the addition of others. Keemei will save researchers time and frustration by providing a convenient interface for tabular bioinformatics file format validation. By allowing everyone involved with data entry for a project to easily validate their data, it will reduce the validation and formatting bottlenecks that are commonly encountered when human-generated data files are
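
    The sketch below is not Keemei itself but illustrates, locally in Python, the kind of checks such a validator performs on a QIIME-style mapping file (tab-separated, "#SampleID" as the first header column, unique sample IDs, consistent column counts); the file name is a placeholder.

    ```python
    # Not Keemei itself: a local sketch of the kinds of checks it performs on a
    # QIIME-style sample metadata mapping file. The file name is a placeholder.
    import csv

    def validate_mapping_file(path):
        errors = []
        with open(path, newline="") as handle:
            rows = list(csv.reader(handle, delimiter="\t"))
        if not rows or not rows[0]:
            return ["file is empty"]
        header, records = rows[0], rows[1:]
        if header[0] != "#SampleID":
            errors.append("first header column must be '#SampleID'")
        ids = [row[0] for row in records if row]
        duplicates = {i for i in ids if ids.count(i) > 1}
        if duplicates:
            errors.append(f"duplicate sample IDs: {sorted(duplicates)}")
        for n, row in enumerate(records, start=2):
            if len(row) != len(header):
                errors.append(f"row {n} has {len(row)} fields, expected {len(header)}")
        return errors

    print(validate_mapping_file("mapping.tsv") or "no problems found")
    ```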

  3. NASA Systems Engineering Research Consortium: Defining the Path to Elegance in Systems

    Science.gov (United States)

    Watson, Michael D.; Farrington, Phillip A.

    2016-01-01

    The NASA Systems Engineering Research Consortium was formed at the end of 2010 to study the approaches to producing elegant systems on a consistent basis. This has been a transformative study looking at the engineering and organizational basis of systems engineering. The consortium has engaged in a variety of research topics to determine the path to elegant systems. In the second year of the consortium, a systems engineering framework emerged which structured the approach to systems engineering and guided our research. This led in the third year to a set of systems engineering postulates that the consortium is continuing to refine. The consortium has conducted several research projects that have contributed significantly to the understanding of systems engineering. The consortium has surveyed the application of the 17 NASA systems engineering processes, explored the physics and statistics of systems integration, and considered organizational aspects of systems engineering discipline integration. The systems integration methods have included system exergy analysis, Akaike Information Criteria (AIC), State Variable Analysis, Multidisciplinary Coupling Analysis (MCA), Multidisciplinary Design Optimization (MDO), System Cost Modelling, System Robustness, and Value Modelling. Organizational studies have included the variability of processes in change evaluations, margin management within the organization, information theory of board structures, social categorization of unintended consequences, and initial looks at applying cognitive science to systems engineering. Consortium members have also studied the bidirectional influence of policy and law with systems engineering.

  4. A Quick Guide for Building a Successful Bioinformatics Community

    Science.gov (United States)

    Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas

    2015-01-01

    “Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371

  5. New Link in Bioinformatics Services Value Chain: Position, Organization and Business Model

    Directory of Open Access Journals (Sweden)

    Mladen Čudanov

    2012-11-01

    Full Text Available This paper presents developments in the bioinformatics services industry value chain, based on the cloud computing paradigm. As genome sequencing costs per megabase drop exponentially, the industry needs to adapt. The paper has two parts: a theoretical analysis and the practical example of the Seven Bridges Genomics company. We focus on explaining the organizational, business and financial aspects of the new business model in bioinformatics services, rather than the technical side of the problem. In that light, we present a twofold business model fit for core bioinformatics research and Information and Communication Technology (ICT) support in the new environment, with a higher level of capital utilization and better resistance to business risks.

  6. Bioinformatics in High School Biology Curricula: A Study of State Science Standards

    Science.gov (United States)

    Wefer, Stephen H.; Sheppard, Keith

    2008-01-01

    The proliferation of bioinformatics in modern biology marks a revolution in science that promises to influence science education at all levels. This study analyzed the secondary school science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics. The bioinformatics…

  7. XML schemas for common bioinformatic data types and their application in workflow systems.

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-11-06

    Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data--therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, the BioDOM library can be obtained at http://biodom.sourceforge.net. The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios.
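
    As a rough illustration of the interoperability these schemas target, the following Python sketch reads a small schema-style sequence document with the standard library; the element names used here are hypothetical placeholders, not the actual HOBIT/BioDOM vocabulary.

        # Minimal sketch: consuming a schema-defined sequence record with the
        # Python standard library. Element names are hypothetical placeholders.
        import xml.etree.ElementTree as ET

        doc = """<sequenceRecord>
          <identifier>example_1</identifier>
          <sequence type="dna">ATGGCGTGCAAA</sequence>
        </sequenceRecord>"""

        root = ET.fromstring(doc)
        seq_id = root.findtext("identifier")
        sequence = root.findtext("sequence")
        print(seq_id, sequence)  # fields a downstream workflow tool could now consume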

  8. XML schemas for common bioinformatic data types and their application in workflow systems

    Science.gov (United States)

    Seibel, Philipp N; Krüger, Jan; Hartmeier, Sven; Schwarzer, Knut; Löwenthal, Kai; Mersch, Henning; Dandekar, Thomas; Giegerich, Robert

    2006-01-01

    Background Today, there is a growing need in bioinformatics to combine available software tools into chains, thus building complex applications from existing single-task tools. To create such workflows, the tools involved have to be able to work with each other's data – therefore, a common set of well-defined data formats is needed. Unfortunately, current bioinformatic tools use a great variety of heterogeneous formats. Results Acknowledging the need for common formats, the Helmholtz Open BioInformatics Technology network (HOBIT) identified several basic data types used in bioinformatics and developed appropriate format descriptions, formally defined by XML schemas, and incorporated them in a Java library (BioDOM). These schemas currently cover sequence, sequence alignment, RNA secondary structure and RNA secondary structure alignment formats in a form that is independent of any specific program, thus enabling seamless interoperation of different tools. All XML formats are available at http://bioschemas.sourceforge.net, and the BioDOM library can be obtained at http://biodom.sourceforge.net. Conclusion The HOBIT XML schemas and the BioDOM library simplify adding XML support to newly created and existing bioinformatic tools, enabling these tools to interoperate seamlessly in workflow scenarios. PMID:17087823

  9. Genomics and bioinformatics resources for translational science in Rosaceae.

    Science.gov (United States)

    Jung, Sook; Main, Dorrie

    2014-01-01

    Recent technological advances in biology promise unprecedented opportunities for rapid and sustainable advancement of crop quality. Following this trend, the Rosaceae research community continues to generate large amounts of genomic, genetic and breeding data. These include annotated whole genome sequences, transcriptome and expression data, proteomic and metabolomic data, genotypic and phenotypic data, and genetic and physical maps. Analysis, storage, integration and dissemination of these data using bioinformatics tools and databases are essential to provide utility of the data for basic, translational and applied research. This review discusses the currently available genomics and bioinformatics resources for the Rosaceae family.

  10. The LBNL/JSU/AGMUS Science Consortium

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-04-01

    This report discusses 11 years of accomplishments of the science consortium of minority graduates from Jackson State University and Ana G. Mendez University at the Lawrence Berkeley National Laboratory.

  11. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  12. Midwest Nuclear Science and Engineering Consortium

    International Nuclear Information System (INIS)

    Volkert, Wynn; Kumar, Arvind; Becker, Bryan; Schwinke, Victor; Gonzalez, Angel; McGregor, Douglas

    2010-01-01

    The objective of the Midwest Nuclear Science and Engineering Consortium (MNSEC) is to enhance the scope, quality and integration of educational and research capabilities of nuclear sciences and engineering (NS/E) programs at partner schools in support of the U.S. nuclear industry (including DOE laboratories). With INIE support, MNSEC had a productive seven years and made impressive progress in achieving these goals. Since the past three years have been no-cost-extension periods, limited -- but notable -- progress has been made in FY10. Existing programs continue to be strengthened and broadened at Consortium partner institutions. The enthusiasm generated by the academic, state, federal, and industrial communities for the MNSEC activities is reflected in the significant leveraging that has occurred for our programs.

  13. Midwest Nuclear Science and Engineering Consortium

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Wynn Volkert; Dr. Arvind Kumar; Dr. Bryan Becker; Dr. Victor Schwinke; Dr. Angel Gonzalez; Dr. Douglas McGregor

    2010-12-08

    The objective of the Midwest Nuclear Science and Engineering Consortium (MNSEC) is to enhance the scope, quality and integration of educational and research capabilities of nuclear sciences and engineering (NS/E) programs at partner schools in support of the U.S. nuclear industry (including DOE laboratories). With INIE support, MNSEC had a productive seven years and made impressive progress in achieving these goals. Since the past three years have been no-cost-extension periods, limited -- but notable -- progress has been made in FY10. Existing programs continue to be strengthened and broadened at Consortium partner institutions. The enthusiasm generated by the academic, state, federal, and industrial communities for the MNSEC activities is reflected in the significant leveraging that has occurred for our programs.

  14. Bioinformatics in Middle East Program Curricula--A Focus on the Arabian Gulf

    Science.gov (United States)

    Loucif, Samia

    2014-01-01

    The purpose of this paper is to investigate the inclusion of bioinformatics in program curricula in the Middle East, focusing on educational institutions in the Arabian Gulf. Bioinformatics is a multidisciplinary field which has emerged in response to the need for efficient data storage and retrieval, and accurate and fast computational and…

  15. Combining medical informatics and bioinformatics toward tools for personalized medicine.

    Science.gov (United States)

    Sarachan, B D; Simmons, M K; Subramanian, P; Temkin, J M

    2003-01-01

    Key bioinformatics and medical informatics research areas need to be identified to advance knowledge and understanding of disease risk factors and molecular disease pathology in the 21st century toward new diagnoses, prognoses, and treatments. Three high-impact informatics areas are identified: predictive medicine (to identify significant correlations within clinical data using statistical and artificial intelligence methods), along with pathway informatics and cellular simulations (that combine biological knowledge with advanced informatics to elucidate molecular disease pathology). Initial predictive models have been developed for a pilot study in Huntington's disease. An initial bioinformatics platform has been developed for the reconstruction and analysis of pathways, and work has begun on pathway simulation. A bioinformatics research program has been established at GE Global Research Center as an important technology toward next generation medical diagnostics. We anticipate that 21st century medical research will be a combination of informatics tools with traditional biology wet lab research, and that this will translate to increased use of informatics techniques in the clinic.

  16. GOBLET: the Global Organisation for Bioinformatics Learning, Education and Training.

    Science.gov (United States)

    Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G

    2015-04-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.

  17. GOBLET: The Global Organisation for Bioinformatics Learning, Education and Training

    Science.gov (United States)

    Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.

    2015-01-01

    In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076

  18. Recovery of valuable metals from polymetallic mine tailings by natural microbial consortium.

    Science.gov (United States)

    Vardanyan, Narine; Sevoyan, Garegin; Navasardyan, Taron; Vardanyan, Arevik

    2018-05-28

    Possibilities for the recovery of non-ferrous and precious metals from the Kapan polymetallic mine tailings (Armenia) were studied. The aim of this paper was to study the possibilities of bioleaching of samples of concentrated tailings by the natural microbial consortium of drainage water. The extent of extraction of metals from the samples of concentrated tailings by the natural microbial consortium reached 41-55% and 53-73% for copper and zinc, respectively. Metal leaching efficiencies of the pure culture Leptospirillum ferrooxidans Teg were higher, namely 47-93% and 73-81% for copper and zinc, respectively. The content of gold in the solid phase of the tailings increased by about 7-16% and 2-9% after the bio-oxidation process by L. ferrooxidans Teg and the natural microbial consortium, respectively. It was shown that bioleaching of the samples of tailings could be performed using the natural consortium of drainage water. However, to increase the intensity of the recovery of valuable metals, combining the natural consortium of drainage water with the iron-oxidizing L. ferrooxidans Teg has been proposed.

  19. Mineralization of linear alkylbenzene sulfonate by a four-member aerobic bacterial consortium

    International Nuclear Information System (INIS)

    Jimenez, L.; Breen, A.; Thomas, N.; Sayler, G.S.; Federle, T.W.

    1991-01-01

    A bacterial consortium capable of linear alkylbenzene sulfonate (LAS) mineralization under aerobic conditions was isolated from a chemostat inoculated with activated sludge. The consortium, designated KJB, consisted of four members, all of which were gram-negative, rod-shaped bacteria that grew in pairs and short chains. Three isolates had biochemical properties characteristic of Pseudomonas spp.; the fourth showed characteristics of the Aeromonas spp. Cell suspensions were grown together in minimal medium with [14C]LAS as the only carbon source. After 13 days of incubation, more than 25% of the [14C]LAS was mineralized to 14CO2 by the consortium. Pure bacterial cultures and combinations lacking any one member of the KJB bacterial consortium did not mineralize LAS. Three isolates carried out primary biodegradation of the surfactant, and one did not. This study shows that the four bacteria complemented each other and synergistically mineralized LAS, indicating catabolic cooperation among the four consortium members

  20. A Survey of Bioinformatics Database and Software Usage through Mining the Literature.

    Directory of Open Access Journals (Sweden)

    Geraint Duck

    Full Text Available Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever-expanding choice of bioinformatics resources to use, described within the biomedical literature, little work to date has provided an evaluation of the full range of availability or levels of usage of database and software resources. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage, with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of resources extracted being mentioned only once each. While these results highlight the dynamic and creative nature of bioinformatics research, they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
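
    As a toy illustration of the dictionary-matching idea behind such an audit (the published study used a far more sophisticated text-mining pipeline), the Python sketch below counts mentions of a few illustrative resource names in two made-up article snippets.

        # Toy sketch: count how often known resource names are mentioned in a
        # collection of article texts. Names and snippets are illustrative only.
        import re
        from collections import Counter

        resources = ["BLAST", "SWISS-PROT", "Gene Ontology", "R"]
        articles = [
            "We aligned reads with BLAST and annotated hits using the Gene Ontology.",
            "Statistical analysis was carried out in R; BLAST found homologous sequences.",
        ]

        counts = Counter()
        for text in articles:
            for name in resources:
                # word-boundary match so that 'R' does not hit arbitrary letters
                counts[name] += len(re.findall(rf"\b{re.escape(name)}\b", text))
        print(counts.most_common())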

  1. Relax with CouchDB--into the non-relational DBMS era of bioinformatics.

    Science.gov (United States)

    Manyam, Ganiraju; Payton, Michelle A; Roth, Jack A; Abruzzo, Lynne V; Coombes, Kevin R

    2012-07-01

    With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. Copyright © 2012 Elsevier Inc. All rights reserved.
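
    CouchDB stores JSON documents behind a plain HTTP interface, which is what makes it convenient for quickly assembling gene-centric resources like those described above. Below is a minimal Python sketch using the requests library, assuming a local CouchDB server at localhost:5984 with permissive credentials; the database name and document contents are purely illustrative.

        # Minimal sketch of CouchDB's document-oriented HTTP API (illustrative
        # host, database and document; error handling and authentication omitted).
        import requests

        base = "http://localhost:5984"
        requests.put(f"{base}/gene_annotations")  # create a database
        doc = {"symbol": "TP53", "aliases": ["p53"], "note": "example annotation"}
        requests.put(f"{base}/gene_annotations/TP53", json=doc)  # store one document
        resp = requests.get(f"{base}/gene_annotations/TP53")     # fetch it back as JSON
        print(resp.json()["symbol"])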

  2. Using registries to integrate bioinformatics tools and services into workbench environments

    DEFF Research Database (Denmark)

    Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer

    2016-01-01

    The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially......, a software component that will ease the integration of bioinformatics resources in a workbench environment, using their description provided by the existing ELIXIR Tools and Data Services Registry.

  3. Aims, organization and activities of the consortium for underground storage

    International Nuclear Information System (INIS)

    Stucky, G.

    1977-01-01

    The consortium of Swiss authorities interested in underground storage (the petroleum oil and gas industries, for fuel storage; the nuclear industry for radioactive waste disposal), was initiated in 1972. The author outlines the motives behind the formation of the consortium and outlines its structure and objectives. The envisaged projects are outlined. (F.Q.)

  4. Prebiotics Mediate Microbial Interactions in a Consortium of the Infant Gut Microbiome.

    Science.gov (United States)

    Medina, Daniel A; Pinto, Francisco; Ovalle, Aline; Thomson, Pamela; Garrido, Daniel

    2017-10-04

    Composition of the gut microbiome is influenced by diet. Milk or formula oligosaccharides act as prebiotics, bioactives that promote the growth of beneficial gut microbes. The influence of prebiotics on microbial interactions is not well understood. Here we investigated the transformation of prebiotics by a consortium of four representative species of the infant gut microbiome, and how their interactions changed with dietary substrates. First, we optimized a culture medium resembling certain infant gut parameters. A consortium containing Bifidobacterium longum subsp. infantis , Bacteroides vulgatus , Escherichia coli and Lactobacillus acidophilus was grown on fructooligosaccharides (FOS) or 2'-fucosyllactose (2FL) in mono- or co-culture. While Bi. infantis and Ba. vulgatus dominated growth on 2FL, their combined growth was reduced. In addition, interaction coefficients indicated strong competition, especially on FOS. While FOS was rapidly consumed by the consortium, B. infantis was the only microbe displaying significant consumption of 2FL. Acid production by the consortium resembled the metabolism of microorganisms dominating growth in each substrate. Finally, the consortium was tested in a bioreactor, observing similar predominance but more pronounced acid production and substrate consumption. This study indicates that the chemical nature of prebiotics modulates microbial interactions in a consortium of infant gut species.
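
    The record does not give the study's exact definition of its interaction coefficients; as a rough, assumption-laden sketch, one common approach compares a species' abundance in co-culture against mono-culture and reads negative values as competition.

        # Rough sketch (assumed definition, not necessarily the study's):
        # interaction coefficient as the log2 ratio of co-culture to mono-culture
        # abundance; negative values suggest competition. Numbers are illustrative.
        import math

        mono = {"B. infantis": 8.5e8, "B. vulgatus": 6.0e8}   # cells/mL, mono-culture
        co   = {"B. infantis": 3.1e8, "B. vulgatus": 2.2e8}   # same species grown together

        for species in mono:
            coefficient = math.log2(co[species] / mono[species])
            label = "competition" if coefficient < 0 else "facilitation"
            print(f"{species}: {coefficient:.2f} ({label})")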

  5. Prebiotics Mediate Microbial Interactions in a Consortium of the Infant Gut Microbiome

    Directory of Open Access Journals (Sweden)

    Daniel A. Medina

    2017-10-01

    Full Text Available Composition of the gut microbiome is influenced by diet. Milk or formula oligosaccharides act as prebiotics, bioactives that promote the growth of beneficial gut microbes. The influence of prebiotics on microbial interactions is not well understood. Here we investigated the transformation of prebiotics by a consortium of four representative species of the infant gut microbiome, and how their interactions changed with dietary substrates. First, we optimized a culture medium resembling certain infant gut parameters. A consortium containing Bifidobacterium longum subsp. infantis, Bacteroides vulgatus, Escherichia coli and Lactobacillus acidophilus was grown on fructooligosaccharides (FOS) or 2′-fucosyllactose (2FL) in mono- or co-culture. While Bi. infantis and Ba. vulgatus dominated growth on 2FL, their combined growth was reduced. In addition, interaction coefficients indicated strong competition, especially on FOS. While FOS was rapidly consumed by the consortium, B. infantis was the only microbe displaying significant consumption of 2FL. Acid production by the consortium resembled the metabolism of microorganisms dominating growth in each substrate. Finally, the consortium was tested in a bioreactor, observing similar predominance but more pronounced acid production and substrate consumption. This study indicates that the chemical nature of prebiotics modulates microbial interactions in a consortium of infant gut species.

  6. p3d – Python module for structural bioinformatics

    Directory of Open Access Journals (Sweden)

    Fufezan Christian

    2009-08-01

    Full Text Available Abstract Background High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this, the Python scripting language is the optimal choice, since its philosophy is to write understandable source code. Results p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory, and (c) functions that allow (a) and (b) to be combined and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. Conclusion p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
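
    The sketch below is not p3d's own interface; it is a stand-alone, standard-library illustration of the kind of query such a module makes convenient (atoms within a cutoff of a chosen atom). p3d accelerates this with a BSP tree, whereas this naive version simply scans every atom.

        # Stand-alone sketch (not the p3d API): parse ATOM/HETATM records from a
        # PDB file and list atoms within a distance cutoff of a chosen atom.
        import math

        def parse_atoms(pdb_path):
            atoms = []
            with open(pdb_path) as handle:
                for line in handle:
                    if line.startswith(("ATOM", "HETATM")):
                        atoms.append({
                            "name": line[12:16].strip(),
                            "resname": line[17:20].strip(),
                            "resid": int(line[22:26]),
                            "xyz": (float(line[30:38]), float(line[38:46]), float(line[46:54])),
                        })
            return atoms

        def within(atoms, center, cutoff=5.0):
            for atom in atoms:
                if atom is not center and math.dist(atom["xyz"], center["xyz"]) <= cutoff:
                    yield atom

        # usage (assumes a local file '1abc.pdb'):
        # atoms = parse_atoms("1abc.pdb")
        # for neighbour in within(atoms, atoms[0]):
        #     print(neighbour["resname"], neighbour["resid"], neighbour["name"])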

  7. A comparison of common programming languages used in bioinformatics.

    Science.gov (United States)

    Fourment, Mathieu; Gillings, Michael R

    2008-02-05

    The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
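
    For orientation, the pure-Python sketch below shows the kind of task benchmarked: a Sellers-style dynamic-programming scan that reports the best edit distance of a pattern against any substring of a text. It is an illustrative re-implementation, not the study's code.

        # Sellers-style approximate matching: lowest edit distance of `pattern`
        # against any substring of `text`, using a rolling dynamic-programming row.
        def sellers_best_distance(pattern, text):
            m = len(pattern)
            prev = list(range(m + 1))       # column for the empty text prefix
            best = prev[m]
            for ch in text:
                curr = [0]                  # 0: a match may start at any text position
                for i in range(1, m + 1):
                    cost = 0 if pattern[i - 1] == ch else 1
                    curr.append(min(prev[i] + 1,          # deletion
                                    curr[i - 1] + 1,      # insertion
                                    prev[i - 1] + cost))  # substitution or match
                best = min(best, curr[m])
                prev = curr
            return best

        print(sellers_best_distance("ACGT", "TTACGAT"))  # -> 1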

  8. Bioinformatics process management: information flow via a computational journal

    Directory of Open Access Journals (Sweden)

    Lushington Gerald

    2007-12-01

    Full Text Available Abstract This paper presents the Bioinformatics Computational Journal (BCJ, a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread–ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features determined critical by our system and other projects, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples.

  9. The eBioKit, a stand-alone educational platform for bioinformatics.

    Science.gov (United States)

    Hernández-de-Diego, Rafael; de Villiers, Etienne P; Klingström, Tomas; Gourlé, Hadrien; Conesa, Ana; Bongcam-Rudloff, Erik

    2017-09-01

    Bioinformatics skills have become essential for many research areas; however, the availability of qualified researchers is usually lower than the demand, and training to increase the number of able bioinformaticians is an important task for the bioinformatics community. When conducting training or hands-on tutorials, the lack of control over the analysis tools and repositories often results in undesirable situations during training, as unavailable online tools or version conflicts may delay, complicate, or even prevent the successful completion of a training event. The eBioKit is a stand-alone educational platform that hosts numerous tools and databases for bioinformatics research and allows training to take place in a controlled environment. A key advantage of the eBioKit over other existing teaching solutions is that all the required software and databases are locally installed on the system, significantly reducing the dependence on the internet. Furthermore, the architecture of the eBioKit has demonstrated itself to be an excellent balance between portability and performance, not only making the eBioKit an exceptional educational tool but also providing small research groups with a platform to incorporate bioinformatics analysis in their research. As a result, the eBioKit has formed an integral part of training and research performed by a wide variety of universities and organizations such as the Pan African Bioinformatics Network (H3ABioNet) as part of the initiative Human Heredity and Health in Africa (H3Africa), the Southern Africa Network for Biosciences (SAnBio) initiative, the Biosciences eastern and central Africa (BecA) hub, and the International Glossina Genome Initiative.

  10. Legacy Clinical Data from the Mission Connect Mild TBI Translational Research Consortium

    Science.gov (United States)

    2017-10-01

    AWARD NUMBER: W81XWH-16-2-0026. TITLE: Legacy Clinical Data from the Mission Connect Mild TBI Translational Research Consortium. The purpose of the Mission Connect Mild TBI (mTBI) Translational Research Consortium was to improve the diagnosis and treatment of mTBI. We enrolled a total of 88 mTBI patients and 73 orthopedic...

  11. The structural bioinformatics library: modeling in biomolecular science and beyond.

    Science.gov (United States)

    Cazals, Frédéric; Dreyfus, Tom

    2017-04-01

    Software in structural bioinformatics has mainly been application driven. To favor practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is based on a modular design offering a rich and versatile framework allowing the development of novel applications requiring well-specified complex operations, without compromising robustness or performance. The SBL involves four software components (1-4 hereafter). For end-users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macro-molecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to developing novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  12. The SIB Swiss Institute of Bioinformatics' resources: focus on curated databases

    OpenAIRE

    Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria

    2016-01-01

    The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...

  13. Green Fluorescent Protein-Focused Bioinformatics Laboratory Experiment Suitable for Undergraduates in Biochemistry Courses

    Science.gov (United States)

    Rowe, Laura

    2017-01-01

    An introductory bioinformatics laboratory experiment focused on protein analysis has been developed that is suitable for undergraduate students in introductory biochemistry courses. The laboratory experiment is designed to be potentially used as a "stand-alone" activity in which students are introduced to basic bioinformatics tools and…

  14. International Lymphoma Epidemiology Consortium (InterLymph)

    Science.gov (United States)

    A consortium designed to enhance collaboration among epidemiologists studying lymphoma, to provide a forum for the exchange of research ideas, and to create a framework for collaborating on analyses that pool data from multiple studies

  15. Rough-fuzzy pattern recognition applications in bioinformatics and medical imaging

    CERN Document Server

    Maji, Pradipta

    2012-01-01

    Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems dev

  16. Bioinformatics and moonlighting proteins

    Directory of Open Access Journals (Sweden)

    Sergio eHernández

    2015-06-01

    Full Text Available Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are experimentally revealed by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequences from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein-protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (i.e., algorithms such as PISITE), and (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations – it requires the existence of multialigned family protein sequences – but can suggest how the evolutionary process of second function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.

  17. Ophthalmic epidemiology in Europe : the "European Eye Epidemiology" (E3) consortium

    NARCIS (Netherlands)

    Delcourt, Cecile; Korobelnik, Jean-Francois; Buitendijk, Gabrielle H. S.; Foster, Paul J.; Hammond, Christopher J.; Piermarocchi, Stefano; Peto, Tunde; Jansonius, Nomdo; Mirshahi, Alireza; Hogg, Ruth E.; Bretillon, Lionel; Topouzis, Fotis; Deak, Gabor; Grauslund, Jakob; Broe, Rebecca; Souied, Eric H.; Creuzot-Garcher, Catherine; Sahel, Jose; Daien, Vincent; Lehtimaki, Terho; Hense, Hans-Werner; Prokofyeva, Elena; Oexle, Konrad; Rahi, Jugnoo S.; Cumberland, Phillippa M.; Schmitz-Valckenberg, Steffen; Fauser, Sascha; Bertelsen, Geir; Hoyng, Carel; Bergen, Arthur; Silva, Rufino; Wolf, Sebastian; Lotery, Andrew; Chakravarthy, Usha; Fletcher, Astrid; Klaver, Caroline C. W.

    The European Eye Epidemiology (E3) consortium is a recently formed consortium of 29 groups from 12 European countries. It already comprises 21 population-based studies and 20 other studies (case-control, cases only, randomized trials), providing ophthalmological data on approximately 170,000

  18. Best practices in bioinformatics training for life scientists.

    KAUST Repository

    Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K

    2013-01-01

    concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource

  19. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    Science.gov (United States)

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
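
    As background only (the record is truncated here, and the privacy-preserving, distributed protocol itself is not shown), the core computation such protocols protect is an ordinary PCA, sketched below with NumPy on illustrative data.

        # Centralized PCA via SVD on a samples-by-features matrix; the distributed,
        # privacy-preserving wrapping discussed in the record is not shown.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 20))         # e.g. 100 samples x 20 expression features
        Xc = X - X.mean(axis=0)                # centre each feature
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        explained = (S ** 2) / (S ** 2).sum()  # variance explained per component
        scores = Xc @ Vt[:2].T                 # project samples onto the first 2 PCs
        print(explained[:2], scores.shape)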

  20. Engaging Students in a Bioinformatics Activity to Introduce Gene Structure and Function

    Directory of Open Access Journals (Sweden)

    Barbara J. May

    2013-02-01

    Full Text Available Bioinformatics spans many fields of biological research and plays a vital role in mining and analyzing data. Therefore, there is an ever-increasing need for students to understand not only what can be learned from this data, but also how to use basic bioinformatics tools.  This activity is designed to provide secondary and undergraduate biology students with a hands-on activity to explore and understand gene structure with the use of basic bioinformatic tools.  Students are provided an “unknown” sequence from which they are asked to use a free online gene finder program to identify the gene. Students then predict the putative function of this gene with the use of additional online databases.
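
    The online gene-finder tool used in the activity is not named in the record; the sketch below shows only the simplest idea behind such tools, scanning one strand of a DNA sequence for open reading frames, and is far cruder than any real gene finder.

        # Naive ORF scan: report stretches from a start codon (ATG) to the first
        # in-frame stop codon on the forward strand. Purely illustrative.
        def find_orfs(seq, min_codons=3):
            stops = {"TAA", "TAG", "TGA"}
            seq = seq.upper()
            for frame in range(3):
                start = None
                for i in range(frame, len(seq) - 2, 3):
                    codon = seq[i:i + 3]
                    if codon == "ATG" and start is None:
                        start = i
                    elif codon in stops and start is not None:
                        if (i - start) // 3 >= min_codons:
                            yield start, i + 3, seq[start:i + 3]
                        start = None

        for begin, end, orf in find_orfs("CCATGGCTGCAAGGTAACC"):
            print(begin, end, orf)   # -> 2 17 ATGGCTGCAAGGTAA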

  1. Rise and demise of bioinformatics? Promise and progress.

    Directory of Open Access Journals (Sweden)

    Christos A Ouzounis

    Full Text Available The field of bioinformatics and computational biology has gone through a number of transformations during the past 15 years, establishing itself as a key component of new biology. This spectacular growth has been challenged by a number of disruptive changes in science and technology. Despite the apparent fatigue of the linguistic use of the term itself, bioinformatics has grown perhaps to a point beyond recognition. We explore both historical aspects and future trends and argue that as the field expands, key questions remain unanswered and acquire new meaning while at the same time the range of applications is widening to cover an ever increasing number of biological disciplines. These trends appear to be pointing to a redefinition of certain objectives, milestones, and possibly the field itself.

  2. Meeting review: 2002 O'Reilly Bioinformatics Technology Conference.

    Science.gov (United States)

    Counsell, Damian

    2002-01-01

    At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences.Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.

  3. Zinc bioaccumulation by microbial consortium isolated from nickel smelter sludge disposal site

    Directory of Open Access Journals (Sweden)

    Kvasnová Simona

    2017-06-01

    Full Text Available Heavy metal pollution is one of the most important environmental issues of today. Bioremediation by microorganisms is one of the technologies extensively used for pollution treatment. In this study, we investigated the heavy metal resistance and zinc bioaccumulation of a microbial consortium isolated from a nickel sludge disposal site near Sereď (Slovakia). The composition of the consortium was analyzed based on MALDI-TOF MS of cultivable bacteria, and we have shown that the consortium was dominated by bacteria of the genus Arthrobacter. While the consortium showed very good growth in the presence of zinc, it was able to remove only 15% of zinc from liquid media. Selected members of the consortium showed lower growth rates in the presence of zinc, but individual isolates showed much higher bioaccumulation abilities than the whole consortium (up to 90% zinc removal for strain NH1). Bioremediation is frequently accelerated through injection of native microbiota into a contaminated area. Based on the data obtained in this study, we can conclude that careful selection of native microbiota could lead to the identification of bacteria with increased bioaccumulation abilities.

  4. Bacterial community composition characterization of a lead-contaminated Microcoleus sp. consortium.

    Science.gov (United States)

    Giloteaux, Ludovic; Solé, Antoni; Esteve, Isabel; Duran, Robert

    2011-08-01

    A Microcoleus sp. consortium, obtained from the Ebro delta microbial mat, was maintained under different conditions including uncontaminated, lead-contaminated, and acidic conditions. Terminal restriction fragment length polymorphism and 16S rRNA gene library analyses were performed in order to determine the effect of lead and culture conditions on the Microcoleus sp. consortium. The bacterial composition inside the consortium revealed low diversity and the presence of specific terminal-restriction fragments under lead conditions. 16S rRNA gene library analyses showed that members of the consortium were affiliated to the Alpha, Beta, and Gammaproteobacteria and Cyanobacteria. Sequences closely related to Achromobacter spp., Alcaligenes faecalis, and Thiobacillus species were exclusively found under lead conditions while sequences related to Geitlerinema sp., a cyanobacterium belonging to the Oscillatoriales, were not found in presence of lead. This result showed a strong lead selection of the bacterial members present in the Microcoleus sp. consortium. Several of the 16S rRNA sequences were affiliated to nitrogen-fixing microorganisms including members of the Rhizobiaceae and the Sphingomonadaceae. Additionally, confocal laser scanning microscopy and scanning and transmission electron microscopy showed that under lead-contaminated condition Microcoleus sp. cells were grouped and the number of electrodense intracytoplasmic inclusions was increased.

  5. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing the trade-off between parallelism and lazy evaluation (memory consumption) to be tuned. An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data-containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy also can be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data-processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and
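
    The code below is not the PaPy API; it is a generic, generator-based sketch of the dataflow idea the abstract describes, with one stage fanned out over a multiprocessing pool and a lazy downstream filter.

        # Generic dataflow-style pipeline sketch (not PaPy itself): items stream
        # through small, single-purpose stages; one stage runs on a worker pool.
        from multiprocessing import Pool

        def read_sequences():
            # stand-in for a parser component yielding items one at a time
            yield from ["ATGGCC", "ATGAAA", "TTTGGC"]

        def gc_content(seq):
            return seq, sum(base in "GC" for base in seq) / len(seq)

        def keep_gc_rich(items, threshold=0.5):
            for seq, gc in items:
                if gc >= threshold:
                    yield seq, gc

        if __name__ == "__main__":
            with Pool(processes=2) as pool:
                annotated = pool.imap(gc_content, read_sequences())  # parallel "map" stage
                for seq, gc in keep_gc_rich(annotated):              # lazy filter stage
                    print(seq, round(gc, 2))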

  6. The COPD Biomarker Qualification Consortium (CBQC)

    DEFF Research Database (Denmark)

    Casaburi, Richard; Celli, Bartolome; Crapo, James

    2013-01-01

    Abstract Knowledge about the pathogenesis and pathophysiology of chronic obstructive pulmonary disease (COPD) has advanced dramatically over the last 30 years. Unfortunately, this has had little impact in terms of new treatments. Over the same time frame, only one new class of medication for COPD......, and no interested party has been in a position to undertake such a process. In order to facilitate the development of novel tools to assess new treatments, the Food and Drug Administration, in collaboration with the COPD Foundation, the National Heart Lung and Blood Institute and scientists from the pharmaceutical... industry and academia conducted a workshop to survey the available information that could contribute to new tools. Based on this, a collaborative project, the COPD Biomarkers Qualification Consortium, was initiated. The Consortium is now actively preparing integrated data sets from existing resources...

  7. RISE OF BIOINFORMATICS AND COMPUTATIONAL BIOLOGY IN INDIA: A LOOK THROUGH PUBLICATIONS

    Directory of Open Access Journals (Sweden)

    Anjali Srivastava

    2017-09-01

    Full Text Available Computational biology and bioinformatics have been part and parcel of biomedical research for a few decades now. However, the institutionalization of bioinformatics research took place with the establishment of Distributed Information Centres (DISCs) in reputed research institutions across various disciplines by the Department of Biotechnology, Government of India. Though at the initial stages this endeavor was mainly focused on providing infrastructure for using information technology and internet-based communication and tools for carrying out computational biology and in-silico assisted research in varied arenas, from disease biology to agricultural crops, spices, veterinary science and many more, the natural outcome of establishing such facilities was new experiments with bioinformatics tools. Thus, the Biotechnology Information Systems (BTIS) programme grew into a solid movement, and a large number of publications started coming out of these centres. At the end of the last century, bioinformatics started developing into a full-fledged research subject. In the last decade, a need was felt to make a factual estimation of the results of this endeavor of the DBT, which had, by then, established about two hundred centres in almost all disciplines of biomedical research. In a bid to evaluate the efforts and outcomes of these centres, the BTIS Centre at CSIR-CDRI, Lucknow, was entrusted with collecting and collating the publications of these centres. However, when the full data was compiled, the DBT task force felt that the study must also include non-BTIS centres, so as to expand the report to give a glimpse of bioinformatics publications from the country.

  8. Removal of Triphenylmethane Dyes by Bacterial Consortium

    Directory of Open Access Journals (Sweden)

    Jihane Cheriaa

    2012-01-01

    Full Text Available A new consortium of four bacterial isolates (Agrobacterium radiobacter, Bacillus spp., Sphingomonas paucimobilis, and Aeromonas hydrophila), designated CM-4, was used to degrade and decolorize triphenylmethane dyes. All bacteria were isolated from activated sludge extracted from a wastewater treatment station of a dyeing industry plant. Individual bacterial isolates exhibited a remarkable color-removal capability against crystal violet (50 mg/L) and malachite green (50 mg/L) dyes within 24 h. Interestingly, the microbial consortium CM-4 shows a high decolorizing percentage for crystal violet and malachite green, respectively 91% and 99% within 2 h. The rate of chemical oxygen demand (COD) removal increases after 24 h, reaching 61.5% and 84.2% for crystal violet and malachite green, respectively. UV-Visible absorption spectra, FTIR analysis and the inspection of bacterial cell growth indicated that color removal by CM-4 was due to biodegradation. Evaluation of mutagenicity using the Salmonella typhimurium test strains TA98 and TA100 revealed that the degradation of crystal violet and malachite green by CM-4 did not lead to mutagenic products. Altogether, these results demonstrate the usefulness of the bacterial consortium in the treatment of textile dyes.

  9. The Consortium for Advancing Renewable Energy Technology (CARET)

    Science.gov (United States)

    Gordon, E. M.; Henderson, D. O.; Buffinger, D. R.; Fuller, C. W.; Uribe, R. M.

    1998-01-01

    The Consortium for Advancing Renewable Energy Technology (CARET) is a research and education program which uses the theme of renewable energy to build a minority scientist pipeline. CARET is also a consortium of four universities and NASA Lewis Research Center working together to promote science education and research to minority students using the theme of renewable energy. The consortium membership includes the HBCUs (Historically Black Colleges and Universities) Fisk, Wilberforce and Central State Universities, as well as Kent State University and NASA Lewis Research Center. The various stages of this pipeline provide participating students with experiences carrying different emphases. Some emphasize building enthusiasm for the classroom study of science and technology, while others emphasize the nature of research in these disciplines. Still others focus on relating a practical application to science and technology. And, of great importance to the success of the program are the interfaces between the various stages. Successfully managing these transitions is a requirement for producing trained scientists, engineers and technologists. Presentations describing the CARET program have been given at this year's HBCU Research Conference at the Ohio Aerospace Institute and as a seminar in the Solar Circle Seminar series of the Photovoltaic and Space Environments Branch at NASA Lewis Research Center. In this report, we will describe the many positive achievements toward the fulfillment of the goals and outcomes of our program. We will begin with a description of the interactions among the consortium members and end with a description of the activities of each of the member institutions.

  10. The creation of the SAVE consortium – Saving Asia's Vultures from ...

    African Journals Online (AJOL)

    This article describes the background to this problem, caused mainly by the veterinary drug diclofenac, and the establishment and structure of the SAVE consortium created to help coordinate the necessary conservation response. The lessons learnt in Asia and the working model of such a consortium are presented, which ...

  11. Consortium for Health and Military Performance (CHAMP)

    Data.gov (United States)

    Federal Laboratory Consortium — The Center's work addresses a wide scope of trauma exposure from the consequences of combat, operations other than war, terrorism, natural and humanmade disasters,...

  12. A review of bioinformatics training applied to research in molecular medicine, agriculture and biodiversity in Costa Rica and Central America.

    Science.gov (United States)

    Orozco, Allan; Morera, Jessica; Jiménez, Sergio; Boza, Ricardo

    2013-09-01

    Today, Bioinformatics has become a scientific discipline with great relevance for the Molecular Biosciences and for the Omics sciences in general. Although developed countries have progressed with large strides in Bioinformatics education and research, in other regions, such as Central America, the advances have occurred in a gradual way and with little support from academia, either at the undergraduate or graduate level. To address this problem, the University of Costa Rica's Medical School, a regional leader in Bioinformatics in Central America, has been conducting a series of Bioinformatics workshops, seminars and courses, leading to the creation of the region's first Bioinformatics Master's Degree. The recent creation of the Central American Bioinformatics Network (BioCANET), associated with the deployment of a supporting computational infrastructure (HPC cluster) devoted to providing computing support for Molecular Biology in the region, is providing a foundation stone for the development of Bioinformatics in the area. Central American bioinformaticians have participated in the creation of, and co-founded, the Iberoamerican Bioinformatics Society (SOIBIO). In this article, we review the most recent activities in education and research in Bioinformatics at several regional institutions. These activities have resulted in further advances for Molecular Medicine, Agriculture and Biodiversity research in Costa Rica and the rest of the Central American countries. Finally, we provide summary information on the first Central America Bioinformatics International Congress, as well as the creation of the first Bioinformatics company (Indromics Bioinformatics), a spin-off from academia in Central America and the Caribbean.

  13. Bioinformatics: A History of Evolution "In Silico"

    Science.gov (United States)

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  14. Protein raftophilicity. How bioinformatics can help membranologists

    DEFF Research Database (Denmark)

    Nielsen, Henrik; Sperotto, Maria Maddalena

    … an artificial neural network (ANN)-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect...

  15. Development and implementation of a bioinformatics online ...

    African Journals Online (AJOL)

    Thus, there is a need for appropriate strategies for introducing the basic components of this emerging scientific field to part of the African populace through the development of an online distance education learning tool. This study involved the design of a bioinformatics online distance education tool and an implementation of ...

  16. A Novel Methylotrophic Bacterial Consortium for Treatment of Industrial Effluents.

    Science.gov (United States)

    Hingurao, Krushi; Nerurkar, Anuradha

    2018-01-01

    Considering the importance of methylotrophs in industrial wastewater treatment, the focus of the present study was the utilization of a methylotrophic bacterial consortium as a microbial seed for biotreatment of a variety of industrial effluents. For this purpose, a mixed methylotrophic bacterial AC (Ankleshwar CETP) consortium comprising Bordetella petrii AC1, Bacillus licheniformis AC4, Salmonella subterranea AC5, and Pseudomonas stutzeri AC8 was used. The AC consortium showed efficient biotreatment of four industrial effluents procured from fertilizer, chemical and pesticide industries and a common effluent treatment plant, lowering their chemical oxygen demand (COD) from 950-2000 mg/l to below the detection limit in 60-96 h in a 6-l batch reactor and in 9-15 days in a 6-l continuous reactor. The operating variables of wastewater treatment, viz. COD, BOD, pH, MLSS, MLVSS, SVI, and F/M ratio of these effluents, were also maintained within the permissible range in both batch and continuous reactors. Therefore, formation of the AC consortium has led to the development of an efficient microbial seed capable of treating a variety of industrial effluents containing pollutants generated by their respective industries.

  17. Microbial hydrogen production from sewage sludge bioaugmented with a constructed microbial consortium

    Energy Technology Data Exchange (ETDEWEB)

    Kotay, Shireen Meher; Das, Debabrata [Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)

    2010-10-15

    A constructed microbial consortium was formulated from three facultative H₂-producing anaerobic bacteria, Enterobacter cloacae IIT-BT 08, Citrobacter freundii IIT-BT L139 and Bacillus coagulans IIT-BT S1. This consortium was tested as the seed culture for H₂ production. In the initial studies with a defined medium (MYG), E. cloacae produced more H₂ than the other two strains, and it was also found to be the dominant member when the consortium was used. On the other hand, B. coagulans as a pure culture gave a better H₂ yield (37.16 ml H₂/g COD consumed) than the other two strains when using sewage sludge as substrate. The pretreatment of sludge included sterilization (15% v/v), dilution and supplementation with 0.5% w/v glucose, which was found to be essential to screen out the H₂-consuming bacteria and improve H₂ production. With the defined (1:1:1) consortium as inoculum, COD reduction was higher and the H₂ yield was recorded to be 41.23 ml H₂/g COD reduced. Microbial profiling of the spent sludge showed that B. coagulans was the dominant member of the constructed consortium contributing towards H₂ production. The increase in H₂ yield indicated that substrate utilization was significantly higher in the consortium. The H₂ yield from pretreated sludge (35.54 ml H₂/g sludge) was higher than that reported in the literature (8.1-16.9 ml H₂/g sludge). Employing the formulated microbial consortium for biohydrogen production is a successful attempt to augment the H₂ yield from sewage sludge. (author)

  18. H3ABioNet, a sustainable pan-African bioinformatics network for human heredity and health in Africa

    Science.gov (United States)

    Mulder, Nicola J.; Adebiyi, Ezekiel; Alami, Raouf; Benkahla, Alia; Brandful, James; Doumbia, Seydou; Everett, Dean; Fadlelmola, Faisal M.; Gaboun, Fatima; Gaseitsiwe, Simani; Ghazal, Hassan; Hazelhurst, Scott; Hide, Winston; Ibrahimi, Azeddine; Jaufeerally Fakim, Yasmina; Jongeneel, C. Victor; Joubert, Fourie; Kassim, Samar; Kayondo, Jonathan; Kumuthini, Judit; Lyantagaye, Sylvester; Makani, Julie; Mansour Alzohairy, Ahmed; Masiga, Daniel; Moussa, Ahmed; Nash, Oyekanmi; Ouwe Missi Oukem-Boyer, Odile; Owusu-Dabo, Ellis; Panji, Sumir; Patterton, Hugh; Radouani, Fouzia; Sadki, Khalid; Seghrouchni, Fouad; Tastan Bishop, Özlem; Tiffin, Nicki; Ulenga, Nzovu

    2016-01-01

    The application of genomics technologies to medicine and biomedical research is increasing in popularity, made possible by new high-throughput genotyping and sequencing technologies and improved data analysis capabilities. Some of the greatest genetic diversity among humans, animals, plants, and microbiota occurs in Africa, yet genomic research outputs from the continent are limited. The Human Heredity and Health in Africa (H3Africa) initiative was established to drive the development of genomic research for human health in Africa, and through recognition of the critical role of bioinformatics in this process, spurred the establishment of H3ABioNet, a pan-African bioinformatics network for H3Africa. The limitations in bioinformatics capacity on the continent have been a major contributory factor to the lack of notable outputs in high-throughput biology research. Although pockets of high-quality bioinformatics teams have existed previously, the majority of research institutions lack experienced faculty who can train and supervise bioinformatics students. H3ABioNet aims to address this dire need, specifically in the area of human genetics and genomics, but knock-on effects are ensuring this extends to other areas of bioinformatics. Here, we describe the emergence of genomics research and the development of bioinformatics in Africa through H3ABioNet. PMID:26627985

  19. Kansas Consortium Plug-in Hybrid Medium Duty

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2012-03-31

    On September 30, 2008, the US Department of Energy (DOE) issued a cooperative agreement award, DE-FC26-08NT01914, to the Metropolitan Energy Center (MEC), for a project known as the “Kansas Consortium Plug-in Hybrid Medium Duty Certification” project. The cooperative agreement was awarded pursuant to H15915 in reference to H. R. 2764 Congressionally Directed Projects. The original agreement provided funding for the Consortium to implement the established project objectives as follows: (1) to understand the current state of the development of a test protocol for PHEV configurations; (2) to work with industry stakeholders to recommend a medium duty vehicle test protocol; (3) to utilize the Phase 1 Eaton PHEV F550 Chassis or other appropriate PHEV configurations to conduct emissions testing; (4) and to make an industry PHEV certification test protocol recommendation for medium duty trucks. Subsequent amendments to the initial agreement were made, the most significant being a revised Scope of Project Objectives (SOPO) that did not address actual field data, since such data were not available as originally expected. This project was paired by DOE with a parallel project award given to the South Coast Air Quality Management District (SCAQMD) in California. The SCAQMD project involved designing, building and testing five medium duty plug-in hybrid electric trucks. SCAQMD had contracted with the Electric Power Research Institute (EPRI) to manage the project. EPRI provided the required match to the federal grant funds for both the SCAQMD project and the Kansas Consortium project. The rationale for linking the two projects was that the data derived from the SCAQMD project could be used to validate the protocols developed by the Kansas Consortium team. At the same time, the consortium team would be a useful resource to SCAQMD in designating their test procedures for emissions and operating parameters and determining vehicle mileage. The years between award of the cooperative

  20. Bioinformatics education in high school: implications for promoting science, technology, engineering, and mathematics careers.

    Science.gov (United States)

    Kovarik, Dina N; Patterson, Davis G; Cohen, Carolyn; Sanders, Elizabeth A; Peterson, Karen A; Porter, Sandra G; Chowning, Jeanne Ting

    2013-01-01

    We investigated the effects of our Bio-ITEST teacher professional development model and bioinformatics curricula on cognitive traits (awareness, engagement, self-efficacy, and relevance) in high school teachers and students that are known to accompany a developing interest in science, technology, engineering, and mathematics (STEM) careers. The program included best practices in adult education and diverse resources to empower teachers to integrate STEM career information into their classrooms. The introductory unit, Using Bioinformatics: Genetic Testing, uses bioinformatics to teach basic concepts in genetics and molecular biology, and the advanced unit, Using Bioinformatics: Genetic Research, utilizes bioinformatics to study evolution and support student research with DNA barcoding. Pre-post surveys demonstrated significant growth (n = 24) among teachers in their preparation to teach the curricula and infuse career awareness into their classes, and these gains were sustained through the end of the academic year. Introductory unit students (n = 289) showed significant gains in awareness, relevance, and self-efficacy. While these students did not show significant gains in engagement, advanced unit students (n = 41) showed gains in all four cognitive areas. Lessons learned during Bio-ITEST are explored in the context of recommendations for other programs that wish to increase student interest in STEM careers.

  1. Analysis of requirements for teaching materials based on the course bioinformatics for plant metabolism

    Science.gov (United States)

    Balqis, Widodo, Lukiati, Betty; Amin, Mohamad

    2017-05-01

    One way to improve the quality of learning in the Plant Metabolism course in the Department of Biology, State University of Malang, is to develop teaching materials. This research evaluates the need for bioinformatics-based teaching material in the Plant Metabolism course using the Analyze, Design, Develop, Implement, and Evaluate (ADDIE) development model. Data were collected through questionnaires distributed to students in the Plant Metabolism course of the Department of Biology, State University of Malang, and through analysis of the semester lecture plan (RPS). The learning outcomes of this course show that it is not yet integrated with the field of bioinformatics. All respondents stated that plant metabolism textbooks do not include bioinformatics and fail to explain the metabolism of chemical compounds from local Indonesian plants. Respondents thought that bioinformatics could provide examples, explain secondary metabolite analysis techniques, and support discussion of potential medicinal compounds from local plants. As many as 65% of the respondents said that the existing metabolism textbook could not be used to understand secondary metabolism in plant metabolism lectures. Therefore, the development of bioinformatics-based teaching materials for plant metabolism is important to improve understanding of the lecture material in plant metabolism.

  2. A BIOINFORMATIC STRATEGY TO RAPIDLY CHARACTERIZE CDNA LIBRARIES

    Science.gov (United States)

    A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier, David J. Dix and Stephen A. Krawetz. Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...

  3. Overview of the Inland California Translational Consortium

    Science.gov (United States)

    Malkas, Linda H.

    2017-05-01

    The mission of the Inland California Translational Consortium (ICTC), an independent research consortium comprising a unique hub of regional institutions (City of Hope [COH], California Institute of Technology [Caltech], Jet Propulsion Laboratory [JPL], University of California Riverside [UCR], and Claremont Colleges Keck Graduate Institute [KGI]), is to institute a new paradigm within the academic culture to accelerate translation of innovative biomedical discoveries into clinical applications that positively affect human health and life. The ICTC actively supports clinical translational research as well as the implementation and advancement of novel education and training models for the translation of basic discoveries into workable products and practices that preserve and improve human health while training and educating at all levels of the workforce using innovative forward-thinking approaches.

  4. Personalized cloud-based bioinformatics services for research and education: use cases and the elasticHPC package.

    Science.gov (United States)

    El-Kalioby, Mohamed; Abouelhoda, Mohamed; Krüger, Jan; Giegerich, Robert; Sczyrba, Alexander; Wall, Dennis P; Tonellato, Peter

    2012-01-01

    Bioinformatics services have been traditionally provided in the form of a web-server that is hosted at institutional infrastructure and serves multiple users. This model, however, is not flexible enough to cope with the increasing number of users, increasing data size, and new requirements in terms of speed and availability of service. The advent of cloud computing suggests a new service model that provides an efficient solution to these problems, based on the concepts of "resources-on-demand" and "pay-as-you-go". However, cloud computing has not yet been introduced within bioinformatics servers due to the lack of usage scenarios and software layers that address the requirements of the bioinformatics domain. In this paper, we provide different use case scenarios for providing cloud computing based services, considering both the technical and financial aspects of the cloud computing service model. These scenarios are for individual users seeking computational power as well as bioinformatics service providers aiming at provision of personalized bioinformatics services to their users. We also present elasticHPC, a software package and a library that facilitates the use of high performance cloud computing resources in general and the implementation of the suggested bioinformatics scenarios in particular. Concrete examples that demonstrate the suggested use case scenarios with whole bioinformatics servers and major sequence analysis tools like BLAST are presented. Experimental results with large datasets are also included to show the advantages of the cloud model. Our use case scenarios and the elasticHPC package are steps towards the provision of cloud based bioinformatics services, which would help in overcoming the data challenge of recent biological research. All resources related to elasticHPC and its web-interface are available at http://www.elasticHPC.org.

  5. Exploring Cystic Fibrosis Using Bioinformatics Tools: A Module Designed for the Freshman Biology Course

    Science.gov (United States)

    Zhang, Xiaorong

    2011-01-01

    We incorporated a bioinformatics component into the freshman biology course that allows students to explore cystic fibrosis (CF), a common genetic disorder, using bioinformatics tools and skills. Students learn about CF through searching genetic databases, analyzing genetic sequences, and observing the three-dimensional structures of proteins…

  6. 2nd Colombian Congress on Computational Biology and Bioinformatics

    CERN Document Server

    Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan

    2014-01-01

    This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress (CCBCOL), after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and their integration with the Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data that need to be organized, analyzed and stored in order to understand phenomena associated with living organisms, including their evolution, their behavior in different ecosystems, and the development of applications that can be derived from this analysis.

  7. A Long Island Consortium Takes Shape. Occasional Paper No. 76-1.

    Science.gov (United States)

    Taylor, William R.

    This occasional paper, the first in a "new" series, describes the background, activities, and experiences of the Long Island Consortium, a cooperative effort of two-year and four-year colleges committed to organizing a model program of faculty development. The consortium was organized under an initial grant from the Lilly Endowment. In May and…

  8. Biowep: a workflow enactment portal for bioinformatics applications.

    Science.gov (United States)

    Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano

    2007-03-08

    The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of unskilled researchers. A portal enabling these researchers to benefit from new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of the workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of

  9. Biowep: a workflow enactment portal for bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Romano Paolo

    2007-03-01

    Full Text Available Abstract Background The huge amount of biological information, its distribution over the Internet and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of unskilled researchers. A portal enabling these researchers to benefit from new technologies is still missing. Results We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created by using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of the workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. Conclusion We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical

  10. 77 FR 43237 - Genome in a Bottle Consortium-Work Plan Review Workshop

    Science.gov (United States)

    2012-07-24

    ... in human whole genome variant calls. A principal motivation for this consortium is to enable... standards and quantitative performance metrics are needed to achieve the confidence in measurement results... principal motivation for this consortium is to enable science-based regulatory oversight of clinical...

  11. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    Bioinformatics web-based services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with those of locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means of discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities for local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and of RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services, implemented in Perl, directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
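
    As a hedged illustration of how such a WSDL-described SOAP service could be consumed outside of EMBOSS, the sketch below uses the Python zeep library against the RPC-encoded WSDL URL quoted above. The operation name and arguments in the commented call are placeholders only (they are not taken from the KBWS documentation), and the endpoint may no longer be live.

```python
# Sketch only: inspecting and calling a SOAP service from its WSDL with zeep.
# The WSDL URL is the one quoted in the record; the commented operation name
# and its arguments are placeholders, not confirmed KBWS operations.
from zeep import Client

client = Client("http://soap.g-language.org/kbws.wsdl")

# List the operations the WSDL actually exposes before calling anything
# (common zeep introspection idiom; relies on the binding's internal table).
for service in client.wsdl.services.values():
    for port in service.ports.values():
        print(sorted(port.binding._operations))

# Hypothetical call; replace with a real operation name from the listing above.
# result = client.service.runClustalw(">s1\nACGT\n>s2\nACGA")
# print(result)
```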

  12. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    Full Text Available Abstract Bioinformatics web-based services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with those of locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means of discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities for local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and of RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services, implemented in Perl, directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  13. Implementing a Web-Based Introductory Bioinformatics Course for Non-Bioinformaticians That Incorporates Practical Exercises

    Science.gov (United States)

    Vincent, Antony T.; Bourbonnais, Yves; Brouard, Jean-Simon; Deveau, Hélène; Droit, Arnaud; Gagné, Stéphane M.; Guertin, Michel; Lemieux, Claude; Rathier, Louis; Charette, Steve J.; Lagüe, Patrick

    2018-01-01

    A recent scientific discipline, bioinformatics, defined as using informatics for the study of biological problems, is now a requirement for the study of biological sciences. Bioinformatics has become such a powerful and popular discipline that several academic institutions have created programs in this field, allowing students to become…

  14. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  15. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    Science.gov (United States)

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. One of the most common biologically inspired techniques is genetic algorithms (GAs), which take the Darwinian concept of natural selection as the driving force behind systems for solving real-world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics-based problems.
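
    To make the idea concrete, here is a minimal, self-contained sketch (not taken from the reviewed paper) of a genetic algorithm evolving random DNA strings toward a toy target motif; the target sequence, population size and mutation rate are arbitrary illustrative choices.

```python
# Minimal GA sketch: fitness = number of positions matching a toy target motif;
# truncation selection, single-point crossover and per-base mutation.
import random

ALPHABET = "ACGT"
TARGET = "GATTACAGATTACA"                 # illustrative target motif
POP_SIZE, MUT_RATE, GENERATIONS = 200, 0.02, 500

def fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def crossover(p1, p2):
    cut = random.randrange(1, len(TARGET))
    return p1[:cut] + p2[cut:]

def mutate(seq):
    return "".join(c if random.random() > MUT_RATE else random.choice(ALPHABET)
                   for c in seq)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[: POP_SIZE // 5]          # truncation selection
    population = [mutate(crossover(*random.sample(parents, 2)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(generation, best, fitness(best))
```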

  16. Northeast Artificial Intelligence Consortium Annual Report - 1988 Parallel Vision. Volume 9

    Science.gov (United States)

    1989-10-01

    Supports the Northeast Artificial Intelligence Consortium (NAIC). Volume 9, Parallel Vision, submitted by Christopher M. Brown and Randal C. Nelson, Syracuse University (Northeast Artificial Intelligence Consortium Annual Report - 1988, Parallel Vision).

  17. LBL/JSU/AGMUS science consortium annual report, FY 1991--1992

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-31

    In 1983, a formal Memorandum of Understanding joined the Ana G. Mendez University System (AGMUS), Jackson State University (JSU), and the Lawrence Berkeley Laboratory (LBL) in a consortium designed to advance the science and technology programs of JSU and AGMUS. This is the first such collaboration between a Hispanic university system, a historically Black university, and a national laboratory. The goals of this alliance are basic and direct: to develop and effect a long-term, comprehensive program that will enable the campuses of AGMUS and JSU to provide a broad, high-quality offering in the natural and computer sciences, to increase the number of minority students entering these fields, and to contribute to scientific knowledge and the federal government's science mission through research. This report documents the progress toward these goals and includes individual success stories. The LBL/JSU/AGMUS Science Consortium has developed plans for utilizing its program successes to help other institutions to adopt or adapt those elements of the model that have produced the greatest results. Within the five-year plan formulated in 1990 are eight major components, each with defining elements and goals. These elements have become the components of the Science Consortium's current plan for expansion and propagation.

  18. Biodegradation mechanisms and kinetics of azo dye 4BS by a microbial consortium.

    Science.gov (United States)

    He, Fang; Hu, Wenrong; Li, Yuezhong

    2004-10-01

    A microbial consortium consisting of a white-rot fungus, 8-4*, and a Pseudomonas strain, 1-10, was isolated by enrichment from the wastewater treatment facilities of a local dyeing house, using the azo dye Direct Fast Scarlet 4BS as the sole source of carbon and energy; the consortium had a high capacity for rapid decolorization of 4BS. To elucidate the decolorization mechanisms, decolorization of 4BS was compared between the individual strains and the microbial consortium under different treatment processes. The microbial consortium showed a significant improvement in dye decolorization rates under either static or shaking culture, which might be attributed to the synergistic action of the single strains. From the COD values and the UV-visible spectra of 4BS solutions before and after decolorization cultivation with the microbial consortium, it was found that 4BS could be mineralized completely, and the results were used to infer the degradation pathway of 4BS. This study also examined the kinetics of 4BS decolorization by the immobilized microbial consortium. The results demonstrated that optimal decolorization activity was observed in the pH range 4-9 and the temperature range 20-40 degrees C, and that the maximal specific decolorization rate occurred at 1,000 mg l⁻¹ of 4BS. The proliferation and distribution of the microbial consortium were also observed microscopically, which further confirmed the decolorization mechanisms of 4BS.

  19. CLIMB (the Cloud Infrastructure for Microbial Bioinformatics): an online resource for the medical microbiology community.

    Science.gov (United States)

    Connor, Thomas R; Loman, Nicholas J; Thompson, Simon; Smith, Andy; Southgate, Joel; Poplawski, Radoslaw; Bull, Matthew J; Richardson, Emily; Ismail, Matthew; Thompson, Simon Elwood-; Kitchen, Christine; Guest, Martyn; Bakke, Marius; Sheppard, Samuel K; Pallen, Mark J

    2016-09-01

    The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data.

  20. Bioboxes: standardised containers for interchangeable bioinformatics software.

    Science.gov (United States)

    Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D

    2015-01-01

    Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.
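
    As a loose illustration of the underlying idea (standardised container interfaces driven programmatically), the sketch below runs a containerised tool from Python with mounted input and output directories. The image name, mount points and trailing task argument are placeholders for illustration; they are not taken from the bioboxes specification.

```python
# Sketch only: driving a containerised bioinformatics tool from Python.
# Image name, mount paths and the trailing "default" task argument are
# placeholders illustrating a standardised container interface.
import subprocess
from pathlib import Path

inp = Path("data/reads").resolve()
out = Path("results").resolve()
out.mkdir(parents=True, exist_ok=True)

cmd = [
    "docker", "run", "--rm",
    "-v", f"{inp}:/input:ro",       # read-only input mount (placeholder path)
    "-v", f"{out}:/output",         # writable output mount (placeholder path)
    "example/assembler:latest",     # placeholder image name
    "default",                      # placeholder task name
]
subprocess.run(cmd, check=True)
```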

  1. Patient-Reported Outcome (PRO) Consortium translation process: consensus development of updated best practices.

    Science.gov (United States)

    Eremenco, Sonya; Pease, Sheryl; Mann, Sarah; Berry, Pamela

    2017-01-01

    This paper describes the rationale and goals of the Patient-Reported Outcome (PRO) Consortium's instrument translation process. The PRO Consortium has developed a number of novel PRO measures which are in the process of qualification by the U.S. Food and Drug Administration (FDA) for use in clinical trials where endpoints based on these measures would support product labeling claims. Given the importance of FDA qualification of these measures, the PRO Consortium's Process Subcommittee determined that a detailed linguistic validation (LV) process was necessary to ensure that all translations of Consortium-developed PRO measures are performed using a standardized approach with the rigor required to meet regulatory and pharmaceutical industry expectations, as well as having a clearly defined instrument translation process that the translation industry can support. The consensus process involved gathering information about current best practices from 13 translation companies with expertise in LV, consolidating the findings to generate a proposed process, and obtaining iterative feedback from the translation companies and PRO Consortium member firms on the proposed process in two rounds of review in order to update existing principles of good practice in LV and to provide sufficient detail for the translation process to ensure consistency across PRO Consortium measures, sponsors, and translation companies. The consensus development resulted in a 12-step process that outlines universal and country-specific new translation approaches, as well as country-specific adaptations of existing translations. The PRO Consortium translation process will play an important role in maintaining the validity of the data generated through these measures by ensuring that they are translated by qualified linguists following a standardized and rigorous process that reflects best practice.

  2. Making Bioinformatics Projects a Meaningful Experience in an Undergraduate Biotechnology or Biomedical Science Programme

    Science.gov (United States)

    Sutcliffe, Iain C.; Cummings, Stephen P.

    2007-01-01

    Bioinformatics has emerged as an important discipline within the biological sciences that allows scientists to decipher and manage the vast quantities of data (such as genome sequences) that are now available. Consequently, there is an obvious need to provide graduates in biosciences with generic, transferable skills in bioinformatics. We present…

  3. Comparative Proteome Bioinformatics: Identification of Phosphotyrosine Signaling Proteins in the Unicellular Protozoan Ciliate Tetrahymena

    DEFF Research Database (Denmark)

    Gammeltoft, Steen; Christensen, Søren Tvorup; Joachimiak, Marcin

    2005-01-01

    Tetrahymena, bioinformatics, cilia, evolution, signaling, TtPTK1, PTK, Grb2, SH-PTP 2, Plcy, Src, PTP, PI3K, SH2, SH3, PH

  4. Facilitating the use of large-scale biological data and tools in the era of translational bioinformatics

    DEFF Research Database (Denmark)

    Kouskoumvekaki, Irene; Shublaq, Nour; Brunak, Søren

    2014-01-01

    As both the amount of generated biological data and the available processing power increase, computational experimentation is no longer exclusive to bioinformaticians but is moving across all biomedical domains. For bioinformatics to realize its translational potential, domain experts need access to user-friendly solutions to navigate, integrate and extract information out of biological databases, as well as to combine tools and data resources in bioinformatics workflows. In this review, we present services that assist biomedical scientists in incorporating bioinformatics tools into their research. We review recent applications of Cytoscape, BioGPS and DAVID for data visualization, integration and functional enrichment. Moreover, we illustrate the use of Taverna, Kepler, GenePattern, and Galaxy as open-access workbenches for bioinformatics workflows. Finally, we mention services...

  5. Workflows in bioinformatics: meta-analysis and prototype implementation of a workflow generator

    Directory of Open Access Journals (Sweden)

    Thoraval Samuel

    2005-04-01

    Full Text Available Abstract Background Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphic User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. Results We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. Availability: http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive), ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). Conclusion From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous

  6. Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.

    Science.gov (United States)

    Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon

    2017-05-31

    Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words. For the analysis, we used PubMed to collect forty-six bioinformatics journals from the MEDLINE database. We conducted a time series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics, and analyzed the ACT model results in each period. Additionally, for further integrated analysis, we conducted a time series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examined the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented. The results of our analysis imply that, over time, the field of bioinformatics has become more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology. These results are
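
    For readers unfamiliar with the baseline technique the paper builds on, the following is a small illustrative sketch (not the ACT model itself, and not the paper's MEDLINE data) of plain LDA topic modeling over a few toy abstracts using scikit-learn.

```python
# Illustrative sketch of plain LDA (the baseline contrasted with ACT) on toy
# documents; a real analysis would use MEDLINE abstracts and keyphrases.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "protein structure prediction with deep learning",
    "gene expression microarray normalization methods",
    "phylogenetic tree inference from aligned sequences",
    "network analysis of protein protein interactions",
    "variant calling pipelines for genome sequencing",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {', '.join(top_terms)}")
```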

  7. libcov: A C++ bioinformatic library to manipulate protein structures, sequence alignments and phylogeny

    OpenAIRE

    Butt, Davin; Roger, Andrew J; Blouin, Christian

    2005-01-01

    Background An increasing number of bioinformatics methods are considering the phylogenetic relationships between biological sequences. Implementing new methodologies using the maximum likelihood phylogenetic framework can be a time consuming task. Results The bioinformatics library libcov is a collection of C++ classes that provides a high and low-level interface to maximum likelihood phylogenetics, sequence analysis and a data structure for structural biological methods. libcov can be used ...

  8. Bioinformatics for whole-genome shotgun sequencing of microbial communities.

    Directory of Open Access Journals (Sweden)

    Kevin Chen

    2005-07-01

    Full Text Available The application of whole-genome shotgun sequencing to microbial communities represents a major development in metagenomics, the study of uncultured microbes via the tools of modern genomic analysis. In the past year, whole-genome shotgun sequencing projects of prokaryotic communities from an acid mine biofilm, the Sargasso Sea, Minnesota farm soil, three deep-sea whale falls, and deep-sea sediments have been reported, adding to previously published work on viral communities from marine and fecal samples. The interpretation of this new kind of data poses a wide variety of exciting and difficult bioinformatics problems. The aim of this review is to introduce the bioinformatics community to this emerging field by surveying existing techniques and promising new approaches for several of the most interesting of these computational problems.

  9. Promoting synergistic research and education in genomics and bioinformatics.

    Science.gov (United States)

    Yang, Jack Y; Yang, Mary Qu; Zhu, Mengxia Michelle; Arabnia, Hamid R; Deng, Youping

    2008-01-01

    Bioinformatics and Genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demand for developing powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and the prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), touch tomorrow's bioinformatics and personalized medicine through today's efforts in promoting research, education and awareness of this emerging integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, in the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics. Biocomp 2007 provided a common platform for the cross-fertilization of ideas and to help shape knowledge and

  10. MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics.

    Science.gov (United States)

    Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola

    2012-01-01

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, some of the challenges of computational biology and bioinformatics education are inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers and discouraged them from aspiring to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of the users. This tool also has the capability of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.

  11. Establishing a distributed national research infrastructure providing bioinformatics support to life science researchers in Australia.

    Science.gov (United States)

    Schneider, Maria Victoria; Griffin, Philippa C; Tyagi, Sonika; Flannery, Madison; Dayalan, Saravanan; Gladman, Simon; Watson-Haigh, Nathan; Bayer, Philipp E; Charleston, Michael; Cooke, Ira; Cook, Rob; Edwards, Richard J; Edwards, David; Gorse, Dominique; McConville, Malcolm; Powell, David; Wilkins, Marc R; Lonie, Andrew

    2017-06-30

    EMBL Australia Bioinformatics Resource (EMBL-ABR) is a developing national research infrastructure, providing bioinformatics resources and support to life science and biomedical researchers in Australia. EMBL-ABR comprises 10 geographically distributed national nodes with one coordinating hub, with current funding provided through Bioplatforms Australia and the University of Melbourne for its initial 2-year development phase. The EMBL-ABR mission is to: (1) increase Australia's capacity in bioinformatics and data sciences; (2) contribute to the development of training in bioinformatics skills; (3) showcase Australian data sets at an international level; and (4) enable engagement in international programs. The activities of EMBL-ABR are focussed in six key areas, aligning with comparable international initiatives such as ELIXIR, CyVerse and NIH Commons. These key areas (Tools, Data, Standards, Platforms, Compute and Training) are described in this article. © The Author 2017. Published by Oxford University Press.

  12. Bioinformatics Meets Virology: The European Virus Bioinformatics Center's Second Annual Meeting.

    Science.gov (United States)

    Ibrahim, Bashar; Arkhipova, Ksenia; Andeweg, Arno C; Posada-Céspedes, Susana; Enault, François; Gruber, Arthur; Koonin, Eugene V; Kupczok, Anne; Lemey, Philippe; McHardy, Alice C; McMahon, Dino P; Pickett, Brett E; Robertson, David L; Scheuermann, Richard H; Zhernakova, Alexandra; Zwart, Mark P; Schönhuth, Alexander; Dutilh, Bas E; Marz, Manja

    2018-05-14

    The Second Annual Meeting of the European Virus Bioinformatics Center (EVBC), held in Utrecht, Netherlands, focused on computational approaches in virology, with topics including (but not limited to) virus discovery, diagnostics, (meta-)genomics, modeling, epidemiology, molecular structure, evolution, and viral ecology. The goals of the Second Annual Meeting were threefold: (i) to bring together virologists and bioinformaticians from across the academic, industrial, professional, and training sectors to share best practice; (ii) to provide a meaningful and interactive scientific environment to promote discussion and collaboration between students, postdoctoral fellows, and both new and established investigators; (iii) to inspire and suggest new research directions and questions. Approximately 120 researchers from around the world attended the Second Annual Meeting of the EVBC this year, including 15 renowned international speakers. This report presents an overview of new developments and novel research findings that emerged during the meeting.

  13. Consortium de recherche pour le développement de l'agriculture en ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Research Consortium for the Development of Agriculture in Haiti. Even before it was hit by a devastating earthquake in January 2010, Haiti's children suffered some of the worst rates of undernutrition in Latin America and the Caribbean.

  14. Update on the US Government's Biometric Consortium

    National Research Council Canada - National Science Library

    Campbell, Joseph

    1997-01-01

    .... The goals of the consortium remain largely the same under this new leadership. The current emphasis is on the formal approval of our charter and on the establishment of a national biometric test and evaluation laboratory.

  15. Bioinformatics and Microarray Data Analysis on the Cloud.

    Science.gov (United States)

    Calabrese, Barbara; Cannataro, Mario

    2016-01-01

    High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that requires large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand anytime-and-anywhere access to resources and applications, and thus it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patient data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis solutions, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patient data.

  16. Protecting innovation in bioinformatics and in-silico biology.

    Science.gov (United States)

    Harrison, Robert

    2003-01-01

    Commercial success or failure of innovation in bioinformatics and in-silico biology requires the appropriate use of legal tools for protecting and exploiting intellectual property. These tools include patents, copyrights, trademarks, design rights, and limiting information in the form of 'trade secrets'. Potentially patentable components of bioinformatics programmes include lines of code, algorithms, data content, data structure and user interfaces. In both the US and the European Union, copyright protection is granted for software as a literary work, and most other major industrial countries have adopted similar rules. Nonetheless, the grant of software patents remains controversial and is being challenged in some countries. Current debate extends to aspects such as whether patents can claim not only the apparatus and methods but also the data signals and/or products, such as a CD-ROM, on which the programme is stored. The patentability of substances discovered using in-silico methods is a separate debate that is unlikely to be resolved in the near future.

  17. Pay-as-you-go data integration for bio-informatics

    NARCIS (Netherlands)

    Wanders, B.

    2012-01-01

    Scientific research in bio-informatics is often data-driven and supported by numerous biological databases. A biological database contains factual information collected from scientific experiments and computational analyses about areas including genomics, proteomics, metabolomics, microarray gene

  18. BioShaDock: a community driven bioinformatics shared Docker-based tools registry [version 1; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    François Moreews

    2015-12-01

    Full Text Available Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, that enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the registry to create a new description in the Elixir registry, based on the BioShaDock entry metadata. This link will help users get more information on the tool such as its EDAM operations, input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.

  19. Nispero: a cloud-computing based Scala tool specially suited for bioinformatics data processing

    OpenAIRE

    Evdokim Kovach; Alexey Alekhin; Eduardo Pareja Tobes; Raquel Tobes; Eduardo Pareja; Marina Manrique

    2014-01-01

    Nowadays it is widely accepted that the bioinformatics data analysis is a real bottleneck in many research activities related to life sciences. High-throughput technologies like Next Generation Sequencing (NGS) have completely reshaped the biology and bioinformatics landscape. Undoubtedly NGS has allowed important progress in many life-sciences related fields but has also presented interesting challenges in terms of computation capabilities and algorithms. Many kinds of tasks related with NGS...

  20. Ecotoxicological effects of enrofloxacin and its removal by monoculture of microalgal species and their consortium.

    Science.gov (United States)

    Xiong, Jiu-Qiang; Kurade, Mayur B; Jeon, Byong-Hun

    2017-07-01

    Enrofloxacin (ENR), a fluoroquinolone antibiotic, has raised significant scientific concern because of its ecotoxicity toward aquatic microbiota. The ecotoxicity and removal of ENR by five individual microalgae species and their consortium were studied to understand the behavior and interactions of ENR in natural systems. The individual microalgal species (Scenedesmus obliquus, Chlamydomonas mexicana, Chlorella vulgaris, Ourococcus multisporus, Micractinium resseri) and their consortium could withstand high doses of ENR (≤1 mg L⁻¹). Growth inhibition (68-81%) of the individual microalgae species and their consortium was observed at 100 mg ENR L⁻¹ compared to the control after 11 days of cultivation. The calculated 96-h EC50 of ENR for the individual microalgae species and the microalgae consortium was 9.6-15.0 mg ENR L⁻¹. All the microalgae could recover from the toxicity of high concentrations of ENR during cultivation. The biochemical characteristics (total chlorophyll, carotenoid, and malondialdehyde) were significantly influenced by ENR (1-100 mg L⁻¹) stress. The individual microalgae species and the microalgae consortium removed 18-26% of ENR by day 11. Although the microalgae consortium showed a higher sensitivity (lower EC50) toward ENR than the individual microalgae species, the removal efficiency of ENR by the constructed microalgae consortium was comparable to that of the most effective microalgal species. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal; Salama, Khaled N.; Zidan, Mohammed A.

    2012-01-01

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data which may take a long time. Here, we
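
    The record above is truncated, but the dynamic-programming recurrence behind Smith-Waterman local alignment is well defined; the sketch below is a minimal single-threaded Python illustration of that recurrence. The match/mismatch/gap scores and the toy sequences are assumptions, and the sketch is unrelated to the hybrid multiprocessor technique the paper itself proposes.

    ```python
    # Minimal Smith-Waterman local alignment score (illustrative only).
    # Scoring parameters are assumptions, not taken from the record above.
    def smith_waterman_score(a: str, b: str, match=2, mismatch=-1, gap=-2) -> int:
        rows, cols = len(a) + 1, len(b) + 1
        H = [[0] * cols for _ in range(rows)]   # DP matrix, first row/column stay 0
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                # Local alignment: scores are never allowed to drop below zero.
                H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                best = max(best, H[i][j])
        return best

    if __name__ == "__main__":
        # Prints the best local alignment score for two toy sequences.
        print(smith_waterman_score("ACACACTA", "AGCACACA"))
    ```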

  2. 'Students-as-partners' scheme enhances postgraduate students' employability skills while addressing gaps in bioinformatics education.

    Science.gov (United States)

    Mello, Luciane V; Tregilgas, Luke; Cowley, Gwen; Gupta, Anshul; Makki, Fatima; Jhutty, Anjeet; Shanmugasundram, Achchuthan

    2017-01-01

    Teaching bioinformatics is a longstanding challenge for educators who need to demonstrate to students how skills developed in the classroom may be applied to real world research. This study employed an action research methodology which utilised student-staff partnership and peer-learning. It was centred on the experiences of peer-facilitators, students who had previously taken a postgraduate bioinformatics module, and had applied knowledge and skills gained from it to their own research. It aimed to demonstrate to peer-receivers, current students, how bioinformatics could be used in their own research while developing peer-facilitators' teaching and mentoring skills. This student-centred approach was well received by the peer-receivers, who claimed to have gained improved understanding of bioinformatics and its relevance to research. Equally, peer-facilitators also developed a better understanding of the subject and appreciated that the activity was a rare and invaluable opportunity to develop their teaching and mentoring skills, enhancing their employability.

  3. A bioinformatics-based overview of protein Lys-Nε-acetylation

    Science.gov (United States)

    Among posttranslational modifications, there are some conceptual similarities between Lys-Nε-acetylation and Ser/Thr/Tyr O-phosphorylation. Herein we present a bioinformatics-based overview of reversible protein Lys-acetylation, including some comparisons with reversible protein phosphorylation. T...

  4. jORCA: easily integrating bioinformatics Web Services.

    Science.gov (United States)

    Martín-Requena, Victoria; Ríos, Javier; García, Maximiliano; Ramírez, Sergio; Trelles, Oswaldo

    2010-02-15

    Web services technology is becoming the option of choice to deploy bioinformatics tools that are universally available. One of the major strengths of this approach is that it supports machine-to-machine interoperability over a network. However, a weakness of this approach is that various Web Services differ in their definition and invocation protocols, as well as their communication and data formats, and this presents a barrier to service interoperability. jORCA is a desktop client aimed at facilitating seamless integration of Web Services. It does so by making a uniform representation of the different web resources, supporting scalable service discovery, and automatic composition of workflows. Usability is at the top of the jORCA agenda; thus it is a highly customizable and extensible application that accommodates a broad range of user skills, featuring double-click invocation of services in conjunction with advanced execution control, on-the-fly data standardization, extensibility of viewer plug-ins, drag-and-drop editing capabilities, plus a file-based browsing style and organization of favourite tools. The integration of bioinformatics Web Services is thus made easier, supporting a wider range of users.

  5. Missing "Links" in Bioinformatics Education: Expanding Students' Conceptions of Bioinformatics Using a Biodiversity Database of Living and Fossil Reef Corals

    Science.gov (United States)

    Nehm, Ross H.; Budd, Ann F.

    2006-01-01

    NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …

  6. G2LC: Resources Autoscaling for Real Time Bioinformatics Applications in IaaS

    Directory of Open Access Journals (Sweden)

    Rongdong Hu

    2015-01-01

    Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real BLAST sequence-search data are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
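
    G2LC's actual provisioning policy is not described in this record; as a generic illustration of threshold-based autoscaling for IaaS workloads, a minimal Python decision function follows. The utilisation thresholds, queue heuristic and VM limits are assumptions made for the sketch only.

    ```python
    # Generic threshold-based autoscaling decision (illustrative only; not G2LC's algorithm).
    def scale_decision(cpu_utilisation: float, queue_length: int, current_vms: int,
                       upper=0.80, lower=0.30, max_vms=32, min_vms=1) -> int:
        """Return the new number of worker VMs given current load measurements."""
        if cpu_utilisation > upper or queue_length > current_vms * 10:
            return min(current_vms * 2, max_vms)   # scale out under load
        if cpu_utilisation < lower and queue_length == 0:
            return max(current_vms // 2, min_vms)  # scale in when idle
        return current_vms                         # otherwise hold steady

    # Example: heavy load on 4 VMs triggers a scale-out to 8 VMs.
    print(scale_decision(cpu_utilisation=0.92, queue_length=50, current_vms=4))
    ```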

  7. Primary Immune Deficiency Treatment Consortium (PIDTC) report

    NARCIS (Netherlands)

    L.M. Griffith (Linda); M. Cowan (Morton); L.D. Notarangelo (Luigi Daniele); R. Kohn (Robert); J. Puck (Jennifer); S.-Y. Pai (Sung-Yun); B. Ballard (Barbara); S.C. Bauer (Sarah); J. Bleesing (Jack); M. Boyle (Marcia); R.W. Brower (Ronald); R.H. Buckley (Rebecca); M. van der Burg (Mirjam); L.M. Burroughs (Lauri); F. Candotti (Fabio); A. Cant (Andrew); T. Chatila (Talal); C. Cunningham-Rundles (Charlotte); M.C. Dinauer (Mary); J. Dvorak (Jennie); A. Filipovich (Alexandra); L.A. Fleisher (Lee); H.B. Gaspar (Bobby); T. Gungor (Tayfun); E. Haddad (Elie); E. Hovermale (Emily); F. Huang (Faith); A. Hurley (Alan); M. Hurley (Mary); S.K. Iyengar (Sudha); E.M. Kang (Elizabeth); B.R. Logan (Brent); J.R. Long-Boyle (Janel); H. Malech (Harry); S.A. McGhee (Sean); S. Modell (Sieglinde); S. Modell (Sieglinde); H.D. Ochs (Hans); R.J. O'Reilly (Richard); R. Parkman (Robertson); D. Rawlings (D.); J.M. Routes (John); P. Shearer (P.); T.N. Small (Trudy); H. Smith (H.); K.E. Sullivan (Kathleen); P. Szabolcs (Paul); A.J. Thrasher (Adrian); D. Torgerson; P. Veys (Paul); K. Weinberg (Kenneth); J.C. Zuniga-Pflucker (Juan Carlos)

    2014-01-01

    The Primary Immune Deficiency Treatment Consortium (PIDTC) is a network of 33 centers in North America that study the treatment of rare and severe primary immunodeficiency diseases. Current protocols address the natural history of patients treated for severe combined immunodeficiency

  8. Measuring Consortium Impact on User Perceptions: OhioLINK and LibQUAL+[TM

    Science.gov (United States)

    Gatten, Jeffrey N.

    2004-01-01

    What is the impact of an academic library consortium on the perceptions of library services experienced by users of the member institutions' libraries? In 2002 and 2003, OhioLINK (Ohio's consortium…

  9. Hawaii Space Grant Consortium

    Science.gov (United States)

    Flynn, Luke P.

    2005-01-01

    The Hawai'i Space Grant Consortium is composed of ten institutions of higher learning including the University of Hawai'i at Manoa, the University of Hawai'i at Hilo, the University of Guam, and seven Community Colleges spread over the 4 main Hawaiian islands. Geographic separation is not the only obstacle that we face as a Consortium. Hawai'i has been mired in an economic downturn due to a lack of tourism for almost all of the period (2001 - 2004) covered by this report, although hotel occupancy rates and real estate sales have sky-rocketed in the last year. Our challenges have been many including providing quality educational opportunities in the face of shrinking State and Federal budgets, encouraging science and technology course instruction at the K-12 level in a public school system that is becoming less focused on high technology and more focused on developing basic reading and math skills, and assembling community college programs with instructors who are expected to teach more classes for the same salary. Motivated people can overcome these problems. Fortunately, the Hawai'i Space Grant Consortium (HSGC) consists of a group of highly motivated and talented individuals who have not only overcome these obstacles, but have excelled with the Program. We fill a critical need within the State of Hawai'i to provide our children with opportunities to pursue their dreams of becoming the next generation of NASA astronauts, engineers, and explorers. Our strength lies not only in our diligent and creative HSGC advisory board, but also with Hawai'i's teachers, students, parents, and industry executives who are willing to invest their time, effort, and resources into Hawai'i's future. Our operational philosophy is to FACE the Future, meaning that we will facilitate, administer, catalyze, and educate in order to achieve our objective of creating a highly technically capable workforce both here in Hawai'i and for NASA. In addition to administering to programs and

  10. GAS STORAGE TECHNOLOGY CONSORTIUM

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Watson

    2004-10-18

    Gas storage is a critical element in the natural gas industry. Producers, transmission and distribution companies, marketers, and end users all benefit directly from the load balancing function of storage. The unbundling process has fundamentally changed the way storage is used and valued. As an unbundled service, the value of storage is being recovered at rates that reflect its value. Moreover, the marketplace has differentiated between various types of storage services, and has increasingly rewarded flexibility, safety, and reliability. The size of the natural gas market has increased and is projected to continue to increase towards 30 trillion cubic feet (TCF) over the next 10 to 15 years. Much of this increase is projected to come from electric generation, particularly peaking units. Gas storage, particularly the flexible services that are most suited to electric loads, is critical in meeting the needs of these new markets. In order to address the gas storage needs of the natural gas industry, an industry-driven consortium was created--the Gas Storage Technology Consortium (GSTC). The objective of the GSTC is to provide a means to accomplish industry-driven research and development designed to enhance operational flexibility and deliverability of the Nation's gas storage system, and provide a cost effective, safe, and reliable supply of natural gas to meet domestic demand. To accomplish this objective, the project is divided into three phases that are managed and directed by the GSTC Coordinator. The first phase, Phase 1A, was initiated on September 30, 2003, and was completed on March 31, 2004. Phase 1A of the project included the creation of the GSTC structure, development and refinement of a technical approach (work plan) for deliverability enhancement and reservoir management. This report deals with Phase 1B and encompasses the period July 1, 2004, through September 30, 2004. During this time period there were three main activities. First was the

  11. Penalized feature selection and classification in bioinformatics

    OpenAIRE

    Ma, Shuangge; Huang, Jian

    2008-01-01

    In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifiers and to provide more insights into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classific...
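
    As a minimal illustration of the penalised approaches this review covers, the sketch below fits an L1-penalised logistic regression on synthetic high-dimensional data with scikit-learn and reports how many features survive the penalty. The synthetic data, penalty strength and selection threshold are assumptions and are not taken from the article.

    ```python
    # L1-penalised (lasso-type) feature selection plus classification in one step (illustrative only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.feature_selection import SelectFromModel

    # Synthetic "many features, few samples" data, typical of omics studies.
    X, y = make_classification(n_samples=100, n_features=2000, n_informative=10,
                               random_state=0)

    # The L1 penalty drives most coefficients to exactly zero, so fitting the
    # classifier simultaneously performs feature selection.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    selector = SelectFromModel(clf).fit(X, y)
    selected = np.flatnonzero(selector.get_support())
    print(f"{selected.size} of {X.shape[1]} features retained")
    ```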

  12. Self-organization, layered structure, and aggregation enhance persistence of a synthetic biofilm consortium.

    Directory of Open Access Journals (Sweden)

    Katie Brenner

    Full Text Available Microbial consortia constitute a majority of the earth's biomass, but little is known about how these cooperating communities persist despite competition among community members. Theory suggests that non-random spatial structures contribute to the persistence of mixed communities; when particular structures form, they may provide associated community members with a growth advantage over unassociated members. If true, this has implications for the rise and persistence of multi-cellular organisms. However, this theory is difficult to study because we rarely observe initial instances of non-random physical structure in natural populations. Using two engineered strains of Escherichia coli that constitute a synthetic symbiotic microbial consortium, we fortuitously observed such spatial self-organization. This consortium forms a biofilm and, after several days, adopts a defined layered structure that is associated with two unexpected, measurable growth advantages. First, the consortium cannot successfully colonize a new, downstream environment until it self-organizes in the initial environment; in other words, the structure enhances the ability of the consortium to survive environmental disruptions. Second, when the layered structure forms in downstream environments the consortium accumulates significantly more biomass than it did in the initial environment; in other words, the structure enhances the global productivity of the consortium. We also observed that the layered structure only assembles in downstream environments that are colonized by aggregates from a previous, structured community. These results demonstrate roles for self-organization and aggregation in persistence of multi-cellular communities, and also illustrate a role for the techniques of synthetic biology in elucidating fundamental biological principles.

  13. Activities of the Alabama Consortium on forestry education and research, 1993-1999

    Science.gov (United States)

    John Schelhas

    2002-01-01

    The Alabama Consortium on Forestry Education and Research was established in 1992 to promote communication and collaboration among diverse institutions involved in forestry in the State of Alabama. It was organized to advance forestry education and research in ways that could not be accomplished by individual members alone. This report tells the story of the consortium...

  14. Bioremoval of Am-241 and Cs-137 from liquid radioactive wastes by bacterial consortiums

    International Nuclear Information System (INIS)

    Ferreira, Rafael Vicente de Padua; Lima, Josenilson B. de; Gomes, Mirella C.; Borba, Tania R.; Bellini, Maria Helena; Marumo, Julio Takehiro; Sakata, Solange Kazumi

    2011-01-01

    This paper evaluates the capacity of two bacterial consortiums from impacted areas to remove Am-241 and Cs-137 from liquid radioactive wastes. The experiments indicated that the two studied consortiums were able to remove 100% of the Cs-137 and Am-241 present in the waste after 4 days of contact. These results suggest that bioremoval with the selected consortiums can be a viable technique for the treatment of radioactive wastes containing Am-241 and Cs-137

  15. Efficiency of consortium for in-situ bioremediation and CO2 evolution method of refines petroleum oil in microcosms study

    OpenAIRE

    Dutta, Shreyasri; Singh, Padma

    2017-01-01

    An in-situ bioremediation study was conducted in a laboratory by using a mixed microbial consortium. An indigenous microbial consortium was developed by assembling two Pseudomonas spp. and two Aspergillus spp. which were isolated from various oil-contaminated sites in India. The laboratory feasibility study was conducted in a 225 m2 block. Six treatment options-Oil alone, Oil+Best remediater, Oil+Bacterial consortium, Oil+Fungal consortium, Oil+Mixed microbial consortium, Oil+Indigenous microf...

  16. Toward Personalized Pressure Ulcer Care Planning: Development of a Bioinformatics System for Individualized Prioritization of Clinical Practice Guideline

    Science.gov (United States)

    2016-10-01

    Award number: W81XWH-15-1-0342. Title: Toward Personalized Pressure Ulcer Care Planning: Development of a Bioinformatics System for Individualized Prioritization of Clinical Practice Guideline. The need to individualize the prioritization of clinical practice guideline (CPG) recommendations has been identified by experts in the field. We will use bioinformatics to enable data extraction, storage, and analysis to support

  17. Nuclear Fabrication Consortium

    Energy Technology Data Exchange (ETDEWEB)

    Levesque, Stephen [EWI, Columbus, OH (United States)

    2013-04-05

    This report summarizes the activities undertaken by EWI while under contract from the Department of Energy (DOE) Office of Nuclear Energy (NE) for the management and operation of the Nuclear Fabrication Consortium (NFC). The NFC was established by EWI to independently develop, evaluate, and deploy fabrication approaches and data that support the re-establishment of the U.S. nuclear industry, ensuring that the supply chain will be competitive on a global stage and enabling more cost-effective and reliable nuclear power in a carbon-constrained environment. The NFC provided a forum for member original equipment manufacturers (OEMs), fabricators, manufacturers, and materials suppliers to engage effectively with each other and rebuild the capacity of this supply chain by: identifying and removing impediments to the implementation of new construction and fabrication techniques and approaches for nuclear equipment, including system components and nuclear plants; providing and facilitating detailed science-based studies on new approaches and technologies that will have positive impacts on the cost of building nuclear plants; analyzing and disseminating information about future nuclear fabrication technologies and how they could impact the North American and international nuclear marketplace; facilitating dialog and initiating alignment among fabricators, owners, trade associations, and government agencies; supporting industry in helping to create a larger qualified nuclear supplier network; acting as an unbiased technology resource to evaluate, develop, and demonstrate new manufacturing technologies; creating welder and inspector training programs to help enable the necessary workforce for the upcoming construction work; and serving as a focal point for technology, policy, and politically interested parties to share ideas and concepts associated with fabrication across the nuclear industry. The report presents the objectives and summaries of the Nuclear Fabrication Consortium

  18. Latest Developments of the Isprs Student Consortium

    Science.gov (United States)

    Detchev, I.; Kanjir, U.; Reyes, S. R.; Miyazaki, H.; Aktas, A. F.

    2016-06-01

    The International Society for Photogrammetry and Remote Sensing (ISPRS) Student Consortium (SC) is a network for young professionals studying or working within the fields of photogrammetry, remote sensing, Geographical Information Systems (GIS), and other related geo-spatial sciences. The main goal of the network is to provide means for information exchange for its young members and thus help promote and integrate youth into the ISPRS. Over the past four years the Student Consortium has successfully continued to fulfil its mission in both formal and informal ways. The formal means of communication of the SC are its website, newsletter, e-mail announcements and summer schools, while its informal ones are multiple social media outlets and various social activities during student related events. The newsletter is published every three to four months and provides both technical and experiential content relevant for the young people in the ISPRS. The SC has been in charge or at least has helped with organizing one or more summer schools every year. The organization's e-mail list has over 1,100 subscribers, its website hosts over 1,300 members from 100 countries across the entire globe, and its public Facebook group currently has over 4,500 joined visitors, who connect among one another and share information relevant for their professional careers. These numbers show that the Student Consortium has grown into a significant online-united community. The paper will present the organization's on-going and past activities for the last four years, its current priorities and a strategic plan and aspirations for the future four-year period.

  19. The Pharmaceutical Industry Beamline of Pharmaceutical Consortium for Protein Structure Analysis

    International Nuclear Information System (INIS)

    Nishijima, Kazumi; Katsuya, Yoshio

    2002-01-01

    The Pharmaceutical Industry Beamline was constructed by the Pharmaceutical Consortium for Protein Structure Analysis, which was established in April 2001. The consortium is composed of 22 pharmaceutical companies affiliated with the Japan Pharmaceutical Manufacturers Association. The beamline is the first exclusive one at SPring-8 owned by pharmaceutical enterprises. The specifications and equipment of the Pharmaceutical Industry Beamline are almost the same as those of RIKEN Structural Genomics Beamlines I and II. (author)

  20. Architecture exploration of FPGA based accelerators for bioinformatics applications

    CERN Document Server

    Varma, B Sharat Chandra; Balakrishnan, M

    2016-01-01

    This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.

  1. John Glenn Biomedical Engineering Consortium

    Science.gov (United States)

    Nall, Marsha

    2004-01-01

    The John Glenn Biomedical Engineering Consortium is an inter-institutional research and technology development program, beginning with ten projects in FY02, that aims to apply GRC expertise in fluid physics and sensor development, together with local biomedical expertise, to mitigate the risks of space flight to the health, safety, and performance of astronauts. It is anticipated that several new technologies will be developed that are applicable to medical needs both in space and on Earth.

  2. The Historically Black Colleges and Universities/Minority Institutions Environmental Technology Consortium annual report, 1991--1992

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-31

    The member institutions of the Consortium continue to play a significant role in increasing the number of African Americans who enter the environmental professions through the implementation of the Consortium's RETT Plan for Research, Education, and Technology Transfer. The four major program areas identified in the RETT Plan are as follows: (1) minority outreach and precollege education; (2) undergraduate education and postsecondary training; (3) graduate and postgraduate education and research; and (4) technology transfer.

  3. Staff Scientist - RNA Bioinformatics | Center for Cancer Research

    Science.gov (United States)

    The newly established RNA Biology Laboratory (RBL) at the Center for Cancer Research (CCR), National Cancer Institute (NCI), National Institutes of Health (NIH) in Frederick, Maryland is recruiting a Staff Scientist with strong expertise in RNA bioinformatics to join the Intramural Research Program’s mission of high impact, high reward science. The RBL is the equivalent of an

  4. 34 CFR 636.5 - What are the matching contribution and planning consortium requirements?

    Science.gov (United States)

    2010-07-01

    ... 34 Education 3 2010-07-01 2010-07-01 false What are the matching contribution and planning... PROGRAM General § 636.5 What are the matching contribution and planning consortium requirements? (a) The... agreed to by the members of a planning consortium. (Authority: 20 U.S.C. 1136b, 1136e) ...

  5. Open discovery: An integrated live Linux platform of Bioinformatics tools.

    Science.gov (United States)

    Vetrivel, Umashankar; Pilla, Kalabharath

    2008-01-01

    Historically, live Linux distributions for bioinformatics have paved the way for portability of the bioinformatics workbench in a platform-independent manner. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks such as molecular modeling, docking and molecular dynamics in a swift manner. It is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced, customizable Fedora configuration, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.

  6. Rationale and design of the multiethnic Pharmacogenomics in Childhood Asthma consortium

    DEFF Research Database (Denmark)

    Farzan, Niloufar; Vijverberg, Susanne J; Andiappan, Anand K

    2017-01-01

    AIM: International collaboration is needed to enable large-scale pharmacogenomics studies in childhood asthma. Here, we describe the design of the Pharmacogenomics in Childhood Asthma (PiCA) consortium. MATERIALS & METHODS: Investigators of each study participating in PiCA provided data...... corticosteroid users. Among patients from 13 studies with available data on asthma exacerbations, a third reported exacerbations despite inhaled corticosteroid use. In the future pharmacogenomics studies within the consortium, the pharmacogenomics analyses will be performed separately in each center...

  7. ‘Students-as-partners’ scheme enhances postgraduate students’ employability skills while addressing gaps in bioinformatics education

    Science.gov (United States)

    Mello, Luciane V.; Tregilgas, Luke; Cowley, Gwen; Gupta, Anshul; Makki, Fatima; Jhutty, Anjeet; Shanmugasundram, Achchuthan

    2017-01-01

    Abstract Teaching bioinformatics is a longstanding challenge for educators who need to demonstrate to students how skills developed in the classroom may be applied to real world research. This study employed an action research methodology which utilised student–staff partnership and peer-learning. It was centred on the experiences of peer-facilitators, students who had previously taken a postgraduate bioinformatics module, and had applied knowledge and skills gained from it to their own research. It aimed to demonstrate to peer-receivers, current students, how bioinformatics could be used in their own research while developing peer-facilitators’ teaching and mentoring skills. This student-centred approach was well received by the peer-receivers, who claimed to have gained improved understanding of bioinformatics and its relevance to research. Equally, peer-facilitators also developed a better understanding of the subject and appreciated that the activity was a rare and invaluable opportunity to develop their teaching and mentoring skills, enhancing their employability. PMID:29098185

  8. Bioremediation of diuron contaminated soils by a novel degrading microbial consortium.

    Science.gov (United States)

    Villaverde, J; Rubio-Bellido, M; Merchán, F; Morillo, E

    2017-03-01

    Diuron is a biologically active pollutant present in soil, water and sediments. It is persistent in soil, water and groundwater and slightly toxic to mammals and birds, as well as moderately toxic to aquatic invertebrates. Its principal biodegradation product, 3,4-dichloroaniline, exhibits a higher toxicity than diuron and is also persistent in the environment. On this basis, the objective of the study was to determine the potential capacity of a proposed novel diuron-degrading microbial consortium (DMC) to achieve not only diuron degradation but also its mineralisation, both in solution and in soils with different properties. The consortium was tested in a soil solution where diuron was the only carbon source, and more than 98.8% of the diuron initially added was mineralised after only a few days. The consortium was composed of three diuron-degrading strains, Arthrobacter sulfonivorans, Variovorax soli and Advenella sp. JRO, the latter of which had been isolated in our laboratory from a highly contaminated industrial site. This work shows for the first time the potential capacity of a member of the genus Advenella to remediate pesticide-contaminated soils. However, none of the three strains separately achieved mineralisation (ring-14C) of diuron in a mineral medium (MSM) with a trace nutrient solution (NS); combined in pairs, they mineralised 40% of diuron in solution, but the most relevant result was obtained in the presence of the three-member consortium, where complete diuron mineralisation was achieved after only a few days. In the presence of the investigated soils in suspension, the capacity of the consortium to mineralise diuron was evaluated, achieving a wide range of diuron mineralisation, from 22.9 to 69.0%. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Enhanced bio-decolorization of azo dyes by co-immobilized quinone-reducing consortium and anthraquinone

    DEFF Research Database (Denmark)

    Su, YY; Zhang, Yifeng; Wang, J

    2009-01-01

    In the present study, the accelerating effect of co-immobilized anthraquinone and quinone-reducing consortium was investigated in the bio-decolorization process. The anthraquinone and quinone-reducing consortium were co-immobilized by entrapment in calcium alginate. The co-immobilized beads...

  10. Bioinformatics tools for development of fast and cost effective simple ...

    African Journals Online (AJOL)

    Bioinformatics tools for development of fast and cost effective simple sequence repeat ... comparative mapping and exploration of functional genetic diversity in the ... Already, a number of computer programs have been implemented that aim at ...

  11. Virginia Bioinformatics Institute to expand cyberinfrastructure education and outreach project

    OpenAIRE

    Whyte, Barry James

    2008-01-01

    The National Science Foundation has awarded the Virginia Bioinformatics Institute at Virginia Tech $918,000 to expand its education and outreach program in Cyberinfrastructure - Training, Education, Advancement and Mentoring, commonly known as the CI-TEAM.

  12. 6th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Luscombe, Nicholas; Fdez-Riverola, Florentino; Rodríguez, Juan; Practical Applications of Computational Biology & Bioinformatics

    2012-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable. The analysis of Next Generation Sequencing datasets requires new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in the last decades. This book presents the results of the 6th International Conference on Practical Applications of Computational Biology & Bioinformatics held at the University of Salamanca, Spain, 28-30th March, 2012, which brought together interdisciplinary scientists with a strong background in the biological and computational sciences.

  13. The Historically Black Colleges and Universities/Minority Institutions Environmental Technology Consortium annual report draft, 1995--1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-07-01

    The HBCU/MI ET Consortium was established in January 1990, through a Memorandum of Understanding (MOU) among its member institutions. This group of research-oriented Historically Black Colleges and Universities and Minority Institutions (HBCUs/MIs) agreed to work together to initiate or revise educational programs, develop research partnerships with public and private sector organizations, and promote technology development and transfer to address the nation's critical environmental problems. While the Consortium's Research, Education and Technology Transfer (RETT) Plan is the cornerstone of its overall program efforts, the initial programmatic activities of the Consortium focused on environmental education at all levels with the objective of addressing the underrepresentation of minorities in the environmental professions. This 1996 Annual Report provides an update on the activities of the Consortium with a focus on environmental curriculum development for the Technical Qualifications Program (TQP) and Education for Sustainability.

  14. Integration of Bioinformatics into an Undergraduate Biology Curriculum and the Impact on Development of Mathematical Skills

    Science.gov (United States)

    Wightman, Bruce; Hark, Amy T.

    2012-01-01

    The development of fields such as bioinformatics and genomics has created new challenges and opportunities for undergraduate biology curricula. Students preparing for careers in science, technology, and medicine need more intensive study of bioinformatics and more sophisticated training in the mathematics on which this field is based. In this…

  15. The Historically Black Colleges and Universities/Minority Institutions Environmental Technology Consortium annual report 1994--1995

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-07-01

    The HBCU/MI ET Consortium was established in January 1990, through a Memorandum of Understanding (MOU) among its member institutions. This group of research-oriented Historically Black Colleges and Universities and Minority Institutions (HBCU/MIs) agreed to work together to initiate or revise education programs, develop research partnerships with public and private sector organizations, and promote technology development to address the nation's critical environmental contamination problems. The Consortium's Research, Education and Technology Transfer (RETT) Plan became the working agenda. The Consortium is a resource for collaboration among the member institutions and with federal and state agencies, national and federal laboratories, industries (including small businesses), majority universities, and two- and four-year technical colleges. As a group of 17 institutions geographically located in the southern US, the Consortium is well positioned to reach a diverse group of women and minority populations of African Americans, Hispanics and American Indians. This Report provides a status update on activities and achievements in environmental curriculum development, outreach at the K-12 level, undergraduate and graduate education, research and development, and technology transfer.

  16. Evaluating robustness of a diesel-degrading bacterial consortium isolated from contaminated soil

    DEFF Research Database (Denmark)

    Sydow, Mateusz; Owsianiak, Mikolaj; Szczepaniak, Zuzanna

    2016-01-01

    It is not known whether diesel-degrading bacterial communities are structurally and functionally robust when exposed to different hydrocarbon types. Here, we exposed a diesel-degrading consortium to either model alkanes, cycloalkanes or aromatic hydrocarbons as carbon sources to study its structural resistance. The structural resistance was low, with changes in relative abundances of up to four orders of magnitude, depending on hydrocarbon type and bacterial taxon. This low resistance is explained by the presence of hydrocarbon-degrading specialists in the consortium and differences in growth kinetics on individual hydrocarbons. However, despite this low resistance, structural and functional resilience were high, as verified by re-exposing the hydrocarbon-perturbed consortium to diesel fuel. The high resilience is either due to the short exposure time, insufficient for permanent changes...

  17. Consolidated Bio-Processing of Cellulosic Biomass for Efficient Biofuel Production Using Yeast Consortium

    Science.gov (United States)

    Goyal, Garima

    Fossil fuels have been the major source for liquid transportation fuels for ages. However, decline in oil reserves and environmental concerns have raised a lot of interest in alternative and renewable energy sources. One promising alternative is the conversion of plant biomass into ethanol. The primary biomass feed stocks currently being used for the ethanol industry have been food based biomass (corn and sugar cane). However, interest has recently shifted to replace these traditional feed-stocks with more abundant, non-food based cellulosic biomass such as agriculture wastes (corn stover) or crops (switch grass). The use of cellulosic biomass as feed stock for the production of ethanol via bio-chemical routes presents many technical challenges not faced with the use of corn or sugar-cane as feed-stock. Recently, a new process called consolidated Bio-processing (CBP) has been proposed. This process combines simultaneous saccharification of lignocellulose with fermentation of the resulting sugars into a single process step mediated by a single microorganism or microbial consortium. Although there is no natural microorganism that possesses all properties of lignocellulose utilization and ethanol production desired for CBP, some bacteria and fungi exhibit some of the essential traits. The yeast Saccharomyces cerevisiae is the most attractive host organism for the usage of this strategy due to its high ethanol productivity at close to theoretical yields (0.51g ethanol/g glucose consumed), high osmo- and ethanol- tolerance, natural robustness in industrial processes, and ease of genetic manipulation. Introduction of the cellulosome, found naturally in microorganisms, has shown new directions to deal with recalcitrant biomass. In this case enzymes work in synergy in order to hydrolyze biomass more effectively than in case of free enzymes. A microbial consortium has been successfully developed, which ensures the functional assembly of minicellulosome on the yeast surface

  18. Fermentative hydrogen production by microbial consortium

    Energy Technology Data Exchange (ETDEWEB)

    Maintinguer, Sandra I.; Fernandes, Bruna S.; Duarte, Iolanda C.S.; Saavedra, Nora Katia; Adorno, M. Angela T.; Varesche, M. Bernadete [Department of Hydraulics and Sanitation, School of Engineering of Sao Carlos, University of Sao Paulo, Av. Trabalhador Sao-carlense, 400, 13566-590 Sao Carlos-SP (Brazil)

    2008-08-15

    Heat pre-treatment of the inoculum, associated with pH control, was applied to select hydrogen-producing and endospore-forming bacteria. The inoculum for the heat pre-treatment came from a UASB reactor used in slaughterhouse waste treatment. The molecular biology analyses indicated that the microbial consortium contained microorganisms affiliated with Enterobacter cloacae (97% and 98%), Clostridium sp. (98%) and Clostridium acetobutyricum (96%), recognized as producers of H2 and volatile acids. The following assays were carried out in batch reactors in order to verify the efficiency of sucrose conversion to H2 by the microbial consortium: (1) 630.0 mg sucrose/L, (2) 1184.0 mg sucrose/L, (3) 1816.0 mg sucrose/L and (4) 4128.0 mg sucrose/L. The corresponding yields were 15% (1.2 mol H2/mol sucrose), 20% (1.6 mol H2/mol sucrose), 15% (1.2 mol H2/mol sucrose) and 4% (0.3 mol H2/mol sucrose), respectively. The intermediary products were acetic acid, butyric acid, methanol and ethanol in all of the anaerobic reactors. (author)
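
    The percentage yields quoted above are consistent with a theoretical maximum of 8 mol H2 per mol sucrose (complete conversion of both hexose units via the acetate pathway). The short sketch below reproduces the reported figures under that assumption, which is an interpretation offered here and is not stated in the record.

    ```python
    # Reported molar yields (mol H2 per mol sucrose) from the record, expressed as a
    # percentage of an assumed theoretical maximum of 8 mol H2/mol sucrose (acetate pathway).
    THEORETICAL_MAX = 8.0
    for molar_yield in (1.2, 1.6, 1.2, 0.3):
        pct = 100 * molar_yield / THEORETICAL_MAX
        print(f"{molar_yield} mol H2/mol sucrose -> {pct:.0f}%")
    # Output: 15%, 20%, 15%, 4%, matching the efficiencies quoted in the abstract.
    ```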

  19. Institutional support for the Utah Consortium for Energy Research and Education. Annual report

    Energy Technology Data Exchange (ETDEWEB)

    1979-06-01

    The Utah Consortium for Energy Research and Education is made up of three colleges and universities in Utah. The scope of the Consortium plan is the marshalling of the academic research resources, as well as the appropriate non-academic resources within Utah to pursue, as appropriate, energy-related research activities. The heart of this effort has been the institutional contract between DOE and the University of Utah, acting as fiscal agent for the Consortium. Sixteen programs are currently being funded, but only ten of the projects are described in this report. Three projects are on fission/fusion; three on environment and safety; four on fossil energy; three on basic energy sciences; one each on conservation, geothermal, and solar.

  20. Bioinformatics for Undergraduates: Steps toward a Quantitative Bioscience Curriculum

    Science.gov (United States)

    Chapman, Barbara S.; Christmann, James L.; Thatcher, Eileen F.

    2006-01-01

    We describe an innovative bioinformatics course developed under grants from the National Science Foundation and the California State University Program in Research and Education in Biotechnology for undergraduate biology students. The project has been part of a continuing effort to offer students classroom experiences focused on principles and…

  1. Bioinformatic tools and guideline for PCR primer design | Abd ...

    African Journals Online (AJOL)

    Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
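
    As a minimal sketch of the kind of checks primer-design tools automate, the Python snippet below computes the GC fraction and a Wallace-rule melting temperature for a hypothetical primer. The formula choice, any acceptance thresholds and the example sequence are assumptions for illustration and are not taken from the article.

    ```python
    # Simple primer sanity checks: GC fraction and rule-of-thumb melting temperature.
    def gc_fraction(primer: str) -> float:
        primer = primer.upper()
        return (primer.count("G") + primer.count("C")) / len(primer)

    def wallace_tm(primer: str) -> float:
        """Wallace rule Tm = 2(A+T) + 4(G+C); a rough estimate valid only for short primers."""
        primer = primer.upper()
        at = primer.count("A") + primer.count("T")
        gc = primer.count("G") + primer.count("C")
        return 2 * at + 4 * gc

    primer = "AGCGGATAACAATTTCACACAGG"   # hypothetical example primer sequence
    print(f"GC = {gc_fraction(primer):.0%}, Tm = {wallace_tm(primer):.0f} C (Wallace rule)")
    ```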

  2. Bioinformatics analysis and detection of gelatinase encoded gene in Lysinibacillussphaericus

    Science.gov (United States)

    Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat

    2016-11-01

    In this study, we performed bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to identify genes encoding gelatinase. L. sphaericus was isolated from soil and produces gelatinase activity that is species-specific toward porcine and bovine gelatin. This bacterium therefore offers the possibility of producing enzymes specific to each species of meat. The main focus of this research was to identify the gelatinase-encoding gene in L. sphaericus through bioinformatics analysis of the partially sequenced genome. From this study, three candidate genes were identified: gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1/R1, F2/R2 and F3/R3, were designed to target short cDNA sequences by PCR. The amplifications reliably yielded products of 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. The bioinformatics analysis of L. sphaericus therefore identified candidate gelatinase-encoding genes.

  3. Integration of Proteomics, Bioinformatics, and Systems Biology in Traumatic Brain Injury Biomarker Discovery

    Science.gov (United States)

    Guingab-Cagmat, J.D.; Cagmat, E.B.; Hayes, R.L.; Anagli, J.

    2013-01-01

    Traumatic brain injury (TBI) is a major medical crisis without any FDA-approved pharmacological therapies that have been demonstrated to improve functional outcomes. It has been argued that discovery of disease-relevant biomarkers might help to guide successful clinical trials for TBI. Major advances in mass spectrometry (MS) have revolutionized the field of proteomic biomarker discovery and facilitated the identification of several candidate markers that are being further evaluated for their efficacy as TBI biomarkers. However, several hurdles have to be overcome even during the discovery phase which is only the first step in the long process of biomarker development. The high-throughput nature of MS-based proteomic experiments generates a massive amount of mass spectral data presenting great challenges in downstream interpretation. Currently, different bioinformatics platforms are available for functional analysis and data mining of MS-generated proteomic data. These tools provide a way to convert data sets to biologically interpretable results and functional outcomes. A strategy that has promise in advancing biomarker development involves the triad of proteomics, bioinformatics, and systems biology. In this review, a brief overview of how bioinformatics and systems biology tools analyze, transform, and interpret complex MS datasets into biologically relevant results is discussed. In addition, challenges and limitations of proteomics, bioinformatics, and systems biology in TBI biomarker discovery are presented. A brief survey of researches that utilized these three overlapping disciplines in TBI biomarker discovery is also presented. Finally, examples of TBI biomarkers and their applications are discussed. PMID:23750150

  4. Massachusetts Institute of Technology Consortium Agreement

    Science.gov (United States)

    1999-03-01

    This is the third progress report of the M.I.T. Home Automation and Healthcare Consortium-Phase Two. It covers the majority of the new findings, concepts...research projects of home automation and healthcare, ranging from human modeling, patient monitoring, and diagnosis to new sensors and actuators, physical...aids, human-machine interfaces and home automation infrastructure. This report contains several patentable concepts, algorithms, and designs.

  5. In silico cloning and bioinformatic analysis of PEPCK gene in ...

    African Journals Online (AJOL)

    Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. Based on the relative conservation of homologous genes, a bioinformatics strategy was applied to clone Fusarium ...

  6. mockrobiota: a Public Resource for Microbiome Bioinformatics Benchmarking.

    Science.gov (United States)

    Bokulich, Nicholas A; Rideout, Jai Ram; Mercurio, William G; Shiffer, Arron; Wolfe, Benjamin; Maurice, Corinne F; Dutton, Rachel J; Turnbaugh, Peter J; Knight, Rob; Caporaso, J Gregory

    2016-01-01

    Mock communities are an important tool for validating, optimizing, and comparing bioinformatics methods for microbial community analysis. We present mockrobiota, a public resource for sharing, validating, and documenting mock community data resources, available at http://caporaso-lab.github.io/mockrobiota/. The materials contained in mockrobiota include data set and sample metadata, expected composition data (taxonomy or gene annotations or reference sequences for mock community members), and links to raw data (e.g., raw sequence data) for each mock community data set. mockrobiota does not supply physical sample materials directly, but the data set metadata included for each mock community indicate whether physical sample materials are available. At the time of this writing, mockrobiota contains 11 mock community data sets with known species compositions, including bacterial, archaeal, and eukaryotic mock communities, analyzed by high-throughput marker gene sequencing. IMPORTANCE The availability of standard and public mock community data will facilitate ongoing method optimizations, comparisons across studies that share source data, and greater transparency and access and eliminate redundancy. These are also valuable resources for bioinformatics teaching and training. This dynamic resource is intended to expand and evolve to meet the changing needs of the omics community.
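
    As a toy illustration of how an expected mock-community composition can be used to benchmark a pipeline's output, the sketch below compares invented observed and expected taxon abundances. The taxa, abundances and the choice of Bray-Curtis dissimilarity as the comparison metric are assumptions and are not drawn from any mockrobiota data set.

    ```python
    # Compare an observed community profile against the expected mock composition (illustrative only).
    expected = {"Escherichia": 0.25, "Bacillus": 0.25, "Staphylococcus": 0.25, "Listeria": 0.25}
    observed = {"Escherichia": 0.31, "Bacillus": 0.22, "Staphylococcus": 0.27,
                "Listeria": 0.18, "Unassigned": 0.02}

    taxa = set(expected) | set(observed)
    # L1 error: total absolute deviation between observed and expected relative abundances.
    l1_error = sum(abs(expected.get(t, 0.0) - observed.get(t, 0.0)) for t in taxa)
    # Bray-Curtis dissimilarity: L1 error divided by the total abundance of both profiles.
    bray_curtis = l1_error / (sum(expected.values()) + sum(observed.values()))
    print(f"L1 error = {l1_error:.2f}, Bray-Curtis dissimilarity = {bray_curtis:.3f}")
    ```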

  7. 25 CFR 1000.21 - When does a Tribe/Consortium have a “material audit exception”?

    Science.gov (United States)

    2010-04-01

    ...-Governance Eligibility § 1000.21 When does a Tribe/Consortium have a “material audit exception”? A Tribe/Consortium has a material audit exception if any of the audits that it submitted under § 1000.17(c...

  8. The Revolution in Viral Genomics as Exemplified by the Bioinformatic Analysis of Human Adenoviruses

    Directory of Open Access Journals (Sweden)

    Sarah Torres

    2010-06-01

    Over the past 30 years, genomic and bioinformatic analysis of human adenoviruses has been achieved using a variety of DNA sequencing methods; initially with the use of restriction enzymes and more currently with the use of the GS FLX pyrosequencing technology. Following the conception of DNA sequencing in the 1970s, analysis of adenoviruses has evolved from 100 base pair mRNA fragments to entire genomes. Comparative genomics of adenoviruses made its debut in 1984 when nucleotides and amino acids of coding sequences within the hexon genes of two human adenoviruses (HAdV), HAdV-C2 and HAdV-C5, were compared and analyzed. It was determined that there were three different zones (1-393, 394-1410, 1411-2910) within the hexon gene, of which HAdV-C2 and HAdV-C5 shared zones 1 and 3 with 95% and 89.5% nucleotide identity, respectively. In 1992, HAdV-C5 became the first adenovirus genome to be fully sequenced using the Sanger method. Over the next seven years, whole genome analysis and characterization was completed using bioinformatic tools such as blastn, tblastx, ClustalV and FASTA, in order to determine key proteins in species HAdV-A through HAdV-F. The bioinformatic revolution was initiated with the introduction of a novel species, HAdV-G, that was typed and named by the use of whole genome sequencing and phylogenetics as opposed to traditional serology. HAdV bioinformatics will continue to advance as the latest sequencing technology enables scientists to add to and expand the resource databases. As a result of these advancements, how novel HAdVs are typed has changed. Bioinformatic analysis has become the revolutionary tool that has significantly accelerated the in-depth study of HAdV microevolution through comparative genomics.
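
    As a minimal illustration of the pairwise nucleotide-identity comparisons described above, the sketch below computes percent identity over two pre-aligned toy sequences. The sequences are invented placeholders; the actual analysis compared HAdV-C2 and HAdV-C5 hexon-gene zones.

    ```python
    # Percent identity of two pre-aligned, equal-length sequences (illustrative only).
    def percent_identity(seq1: str, seq2: str) -> float:
        """Gaps ('-') count as mismatches; sequences must already be aligned."""
        if len(seq1) != len(seq2):
            raise ValueError("sequences must be aligned to equal length")
        matches = sum(a == b and a != "-" for a, b in zip(seq1, seq2))
        return 100.0 * matches / len(seq1)

    # Invented toy "zone" sequences, not real HAdV hexon-gene data.
    zone_a = "ATGGCTACCCCTTCGATGATG"
    zone_b = "ATGGCTACCTCGTCGATGATG"
    print(f"zone identity = {percent_identity(zone_a, zone_b):.1f}%")
    ```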

  9. Bioinformatics in the Netherlands : The value of a nationwide community

    NARCIS (Netherlands)

    van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap

    2017-01-01

    This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures

  10. Maryland Family Support Services Consortium. Final Report.

    Science.gov (United States)

    Gardner, James F.; Markowitz, Ricka Keeney

    The Maryland Family Support Services Consortium is a 3-year demonstration project which developed unique family support models at five sites serving the needs of families with a developmentally disabled child (ages birth to 21). Caseworkers provided direct intensive services to 224 families over the 3-year period, including counseling, liaison and…

  11. An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics

    International Nuclear Information System (INIS)

    Taylor, Ronald C.

    2010-01-01

    Bioinformatics researchers are increasingly confronted with analysis of ultra large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, that is, the Hadoop project and associated software, provide a foundation for scaling to petabyte scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis on such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software that employ Hadoop is described. The focus is on next-generation sequencing, as the leading application area to date.
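
    As a single-process illustration of the MapReduce programming style described above, the sketch below counts k-mers in a few toy sequencing reads using explicit map, shuffle and reduce phases. The reads and the value of k are assumptions; a real Hadoop job would distribute these phases across a cluster rather than run them in one Python process.

    ```python
    # Pure-Python sketch of the MapReduce data flow applied to k-mer counting (illustrative only).
    from collections import defaultdict

    def map_phase(read: str, k: int = 4):
        """Emit (k-mer, 1) pairs for one read, mirroring a Hadoop mapper."""
        for i in range(len(read) - k + 1):
            yield read[i:i + k], 1

    def reduce_phase(kmer, counts):
        """Sum all counts for one key, mirroring a Hadoop reducer."""
        return kmer, sum(counts)

    reads = ["ACGTACGTGG", "TACGTACGAA", "GGACGTACGT"]   # toy sequencing reads

    # Shuffle: group every mapper output by key before the reducers run.
    grouped = defaultdict(list)
    for read in reads:
        for kmer, count in map_phase(read):
            grouped[kmer].append(count)

    kmer_counts = dict(reduce_phase(kmer, counts) for kmer, counts in grouped.items())
    print(sorted(kmer_counts.items(), key=lambda kv: -kv[1])[:3])  # most frequent k-mers
    ```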

  12. Publisher Correction: Whole genome sequencing in psychiatric disorders: the WGSPD consortium.

    Science.gov (United States)

    Sanders, Stephan J; Neale, Benjamin M; Huang, Hailiang; Werling, Donna M; An, Joon-Yong; Dong, Shan; Abecasis, Goncalo; Arguello, P Alexander; Blangero, John; Boehnke, Michael; Daly, Mark J; Eggan, Kevin; Geschwind, Daniel H; Glahn, David C; Goldstein, David B; Gur, Raquel E; Handsaker, Robert E; McCarroll, Steven A; Ophoff, Roel A; Palotie, Aarno; Pato, Carlos N; Sabatti, Chiara; State, Matthew W; Willsey, A Jeremy; Hyman, Steven E; Addington, Anjene M; Lehner, Thomas; Freimer, Nelson B

    2018-03-16

    In the version of this article initially published, the consortium authorship and corresponding authors were not presented correctly. In the PDF and print versions, the Whole Genome Sequencing for Psychiatric Disorders (WGSPD) consortium was missing from the author list at the beginning of the paper, where it should have appeared as the seventh author; it was present in the author list at the end of the paper, but the footnote directing readers to the Supplementary Note for a list of members was missing. In the HTML version, the consortium was listed as the last author instead of as the seventh, and the line directing readers to the Supplementary Note for a list of members appeared at the end of the paper under Author Information but not in association with the consortium name itself. Also, this line stated that both member names and affiliations could be found in the Supplementary Note; in fact, only names are given. In all versions of the paper, the corresponding author symbols were attached to A. Jeremy Willsey, Steven E. Hyman, Anjene M. Addington and Thomas Lehner; they should have been attached, respectively, to Steven E. Hyman, Anjene M. Addington, Thomas Lehner and Nelson B. Freimer. As a result of this shift, the respective contact links in the HTML version did not lead to the indicated individuals. The errors have been corrected in the HTML and PDF versions of the article.

  13. Biodegradation of phenanthrene in bioaugmented microcosm by consortium ASP developed from coastal sediment of Alang-Sosiya ship breaking yard.

    Science.gov (United States)

    Patel, Vilas; Patel, Janki; Madamwar, Datta

    2013-09-15

    A phenanthrene-degrading bacterial consortium (ASP) was developed using sediment from the Alang-Sosiya shipbreaking yard at Gujarat, India. 16S rRNA gene-based molecular analyses revealed that the bacterial consortium consisted of six bacterial strains: Bacillus sp. ASP1, Pseudomonas sp. ASP2, Stenotrophomonas maltophilia strain ASP3, Staphylococcus sp. ASP4, Geobacillus sp. ASP5 and Alcaligenes sp. ASP6. The consortium was able to degrade 300 ppm of phenanthrene and 1000 ppm of naphthalene within 120 h and 48 h, respectively. Tween 80 showed a positive effect on phenanthrene degradation. The consortium was able to consume phenanthrene at a maximum rate of 46 mg/h/l and to degrade phenanthrene in the presence of other petroleum hydrocarbons. A microcosm study was conducted to test the consortium's bioremediation potential. Phenanthrene degradation increased from 61% to 94% in sediment bioaugmented with the consortium. Simultaneously, bacterial counts and dehydrogenase activities also increased in the bioaugmented sediment. These results suggest that microbial consortium bioaugmentation may be a promising technology for bioremediation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Washoe Tribe Nevada Inter-Tribal Energy Consortium Energy Organization Enhancement Project Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Jennifer [Washoe Tribe of NV and Ca

    2014-11-06

    The Washoe Tribe of Nevada and California was awarded funding from the Department of Energy to complete the Nevada Inter-Tribal Energy Consortium Energy Organization Enhancement Project. The main goal of the project was to enhance the capacity of the Nevada Inter-Tribal Energy Consortium (NITEC) to effectively assist tribes within Nevada to technically manage tribal energy resources and implement tribal energy projects.

  15. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    Science.gov (United States)

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  16. Atlas – a data warehouse for integrative bioinformatics

    Directory of Open Access Journals (Sweden)

    Yuen Macaire MS

    2005-02-01

    Full Text Available Abstract Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
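
    The loader/toolbox pattern described in the abstract, source records loaded into relational tables and retrieved through SQL-backed helper functions, can be sketched as follows. This is a hedged illustration using Python's built-in sqlite3 with an invented two-table schema; it is not Atlas's actual data model, nor its C++/Java/Perl APIs.

        import sqlite3

        # Hypothetical, much-simplified schema for illustration only.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE gene (id INTEGER PRIMARY KEY, symbol TEXT, taxon INTEGER)")
        conn.execute("CREATE TABLE interaction (gene_a INTEGER, gene_b INTEGER, source TEXT)")

        def load_gene(symbol, taxon):
            """'Loader'-style helper: parse and insert a source record into the warehouse."""
            cur = conn.execute("INSERT INTO gene (symbol, taxon) VALUES (?, ?)", (symbol, taxon))
            return cur.lastrowid

        def interactions_for(symbol):
            """'Toolbox'-style helper: integrated retrieval across tables."""
            return conn.execute(
                """SELECT ga.symbol, gb.symbol, i.source
                   FROM interaction i
                   JOIN gene ga ON ga.id = i.gene_a
                   JOIN gene gb ON gb.id = i.gene_b
                   WHERE ga.symbol = ? OR gb.symbol = ?""",
                (symbol, symbol)).fetchall()

        a = load_gene("TP53", 9606)
        b = load_gene("MDM2", 9606)
        conn.execute("INSERT INTO interaction VALUES (?, ?, ?)", (a, b, "BIND"))
        print(interactions_for("TP53"))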

  17. Augmentation of a Microbial Consortium for Enhanced Polylactide (PLA) Degradation.

    Science.gov (United States)

    Nair, Nimisha R; Sekhar, Vini C; Nampoothiri, K Madhavan

    2016-03-01

    Bioplastics are eco-friendly and derived from renewable biomass sources. Innovation in recycling methods will tackle some of the critical issues facing the acceptance of bioplastics. Polylactic acid (PLA) is the commonly used and well-studied bioplastic that is presumed to be biodegradable. Considering their demand and use in the near future, exploration for microbes capable of bioplastic degradation has high potential. Four PLA-degrading strains were isolated and identified as Penicillium chrysogenum, Cladosporium sphaerospermum, Serratia marcescens and Rhodotorula mucilaginosa. A consortium of the above strains degraded 44 % (w/w) PLA within 30 days under laboratory conditions. Subsequently, the microbial consortium was employed effectively for PLA composting.

  18. Effects of the Consortium of Pseudomonas, Bacillus and ...

    African Journals Online (AJOL)

    The effect of a consortium of Pseudomonas, Bacillus and Micrococcus spp. on polycyclic aromatic hydrocarbons in crude oil was investigated using standard microbiological methods. Spectrophotometry, gas chromatography and viable counts were used to determine the optical density, the polycyclic aromatic hydrocarbons and ...

  19. Comprehensive analysis of the N-glycan biosynthetic pathway using bioinformatics to generate UniCorn: A theoretical N-glycan structure database.

    Science.gov (United States)

    Akune, Yukie; Lin, Chi-Hung; Abrahams, Jodie L; Zhang, Jingyu; Packer, Nicolle H; Aoki-Kinoshita, Kiyoko F; Campbell, Matthew P

    2016-08-05

    Glycan structures attached to proteins are comprised of diverse monosaccharide sequences and linkages that are produced from precursor nucleotide-sugars by a series of glycosyltransferases. Databases of these structures are an essential resource for the interpretation of analytical data and the development of bioinformatics tools. However, with no template to predict what structures are possible, the human glycan structure databases are incomplete and rely heavily on the curation of published, experimentally determined, glycan structure data. In this work, a library of 45 human glycosyltransferases was used to generate a theoretical database of N-glycan structures comprised of 15 or fewer monosaccharide residues. Enzyme specificities were sourced from major online databases including Kyoto Encyclopedia of Genes and Genomes (KEGG) Glycan, Consortium for Functional Glycomics (CFG), Carbohydrate-Active enZymes (CAZy), GlycoGene DataBase (GGDB) and BRENDA. Based on the known activities, more than 1.1 million theoretical structures and 4.7 million synthetic reactions were generated and stored in our database called UniCorn. Furthermore, we analyzed the differences between the predicted glycan structures in UniCorn and those contained in UniCarbKB (www.unicarbkb.org), a database which stores experimentally described glycan structures reported in the literature, and demonstrate that UniCorn can be used to aid in the assignment of ambiguous structures whilst also serving as a discovery database. Copyright © 2016 Elsevier Ltd. All rights reserved.
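
    The core idea, enumerating every structure reachable from a precursor by repeatedly applying enzyme rules up to a size limit, can be illustrated with a small breadth-first search. This is a conceptual sketch only: the residue names and "glycosyltransferase" rules below are invented for illustration, real N-glycans are branched rather than linear, and this is not UniCorn's actual enzyme library or data model.

        from collections import deque

        # Toy enzyme rules: each says which terminal residue it extends and what it adds.
        RULES = [
            {"acts_on": "GlcNAc", "adds": "Gal"},     # hypothetical galactosyltransferase
            {"acts_on": "Gal",    "adds": "Neu5Ac"},  # hypothetical sialyltransferase
            {"acts_on": "GlcNAc", "adds": "Fuc"},     # hypothetical fucosyltransferase
        ]

        def enumerate_structures(core=("GlcNAc",), max_residues=5):
            """Breadth-first enumeration of all theoretical (linearized) structures."""
            seen = {core}
            queue = deque([core])
            while queue:
                structure = queue.popleft()
                if len(structure) >= max_residues:
                    continue
                terminal = structure[-1]
                for rule in RULES:
                    if rule["acts_on"] == terminal:
                        extended = structure + (rule["adds"],)
                        if extended not in seen:
                            seen.add(extended)
                            queue.append(extended)
            return seen

        for s in sorted(enumerate_structures(), key=len):
            print("-".join(s))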

  20. Inner-City Energy and Environmental Education Consortium

    Energy Technology Data Exchange (ETDEWEB)

    1993-06-11

    The numbers of individuals with adequate education and training to participate effectively in the highly technical aspects of environmental site cleanup are insufficient to meet the increasing demands of industry and government. Young people are particularly sensitive to these issues and want to become better equipped to solve the problems which will confront them during their lives. Educational institutions, on the other hand, have been slow in offering courses and curricula which will allow students to fulfill these interests. This has been in part due to the lack of federal funding to support new academic programs. This Consortium has been organized to initiate focused educational effort to reach inner-city youth with interesting and useful energy and environmental programs which can lead to well-paying and satisfying careers. Successful Consortium programs can be replicated in other parts of the nation. This report describes a pilot program in Washington, DC, Philadelphia, and Baltimore with the goal to attract and retain inner-city youth to pursue careers in energy-related scientific and technical areas, environmental restoration, and waste management.

  1. Overview of the carbon products consortium (CPC)

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, C.L. [West Virginia Univ., Morgantown, WV (United States)

    1996-08-01

    The Carbon Products Consortium (CPC) is an industry, university, government cooperative research team which has evolved over the past seven years to produce and evaluate coal-derived feedstocks for carbon products. The members of the Carbon Products Consortium are UCAR Carbon Company, Koppers Industries, CONOCO, Aluminum Company of America, AMOCO Polymers, and West Virginia University. The Carbon and Insulation Materials Technology Group at Oak Ridge National Laboratory, Fiber Materials Inc., and BASF Corporation are affiliates of the CPC. The initial work on coal-derived nuclear graphites was supported by a grant to WVU, UCAR Carbon, and ORNL from the U.S. DOE New Production Reactor program. More recently, the CPC program has been supported through the Fossil Energy Materials program and through PETC's Liquefaction program. The coal processing technologies involve hydrogenation, extraction by solvents such as N-methyl pyrrolidone and toluene, material blending, and calcination. The breadth of carbon science expertise and manufacturing capability available in the CPC enables it to address virtually all research and development issues of importance to the carbon products industry.

  2. PyPedia: using the wiki paradigm as crowd sourcing environment for bioinformatics protocols.

    Science.gov (United States)

    Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A

    2015-01-01

    Today researchers can choose from many bioinformatics protocols for all types of life sciences research, computational environments and coding languages. Although the majority of these are open source, few of them possess all the virtues needed to maximize reuse and promote reproducible science. Wikipedia has proven to be a great tool for disseminating information and enhancing collaboration between users with varying expertise and backgrounds to author qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki where each article is both the implementation and the documentation of a bioinformatics computational protocol in the python language. Hyperlinks within the wiki can be used to compose complex workflows and induce reuse. A RESTful API enables code execution outside the wiki. Initial content of PyPedia contains articles for population statistics, bioinformatics format conversions and genotype imputation. Use of the easy-to-learn wiki syntax effectively lowers the barriers to bring expert programmers and less computer-savvy researchers on the same page. PyPedia demonstrates how a wiki can provide a collaborative development, sharing and even execution environment for biologists and bioinformaticians that complements existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.
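
    The "article = executable protocol" idea can be sketched as follows. This is a hedged illustration only: the raw-article URL pattern, the article name and the function in the commented usage are hypothetical, and the sketch does not reproduce PyPedia's actual API or the pypedia python library.

        import urllib.request
        import types

        # Hypothetical MediaWiki-style raw-article URL; PyPedia's real endpoints may differ.
        ARTICLE_URL = "http://www.pypedia.com/index.php?title={name}&action=raw"

        def load_protocol(name):
            """Fetch an article's Python source over HTTP and expose it as a module."""
            source = urllib.request.urlopen(ARTICLE_URL.format(name=name)).read().decode()
            module = types.ModuleType(name)
            # Executing downloaded code requires trusting the wiki content.
            exec(compile(source, name, "exec"), module.__dict__)
            return module

        # Usage sketch (assumes a hypothetical article defining allele_frequencies()):
        # stats = load_protocol("Population_statistics")
        # print(stats.allele_frequencies([0, 1, 1, 2, 0]))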

  3. Intrageneric Primer Design: Bringing Bioinformatics Tools to the Class

    Science.gov (United States)

    Lima, Andre O. S.; Garces, Sergio P. S.

    2006-01-01

    Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…

  4. Bacterial consortium for copper extraction from sulphide ore consisting mainly of chalcopyrite

    Directory of Open Access Journals (Sweden)

    E. Romo

    2013-01-01

    Full Text Available The mining industry is looking for bacterial consortia for the economic extraction of copper from low-grade ores. The main objective was to determine an optimal bacterial consortium from several bacterial strains to obtain copper from the leaching of chalcopyrite. The major native bacterial species involved in the bioleaching of sulphide ore (Acidithiobacillus ferrooxidans, Acidithiobacillus thiooxidans, Leptospirillum ferrooxidans and Leptospirillum ferriphilum) were isolated, and the assays were performed with individual bacteria and in combination with At. thiooxidans. In conclusion, it was found that the consortium comprising At. ferrooxidans and At. thiooxidans removed 70% of the copper from the selected ore in 35 days, showing significant differences with the other consortia, which removed only 35% of the copper in 35 days. To validate the assays, a scale-up in columns was performed, in which the bacterial consortium achieved a higher percentage of copper extraction relative to the control.

  5. Call for participation in the neurogenetics consortium within the Human Variome Project.

    Science.gov (United States)

    Haworth, Andrea; Bertram, Lars; Carrera, Paola; Elson, Joanna L; Braastad, Corey D; Cox, Diane W; Cruts, Marc; den Dunnen, Johann T; Farrer, Matthew J; Fink, John K; Hamed, Sherifa A; Houlden, Henry; Johnson, Dennis R; Nuytemans, Karen; Palau, Francesc; Rayan, Dipa L Raja; Robinson, Peter N; Salas, Antonio; Schüle, Birgitt; Sweeney, Mary G; Woods, Michael O; Amigo, Jorge; Cotton, Richard G H; Sobrido, Maria-Jesus

    2011-08-01

    The rate of DNA variation discovery has accelerated the need to collate, store and interpret the data in a standardised coherent way and is becoming a critical step in maximising the impact of discovery on the understanding and treatment of human disease. This particularly applies to the field of neurology as neurological function is impaired in many human disorders. Furthermore, the field of neurogenetics has been proven to show remarkably complex genotype-to-phenotype relationships. To facilitate the collection of DNA sequence variation pertaining to neurogenetic disorders, we have initiated the "Neurogenetics Consortium" under the umbrella of the Human Variome Project. The Consortium's founding group consisted of basic researchers, clinicians, informaticians and database creators. This report outlines the strategic aims established at the preliminary meetings of the Neurogenetics Consortium and calls for the involvement of the wider neurogenetic community in enabling the development of this important resource.

  6. Results From the John Glenn Biomedical Engineering Consortium. A Success Story for NASA and Northeast Ohio

    Science.gov (United States)

    Nall, Marsha M.; Barna, Gerald J.

    2009-01-01

    The John Glenn Biomedical Engineering Consortium was established by NASA in 2002 to formulate and implement an integrated, interdisciplinary research program to address risks faced by astronauts during long-duration space missions. The consortium is comprised of a preeminent team of Northeast Ohio institutions that include Case Western Reserve University, the Cleveland Clinic, University Hospitals Case Medical Center, The National Center for Space Exploration Research, and the NASA Glenn Research Center. The John Glenn Biomedical Engineering Consortium research is focused on fluid physics and sensor technology that addresses the critical risks to crew health, safety, and performance. Effectively utilizing the unique skills, capabilities and facilities of the consortium members is also of prime importance. Research efforts were initiated with a general call for proposals to the consortium members. The top proposals were selected for funding through a rigorous, peer review process. The review included participation from NASA's Johnson Space Center, which has programmatic responsibility for NASA's Human Research Program. The projects range in scope from delivery of prototype hardware to applied research that enables future development of advanced technology devices. All of the projects selected for funding have been completed and the results are summarized. Because of the success of the consortium, the member institutions have extended the original agreement to continue this highly effective research collaboration through 2011.

  7. Consortium formation for a coal-fired power plant in the People's Republic of China

    Energy Technology Data Exchange (ETDEWEB)

    Kostal, K.T.

    1994-12-31

    The advent of developed power projects within the People's Republic of China brings the benefits of new financing methods and the energies and resources of new participants. By necessity, it also results in fundamental changes in the many contractual relationships needed to support financial closing. The key element is the contract to design, procure, and construct the power plant. This paper compares and contrasts the requirements of these turnkey contracts with more traditional fixed price equipment supply contracts within the People's Republic of China. The emphasis of the paper is upon issues and concerns related to the successful formation of a consortium, including the effective integration of Chinese construction companies and design institutes into the process. The issues are explored from the viewpoint of the consortium's international engineer, who often participates as consortium leader and equipment procurer, in addition to detailed designer.

  8. The IRIS consortium: international cooperation in advanced reactor development

    International Nuclear Information System (INIS)

    Carelli, M.; Petrovic, B.; Miller, K.; Lombardi, C.; Ricotti, M.E.

    2005-01-01

    Besides its many outstanding technical innovations in design and safety, the most innovative feature of the International Reactor Innovative and Secure (IRIS) is perhaps the international cooperation that carries on its development. IRIS is designed by an international consortium which currently numbers 21 organizations from ten countries across four continents. It includes reactor, fuel and fuel cycle vendors, component manufacturers, laboratories, academia, architect engineers and power producers. The defining organizational characteristic of IRIS is that while Westinghouse has the overall lead and responsibility, this lead is of the 'primus inter pares' (first among equals) type rather than the traditional owner versus suppliers/contractors relationship. All members of the IRIS consortium contribute and, should IRIS be successfully deployed, expect a return commensurate with their investment. The nature of such a return will be tailored to the type of each organization, because it will of course be of a different nature for, say, a component manufacturer, university, or architect engineer. One fundamental tenet of the consortium is that all members, regardless of their amount of contribution, have equal access to all information developed within the project. Technical work is thus being coordinated by integrated subgroups, and the whole team meets twice a year to perform an overall review of the work, discuss policy and strategy and plan future activities. Personnel from consortium members have performed internships, mostly at Westinghouse locations in Pittsburgh, Pennsylvania, and Windsor, Connecticut, but also at other members, as has been the case for several graduate students. In fact, more than one hundred students at the various universities have been working on IRIS, most of them conducting graduate theses at the master or doctoral level. The IRIS experience has proved very helpful to the students in successfully landing their employment choice

  9. Recommendations From the International Consortium on Professional Nursing Practice in Long-Term Care Homes.

    Science.gov (United States)

    McGilton, Katherine S; Bowers, Barbara J; Heath, Hazel; Shannon, Kay; Dellefield, Mary Ellen; Prentice, Dawn; Siegel, Elena O; Meyer, Julienne; Chu, Charlene H; Ploeg, Jenny; Boscart, Veronique M; Corazzini, Kirsten N; Anderson, Ruth A; Mueller, Christine A

    2016-02-01

    In response to the International Association of Gerontology and Geriatrics' global agenda for clinical research and quality of care in long-term care homes (LTCHs), the International Consortium on Professional Nursing Practice in Long Term Care Homes (the Consortium) was formed to develop nursing leadership capacity and address the concerns regarding the current state of professional nursing practice in LTCHs. At its invitational, 2-day inaugural meeting, the Consortium brought together international nurse experts to explore the potential of registered nurses (RNs) who work as supervisors or charge nurses within the LTCHs and the value of their contribution in nursing homes, consider what RN competencies might be needed, discuss effective educational (curriculum and practice) experiences, health care policy, and human resources planning requirements, and to identify what sustainable nurse leadership strategies and models might enhance the effectiveness of RNs in improving resident, family, and staff outcomes. The Consortium made recommendations about the following priority issues for action: (1) define the competencies of RNs required to care for older adults in LTCHs; (2) create an LTCH environment in which the RN role is differentiated from other team members and RNs can practice to their full scope; and (3) prepare RN leaders to operate effectively in person-centered care LTCH environments. In addition to clear recommendations for practice, the Consortium identified several areas in which further research is needed. The Consortium advocated for a research agenda that emphasizes an international coordination of research efforts to explore similar issues, the pursuit of examining the impact of nursing and organizational models, and the showcasing of excellence in nursing practice in care homes, so that others might learn from what works. Several studies already under way are also described. Copyright © 2016 AMDA – The Society for Post-Acute and Long-Term Care

  10. Regional Development and the European Consortium of Innovative Universities.

    Science.gov (United States)

    Hansen, Saskia Loer; Kokkeler, Ben; van der Sijde, P. C.

    2002-01-01

    The European Consortium of Innovative Universities is a network that shares information not just among universities but with affiliated incubators, research parks, and other regional entities. The learning network contributes to regional development.(JOW)

  11. Rapid cloning and bioinformatic analysis of spinach Y chromosome ...

    Indian Academy of Sciences (India)

    Rapid cloning and bioinformatic analysis of spinach Y chromosome-specific EST sequences. Chuan-Liang Deng, Wei-Li Zhang, Ying Cao, Shao-Jing Wang, ... The record excerpt includes fragments of a sequence-similarity (BLAST hit) table, listing matches such as Arabidopsis thaliana mRNA for a mitochondrial half-ABC transporter (STA1 gene) and Betula pendula histidine kinase 3 (HK3) mRNA.

  12. Novel approaches for bioinformatic analysis of salivary RNA sequencing data for development.

    Science.gov (United States)

    Kaczor-Urbanowicz, Karolina Elzbieta; Kim, Yong; Li, Feng; Galeev, Timur; Kitchen, Rob R; Gerstein, Mark; Koyano, Kikuye; Jeong, Sung-Hee; Wang, Xiaoyan; Elashoff, David; Kang, So Young; Kim, Su Mi; Kim, Kyoung; Kim, Sung; Chia, David; Xiao, Xinshu; Rozowsky, Joel; Wong, David T W

    2018-01-01

    Analysis of RNA sequencing (RNA-Seq) data in human saliva is challenging. Lack of standardization and unification of the bioinformatic procedures undermines saliva's diagnostic potential, which motivated us to perform this study. We applied the principal pipelines for bioinformatic analysis of small RNA-Seq data from the saliva of 98 healthy Korean volunteers, including either direct or indirect mapping of the reads to the human genome using Bowtie1. Analysis of alignments to exogenous genomes by another pipeline revealed that almost all of the reads map to bacterial genomes. Thus, salivary exRNA has fundamental properties that warrant the design of unique additional steps while performing the bioinformatic analysis. Our pipelines can serve as potential guidelines for processing of RNA-Seq data of human saliva. Processing and analysis results of the experimental data generated by the exceRpt (v4.6.3) small RNA-seq pipeline (github.gersteinlab.org/exceRpt) are available from the exRNA atlas (exrna-atlas.org). Alignment to exogenous genomes and their quantification results were used in this paper for the analyses of small RNAs of exogenous origin. dtww@ucla.edu. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
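
    A minimal sketch of the "direct mapping of the reads to the human genome using Bowtie1" step, wrapped in Python, is shown below. The index name, option values and file names are illustrative assumptions about typical Bowtie1 usage, not the authors' actual command lines or parameter settings.

        import subprocess

        def map_small_rna(fastq, index="hg19", threads=4, mismatches=1, out_sam="saliva.sam"):
            """Run Bowtie1 on trimmed small-RNA reads and write SAM output.

            Assumes the bowtie (v1) executable is on PATH and `index` points to a
            prebuilt Bowtie1 index; the options are illustrative, not the study's.
            """
            cmd = [
                "bowtie",
                "-S",                   # write SAM output
                "-v", str(mismatches),  # allowed mismatches in end-to-end alignment
                "-p", str(threads),     # worker threads
                index,
                fastq,
                out_sam,
            ]
            subprocess.run(cmd, check=True)
            return out_sam

        # map_small_rna("saliva_sample.fastq")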

  13. 25 CFR 1000.18 - May a Consortium member Tribe withdraw from the Consortium and become a member of the applicant...

    Science.gov (United States)

    2010-04-01

    ...-governance activities for a member Tribe, that planning activity and report may be used to satisfy the planning requirements for the member Tribe if it applies for self-governance status on its own. (b) Submit... for Participation in Tribal Self-Governance Eligibility § 1000.18 May a Consortium member Tribe...

  14. Nanoinformatics: an emerging area of information technology at the intersection of bioinformatics, computational chemistry and nanobiotechnology

    Directory of Open Access Journals (Sweden)

    Fernando González-Nilo

    2011-01-01

    Full Text Available After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area as seen from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale level implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnologies has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storage, processing and integrating physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promises in this convergent field. In this work, we will review some applications of nanobiotechnology to life sciences in generating new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.

  15. Zijm Consortium: Engineering a Sustainable Supply Chain System

    NARCIS (Netherlands)

    Knofius, Nils; Rahimi Ghahroodi, Sajjad; van Capelleveen, Guido Cornelis; Yazdanpanah, Vahid

    2018-01-01

    In this paper we address one of the current major research areas of the Zijm consortium; engineering sustainable supply chain systems by transforming traditionally linear practices to circular systems. We illustrate this field of research with a case consisting of a network of three firms Willem

  16. CROSSWORK for Glycans: Glycan Identification Through Mass Spectrometry and Bioinformatics

    DEFF Research Database (Denmark)

    Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter

      We have developed "GLYCANthrope " - CROSSWORKS for glycans:  a bioinformatics tool, which assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...

  17. Hidden in the Middle: Culture, Value and Reward in Bioinformatics

    Science.gov (United States)

    Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul

    2016-01-01

    Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…

  18. A middleware-based platform for the integration of bioinformatic services

    Directory of Open Access Journals (Sweden)

    Guzmán Llambías

    2015-08-01

    Full Text Available Performing bioinformatics experiments involves intensive access to distributed services and information resources through the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack the capabilities to integrate services beyond small-scale applications, particularly services with heterogeneous interaction patterns and at a larger scale. This is particularly required to enable large-scale distributed processing of the biological data generated by massive sequencing technologies. On the other hand, such integration mechanisms are provided by middleware products like Enterprise Service Buses (ESB), which enable the integration of distributed systems following a Service Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, to integrate bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms to provide asynchronous communications, event-based interactions and data transformation capabilities. The paper presents a formal specification of the platform using the Event-B model.

  19. A Survey on Evolutionary Algorithm Based Hybrid Intelligence in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Shan Li

    2014-01-01

    Full Text Available With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, are becoming more and more popular due to their robustness and efficiency. Specifically, the hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. In particular, we focus on their applications to three common problems that arise in bioinformatics, that is, feature selection, parameter estimation, and reconstruction of biological networks.
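
    As a concrete toy example of one of the three problems mentioned above, EA-based feature selection, the following sketch (not drawn from the surveyed papers) evolves bit-mask individuals in which each bit switches a feature on or off. In practice the fitness would be the cross-validated accuracy of a wrapped classifier; here a simple correlation-based score with a size penalty stands in so the example has no external dependencies.

        import random

        def fitness(mask, X, y):
            """Toy fitness: mean absolute feature-target correlation of the selected
            features, minus a small penalty per selected feature."""
            selected = [j for j, bit in enumerate(mask) if bit]
            if not selected:
                return 0.0

            def corr(col):
                n = len(y)
                mx, my = sum(col) / n, sum(y) / n
                cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
                vx = sum((a - mx) ** 2 for a in col) ** 0.5
                vy = sum((b - my) ** 2 for b in y) ** 0.5
                return abs(cov / (vx * vy)) if vx and vy else 0.0

            score = sum(corr([row[j] for row in X]) for j in selected) / len(selected)
            return score - 0.01 * len(selected)

        def ga_feature_selection(X, y, generations=30, pop_size=20, p_mut=0.1):
            n_features = len(X[0])  # assumes at least two features
            pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
            for _ in range(generations):
                scored = sorted(pop, key=lambda m: fitness(m, X, y), reverse=True)
                parents = scored[: pop_size // 2]                   # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, n_features)           # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [1 - bit if random.random() < p_mut else bit for bit in child]
                    children.append(child)
                pop = parents + children
            return max(pop, key=lambda m: fitness(m, X, y))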

  20. Effective bioremediation of a petroleum-polluted saline soil by a surfactant-producing Pseudomonas aeruginosa consortium

    Directory of Open Access Journals (Sweden)

    Ali Ebadi

    2017-11-01

    Full Text Available Bacteria able to produce biosurfactants can use petroleum-based hydrocarbons as a carbon source. Herein, four biosurfactant-producing Pseudomonas aeruginosa strains, isolated from oil-contaminated saline soil, were combined to form a bacterial consortium. The inoculation of the consortium into contaminated soil alleviated the adverse effects of salinity on biodegradation and increased the rate of degradation of petroleum hydrocarbons by approximately 30% compared to the rate achieved in non-treated soil. Under saline conditions, treatment of the polluted soil with the consortium led to a significant boost in dehydrogenase activity (approximately 2-fold). A lettuce seedling bioassay showed that, following the treatment, the soil's level of phytotoxicity was reduced by up to 30% compared to non-treated soil. Treatment with an appropriate bacterial consortium can represent an effective means of reducing the adverse effects of salinity on the microbial degradation of petroleum and thus provides enhancement in the efficiency of microbial remediation of oil-contaminated saline soils.

  1. Novel fungal consortium pretreatment of waste oat straw to enhance economic and efficient biohydrogen production

    Directory of Open Access Journals (Sweden)

    Lirong Zhou

    2016-12-01

    Full Text Available Bio-pretreatment using a fungal consortium to enhance the efficiency of lignocellulosic biohydrogen production was explored. A fungal consortium comprised of T. viride and P. chrysosporium as microbial inoculum was compared with untreated and single-species-inoculated samples. Fungal bio-pretreatment was carried out at atmospheric conditions with limited external energy input. The effectiveness of the pretreatment is evaluated according to its lignin removal and digestibility. Enhancement of biohydrogen production is observed through scanning electron microscopy (SEM) analysis. Fungal consortium pretreatment effectively degraded oat straw lignin (by >47% in 7 days), leading to decomposition of the cell-wall structure as revealed in SEM images and increasing the biohydrogen yield. The hydrogen produced from the consortium-pretreated straw increased by 165% after 6 days, more than that produced from straw pretreated with either single fungal species, T. viride or P. chrysosporium (94% and 106%, respectively). No inhibitory effect on hydrogen production was observed.

  2. Stable carbon isotope fractionation of chlorinated ethenes by a microbial consortium containing multiple dechlorinating genes.

    Science.gov (United States)

    Liu, Na; Ding, Longzhen; Li, Haijun; Zhang, Pengpeng; Zheng, Jixing; Weng, Chih-Huang

    2018-08-01

    The study aimed to determine the possible contribution of specific growth conditions and community structures to variable carbon enrichment factor (εcarbon) values for the degradation of chlorinated ethenes (CEs) by a bacterial consortium with multiple dechlorinating genes. εcarbon values for trichloroethylene, cis-1,2-dichloroethylene, and vinyl chloride were -7.24‰ ± 0.59‰, -14.6‰ ± 1.71‰, and -21.1‰ ± 1.14‰, respectively, during their degradation by a microbial consortium containing multiple dechlorinating genes including tceA and vcrA. The εcarbon values of all CEs were not greatly affected by changes in growth conditions and community structures, which directly or indirectly affected reductive dechlorination of CEs by this consortium. Stability analysis provided evidence that the presence of multiple dechlorinating genes within a microbial consortium had little effect on carbon isotope fractionation, as long as the genes have definite, non-overlapping functions. Copyright © 2018 Elsevier Ltd. All rights reserved.
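
    For context (standard background on compound-specific isotope analysis, not material from this study): enrichment factors of this kind are conventionally derived from the Rayleigh model, in which the carbon isotope ratio of the remaining substrate fraction f evolves as

        \ln\frac{R_t}{R_0} \;=\; \ln\frac{\delta^{13}\mathrm{C}_t + 1000}{\delta^{13}\mathrm{C}_0 + 1000} \;=\; \frac{\varepsilon}{1000}\,\ln f

    so that ε (in per mil) is obtained as the slope of a linear regression of ln(R_t/R_0) against ln f over the course of degradation.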

  3. DNA-based and culture-based characterization of a hydrocarbon-degrading consortium enriched from Arctic soil

    Energy Technology Data Exchange (ETDEWEB)

    Thomassin-Lacroix, E. J. M.; Reimer, K. J. [Royal Military College, Dept. of Chemistry and Chemical Engineering, Kingston, On (Canada); Yu, Z.; Mohn, W. W. [British Columbia Univ., Dept. of Microbiology and Immunology, Vancouver, BC (Canada); Eriksson, M. [Royal Inst. of Technology, Dept. of Biotechnology, Stockholm (Sweden)

    2001-12-01

    Oil spills are fairly common in polar tundra regions, including remote locations, and are a threat to the relatively fragile ecosystem. Remediation must be done economically and with minimum additional damage. Bioremediation is considered to be the appropriate technology, although its application in polar tundra regions is not well documented. Most studies of hydrocarbon remediation in polar regions have concerned marine oil spills, while a few studies have demonstrated on-site polar tundra soil remediation. A few of these demonstrated the presence of psychrotolerant hydrocarbon-degrading bacteria in polar tundra soils. Because fuels are complex mixtures of hydrocarbons, microbial consortia rather than pure cultures may be the most effective agents in degrading fuels. Despite their potential advantages for bioaugmentation applications, consortia are difficult to characterize and monitor. Molecular methods based on DNA analysis partially address these difficulties. One such approach is to randomly clone rRNA gene (rDNA) fragments and to sequence a set of clones. The relative abundance of individual sequences in the clone library is related to the relative abundance of the corresponding organism in the community. In this study a psychrotolerant, fuel-degrading consortium was enriched from Arctic tundra soil. The enrichment substrate for the consortium was Jet A-1 fuel, which is very similar to Arctic diesel fuel, a common contaminant in the region. The objectives of the study were to (1) characterize the consortium by DNA- and culture-based methods, (2) develop quantitative polymerase chain reaction assays for populations of predominant consortium members, and (3) determine the dynamics of those populations during incubation of the consortium. Results showed that it is possible to quantitatively monitor members of a microbial consortium, with potential application for bioremediation of Arctic tundra soil. The relative abundance of consortium members was found to vary

  4. Introducing bioinformatics, the biosciences' genomic revolution

    CERN Document Server

    Zanella, Paolo

    1999-01-01

    The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what’s going on in the biosciences. What’s bioinformatics and why is all this fuss being made about it? What’s this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.

  5. Exploring the potential of fungal-bacterial consortium for low-cost biodegradation and detoxification of textile effluent

    Directory of Open Access Journals (Sweden)

    Lade Harshad

    2016-12-01

    Full Text Available In the present study, the enrichment and isolation of textile effluent decolorizing bacteria were carried out in wheat bran (WB) medium. The isolated bacterium Providencia rettgeri strain HSL1 was then tested for decolorization of textile effluent in consortium with the dyestuff-degrading fungus Aspergillus ochraceus NCIM 1146. The decolorization study suggests that A. ochraceus NCIM 1146 and P. rettgeri strain HSL1 alone remove only 6 and 32% of the textile effluent's American Dye Manufacturing Institute (ADMI) color value, respectively, in 30 h at 30 ± 0.2 °C under microaerophilic incubation, while the fungal-bacterial consortium achieves 92% ADMI removal within the same time period. The fungal-bacterial consortium exhibited an enhanced decolorization rate due to the induction of the catalytic enzymes laccase (196%), lignin peroxidase (77%), azoreductase (80%) and NADH-DCIP reductase (84%). The HPLC analysis confirmed the biodegradation of textile effluent into various metabolites. Detoxification studies of textile effluent before and after treatment with the fungal-bacterial consortium revealed reduced toxicity of the degradation metabolites. The efficient degradation and detoxification by the fungal-bacterial consortium pre-grown in an agriculture-based medium thus suggest a promising approach for designing low-cost treatment technologies for textile effluent.

  6. Decolorization of azo dyes (Direct Blue 151 and Direct Red 31 by moderately alkaliphilic bacterial consortium

    Directory of Open Access Journals (Sweden)

    Sylvine Lalnunhlimi

    2016-03-01

    Full Text Available Abstract Removal of synthetic dyes is one of the main challenges before the release of wastes discharged by textile industries. Biodegradation of azo dyes by an alkaliphilic bacterial consortium is one of the environmentally friendly methods used for the removal of dyes from textile effluents. Hence, this study presents the isolation of a bacterial consortium from soil samples of a saline environment and its use for the decolorization of the azo dyes Direct Blue 151 (DB 151) and Direct Red 31 (DR 31). The decolorization of the azo dyes was studied at various concentrations (100–300 mg/L). The bacterial consortium, when subjected to an application of 200 mg/L of the dyes, decolorized DB 151 and DR 31 by 97.57% and 95.25%, respectively, within 5 days. The growth of the bacterial consortium was optimized with respect to pH, temperature, and carbon and nitrogen sources, and the decolorization of the azo dyes was analyzed. In this study, the decolorization efficiency for mixed dyes was improved with yeast extract and sucrose, which were used as nitrogen and carbon sources, respectively. Such an alkaliphilic bacterial consortium can be used for the removal of azo dyes from contaminated saline environments.

  7. Naphthalene degradation by bacterial consortium (DV-AL) developed from Alang-Sosiya ship breaking yard, Gujarat, India.

    Science.gov (United States)

    Patel, Vilas; Jain, Siddharth; Madamwar, Datta

    2012-03-01

    Naphthalene degrading bacterial consortium (DV-AL) was developed by the enrichment culture technique from sediment collected from the Alang-Sosiya ship breaking yard, Gujarat, India. The 16S rRNA gene based molecular analyses revealed that the bacterial consortium (DV-AL) consisted of four strains, namely Achromobacter sp. BAB239, Pseudomonas sp. DV-AL2, Enterobacter sp. BAB240 and Pseudomonas sp. BAB241. Consortium DV-AL was able to degrade 1000 ppm of naphthalene in Bushnell Haas medium (BHM) containing peptone (0.1%) as co-substrate with an initial pH of 8.0 at 37°C under shaking conditions (150 rpm) within 24 h. The maximum growth rate and naphthalene degradation rate were found to be 0.0389 h⁻¹ and 80 mg h⁻¹, respectively. Consortium DV-AL was able to utilize other aromatic and aliphatic hydrocarbons such as benzene, phenol, carbazole, petroleum oil, diesel fuel, phenanthrene and 2-methylnaphthalene as sole carbon sources. Consortium DV-AL was also efficient in degrading naphthalene in the presence of other pollutants such as petroleum hydrocarbons and heavy metals. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Emergent Computation Emphasizing Bioinformatics

    CERN Document Server

    Simon, Matthew

    2005-01-01

    Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...

  9. Epidemiology of Endometrial Cancer Consortium (E2C2)

    Science.gov (United States)

    The Epidemiology of Endometrial Cancer Consortium studies the etiology of this common cancer and builds on resources from existing studies by combining data across studies in order to advance the understanding of the etiology of this disease.

  10. Evaluating the Effectiveness of a Practical Inquiry-Based Learning Bioinformatics Module on Undergraduate Student Engagement and Applied Skills

    Science.gov (United States)

    Brown, James A. L.

    2016-01-01

    A pedagogic intervention, in the form of an inquiry-based peer-assisted learning project (as a practical student-led bioinformatics module), was assessed for its ability to increase students' engagement, practical bioinformatic skills and process-specific knowledge. Elements assessed were process-specific knowledge following module completion,…

  11. Neonatal Informatics: Transforming Neonatal Care Through Translational Bioinformatics

    Science.gov (United States)

    Palma, Jonathan P.; Benitz, William E.; Tarczy-Hornoch, Peter; Butte, Atul J.; Longhurst, Christopher A.

    2012-01-01

    The future of neonatal informatics will be driven by the availability of increasingly vast amounts of clinical and genetic data. The field of translational bioinformatics is concerned with linking and learning from these data and applying new findings to clinical care to transform the data into proactive, predictive, preventive, and participatory health. As a result of advances in translational informatics, the care of neonates will become more data driven, evidence based, and personalized. PMID:22924023

  12. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Science.gov (United States)

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly
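
    A hedged sketch of programmatically provisioning such a VM on the Amazon EC2 cloud with the boto3 library is shown below. The AMI ID, key pair name, region and instance type are placeholders, not the actual Cloud BioLinux image identifiers; the real image would be located through the project's documentation or the EC2 console.

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        # Placeholder values: substitute the actual Cloud BioLinux AMI ID and your key pair.
        response = ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # hypothetical Cloud BioLinux AMI
            InstanceType="m5.xlarge",          # size according to the analysis at hand
            KeyName="my-keypair",              # existing EC2 key pair for SSH access
            MinCount=1,
            MaxCount=1,
        )
        instance_id = response["Instances"][0]["InstanceId"]
        print("Launched bioinformatics VM:", instance_id)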

  13. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Science.gov (United States)

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the

  14. Multiobjective optimization in bioinformatics and computational biology.

    Science.gov (United States)

    Handl, Julia; Kell, Douglas B; Knowles, Joshua

    2007-01-01

    This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts," giving rise to multiple objectives: These are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
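
    To make the central concept concrete (a generic illustration, not code from the reviewed work): a solution is retained on the Pareto front if no other solution dominates it, that is, if no other solution is at least as good in every objective and strictly better in at least one. A minimal Pareto filter, assuming all objectives are to be minimized:

        def dominates(a, b):
            """True if objective vector `a` dominates `b` (all objectives minimized)."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def pareto_front(solutions):
            """Return the non-dominated subset of a list of objective vectors."""
            return [s for s in solutions
                    if not any(dominates(other, s) for other in solutions if other != s)]

        # Example: a trade-off between, say, alignment error and model complexity.
        points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
        print(pareto_front(points))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (2.5, 2.5)]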

  15. The Activities of the European Consortium on Nuclear Data Development and Analysis for Fusion

    International Nuclear Information System (INIS)

    Fischer, U.; Avrigeanu, M.; Avrigeanu, V.; Cabellos, O.; Kodeli, I.; Koning, A.; Konobeyev, A.Yu.; Leeb, H.; Rochman, D.; Pereslavtsev, P.; Sauvan, P.; Sublet, J.-C.; Trkov, A.; Dupont, E.; Leichtle, D.; Izquierdo, J.

    2014-01-01

    This paper presents an overview of the activities of the European Consortium on Nuclear Data Development and Analysis for Fusion. The Consortium combines available European expertise to provide services for the generation, maintenance, and validation of nuclear data evaluations and data files relevant for ITER, IFMIF and DEMO, as well as codes and software tools required for related nuclear calculations

  16. The Activities of the European Consortium on Nuclear Data Development and Analysis for Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physic and Reactor Technology, 76344 Eggenstein-Leopoldshafen (Germany); Avrigeanu, M.; Avrigeanu, V. [Horia Hulubei National Institute of Physics and Nuclear Engineering (IFIN-HH), RO-077125 Magurele (Romania); Cabellos, O. [Departamento de Ingenieria Nuclear, Universidad Politecnica de Madrid, 28006 Madrid (Spain); Kodeli, I. [Jozef Stefan Institute (JSI), Jamova 39, 1000 Ljubljana (Slovenia); Koning, A. [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 LE Petten (Netherlands); Konobeyev, A.Yu. [Karlsruhe Institute of Technology, Institute for Neutron Physic and Reactor Technology, 76344 Eggenstein-Leopoldshafen (Germany); Leeb, H. [Technische Universitaet Wien, Atominstitut, Wiedner Hauptstrasse 8–10, 1040 Wien (Austria); Rochman, D. [Nuclear Research and Consultancy Group (NRG), Westerduinweg 3, 1755 LE Petten (Netherlands); Pereslavtsev, P. [Karlsruhe Institute of Technology, Institute for Neutron Physic and Reactor Technology, 76344 Eggenstein-Leopoldshafen (Germany); Sauvan, P. [Universidad Nacional de Educacion a Distancia, C. Juan del Rosal, 12, 28040 Madrid (Spain); Sublet, J.-C. [Euratom/CCFE Fusion Association, Culham Science Centre, OX14 3DB (United Kingdom); Trkov, A. [Jozef Stefan Institute (JSI), Jamova 39, 1000 Ljubljana (Slovenia); Dupont, E. [OECD Nuclear Energy Agency, Paris (France); Leichtle, D.; Izquierdo, J. [Fusion for Energy, Barcelona (Spain)

    2014-06-15

    This paper presents an overview of the activities of the European Consortium on Nuclear Data Development and Analysis for Fusion. The Consortium combines available European expertise to provide services for the generation, maintenance, and validation of nuclear data evaluations and data files relevant for ITER, IFMIF and DEMO, as well as codes and software tools required for related nuclear calculations.

  17. The secondary metabolite bioinformatics portal

    DEFF Research Database (Denmark)

    Weber, Tilmann; Kim, Hyun Uk

    2016-01-01

    Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other ‘omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http...

  18. The Worker Rights Consortium Makes Strides toward Legitimacy.

    Science.gov (United States)

    Van der Werf, Martin

    2000-01-01

    Discusses the rapid growth of the Workers Rights Consortium, a student-originated group with 44 member institutions which opposes sweatshop labor conditions, especially in the apparel industry. Notes disagreements about the number of administrators on the board of directors and about the role of industry representatives. Compares this group with the…

  19. Academic Library Consortium in Jordan: An Evaluation Study

    Science.gov (United States)

    Ahmed, Mustafa H.; Suleiman, Raid Jameel

    2013-01-01

    Purpose: Due to the current financial and managerial difficulties that are encountered by libraries in public universities in Jordan and the geographical diffusion of these academic institutions, the idea of establishing a consortium was proposed by the Council of Higher Education to combine these libraries. This article reviews the reality of…

  20. Microbial Consortium with High Cellulolytic Activity (MCHCA) for enhanced biogas production.

    Directory of Open Access Journals (Sweden)

    Krzysztof ePoszytek

    2016-03-01

    Full Text Available The use of lignocellulosic biomass as a substrate in agricultural biogas plants is very popular and yields good results. However, the efficiency of anaerobic digestion, and thus biogas production, is not always satisfactory due to the slow or incomplete degradation (hydrolysis) of plant matter. To enhance the solubilization of the lignocellulosic biomass, various physical, chemical and biological pretreatment methods are used. The aim of this study was to select and characterize cellulose-degrading bacteria and to construct a microbial consortium dedicated to the degradation of maize silage and the enhancement of biogas production from this substrate. Over one hundred strains of cellulose-degrading bacteria were isolated from sewage sludge, the hydrolyzer of an agricultural biogas plant, cattle slurry and manure. After physiological characterization of the isolates, sixteen strains (representatives of the Bacillus, Providencia and Ochrobactrum genera) were chosen for the construction of a Microbial Consortium with High Cellulolytic Activity, called MCHCA. The selected strains had high endoglucanase activity (CMCase activity exceeding 0.21 IU/mL) and a wide range of tolerance to various physical and chemical conditions. A lab-scale simulation of biogas production using the selected strains for degradation of maize silage was carried out in a two-bioreactor system, similar to those used in agricultural biogas plants. The obtained results showed that the constructed MCHCA consortium is capable of efficient hydrolysis of maize silage and increases biogas production by as much as 38%, depending on the inoculum used for methane fermentation. The results of this work indicate that the mesophilic Microbial Consortium with High Cellulolytic Activity has great potential for application on an industrial scale in agricultural biogas plants.

  1. 32 CFR 37.515 - Must I do anything additional to determine the qualification of a consortium?

    Science.gov (United States)

    2010-07-01

    ... SECRETARY OF DEFENSE DoD GRANT AND AGREEMENT REGULATIONS TECHNOLOGY INVESTMENT AGREEMENTS Pre-Award Business... relationship is essential to increase the research project's chances of success. (b) The collaboration... things, the consortium's: (1) Management structure. (2) Method of making payments to consortium members...

  2. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    Science.gov (United States)

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and simultaneously with a lot of structural detail. However, doing so may produce thousands of mass spectra in a single experiment which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics but there are many (combinatorial) challenges when it comes to structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside analytic, biochemistry, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  3. A review of bioinformatic methods for forensic DNA analyses.

    Science.gov (United States)

    Liu, Yao-Yuan; Harbison, SallyAnn

    2018-03-01

    Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Airway Clearance Techniques (ACTs)

    Medline Plus

  5. A generally applicable lightweight method for calculating a value structure for tools and services in bioinformatics infrastructure projects.

    Science.gov (United States)

    Mayer, Gerhard; Quast, Christian; Felden, Janine; Lange, Matthias; Prinz, Manuel; Pühler, Alfred; Lawerenz, Chris; Scholz, Uwe; Glöckner, Frank Oliver; Müller, Wolfgang; Marcus, Katrin; Eisenacher, Martin

    2017-10-30

    Sustainable noncommercial bioinformatics infrastructures are a prerequisite to use and take advantage of the potential of big data analysis for research and economy. Consequently, funders, universities and institutes as well as users ask for a transparent value model for the tools and services offered. In this article, a generally applicable lightweight method is described by which bioinformatics infrastructure projects can estimate the value of tools and services offered without determining exactly the total costs of ownership. Five representative scenarios for value estimation from a rough estimation to a detailed breakdown of costs are presented. To account for the diversity in bioinformatics applications and services, the notion of service-specific 'service provision units' is introduced together with the factors influencing them and the main underlying assumptions for these 'value influencing factors'. Special attention is given on how to handle personnel costs and indirect costs such as electricity. Four examples are presented for the calculation of the value of tools and services provided by the German Network for Bioinformatics Infrastructure (de.NBI): one for tool usage, one for (Web-based) database analyses, one for consulting services and one for bioinformatics training events. Finally, from the discussed values, the costs of direct funding and the costs of payment of services by funded projects are calculated and compared. © The Author 2017. Published by Oxford University Press.
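
    A minimal illustrative sketch of the kind of per-unit value estimate the record describes is given below. It is not the de.NBI calculation itself; the function name, the cost factors and all numbers are hypothetical, and a flat overhead rate stands in for indirect costs such as electricity.

```python
# Illustrative sketch only: a toy per-unit cost model in the spirit of the
# "service provision units" described above. All names and numbers are
# hypothetical, not the actual de.NBI value calculation.

def service_value(units_delivered, personnel_cost_per_unit,
                  infrastructure_cost_per_unit=0.0, indirect_overhead_rate=0.25):
    """Rough value of a service over an accounting period.

    units_delivered              -- number of service provision units, e.g.
                                    analyses run or training participant-days
    personnel_cost_per_unit      -- staff cost attributed to one unit
    infrastructure_cost_per_unit -- compute/storage cost attributed to one unit
    indirect_overhead_rate       -- flat surcharge for indirect costs
                                    (electricity, administration); an assumption
    """
    direct = personnel_cost_per_unit + infrastructure_cost_per_unit
    return units_delivered * direct * (1.0 + indirect_overhead_rate)


# Hypothetical training event: 40 participant-days.
print(service_value(40, personnel_cost_per_unit=120.0,
                    infrastructure_cost_per_unit=15.0))
```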

  6. Kubernetes as an approach for solving bioinformatic problems.

    OpenAIRE

    Markstedt, Olof

    2017-01-01

    The cluster orchestration tool Kubernetes enables easy deployment and reproducibility of life science research by utilizing the advantages of container technology. Containers allow for easy tool creation and sharing, and a container runs on any Linux system once it has been built. The applicability of Kubernetes as an approach to run bioinformatic workflows was evaluated and resulted in some examples of how Kubernetes and containers could be used within the field of life science and how th...
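
    As a minimal sketch of the approach evaluated in this work, the snippet below submits a containerized bioinformatics tool as a Kubernetes Job using the official Python client. The image name, command, job name and namespace are placeholders, and a reachable cluster plus a local kubeconfig are assumed.

```python
# Minimal sketch: running a containerized bioinformatics tool as a Kubernetes
# Job via the official Python client ("kubernetes" package). Image, command,
# job name and namespace are placeholders; a reachable cluster and a local
# kubeconfig are assumed.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

container = client.V1Container(
    name="blast-task",
    image="example.org/blast:latest",  # hypothetical container image
    command=["blastn", "-query", "/data/query.fa", "-db", "/data/nt"],
)
job = client.V1Job(
    api_version="batch/v1",
    kind="Job",
    metadata=client.V1ObjectMeta(name="blast-job"),
    spec=client.V1JobSpec(
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(restart_policy="Never", containers=[container])
        ),
        backoff_limit=1,
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="default", body=job)
```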

  7. Using Bioinformatics to Develop and Test Hypotheses: E. coli-Specific Virulence Determinants

    Directory of Open Access Journals (Sweden)

    Joanna R. Klein

    2012-09-01

    Full Text Available Bioinformatics, the use of computer resources to understand biological information, is an important tool in research, and can be easily integrated into the curriculum of undergraduate courses. Such an example is provided in this series of four activities that introduces students to the field of bioinformatics as they design PCR-based tests for pathogenic E. coli strains. A variety of computer tools are used, including BLAST searches at NCBI, bacterial genome searches at the Integrated Microbial Genomes (IMG) database, protein analysis at Pfam and literature research at PubMed. In the process, students also learn about virulence factors, enzyme function and horizontal gene transfer. Some or all of the four activities can be incorporated into microbiology or general biology courses taken by students at a variety of levels, ranging from high school through college. The activities build on one another as they teach and reinforce knowledge and skills, promote critical thinking, and provide for student collaboration and presentation. The computer-based activities can be done either in class or outside of class, thus are appropriate for inclusion in online or blended learning formats. Assessment data showed that students learned general microbiology concepts related to pathogenesis and enzyme function, gained skills in using tools of bioinformatics and molecular biology, and successfully developed and tested a scientific hypothesis.
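
    The web-based BLAST step in such activities can also be scripted; below is a minimal Biopython sketch of a query against NCBI. The query sequence is a placeholder, and the call requires internet access and is subject to NCBI usage limits.

```python
# Minimal sketch of a programmatic NCBI BLAST search with Biopython, analogous
# to the web-based BLAST step in the activities above. The query sequence is a
# placeholder; the remote service needs internet access and enforces rate limits.
from Bio.Blast import NCBIWWW, NCBIXML

query = "ATGGCTAGCTAGCTACGATCGATCGATCGATCGATCGATCG"  # placeholder sequence

handle = NCBIWWW.qblast(program="blastn", database="nt", sequence=query)
record = NCBIXML.read(handle)

# Inspect the top hits, e.g. to judge whether a candidate PCR target looks
# specific to pathogenic E. coli strains.
for alignment in record.alignments[:5]:
    best_hsp = alignment.hsps[0]
    print(alignment.title[:80], best_hsp.expect)
```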

  8. The Optic Disc Drusen Studies Consortium Recommendations for Diagnosis of Optic Disc Drusen Using Optical Coherence Tomography

    DEFF Research Database (Denmark)

    Malmqvist, Lasse; Bursztyn, Lulu; Costello, Fiona

    2018-01-01

    Enhanced depth imaging optical coherence tomography (EDI-OCT) has improved the visualization of more deeply buried ODD. There is, however, no consensus regarding the diagnosis of ODD using OCT. The purpose of this study was to develop a consensus recommendation for diagnosing ODD using OCT. METHODS: The members of the Optic Disc Drusen Studies (ODDS) Consortium are either fellowship-trained neuro-ophthalmologists with an interest in ODD or researchers with an interest in ODD. Four standardization steps were performed by the consortium members with a focus on both image acquisition and diagnosis of ODD. RESULTS: Based on prior knowledge and experiences from the standardization steps, the ODDS Consortium reached a consensus regarding OCT acquisition and diagnosis of ODD. The recommendations from the ODDS Consortium include scanning protocol, data selection, data analysis, and nomenclature. CONCLUSIONS: The ODDS…

  9. Data mining in bioinformatics using Weka.

    Science.gov (United States)

    Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H

    2004-10-12

    The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection, which are common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods, complemented by graphical user interfaces for data exploration and the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
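
    Weka itself is a Java workbench, so the sketch below is not Weka's API; it shows an analogous workflow in scikit-learn, comparing a few classifiers on a single table by cross-validation, with a bundled dataset standing in for real bioinformatics data.

```python
# Not Weka (a Java workbench): an analogous scikit-learn workflow illustrating
# what the abstract describes, i.e. comparing classification algorithms on one
# relational table via cross-validation. The bundled dataset is a stand-in.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```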

  10. Bioremediation of crude oil waste contaminated soil using petrophilic consortium and Azotobacter sp.

    Directory of Open Access Journals (Sweden)

    M. Fauzi

    2016-01-01

    Full Text Available This study aimed to determine the effect of a Petrophilic and Azotobacter sp. consortium on the rate of hydrocarbon degradation, Azotobacter growth, and Petrophilic fungal growth in an Inceptisol contaminated with crude oil waste originating from the Balongan refinery, one of the units of Pertamina (Indonesia's largest state-owned oil and gas company) in Indramayu, West Java. The study was conducted from March to April 2014 in the glasshouse of the research station of the Faculty of Agriculture, Padjadjaran University at Ciparanje, Jatinangor District, Sumedang Regency, West Java, and used a factorial completely randomized design with two treatment factors. The first treatment factor was Petrophilic microbes (A) at four levels (without treatment, 2% Petrophilic fungi, 2% Petrophilic bacteria, and the 2% Petrophilic consortium). The second treatment factor was Azotobacter sp. (B) at four levels (without treatment, 0.5% Azotobacter sp., 1% Azotobacter sp., and 1.5% Azotobacter sp.). The results demonstrated an interaction between Petrophilic microbes and Azotobacter sp. with respect to the hydrocarbon degradation rate, but no interaction was found with respect to the growth rates of Azotobacter sp. and Petrophilic fungi. Treatments a1b3 (2% consortium of Petrophilic fungi with 1.5% Azotobacter sp.) and a3b3 (2% Petrophilic consortium and 1.5% Azotobacter sp.) each had a hydrocarbon degradation rate of 0.22 ppm/day, the highest observed.

  11. Medical Physics Residency Consortium: collaborative endeavors to meet the ABR 2014 certification requirements

    Science.gov (United States)

    Parker, Brent C.; Duhon, John; Yang, Claus C.; Wu, H. Terry; Hogstrom, Kenneth R.

    2014-01-01

    In 2009, Mary Bird Perkins Cancer Center (MBPCC) established a Radiation Oncology Physics Residency Program to provide opportunities for medical physics residency training to MS and PhD graduates of the CAMPEP‐accredited Louisiana State University (LSU)‐MBPCC Medical Physics Graduate Program. The LSU‐MBPCC Program graduates approximately six students yearly, which equates to a need for up to twelve residency positions in a two‐year program. To address this need for residency positions, MBPCC has expanded its Program by developing a Consortium consisting of partnerships with medical physics groups located at other nearby clinical institutions. The consortium model offers the residents exposure to a broader range of procedures, technology, and faculty than available at the individual institutions. The Consortium institutions have shown a great deal of support from their medical physics groups and administrations in developing these partnerships. Details of these partnerships are specified within affiliation agreements between MBPCC and each participating institution. All partner sites began resident training in 2011. The Consortium is a network of for‐profit, nonprofit, academic, community, and private entities. We feel that these types of collaborative endeavors will be required nationally to reach the number of residency positions needed to meet the 2014 ABR certification requirements and to maintain graduate medical physics training programs. PACS numbers: 01.40.Fk, 01.40.gb PMID:24710434

  12. Laboratory scale bioremediation of diesel hydrocarbon in soil by indigenous bacterial consortium.

    Science.gov (United States)

    Sharma, Anjana; Rehman, Meenal Budholia

    2009-09-01

    An in vitro experiment was performed by placing petrol pump soils and diesel in flasks with micronutrient and macronutrient supplements. Cemented bioreactors containing sterilized soil and diesel were used for in vivo analysis of diesel hydrocarbon degradation. There were two sets of experiments, the first having three bioreactors: (1) inoculated with Kl. pneumoniae subsp. aerogenes along with soil and diesel; (2) with addition of NH4NO3; and (3) serving as control. In the second set, one bioreactor was inoculated with a bacterial consortium containing Moraxella saccharolytica, Alteromonas putrefaciens, Kl. pneumoniae subsp. aerogenes and Pseudomonas fragi along with soil and diesel. The remaining two bioreactors (having NH4NO3 and a control) were similar to the first set. The experiments were incubated for 30 days. The ability of the bacterial inoculum to degrade diesel was analyzed through GC-MS. Smaller-chain compounds were obtained after the experimental period of 30 days. The rate of diesel degradation was better with the present bacterial consortium than with individual bacteria. The present bacterial consortium can be a better choice for faster and complete remediation of hydrocarbon-contaminated soils.

  13. Ability of sea-water bacterial consortium to produce electricity and denitrify water

    Science.gov (United States)

    Maruvada, Nagasamrat V. V.; Tommasi, Tonia; Kaza, Kesava Rao; Ruggeri, Bernardo

    The sea is a storehouse of varied types of microbes with the ability to reduce and oxidize substances like iron, sulphur, carbon dioxide, etc. Most of these processes happen in the seawater environment, but they can be applied to the purification of wastewater. In the present paper, we discuss the use of a consortium of seawater bacteria in a fuel cell to produce electricity by oxidizing organic matter and reducing nitrates. We also discuss how the growth of the bacterial consortium can lead to increased electricity production and decreased diffusional resistance in the cell. The analysis was done using electrochemical impedance spectroscopy (EIS) and linear sweep voltammetry (LSV). Here, we use a bicarbonate-buffered solution, which is the natural buffering agent found in the sea. We show that the seawater bacterial consortium can be used in both the anode and cathode parts of the cell. The results confirm the adaptability of the seawater bacteria to different environments and their potential use in various applications. Heritage, Erasmus Mundus Programme, European Commission.

  14. Video Bioinformatics Analysis of Human Embryonic Stem Cell Colony Growth

    Science.gov (United States)

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-01-01

    Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
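
    The study used CL-Quant recipes; purely as an illustration of the same idea (segment each frame, then count colony pixels over time), here is a small scikit-image sketch. It assumes grayscale frames have already been exported from the time-lapse video as image files, and the directory name is a placeholder.

```python
# Illustrative sketch only (the study itself used CL-Quant recipes): measure
# colony area per frame by Otsu thresholding and pixel counting. Grayscale
# frames exported from the time-lapse video are assumed; the path is a
# placeholder.
import glob

import numpy as np
from skimage import filters, io

areas = []
for path in sorted(glob.glob("frames/*.png")):       # hypothetical frame files
    frame = io.imread(path, as_gray=True)
    threshold = filters.threshold_otsu(frame)        # split colony vs background
    colony_mask = frame > threshold                  # assumes colony is brighter
    areas.append(int(np.count_nonzero(colony_mask)))

growth = np.diff(areas) if len(areas) > 1 else np.array([])
print("colony area (pixels) per frame:", areas)
if growth.size:
    print("mean change per frame interval:", float(growth.mean()))
```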

  15. Bioinformatics and structural characterization of a hypothetical protein from Streptococcus mutans

    DEFF Research Database (Denmark)

    Nan, Jie; Brostromer, Erik; Liu, Xiang-Yu

    2009-01-01

    From the interlinking structural and bioinformatics studies, we have concluded that SMU.440 could be involved in polyketide-like antibiotic resistance, providing a better understanding of this hypothetical protein. Besides, the combination of multiple methods in this study can be used as a general...

  16. A Review of Recent Advances in Translational Bioinformatics: Bridges from Biology to Medicine.

    Science.gov (United States)

    Vamathevan, J; Birney, E

    2017-08-01

    Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era. Georg Thieme Verlag KG Stuttgart.

  17. Biodegradation of low and high molecular weight hydrocarbons in petroleum refinery wastewater by a thermophilic bacterial consortium.

    Science.gov (United States)

    Pugazhendi, Arulazhagan; Abbad Wazin, Hadeel; Qari, Huda; Basahi, Jalal Mohammad Al-Badry; Godon, Jean Jacques; Dhavamani, Jeyakumar

    2017-10-01

    Clean-up of contaminated wastewater remains a major challenge in petroleum refineries. Here, we describe the capacity of a bacterial consortium enriched from a crude oil drilling site in Al-Khobar, Saudi Arabia, to utilize polycyclic aromatic hydrocarbons (PAHs) as sole carbon source at 60°C. The consortium reduced low molecular weight (LMW; naphthalene, phenanthrene, fluorene and anthracene) and high molecular weight (HMW; pyrene, benzo(e)pyrene and benzo(k)fluoranthene) PAH loads of up to 1.5 g/L with removal efficiencies of 90% and 80% within 10 days. PAH biodegradation was verified by the presence of PAH metabolites and evolution of carbon dioxide (90 ± 3%). Biodegradation led to a reduction of the surface tension to 34 ± 1 mN/m, thus suggesting biosurfactant production by the consortium. Phylogenetic analysis of the consortium revealed the presence of the thermophilic PAH degraders Pseudomonas aeruginosa strain CEES1 (KU664514) and Bacillus thermosaudia strain CEES2 (KU664515). The consortium was further found to treat petroleum wastewater in a continuous stirred tank reactor with 96 ± 2% chemical oxygen demand removal and complete PAH degradation in 24 days.

  18. Degradation of Lignocellulosic Components in Un-pretreated Vinegar Residue Using an Artificially Constructed Fungal Consortium

    Directory of Open Access Journals (Sweden)

    Yaoming Cui

    2015-04-01

    Full Text Available The objective of this work was to degrade lignocellulosic components in un-pretreated vinegar residue (VR) using a fungal consortium. Consortium-29, consisting of P. chrysosporium, T. koningii, A. niger, and A. ficuum NTG-23, was constructed using orthogonal design combined with two-way interaction analysis. After seven days of cultivation, the reducing sugar yield reached 35.57 mg per gram of dry substrate (gds-1), which was 108.01% higher than the control (17.10 mg gds-1). Additionally, the xylanase and CMCase activities reached 439.07 U gds-1 and 8.15 U gds-1, which were 432.08% and 243.88% higher than those of pure cultures of A. niger (82.52 U gds-1) and P. chrysosporium (2.37 U gds-1), respectively. The cellulose, hemicellulose, and lignin contents decreased by 17.11%, 68.61%, and 14.44%, respectively, compared with the raw VR. The optimal fermentation conditions of consortium-29 were as follows: incubation temperature 25 °C, initial pH 6, initial moisture content 70%, inoculum size 1 x 10^6 spores/mL, incubation time 5 days, urea/VR 1%, and MnSO4·H2O/VR 0.03%. This study suggests that consortium-29 is an efficient fungal consortium for un-pretreated VR degradation and has a potential application in lignocellulosic waste utilization with a low cost of operation.

  19. The ARC (Astrophysical Research Consortium) telescope project.

    Science.gov (United States)

    Anderson, K. S.

    A consortium of universities intends to construct a 3.5 meter optical-infrared telescope at a site in south-central New Mexico. The use of innovative mirror technology, a fast primary, and an alt-azimuth mounting results in a compact and lightweight instrument. This telescope will be uniquely well-suited for addressing certain observational programs by virtue of its capability for fully remote operation and rapid instrument changes.

  20. p-Cresol mineralization by a nitrifying consortium

    International Nuclear Information System (INIS)

    Silva-Luna, C. D.; Gomez, J.; Houbron, E.; Cuervo Lopez, F. M.; Texier, A. C.

    2009-01-01

    Nitrification and denitrification processes are considered economically feasible technologies for nitrogen removal from wastewater. Knowledge of the toxic or inhibitory effects of cresols on the nitrifying respiratory process is still insufficient. The aim of this study was to evaluate the kinetic behavior and oxidizing ability of a nitrifying consortium exposed to p-cresol in batch cultures. Biotransformation of p-cresol was investigated by identifying the different intermediates formed. (Author)

  1. Northeast Artificial Intelligence Consortium Annual Report. 1988 Interference Techniques for Knowledge Base Maintenance Using Logic Programming Methodologies. Volume 11

    Science.gov (United States)

    1989-10-01

    Interim report RADC-TR-89-259, Vol XI (of twelve), October 1989: annual report of the Northeast Artificial Intelligence Consortium (NAIC), monitored by the Rome Air Development Center.

  2. Databases and Associated Bioinformatic Tools in Studies of Food Allergens, Epitopes and Haptens – a Review

    Directory of Open Access Journals (Sweden)

    Bucholska Justyna

    2018-06-01

    Full Text Available Allergies and/or food intolerances are a growing problem of the modern world. Difficulties associated with the correct diagnosis of food allergies result in the need to classify the factors causing allergies and allergens themselves. Therefore, internet databases and other bioinformatic tools play a special role in deepening knowledge of biologically-important compounds. Internet repositories, as a source of information on different chemical compounds, including those related to allergy and intolerance, are increasingly being used by scientists. Bioinformatic methods play a significant role in biological and medical sciences, and their importance in food science is increasing. This study aimed at presenting selected databases and tools of bioinformatic analysis useful in research on food allergies, allergens (11 databases), epitopes (7 databases), and haptens (2 databases). It also presents examples of the application of computer methods in studies related to allergies.

  3. The Arizona Universities Library Consortium patron-driven e-book model

    Directory of Open Access Journals (Sweden)

    Jeanne Richardson

    2013-03-01

    Full Text Available Building on Arizona State University's patron-driven acquisitions (PDA initiative in 2009, the Arizona Universities Library Consortium, in partnership with the Ingram Content Group, created a cooperative patron-driven model to acquire electronic books (e-books. The model provides the opportunity for faculty and students at the universities governed by the Arizona Board of Regents (ABOR to access a core of e-books made accessible through resource discovery services and online catalogs. These books are available for significantly less than a single ABOR university would expend for the same materials. The patron-driven model described is one of many evolving models in digital scholarship, and, although the Arizona Universities Library Consortium reports a successful experience, patron-driven models pose questions to stakeholders in the academic publishing industry.

  4. Bioinformatics prediction of swine MHC class I epitopes from Porcine Reproductive and Respiratory Syndrome Virus

    DEFF Research Database (Denmark)

    Welner, Simon; Nielsen, Morten; Lund, Ole

    an effective CTL response against PRRSV, we have taken a bioinformatics approach to identify common PRRSV epitopes predicted to react broadly with predominant swine MHC (SLA) alleles. First, the genomic integrity and sequencing method was examined for 334 available complete PRRSV type 2 genomes leaving 104...... by the PopCover algorithm, providing a final list of 54 epitopes prioritized according to maximum coverage of PRRSV strains and SLA alleles. This bioinformatics approach provides a rational strategy for selecting peptides for a CTL-activating vaccine with broad coverage of both virus and swine diversity...

  5. Designing a course model for distance-based online bioinformatics training in Africa: The H3ABioNet experience

    Science.gov (United States)

    Panji, Sumir; Fernandes, Pedro L.; Judge, David P.; Ghouila, Amel; Salifu, Samson P.; Ahmed, Rehab; Kayondo, Jonathan; Ssemwanga, Deogratius

    2017-01-01

    Africa is not unique in its need for basic bioinformatics training for individuals from a diverse range of academic backgrounds. However, particular logistical challenges in Africa, most notably access to bioinformatics expertise and internet stability, must be addressed in order to meet this need on the continent. H3ABioNet (www.h3abionet.org), the Pan African Bioinformatics Network for H3Africa, has therefore developed an innovative, free-of-charge “Introduction to Bioinformatics” course, taking these challenges into account as part of its educational efforts to provide on-site training and develop local expertise inside its network. A multiple-delivery–mode learning model was selected for this 3-month course in order to increase access to (mostly) African, expert bioinformatics trainers. The content of the course was developed to include a range of fundamental bioinformatics topics at the introductory level. For the first iteration of the course (2016), classrooms with a total of 364 enrolled participants were hosted at 20 institutions across 10 African countries. To ensure that classroom success did not depend on stable internet, trainers pre-recorded their lectures, and classrooms downloaded and watched these locally during biweekly contact sessions. The trainers were available via video conferencing to take questions during contact sessions, as well as via online “question and discussion” forums outside of contact session time. This learning model, developed for a resource-limited setting, could easily be adapted to other settings. PMID:28981516

  6. Northeast Artificial Intelligence Consortium Annual Report. Volume 2. 1988 Discussing, Using, and Recognizing Plans (NLP)

    Science.gov (United States)

    1989-10-01

    Interim report RADC-TR-89-259, Vol II (of twelve), October 1989 (AD-A218 154): annual report of the Northeast Artificial Intelligence Consortium (NAIC), monitored by the Rome Air Development Center.

  7. EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats

    Science.gov (United States)

    Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter

    2013-01-01

    Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl. Contact: jison@ebi.ac.uk PMID:23479348
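
    Since the record gives the published OWL location, a minimal rdflib sketch for loading EDAM and listing some concept labels is shown below; it assumes network access to edamontology.org (a locally downloaded EDAM.owl works the same way) and is only one of several ways to consume the ontology.

```python
# Minimal sketch: load the published EDAM OWL release with rdflib and list a
# few concept labels. Assumes network access to edamontology.org; a local copy
# of EDAM.owl can be parsed the same way.
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("http://edamontology.org/EDAM.owl", format="xml")

labels = sorted(str(label) for _, label in g.subject_objects(RDFS.label))
for label in labels[:10]:
    print(label)
print(f"{len(labels)} labelled EDAM concepts loaded")
```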

  8. Gas Storage Technology Consortium

    Energy Technology Data Exchange (ETDEWEB)

    Joel Morrison; Elizabeth Wood; Barbara Robuck

    2010-09-30

    The EMS Energy Institute at The Pennsylvania State University (Penn State) has managed the Gas Storage Technology Consortium (GSTC) since its inception in 2003. The GSTC infrastructure provided a means to accomplish industry-driven research and development designed to enhance the operational flexibility and deliverability of the nation's gas storage system, and provide a cost-effective, safe, and reliable supply of natural gas to meet domestic demand. The GSTC received base funding from the U.S. Department of Energy's (DOE) National Energy Technology Laboratory (NETL) Oil & Natural Gas Supply Program. The GSTC base funds were highly leveraged with industry funding for individual projects. Since its inception, the GSTC has engaged 67 members. The GSTC membership base was diverse, coming from 19 states, the District of Columbia, and Canada. The membership was comprised of natural gas storage field operators, service companies, industry consultants, industry trade organizations, and academia. The GSTC organized and hosted a total of 18 meetings since 2003. Of these, 8 meetings were held to review, discuss, and select proposals submitted for funding consideration. The GSTC reviewed a total of 75 proposals and committed co-funding to support 31 industry-driven projects. The GSTC committed co-funding to 41.3% of the proposals that it received and reviewed. The 31 projects had a total project value of $6,203,071 of which the GSTC committed $3,205,978 in co-funding. The committed GSTC project funding represented an average program cost share of 51.7%. Project applicants provided an average program cost share of 48.3%. In addition to the GSTC co-funding, the consortium provided the domestic natural gas storage industry with a technology transfer and outreach infrastructure. The technology transfer and outreach were conducted by having project mentoring teams and a GSTC website, and by working closely with the Pipeline Research Council International (PRCI) to

  9. GeneDig: a web application for accessing genomic and bioinformatics knowledge.

    Science.gov (United States)

    Suciu, Radu M; Aydin, Emir; Chen, Brian E

    2015-02-28

    With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. The current resources for accessing and analyzing these data have been created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that allows any general user access to genomic information with ease and efficiency. GeneDig allows for searching and browsing genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application allows more than five times faster and efficient access to genomic information than any currently available methods. We have developed GeneDig as a platform for bioinformatics integration focused on usability as its central design. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.

  10. BIODEGRADATION OF MTBE BY A MICROORGANISM CONSORTIUM

    Directory of Open Access Journals (Sweden)

    M. Alimohammadi, A. R. Mesdaghinia, M. Mahmoodi, S. Nasseri, A. H. Mahvi and J. Nouri

    2005-10-01

    Full Text Available Methyl tert-butyl ether (MTBE) is one of the ether oxygenates whose use has increased over the last twenty years. This compound is produced from the reaction of isobutylene and methanol; it is used as an octane index enhancer and also increases dissolved oxygen in gasoline and decreases carbon monoxide emissions in four-phased motors because of better combustion of gasoline. High solubility in water (52 g/L), high vapor pressure (0.54 kg/cm3), low absorption to soil organic carbon and the presence of MTBE on the U.S. EPA list of potential carcinogens have made its use of great concern. The culture medium used in this study was Mineral Salt Medium (MSM). The study lasted 236 days and used three different MTBE concentrations: 200, 5 and 0.8 mg/L. A control sample was also used to compare the results. This research studied methods for isolating a microbial consortium from MTBE-polluted soils in Tehran and at the Abadan petroleum refinery, in addition to MTBE degradation. The results showed the capability of the bacteria to consume MTBE as a carbon source. Final microbial isolation was performed with several microbial passages as well as by keeping the consortium in a certain amount of MTBE as the carbon source.

  11. In-Vessel Co-Composting of Food Waste Employing Enriched Bacterial Consortium.

    Science.gov (United States)

    Awasthi, Mukesh Kumar; Wang, Quan; Wang, Meijing; Chen, Hongyu; Ren, Xiuna; Zhao, Junchao; Zhang, Zengqiang

    2018-03-01

    The aim of the present study is to develop a good initial composting mix using a bacterial consortium and 2% lime for effective co-composting of food waste in a 60-litre in-vessel composter. In the experiment that lasted for 42 days, the food waste was first mixed with sawdust and 2% lime (by dry mass), then one of the reactors was inoculated with an enriched bacterial consortium, while the other served as control. The results show that inoculation of the enriched natural bacterial consortium effectively overcame the oil-laden co-composting mass in the composter and increased the rate of mineralization. In addition, CO 2 evolution rate of (0.81±0.2) g/(kg·day), seed germination index of (105±3) %, extractable ammonium mass fraction of 305.78 mg/kg, C/N ratio of 16.18, pH=7.6 and electrical conductivity of 3.12 mS/cm clearly indicate that the compost was well matured and met the composting standard requirements. In contrast, control treatment exhibited a delayed thermophilic phase and did not mature after 42 days, as evidenced by the maturity parameters. Therefore, a good composting mix and potential bacterial inoculum to degrade the oil are essential for food waste co-composting systems.

  12. In-Vessel Co-Composting of Food Waste Employing Enriched Bacterial Consortium

    Directory of Open Access Journals (Sweden)

    Mukesh Kumar Awasthi

    2018-01-01

    Full Text Available The aim of the present study is to develop a good initial composting mix using a bacterial consortium and 2 % lime for effective co-composting of food waste in a 60-litre in-vessel composter. In the experiment that lasted for 42 days, the food waste was first mixed with sawdust and 2 % lime (by dry mass), then one of the reactors was inoculated with an enriched bacterial consortium, while the other served as control. The results show that inoculation of the enriched natural bacterial consortium effectively overcame the oil-laden co-composting mass in the composter and increased the rate of mineralization. In addition, a CO2 evolution rate of (0.81±0.2) g/(kg·day), seed germination index of (105±3) %, extractable ammonium mass fraction of 305.78 mg/kg, C/N ratio of 16.18, pH=7.6 and electrical conductivity of 3.12 mS/cm clearly indicate that the compost was well matured and met the composting standard requirements. In contrast, the control treatment exhibited a delayed thermophilic phase and did not mature after 42 days, as evidenced by the maturity parameters. Therefore, a good composting mix and a potential bacterial inoculum to degrade the oil are essential for food waste co-composting systems.

  13. Host-parasite interactions and ecology of the malaria parasite-a bioinformatics approach.

    Science.gov (United States)

    Izak, Dariusz; Klim, Joanna; Kaczanowski, Szymon

    2018-04-25

    Malaria remains one of the highest mortality infectious diseases. Malaria is caused by parasites from the genus Plasmodium. Most deaths are caused by infections involving Plasmodium falciparum, which has a complex life cycle. Malaria parasites are extremely well adapted for interactions with their host and their host's immune system and are able to suppress the human immune system, erase immunological memory and rapidly alter exposed antigens. Owing to this rapid evolution, parasites develop drug resistance and express novel forms of antigenic proteins that are not recognized by the host immune system. There is an emerging need for novel interventions, including novel drugs and vaccines. Designing novel therapies requires knowledge about host-parasite interactions, which is still limited. However, significant progress has recently been achieved in this field through the application of bioinformatics analysis of parasite genome sequences. In this review, we describe the main achievements in 'malarial' bioinformatics and provide examples of successful applications of protein sequence analysis. These examples include the prediction of protein functions based on homology and the prediction of protein surface localization via domain and motif analysis. Additionally, we describe PlasmoDB, a database that stores accumulated experimental data. This tool allows data mining of the stored information and will play an important role in the development of malaria science. Finally, we illustrate the application of bioinformatics in the development of population genetics research on malaria parasites, an approach referred to as reverse ecology.

  14. BioXSD: the common data-exchange format for everyday bioinformatics web services.

    Science.gov (United States)

    Kalas, Matús; Puntervoll, Pål; Joseph, Alexandre; Bartaseviciūte, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-09-15

    The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interface. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. BioXSD has been developed as a candidate for standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community.
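
    Because BioXSD is defined by an XML Schema at the URL given above, instance documents can be checked against it with any schema-aware XML library; the lxml sketch below assumes the schema has been downloaded locally, and "sequence_record.xml" is a placeholder name for some document to validate.

```python
# Minimal sketch: validate an XML document against the BioXSD 1.0 schema using
# lxml. Assumes BioXSD-1.0.xsd has been downloaded from the URL in the record;
# "sequence_record.xml" is a placeholder for the instance document to check.
from lxml import etree

schema = etree.XMLSchema(etree.parse("BioXSD-1.0.xsd"))
instance = etree.parse("sequence_record.xml")

if schema.validate(instance):
    print("document conforms to BioXSD 1.0")
else:
    for error in schema.error_log:
        print(error.line, error.message)
```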

  15. MicroRNA from tuberculosis RNA: A bioinformatics study

    OpenAIRE

    Wiwanitkit, Somsri; Wiwanitkit, Viroj

    2012-01-01

    The role of microRNA in the pathogenesis of pulmonary tuberculosis is the interesting topic in chest medicine at present. Recently, it was proposed that the microRNA can be a useful biomarker for monitoring of pulmonary tuberculosis and might be the important part in pathogenesis of disease. Here, the authors perform a bioinformatics study to assess the microRNA within known tuberculosis RNA. The microRNA part can be detected and this can be important key information in further study of the p...

  16. The Historically Black Colleges and Universities/Minority Institutions Environmental Technology and Waste Management Consortium annual report, 1990--1991

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1991-12-31

    The HBCU/MI Environmental Technology and Waste Management Consortium was established in January 1990, through a Memorandum of Understanding (MOU) among the member institutions. This group of research-oriented Historically Black Colleges and Universities and Minority Institutions (HBCU/MI) agreed to work together to initiate research, technology development and education programs to address the nation's critical environmental problems. As a group, the HBCU/MI Consortium is uniquely positioned to reach women and the minority populations of African Americans, Hispanics and American Indians. As part of their initial work, they developed the Research, Education, and Technology Transfer (RETT) Plan to actualize the Consortium's guiding principles. In addition to developing a comprehensive research agenda, four major programs were begun to meet these goals. This report summarizes the 1990-1991 progress.

  17. Bioinformatics for Next Generation Sequencing Data

    Directory of Open Access Journals (Sweden)

    Alberto Magi

    2010-09-01

    Full Text Available The emergence of next-generation sequencing (NGS) platforms imposes increasing demands on statistical methods and bioinformatic tools for the analysis and the management of the huge amounts of data generated by these technologies. Even at the early stages of their commercial availability, a large number of software packages already exist for analyzing NGS data. These tools fall into many general categories, including alignment of sequence reads to a reference, base-calling and/or polymorphism detection, de novo assembly from paired or unpaired reads, structural variant detection and genome browsing. This manuscript aims to guide readers in the choice of the available computational tools that can be used to face the several steps of the data analysis workflow.
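
    As a small example of the alignment-handling category mentioned above, the pysam sketch below summarizes mapped versus unmapped reads in a BAM file; the file name is a placeholder and a BAM produced by an upstream aligner is assumed.

```python
# Minimal sketch of one downstream NGS step: summarizing read alignments in a
# BAM file with pysam. "sample.bam" is a placeholder for the output of an aligner.
import pysam

mapped = unmapped = 0
with pysam.AlignmentFile("sample.bam", "rb") as bam:
    for read in bam.fetch(until_eof=True):   # until_eof avoids needing an index
        if read.is_unmapped:
            unmapped += 1
        else:
            mapped += 1

total = mapped + unmapped
if total:
    print(f"mapped reads: {mapped}/{total} ({100.0 * mapped / total:.1f}%)")
else:
    print("no reads found")
```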

  18. STROKOG (stroke and cognition consortium): An international consortium to examine the epidemiology, diagnosis, and treatment of neurocognitive disorders in relation to cerebrovascular disease.

    Science.gov (United States)

    Sachdev, Perminder S; Lo, Jessica W; Crawford, John D; Mellon, Lisa; Hickey, Anne; Williams, David; Bordet, Régis; Mendyk, Anne-Marie; Gelé, Patrick; Deplanque, Dominique; Bae, Hee-Joon; Lim, Jae-Sung; Brodtmann, Amy; Werden, Emilio; Cumming, Toby; Köhler, Sebastian; Verhey, Frans R J; Dong, Yan-Hong; Tan, Hui Hui; Chen, Christopher; Xin, Xu; Kalaria, Raj N; Allan, Louise M; Akinyemi, Rufus O; Ogunniyi, Adesola; Klimkowicz-Mrowiec, Aleksandra; Dichgans, Martin; Wollenweber, Frank A; Zietemann, Vera; Hoffmann, Michael; Desmond, David W; Linden, Thomas; Blomstrand, Christian; Fagerberg, Björn; Skoog, Ingmar; Godefroy, Olivier; Barbay, Mélanie; Roussel, Martine; Lee, Byung-Chul; Yu, Kyung-Ho; Wardlaw, Joanna; Makin, Stephen J; Doubal, Fergus N; Chappell, Francesca M; Srikanth, Velandai K; Thrift, Amanda G; Donnan, Geoffrey A; Kandiah, Nagaendran; Chander, Russell J; Lin, Xuling; Cordonnier, Charlotte; Moulin, Solene; Rossi, Costanza; Sabayan, Behnam; Stott, David J; Jukema, J Wouter; Melkas, Susanna; Jokinen, Hanna; Erkinjuntti, Timo; Mok, Vincent C T; Wong, Adrian; Lam, Bonnie Y K; Leys, Didier; Hénon, Hilde; Bombois, Stéphanie; Lipnicki, Darren M; Kochan, Nicole A

    2017-01-01

    The Stroke and Cognition consortium (STROKOG) aims to facilitate a better understanding of the determinants of vascular contributions to cognitive disorders and help improve the diagnosis and treatment of vascular cognitive disorders (VCD). Longitudinal studies with ≥75 participants who had suffered or were at risk of stroke or TIA and which evaluated cognitive function were invited to join STROKOG. The consortium will facilitate projects investigating rates and patterns of cognitive decline, risk factors for VCD, and biomarkers of vascular dementia. Currently, STROKOG includes 25 (21 published) studies, with 12,092 participants from five continents. The duration of follow-up ranges from 3 months to 21 years. Although data harmonization will be a key challenge, STROKOG is in a unique position to reuse and combine international cohort data and fully explore patient level characteristics and outcomes. STROKOG could potentially transform our understanding of VCD and have a worldwide impact on promoting better vascular cognitive outcomes.

  19. Characteristics of a bioflocculant produced by a consortium of ...

    African Journals Online (AJOL)

    The characteristics of a bioflocculant produced by a consortium of 2 bacteria belonging to the genera Cobetia and Bacillus was investigated. The extracellular bioflocculant was composed of 66% uronic acid and 31% protein and showed an optimum flocculation (90% flocculating activity) of kaolin suspension at a dosage of ...

  20. Highly migratory shark fisheries research by the National Shark Research Consortium (NSRC), 2002-2007

    OpenAIRE

    Hueter, Robert E.; Cailliet, Gregor M.; Ebert, David A.; Musick, John A.; Burgess, George H.

    2007-01-01

    The National Shark Research Consortium (NSRC) includes the Center for Shark Research at Mote Marine Laboratory, the Pacific Shark Research Center at Moss Landing Marine Laboratories, the Shark Research Program at the Virginia Institute of Marine Science, and the Florida Program for Shark Research at the University of Florida. The consortium objectives include shark-related research in the Gulf of Mexico and along the Atlantic and Pacific coasts of the U.S., education and scientific cooperation.

  1. Consortium for oral health-related informatics: improving dental research, education, and treatment.

    Science.gov (United States)

    Stark, Paul C; Kalenderian, Elsbeth; White, Joel M; Walji, Muhammad F; Stewart, Denice C L; Kimmes, Nicole; Meng, Thomas R; Willis, George P; DeVries, Ted; Chapman, Robert J

    2010-10-01

    Advances in informatics, particularly the implementation of electronic health records (EHR), in dentistry have facilitated the exchange of information. The majority of dental schools in North America use the same EHR system, providing an unprecedented opportunity to integrate these data into a repository that can be used for oral health education and research. In 2007, fourteen dental schools formed the Consortium for Oral Health-Related Informatics (COHRI). Since its inception, COHRI has established structural and operational processes, governance and bylaws, and a number of work groups organized in two divisions: one focused on research (data standardization, integration, and analysis), and one focused on education (performance evaluations, virtual standardized patients, and objective structured clinical examinations). To date, COHRI (which now includes twenty dental schools) has been successful in developing a data repository, pilot-testing data integration, and sharing EHR enhancements among the group. This consortium has collaborated on standardizing medical and dental histories, developing diagnostic terminology, and promoting the utilization of informatics in dental education. The consortium is in the process of assembling the largest oral health database ever created. This will be an invaluable resource for research and provide a foundation for evidence-based dentistry for years to come.

  2. The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data.

    Science.gov (United States)

    Thompson, Paul M; Stein, Jason L; Medland, Sarah E; Hibar, Derrek P; Vasquez, Alejandro Arias; Renteria, Miguel E; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J; Martin, Nicholas G; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C; Andreassen, Ole A; Apostolova, Liana G; Appel, Katja; Armstrong, Nicola J; Aribisala, Benjamin; Bastin, Mark E; Bauer, Michael; Bearden, Carrie E; Bergmann, Orjan; Binder, Elisabeth B; Blangero, John; Bockholt, Henry J; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I; Booth, Tom; Bowman, Ian J; Bralten, Janita; Brouwer, Rachel M; Brunner, Han G; Brohawn, David G; Buckner, Randy L; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R; Calhoun, Vince D; Cannon, Dara M; Cantor, Rita M; Carless, Melanie A; Caseras, Xavier; Cavalleri, Gianpiero L; Chakravarty, M Mallar; Chang, Kiki D; Ching, Christopher R K; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E; Czisch, Michael; Deary, Ian J; de Geus, Eco J C; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E; Foroud, Tatiana; Fox, Peter T; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C; Godlewska, Beata; Goldstein, Rita Z; Gollub, Randy L; Grabe, Hans J; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E; Gur, Ruben C; Göring, Harald H H; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B; Hall, Jeremy; Hardy, John; Hartman, Catharina A; Hass, Johanna; Hatton, Sean N; Haukvik, Unn K; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J; Hollinshead, Marisa; Holmes, Avram J; Homuth, Georg; Hoogman, Martine; Hong, L Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E; Hwang, Kristy S; Jack, Clifford R; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G; Kahn, René S; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B J; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A; Lauriello, John; Lawrie, Stephen M; Lee, Phil H; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D; Li, Chiang-Shan; Liberg, Benny; Liewald, David C; Liu, Xinmin; Lopez, Lorna M; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W J; Macqueen, Glenda M; Malt, Ulrik F; Mandl, René; Manoach, Dara S; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M; McMahon, Francis J; McMahon, Katie L; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W; Morris, Derek W; Moses, Eric K; Mueller, Bryon A; Muñoz Maniega, Susana; Mühleisen, Thomas W; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E; Nilsson, Lars-Göran; Nugent, Allison C; Nyberg, Lars; Olvera, Rene L; Oosterlaan, Jaap; Ophoff, Roel A; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, 
Martina; Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D; Penninx, Brenda W; Peterson, Charles P; Pfennig, Andrea; Phillips, Mary; Pike, G Bruce; Poline, Jean-Baptiste; Potkin, Steven G; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L; Roffman, Joshua L; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J; Royle, Natalie A; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S; Salami, Alireza; Satterthwaite, Theodore D; Savitz, Jonathan; Saykin, Andrew J; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G; Schork, Andrew J; Schulz, S Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M; Simmons, Andrew; Sisodiya, Sanjay M; Smith, Colin; Smoller, Jordan W; Soares, Jair C; Sponheim, Scott R; Sprooten, Emma; Starr, John M; Steen, Vidar M; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G; Teumer, Alexander; Toga, Arthur W; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; Van den Heuvel, Martijn; van der Wee, Nic J; van Eijk, Kristel; van Erp, Theo G M; van Haren, Neeltje E M; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C; Veltman, Dick J; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M; Weale, Michael E; Weiner, Michael W; Wen, Wei; Westlye, Lars T; Whalley, Heather C; Whelan, Christopher D; White, Tonya; Winkler, Anderson M; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P; Thalamuthu, Anbupalam; Schofield, Peter R; Freimer, Nelson B; Lawrence, Natalia S; Drevets, Wayne

    2014-06-01

    The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA's first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
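The meta-analytic step described above can be illustrated with a small sketch. The code below is not ENIGMA's actual pipeline; it is a generic fixed-effects, inverse-variance-weighted combination of per-site effect estimates, with hypothetical site-level numbers, showing how a pooled association can emerge that no single site could detect on its own.

```python
# Illustrative sketch (not ENIGMA's actual pipeline): fixed-effects,
# inverse-variance-weighted meta-analysis of per-site effect estimates,
# the generic approach used to combine GWAS results across cohorts.
import math

def meta_analyze(betas, ses):
    """Combine per-site effect sizes (betas) and standard errors (ses)."""
    weights = [1.0 / se**2 for se in ses]          # inverse-variance weights
    beta_meta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se_meta = math.sqrt(1.0 / sum(weights))        # pooled standard error
    z = beta_meta / se_meta                        # z-score for the pooled effect
    return beta_meta, se_meta, z

# Hypothetical per-site associations between one SNP and hippocampal volume (mm^3 per allele)
site_betas = [12.5, 9.8, 15.1]
site_ses = [4.0, 3.5, 6.2]
print(meta_analyze(site_betas, site_ses))
```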

  3. Culture-dependent and -independent approaches establish the complexity of a PAH-degrading microbial consortium

    Energy Technology Data Exchange (ETDEWEB)

    Vinas, M.; Sabate, J.; Solanas, A.M. [Barcelona Univ., Barcelona (Spain). Dept. of Microbiology; Guasp, C.; Lalucat, J. [Illes Balears Univ., Palma de Mallorca (Spain). Dept. of Biology

    2005-11-15

Microbial consortia are used in the decontamination of polluted environmental sites. A microbial consortium obtained by batch enrichment culture is a closed system with controlled conditions in which micro-organisms with a potentially high growth rate are selected and become dominant. The aim of this study was to identify the members of consortium AM, in which earlier batch enrichment work had shown high biodegradation rates of the aromatic fraction of polycyclic aromatic hydrocarbons (PAHs). The AM consortium was obtained by sequential enrichment in liquid culture with a mixture of 3- and 4-ringed PAHs as the sole source of carbon and energy. The consortium was examined using a triple approach based on various cultivation strategies, denaturing gradient gel electrophoresis (DGGE) and the screening of 16S and 18S rRNA gene clone libraries. Eleven different sequences were obtained by culture-dependent techniques and 7 by both DGGE and clone libraries, yielding 19 different microbial components. Proteobacteria were the dominant group, representing 83 per cent of the total, while the Cytophaga-Flexibacter-Bacteroides (CFB) group accounted for 11 per cent and Ascomycota fungi for 6 per cent. β-Proteobacteria were predominant in the DGGE and clone library methods, whereas they were a minority among the culturable strains. The highest diversity and number of noncoincident sequences was achieved by the cultivation method, which revealed members of the α-, β- and γ-Proteobacteria, the CFB bacterial group, and Ascomycota fungi. Only 6 of the 11 strains isolated showed PAH-degrading capability. The bacterial strain AMS7 and the fungal strain AMF1 achieved the greatest PAH depletion. Results indicated that polyphasic assessment is necessary for a proper understanding of the composition of a microbial consortium. It was concluded that microbial consortia are more complex than previously realized. 54 refs., 3 tabs., 3 figs.

  4. Shared Bioinformatics Databases within the Unipro UGENE Platform

    Directory of Open Access Journals (Sweden)

    Protsyuk Ivan V.

    2015-03-01

Full Text Available Unipro UGENE is an open-source bioinformatics toolkit that integrates popular tools along with original instruments for molecular biologists within a unified user interface. Nowadays, most bioinformatics desktop applications, including UGENE, make use of a local data model while processing different types of data. Such an approach causes inconvenience for scientists working cooperatively and relying on the same data, since multiple copies of certain files must be made for every workplace and kept synchronized whenever they are modified. Therefore, we focused on bringing collaborative work into the UGENE user experience. Currently, several UGENE installations can be connected to a designated shared database and users can interact with it simultaneously. Such databases can be created by UGENE users and be used at their discretion. Objects of each data type supported by UGENE, such as sequences, annotations, multiple alignments, etc., can now be easily imported from or exported to a remote storage. One of the main advantages of this system, compared to existing ones, is the almost simultaneous access of client applications to shared data regardless of their volume. Moreover, the system is capable of storing millions of objects. The storage itself is a regular database server, so even an inexpert user is able to deploy it. Thus, UGENE may provide access to shared data for users located, for example, in the same laboratory or institution. UGENE is available at: http://ugene.net/download.html.

  5. Enhanced bioremediation of soil contaminated with viscous oil through microbial consortium construction and ultraviolet mutation.

    Science.gov (United States)

    Chen, Jing; Yang, Qiuyan; Huang, Taipeng; Zhang, Yongkui; Ding, Ranfeng

    2011-06-01

This study focused on enhancing the bioremediation of soil contaminated with viscous oil by microorganisms and evaluating two strategies. Construction of a microbial consortium and ultraviolet mutation were both effective approaches in the remediation of soil contaminated with viscous oil. Results demonstrated that interactions among the microorganisms existed and affected the biodegradation rate. Strains inoculated in equal proportions showed the best remediation, and an optimal microbial consortium achieved a 7-day degradation rate of 49.22%. On the other hand, ultraviolet mutation increased one strain's degrading ability from 41.83% to 52.42% in 7 days. Gas chromatography-mass spectrometry analysis showed that the microbial consortium could degrade more of the organic fractions of viscous oil, while ultraviolet mutation was more effective at increasing a single strain's degrading ability.

  6. Chemometric formulation of bacterial consortium-AVS for improved decolorization of resonance-stabilized and heteropolyaromatic dyes.

    Science.gov (United States)

    Kumar, Madhava Anil; Kumar, Vaidyanathan Vinoth; Premkumar, Manickam Periyaraman; Baskaralingam, Palanichamy; Thiruvengadaravi, Kadathur Varathachary; Dhanasekaran, Anuradha; Sivanesan, Subramanian

    2012-11-01

A bacterial consortium-AVS, consisting of Pseudomonas desmolyticum NCIM 2112, Kocuria rosea MTCC 1532 and Micrococcus glutamicus NCIM 2168, was formulated chemometrically using a mixture design matrix based on the design-of-experiments methodology. The formulated consortium-AVS decolorized acid blue 15 and methylene blue at a higher average decolorization rate than the pure cultures. UV-vis spectrophotometric, Fourier transform infrared spectroscopic and high-performance liquid chromatographic analyses confirmed that the decolorization was due to biodegradation by oxido-reductive enzymes produced by the consortium-AVS. The toxicological assessment of plant growth parameters and the chlorophyll pigment concentrations of Phaseolus mungo and Triticum aestivum seedlings revealed the reduced toxicity of the biodegraded products. Copyright © 2012 Elsevier Ltd. All rights reserved.
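As a rough illustration of the mixture-design idea, the sketch below generates a {3, 2} simplex-lattice design for the three strains; the specific lattice and proportions are assumptions for illustration, not the authors' actual design matrix.

```python
# A minimal sketch (not the authors' exact design) of a {3, 2} simplex-lattice
# mixture design: candidate proportions of the three strains (P. desmolyticum,
# K. rosea, M. glutamicus) that sum to 1, as used in mixture-design DOE.
from itertools import product
from fractions import Fraction

def simplex_lattice(n_components=3, m=2):
    levels = [Fraction(i, m) for i in range(m + 1)]           # 0, 1/2, 1
    runs = []
    for combo in product(levels, repeat=n_components):
        if sum(combo) == 1:                                   # mixture constraint
            runs.append(tuple(float(x) for x in combo))
    return runs

for run in simplex_lattice():
    print(run)   # e.g. (1.0, 0.0, 0.0), (0.5, 0.5, 0.0), ...
```

Each run corresponds to one inoculation ratio to be tested experimentally; the measured decolorization rates would then be fitted to a mixture-response model to pick the optimal blend.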

  7. Robust Bioinformatics Recognition with VLSI Biochip Microsystem

    Science.gov (United States)

    Lue, Jaw-Chyng L.; Fang, Wai-Chi

    2006-01-01

A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, has been developed. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip have been designed, fabricated and characterized.
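The exact form of the sigmoid-logarithmic transfer function is not given in the abstract; the sketch below assumes one plausible form (a sigmoid applied to the logarithm of the activation) to show why such a function preserves contrast in low-fluorescence signals better than a plain sigmoid.

```python
# Illustrative sketch only: one plausible form of a "sigmoid-logarithmic"
# transfer function (a sigmoid applied to the logarithm of the activation),
# which compresses the wide dynamic range of fluorescence intensities.
# The exact function used by the chip's ANN is an assumption here.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_log(x, eps=1e-12):
    # the log compresses several decades of intensity before the sigmoid squashes it
    return sigmoid(np.log(x + eps))

# Hypothetical weak (low-fluorescence) and strong signals
intensities = np.array([1e-3, 1e-1, 1.0, 10.0])
print(sigmoid(intensities))      # conventional sigmoid saturates quickly
print(sigmoid_log(intensities))  # log-sigmoid keeps weak signals distinguishable
```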

  8. The Black Rock Forest Consortium: A narrative

    Science.gov (United States)

    Buzzetto-More, Nicole Antoinette

The Black Rock Forest is a 3,785-acre wilderness area whose richly forested landscape represents the splendor of the Hudson Valley Region of New York State. Although the property was originally intended to become the home of wealthy banker James Stillman, it was his son Ernest whose love of conservation led him to embrace the then new and revolutionary practice of sustainable forestry and establish Black Rock in 1928. Due to Ernest Stillman's foresight, the property was protected from development and bequeathed to Harvard University following his death for the establishment of an experimental forest. The modern environmental movement in America began when the Black Rock Forest was threatened with development by Consolidated Edison, and the people of the surrounding community banded together, battling tirelessly for over 17 years to stop the degradation of this historic forest. The outcome of this crusade was a landmark victory for the environment, leaving an illustrious and enduring legacy. The campaign resulted in the watershed National Environmental Policy Act legislation, the formation of several environmental advocacy groups, and the creation of the Council on Environmental Quality of the Executive Office of the President, and it set a precedent for communities to initiate and win cases against major corporations in order to safeguard natural resources. In the midst of the controversy it became apparent that alternative futures for the Forest needed to be explored. As a result of a committee report and one man's vision, the idea emerged to create a consortium that would purchase and steward the Forest. After a formation process that took nearly fifteen years, the Black Rock Forest Consortium came into being: a unique amalgamation of K-12 public and private schools, colleges and universities, and science and cultural centers that successfully collaborate to enhance scientific research, environmental conservation, and education. The Consortium works to bridge the gaps between learners

  9. 25 CFR 1000.54 - How will a Tribe/Consortium know whether or not it has been selected to receive an advance...

    Science.gov (United States)

    2010-04-01

Title 25 (Indians), Part 1000, Planning and Negotiation Grants, Advance Planning Grant Funding, § 1000.54 (2010-04-01 edition): the Director will notify the Tribe/Consortium by letter whether it has been selected to receive an advance planning grant.

  10. A programmable Escherichia coli consortium via tunable symbiosis.

    Directory of Open Access Journals (Sweden)

    Alissa Kerner

    Full Text Available Synthetic microbial consortia that can mimic natural systems have the potential to become a powerful biotechnology for various applications. One highly desirable feature of these consortia is that they can be precisely regulated. In this work we designed a programmable, symbiotic circuit that enables continuous tuning of the growth rate and composition of a synthetic consortium. We implemented our general design through the cross-feeding of tryptophan and tyrosine by two E. coli auxotrophs. By regulating the expression of genes related to the export or production of these amino acids, we were able to tune the metabolite exchanges and achieve a wide range of growth rates and strain ratios. In addition, by inverting the relationship of growth/ratio vs. inducer concentrations, we were able to "program" the co-culture for pre-specified attributes with the proper addition of inducing chemicals. This programmable proof-of-concept circuit or its variants can be applied to more complex systems where precise tuning of the consortium would facilitate the optimization of specific objectives, such as increasing the overall efficiency of microbial production of biofuels or pharmaceuticals.
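A toy ordinary-differential-equation model can convey the tuning principle; the model below is an assumption for illustration (Monod-type growth coupled to the partner's export rate), not the authors' circuit, but it shows how changing export rates shifts both the overall growth and the strain ratio.

```python
# Toy cross-feeding model (an illustrative assumption, not the published circuit):
# two auxotrophs exchange tryptophan (trp) and tyrosine (tyr); each strain grows
# on the amino acid exported by its partner.
import numpy as np
from scipy.integrate import odeint

def coculture(state, t, exp1, exp2):
    n1, n2, trp, tyr = state            # cell densities and free amino acids
    mu1 = 0.8 * trp / (0.1 + trp)       # strain 1 requires tryptophan
    mu2 = 0.8 * tyr / (0.1 + tyr)       # strain 2 requires tyrosine
    dn1 = mu1 * n1
    dn2 = mu2 * n2
    dtrp = exp2 * n2 - mu1 * n1         # trp exported by strain 2, consumed by strain 1
    dtyr = exp1 * n1 - mu2 * n2         # tyr exported by strain 1, consumed by strain 2
    return [dn1, dn2, dtrp, dtyr]

t = np.linspace(0, 24, 200)
# Tuning the export rates (inducer-controlled in the study) shifts growth rate and ratio.
sol = odeint(coculture, [0.01, 0.01, 0.0, 0.0], t, args=(0.05, 0.02))
print("final strain ratio n1/n2:", sol[-1, 0] / sol[-1, 1])
```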

  11. Microbial bioinformatics 2020.

    Science.gov (United States)

    Pallen, Mark J

    2016-09-01

    Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  12. Tissue damage in organic rainbow trout muscle investigated by proteomics and bioinformatics

    DEFF Research Database (Denmark)

    Wulff, Tune; Silva, T.; Nielsen, Michael Engelbrecht

    2013-01-01

and magnitude of the cellular response, in the context of a regenerative process. Using a bioinformatics approach, the main biological functions of these proteins were assigned, showing the regulation of proteins involved in processes such as apoptosis, iron homeostasis and regulation of muscular structure...

  13. Why Choose This One? Factors in Scientists' Selection of Bioinformatics Tools

    Science.gov (United States)

    Bartlett, Joan C.; Ishimura, Yusuke; Kloda, Lorie A.

    2011-01-01

    Purpose: The objective was to identify and understand the factors involved in scientists' selection of preferred bioinformatics tools, such as databases of gene or protein sequence information (e.g., GenBank) or programs that manipulate and analyse biological data (e.g., BLAST). Methods: Eight scientists maintained research diaries for a two-week…

  14. Tissue Banking, Bioinformatics, and Electronic Medical Records: The Front-End Requirements for Personalized Medicine

    Science.gov (United States)

    Suh, K. Stephen; Sarojini, Sreeja; Youssif, Maher; Nalley, Kip; Milinovikj, Natasha; Elloumi, Fathi; Russell, Steven; Pecora, Andrew; Schecter, Elyssa; Goy, Andre

    2013-01-01

Personalized medicine promises patient-tailored treatments that enhance patient care and decrease overall treatment costs by focusing on genetics and “-omics” data obtained from patient biospecimens and records to guide therapy choices that generate good clinical outcomes. The approach relies on diagnostic and prognostic use of novel biomarkers discovered through combinations of tissue banking, bioinformatics, and electronic medical records (EMRs). The analytical power of bioinformatic platforms combined with patient clinical data from EMRs can reveal potential biomarkers and clinical phenotypes that allow researchers to develop experimental strategies using selected patient biospecimens stored in tissue banks. For cancer, high-quality biospecimens collected at diagnosis, first relapse, and various treatment stages provide crucial resources for study designs. To enlarge biospecimen collections, patient education regarding the value of specimen donation is vital. One approach for increasing consent is to offer publicly available illustrations and game-like engagements demonstrating how wider sample availability facilitates development of novel therapies. The critical value of tissue bank samples, bioinformatics, and EMRs in the early stages of the biomarker discovery process for personalized medicine is often overlooked. The data obtained also require cross-disciplinary collaborations to translate experimental results into clinical practice and diagnostic and prognostic use in personalized medicine. PMID:23818899

  15. HIV Pathogenesis: Abstracts from the March 2017 Cleveland Immunopathogenesis Consortium Meeting

    Directory of Open Access Journals (Sweden)

    Michael M. Lederman

    2017-06-01

Full Text Available The Cleveland Immunopathogenesis Consortium (CLIC) was launched in March 2004 by a small group of investigators (Ron Bosch, Jason Brenchley, Steven Deeks, Danny Douek, Zvi Grossman, Robert Kalayjian, Clifford Harding, Michael Lederman, Leonid Margolis, Miguel Quinones, Benigno Rodriguez, Rafick Sekaly, Scott Sieg, and Guido Silvestri) who were increasingly persuaded that immune activation was an important driver of HIV pathogenesis. We met around a chalk board and scribbled our models of pathogenesis, designed some experiments, then went back home to do them. We met again soon to review our new and unpublished findings that refined and shaped these models. The data presentations were short, informal and heavy on discussion. The model worked well, the consortium was productive and the meetings catalyzed numerous collaborations and scores of high-impact papers. The CLIC (less formally, the Bad Boys of Cleveland [1]) has been meeting regularly since then. Consortium membership has expanded to include other investigators (some are listed in the presentations below). Whether the goal is to prevent the morbid complications of HIV infection, to understand the determinants of HIV persistence or the factors that protect from acquisition of infection, a more clear understanding of HIV immunopathogenesis is central. Here in this issue of Pathogens and Immunity is a brief summary of the most recent CLIC/BBC meeting held in Cleveland in March 2017.

  16. The Research Consortium, 1977-2010: Contributions, Milestones, and Trends

    Science.gov (United States)

    Cardinal, Bradley J.; Claman, Gayle

    2010-01-01

    Research and innovation are a cornerstone of any progressive organization. The Research Consortium (RC) has served as the principal organization fulfilling this function on behalf of the American Alliance for Health, Physical Education, Recreation and Dance (AAHPERD) throughout much of its history. The RC is an organization of approximately 5,000…

  17. BioXSD: the common data-exchange format for everyday bioinformatics web services

    Science.gov (United States)

Kalaš, Matúš; Puntervoll, Pål; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-01-01

Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source code in common programming languages, an updated list of compatible web services and tools, and a repository of feature requests from the community. Contact: matus.kalas@bccs.uib.no; developers@bioxsd.org; support@bioxsd.org PMID:20823319
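Because BioXSD is defined by an XML Schema, conformance of a data file can be checked with any schema-aware XML library. The sketch below uses Python's lxml; the local file names are placeholders used for illustration only.

```python
# A minimal sketch of validating a data file against the published BioXSD 1.0 schema
# using lxml; both file names below are assumed local copies, not fixed paths.
from lxml import etree

schema_doc = etree.parse("BioXSD-1.0.xsd")         # downloaded from http://www.bioxsd.org/BioXSD-1.0.xsd
schema = etree.XMLSchema(schema_doc)

data_doc = etree.parse("my_sequences.bioxsd.xml")  # hypothetical BioXSD-formatted payload
if schema.validate(data_doc):
    print("document conforms to BioXSD 1.0")
else:
    for error in schema.error_log:
        print(error.line, error.message)
```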

  18. 25 CFR 1000.367 - Will the Department evaluate a Tribe's/Consortium's performance of non-trust related programs?

    Science.gov (United States)

    2010-04-01

Title 25 (Indians), Part 1000, Evaluations, § 1000.367 (2010-04-01 edition), Office of the Assistant Secretary: Will the Department evaluate a Tribe's/Consortium's performance of non-trust related programs?

  19. Bioinformatic analysis of functional differences between the immunoproteasome and the constitutive proteasome

    DEFF Research Database (Denmark)

    Kesmir, Can; van Noort, V.; de Boer, R.J.

    2003-01-01

    not yet been quantified how different the specificity of two forms of the proteasome are. The main question, which still lacks direct evidence, is whether the immunoproteasome generates more MHC ligands. Here we use bioinformatics tools to quantify these differences and show that the immunoproteasome...

  20. In-depth analysis of the adipocyte proteome by mass spectrometry and bioinformatics

    DEFF Research Database (Denmark)

    Adachi, Jun; Kumar, Chanchal; Zhang, Yanling

    2007-01-01

    , mitochondria, membrane, and cytosol of 3T3-L1 adipocytes. We identified 3,287 proteins while essentially eliminating false positives, making this one of the largest high confidence proteomes reported to date. Comprehensive bioinformatics analysis revealed that the adipocyte proteome, despite its specialized...

  1. Bioinformatics analysis of Brucella vaccines and vaccine targets using VIOLIN.

    Science.gov (United States)

    He, Yongqun; Xiang, Zuoshuang

    2010-09-27

    Brucella spp. are Gram-negative, facultative intracellular bacteria that cause brucellosis, one of the commonest zoonotic diseases found worldwide in humans and a variety of animal species. While several animal vaccines are available, there is no effective and safe vaccine for prevention of brucellosis in humans. VIOLIN (http://www.violinet.org) is a web-based vaccine database and analysis system that curates, stores, and analyzes published data of commercialized vaccines, and vaccines in clinical trials or in research. VIOLIN contains information for 454 vaccines or vaccine candidates for 73 pathogens. VIOLIN also contains many bioinformatics tools for vaccine data analysis, data integration, and vaccine target prediction. To demonstrate the applicability of VIOLIN for vaccine research, VIOLIN was used for bioinformatics analysis of existing Brucella vaccines and prediction of new Brucella vaccine targets. VIOLIN contains many literature mining programs (e.g., Vaxmesh) that provide in-depth analysis of Brucella vaccine literature. As a result of manual literature curation, VIOLIN contains information for 38 Brucella vaccines or vaccine candidates, 14 protective Brucella antigens, and 68 host response studies to Brucella vaccines from 97 peer-reviewed articles. These Brucella vaccines are classified in the Vaccine Ontology (VO) system and used for different ontological applications. The web-based VIOLIN vaccine target prediction program Vaxign was used to predict new Brucella vaccine targets. Vaxign identified 14 outer membrane proteins that are conserved in six virulent strains from B. abortus, B. melitensis, and B. suis that are pathogenic in humans. Of the 14 membrane proteins, two proteins (Omp2b and Omp31-1) are not present in B. ovis, a Brucella species that is not pathogenic in humans. Brucella vaccine data stored in VIOLIN were compared and analyzed using the VIOLIN query system. Bioinformatics curation and ontological representation of Brucella vaccines
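The comparative-genomics filter described above (conserved in the human-pathogenic strains, absent from B. ovis) reduces to simple set operations. The sketch below is not the Vaxign algorithm itself, and the protein sets are hypothetical placeholders; it only illustrates the filtering logic.

```python
# Illustrative sketch of the filtering logic described in the abstract (not Vaxign itself):
# keep proteins conserved in the human-pathogenic strains and absent from B. ovis.
# The strain/protein contents below are placeholders, not curated data.
pathogenic_strains = {
    "B. abortus 2308":   {"Omp2b", "Omp31-1", "Omp25", "BtpA"},
    "B. melitensis 16M": {"Omp2b", "Omp31-1", "Omp25"},
    "B. suis 1330":      {"Omp2b", "Omp31-1", "Omp25"},
}
b_ovis = {"Omp25"}   # comparator species not pathogenic in humans

conserved = set.intersection(*pathogenic_strains.values())
candidates = conserved - b_ovis
print(sorted(candidates))   # proteins conserved in the pathogens but missing from B. ovis
```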

  2. G-DOC Plus - an integrative bioinformatics platform for precision medicine.

    Science.gov (United States)

    Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha

    2016-04-30

G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical big data, including gene expression arrays, NGS data and medical images, so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including the Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from the REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 Genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples, or at the population level, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation, for biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images, as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available

  3. Bioinformatics: Cheap and robust method to explore biomaterial from Indonesia biodiversity

    Science.gov (United States)

    Widodo

    2015-02-01

Indonesia has enormous biodiversity, which may contain many biomaterials for pharmaceutical application. The potential of these resources should be explored to discover new drugs for human health. However, bioactivity screening using conventional methods is very expensive and time-consuming. Therefore, we developed a methodology for screening the potential of natural resources based on bioinformatics. The method builds on the fact that organisms in the same taxon tend to have similar genes, metabolism and secondary metabolite products. We then employ bioinformatics to explore the potential of biomaterials from Indonesian biodiversity by comparing species with well-characterized taxa known to contain active compounds, using published papers and chemical databases. Next, we analyze the drug-likeness, bioactivity and target proteins of the active compounds based on their molecular structure. The target proteins are examined for their interactions with other proteins in the cell to determine the action mechanism of the active compounds at the cellular level, as well as to predict their side effects and toxicity. Using this method, we succeeded in screening anti-cancer, immunomodulatory and anti-inflammatory agents from Indonesian biodiversity. For example, we found an anti-cancer agent from a marine invertebrate by applying the method. The anti-cancer candidates were explored based on the compounds isolated from the marine invertebrate reported in published articles and databases; we then identified the protein targets, followed by molecular pathway analysis. The data suggested that the active compound of the invertebrate is able to kill cancer cells. Further, we collected and extracted the active compound from the invertebrate and examined its activity on a cancer cell line (MCF-7). The MTT assay showed that the methanol extract of the marine invertebrate was highly potent in killing MCF-7 cells. Therefore, we concluded that bioinformatics is a cheap and robust way to explore bioactive compounds from Indonesian biodiversity as a source of drugs and other
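One common, concrete way to implement the drug-likeness step mentioned above is Lipinski's rule of five applied to the compound structures; the sketch below uses RDKit with an arbitrary test molecule, and is only one of several possible drug-likeness filters, not necessarily the one used by the authors.

```python
# Hedged sketch of a drug-likeness filter (Lipinski's rule of five) using RDKit.
# The SMILES string is an arbitrary example (aspirin), not a compound from the study.
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def passes_lipinski(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Lipinski.NumHDonors(mol) <= 5
            and Lipinski.NumHAcceptors(mol) <= 10)

print(passes_lipinski("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin as a sanity check -> True
```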

  4. OralCard: a bioinformatic tool for the study of oral proteome.

    Science.gov (United States)

    Arrais, Joel P; Rosa, Nuno; Melo, José; Coelho, Edgar D; Amaral, Diana; Correia, Maria José; Barros, Marlene; Oliveira, José Luís

    2013-07-01

The molecular complexity of the human oral cavity can only be clarified through identification of the components that participate within it. However, current proteomic techniques produce high volumes of information that are dispersed over several online databases. Collecting all of these data and using an integrative approach capable of identifying unknown associations is still an unsolved problem. This is the main motivation for this work. We present the online bioinformatic tool OralCard, which comprises results from 55 manually curated articles reflecting the oral molecular ecosystem (OralPhysiOme). It comprises experimental information available from the oral proteome both of human (OralOme) and microbial origin (MicroOralOme), structured by protein, disease and organism. This tool is a key resource for researchers to understand the molecular foundations implicated in the biology and disease mechanisms of the oral cavity. The usefulness of this tool is illustrated with the analysis of the oral proteome associated with diabetes mellitus type 2. OralCard is available at http://bioinformatics.ua.pt/oralcard. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    Science.gov (United States)

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

Bioinformatics has been an emerging area of research for the last three decades. The ultimate aims of bioinformatics are to store and manage biological data and to develop and apply computational tools to enhance their understanding. The size of the data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed. The classification of protein sequences into existing superfamilies is helpful in predicting the structure and function of a large number of newly discovered proteins. Existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique has been proposed in order to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in terms of performance metrics: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727
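The general workflow (a large encoded feature vector, statistical feature selection, then classification into superfamilies) can be sketched as follows. The metric used here (ANOVA F-score via scikit-learn) and the synthetic data are assumptions for illustration, not the authors' proposed statistical metric or dataset.

```python
# Sketch of the feature-selection-then-classify workflow on synthetic stand-in data;
# SelectKBest with f_classif is used here in place of the paper's statistical metric.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X = np.random.rand(300, 2000)          # stand-in for encoded protein sequence features
y = np.random.randint(0, 4, size=300)  # stand-in superfamily labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
selector = SelectKBest(f_classif, k=100).fit(X_tr, y_tr)   # keep the 100 most informative features
clf = SVC(kernel="rbf").fit(selector.transform(X_tr), y_tr)
print(classification_report(y_te, clf.predict(selector.transform(X_te))))
```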

  6. Geodesy and the UNAVCO Consortium: Three Decades of Innovations

    Science.gov (United States)

    Rowan, L. R.; Miller, M. M.; Meertens, C. M.; Mattioli, G. S.

    2015-12-01

UNAVCO, a non-profit, university consortium that supports geoscience research using geodesy, began with the ingenious recognition that the nascent Global Positioning System (GPS) constellation could be used to investigate earth processes. The consortium purchased one of the first commercially available GPS receivers, Texas Instruments' TI-4100 NAVSTAR Navigator, in 1984 to measure plate deformation. This early work was highlighted in a technology magazine, GPS World, in 1990. Over a 30-year period, UNAVCO and the community have helped advance instrument design for mobility, flexibility, efficiency and interoperability, so research could proceed with higher precision and under ever more challenging conditions. Other innovations have been made in data collection, processing, analysis, management and archiving. These innovations in tools, methods and data have had broader impacts as they have found greater utility beyond research for timing, precise positioning, safety, communication, navigation, surveying, engineering and recreation. Innovations in research have expanded the utility of geodetic tools beyond the solid earth sciences through creative analysis of the data and the methods. For example, GPS sounding of the atmosphere is now used for atmospheric and space sciences. GPS reflectometry, another critical advance, supports soil science, snow science and ecological research. Some research advances have had broader impacts for society by driving innovations in hazards risk reduction, hazards response, resource management, land use planning, surveying, engineering and other uses. Furthermore, geodetic data are vital for the design of space missions, for testing and advancing communications, and for dealing with interference and GPS jamming. We will discuss three decades (and counting) of advances by the National Science Foundation's premier geodetic facility, consortium and some of the many geoscience principal investigators that have driven innovations in

  7. External RNA Controls Consortium Beta Version Update.

    Science.gov (United States)

    Lee, Hangnoh; Pine, P Scott; McDaniel, Jennifer; Salit, Marc; Oliver, Brian

    2016-01-01

    Spike-in RNAs are valuable controls for a variety of gene expression measurements. The External RNA Controls Consortium developed test sets that were used in a number of published reports. Here we provide an authoritative table that summarizes, updates, and corrects errors in the test version that ultimately resulted in the certified Standard Reference Material 2374. We have noted existence of anti-sense RNA controls in the material, corrected sub-pool memberships, and commented on control RNAs that displayed inconsistent behavior.

  8. MOWServ: a web client for integration of bioinformatic resources

    Science.gov (United States)

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo

    2010-01-01

The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make heterogeneous web services compatible so that they can be useful in research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already finished user tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794

  9. Glycan array data management at Consortium for Functional Glycomics.

    Science.gov (United States)

    Venkataraman, Maha; Sasisekharan, Ram; Raman, Rahul

    2015-01-01

Glycomics, or the study of structure-function relationships of complex glycans, has reshaped post-genomic biology. Glycans mediate fundamental biological functions via their specific interactions with a variety of proteins. Recognizing the importance of glycomics, large-scale research initiatives such as the Consortium for Functional Glycomics (CFG) were established to address these challenges. Over the past decade, the CFG has generated novel reagents and technologies for glycomics analyses, which in turn have led to the generation of diverse datasets. These datasets have contributed to understanding glycan diversity and structure-function relationships at the molecular (glycan-protein interactions), cellular (gene expression and glycan analysis), and whole-organism (mouse phenotyping) levels. Among these analyses and datasets, screening of glycan-protein interactions on glycan array platforms has gained much prominence and has contributed to cross-disciplinary realization of the importance of glycomics in areas such as immunology, infectious diseases, cancer biomarkers, etc. This manuscript outlines methodologies for capturing data from glycan array experiments and online tools to access and visualize glycan array data implemented at the CFG.

  10. Multiple Syntrophic Interactions in a Terephthalate-Degrading Methanogenic Consortium

    Energy Technology Data Exchange (ETDEWEB)

Lykidis, Athanasios; Chen, Chia-Lung; Tringe, Susannah G.; McHardy, Alice C.; Copeland, Alex; Kyrpides, Nikos C.; Hugenholtz, Philip; Liu, Wen-Tso

    2010-08-05

Terephthalate (TA) is one of the top 50 chemicals produced worldwide. Its production results in a TA-containing wastewater that is treated by anaerobic processes through a poorly understood methanogenic syntrophy. Using metagenomics, we characterized the methanogenic consortium inside a hyper-mesophilic (i.e., between mesophilic and thermophilic), TA-degrading bioreactor. We identified genes belonging to dominant Pelotomaculum species presumably involved in TA degradation through decarboxylation, dearomatization, and modified β-oxidation to H2/CO2 and acetate. These intermediates are converted to CH4/CO2 by three novel hyper-mesophilic methanogens. Additional secondary syntrophic interactions were predicted in Thermotogae, Syntrophus and candidate phyla OP5 and WWE1 populations. The OP5 population encodes genes capable of anaerobic autotrophic butyrate production, and the Thermotogae, Syntrophus and WWE1 populations have the genetic potential to oxidize butyrate to CO2/H2 and acetate. These observations suggest that the TA-degrading consortium involves additional syntrophic interactions beyond the standard H2-producing syntroph and methanogen partnership, which may serve to improve community stability.

  11. Simultaneous biodegradation of three mononitrophenol isomers by a tailor-made microbial consortium immobilized in sequential batch reactors.

    Science.gov (United States)

    Fu, H; Zhang, J-J; Xu, Y; Chao, H-J; Zhou, N-Y

    2017-03-01

The ortho-nitrophenol (ONP)-utilizing Alcaligenes sp. strain NyZ215, meta-nitrophenol (MNP)-utilizing Cupriavidus necator JMP134 and para-nitrophenol (PNP)-utilizing Pseudomonas sp. strain WBC-3 were assembled as a consortium to degrade the three nitrophenol isomers in sequential batch reactors. A pilot test conducted in flasks demonstrated that a mixture of the three mononitrophenols at 0.5 mol l-1 each could be mineralized by this microbial consortium within 84 h. Interestingly, neither ONP nor MNP was degraded until PNP was almost completely consumed by strain WBC-3. By immobilizing this consortium in polyurethane cubes, all three mononitrophenols were continuously degraded in lab-scale sequential reactors for six batch cycles over 18 days. The total concentrations of ONP, MNP and PNP degraded during this time course were 2.8, 1.5 and 2.3 mol l-1, respectively. Quantitative real-time PCR analysis showed that each member of the microbial consortium remained relatively stable during the entire degradation process. This study provides a novel approach to treat polluted water, particularly water containing a mixture of co-existing isomers. Nitroaromatic compounds are readily spread in the environment and pose great potential toxicity concerns. Here, we report the simultaneous degradation of three isomers of mononitrophenol in a single system by employing a consortium of three bacteria, both in flasks and in lab-scale sequential batch reactors. The results demonstrate that simultaneous biodegradation of the three mononitrophenol isomers can be achieved by a tailor-made microbial consortium immobilized in sequential batch reactors, providing a pilot study for a novel approach to the bioremediation of mixed pollutants, especially isomers present in wastewater. © 2016 The Society for Applied Microbiology.

  12. Single-Cell Transcriptomics Bioinformatics and Computational Challenges

    Directory of Open Access Journals (Sweden)

    Lana Garmire

    2016-09-01

Full Text Available The emerging single-cell RNA-Seq (scRNA-Seq) technology holds the promise to revolutionize our understanding of diseases and associated biological processes at an unprecedented resolution. It opens the door to revealing intercellular heterogeneity and has been employed in a variety of applications, ranging from characterizing cancer cell subpopulations to elucidating tumor resistance mechanisms. In parallel to improving experimental protocols to deal with technological issues, deriving new analytical methods to reveal the complexity of scRNA-Seq data is just as challenging. Here we review the current state-of-the-art bioinformatics tools and methods for scRNA-Seq analysis, as well as addressing some critical analytical challenges that the field faces.
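As one concrete example of the analysis steps such reviews cover (quality filtering, normalization, dimensionality reduction, clustering), the sketch below uses the open-source Scanpy toolkit; the input path is a placeholder and the parameter choices are illustrative defaults, not recommendations from the review.

```python
# A minimal Scanpy preprocessing/clustering sketch for scRNA-Seq data.
# The input directory is hypothetical; Leiden clustering additionally requires
# the leidenalg package to be installed.
import scanpy as sc

adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")  # placeholder 10x output directory
sc.pp.filter_cells(adata, min_genes=200)                 # drop empty/low-quality barcodes
sc.pp.filter_genes(adata, min_cells=3)                   # drop genes seen in too few cells
sc.pp.normalize_total(adata, target_sum=1e4)             # depth normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata)
sc.tl.leiden(adata)                                       # cluster cells into subpopulations
sc.tl.umap(adata)
sc.pl.umap(adata, color="leiden")
```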

  13. DoD Alcohol and Substance Abuse Consortium Award

    Science.gov (United States)

    2017-10-01

One study in the portfolio, "…(formerly ORG 34517) in Veterans with Co-morbid PTSD/AUD" (Principal Investigator: Dewleen G. Baker, MD), has as its primary objective to test the efficacy, safety, and tolerability of the novel GR antagonist PT150 (formerly ORG 34517) for AUD/PTSD dual-diagnosis treatment in veterans. Pharmacotherapies for Alcohol and Substance Abuse (PASA) Consortium; PIs: Rick Williams, PhD, and Thomas Kosten, MD; Organization: RTI International; status: Study Research Planning.

  14. Validating genetic risk associations for ovarian cancer through the international Ovarian Cancer Association Consortium

    DEFF Research Database (Denmark)

    Pearce, C L; Near, A M; Van Den Berg, D J

    2009-01-01

    The search for genetic variants associated with ovarian cancer risk has focused on pathways including sex steroid hormones, DNA repair, and cell cycle control. The Ovarian Cancer Association Consortium (OCAC) identified 10 single-nucleotide polymorphisms (SNPs) in genes in these pathways, which had...... been genotyped by Consortium members and a pooled analysis of these data was conducted. Three of the 10 SNPs showed evidence of an association with ovarian cancer at P... and risk of ovarian cancer suggests that this pathway may be involved in ovarian carcinogenesis. Additional follow-up is warranted....

  15. Simultaneous cell growth and ethanol production from cellulose by an engineered yeast consortium displaying a functional mini-cellulosome

    Directory of Open Access Journals (Sweden)

    Madan Bhawna

    2011-11-01

Full Text Available Abstract. Background: The recalcitrant nature of cellulosic materials and the high cost of enzymes required for efficient hydrolysis are the major impediments to their practical usage for ethanol production. Ideally, a recombinant microorganism possessing the capability to utilize cellulose for simultaneous growth and ethanol production is of great interest. We have recently reported the use of a yeast consortium for the functional presentation of a mini-cellulosome structure on the yeast surface by exploiting the specific interaction of different cohesin-dockerin pairs. In this study, we engineered a yeast consortium capable of displaying a functional mini-cellulosome for simultaneous growth and ethanol production on phosphoric acid swollen cellulose (PASC). Results: A yeast consortium composed of four different populations was engineered to display a functional mini-cellulosome containing an endoglucanase, an exoglucanase and a β-glucosidase. The resulting consortium was demonstrated to utilize PASC for growth and ethanol production. The final ethanol production of 1.25 g/L corresponded to 87% of the theoretical value and was 3-fold higher than that of a similar yeast consortium secreting only the three cellulases. Quantitative PCR was used to enumerate the dynamics of each individual yeast population for the two consortia. Results indicated that the slight difference in cell growth cannot explain the 3-fold increase in PASC hydrolysis and ethanol production. Instead, the substantial increase in ethanol production is consistent with the reported synergistic effect on cellulose hydrolysis using the displayed mini-cellulosome. Conclusions: This report represents a significant step towards the goal of cellulosic ethanol production. This engineered yeast consortium displaying a functional mini-cellulosome demonstrated not only the ability to grow on the released sugars from PASC but also a 3-fold higher ethanol production than a similar yeast

  16. Synergy between Medical Informatics and Bioinformatics: Facilitating Genomic Medicine for Future Health Care

    Czech Academy of Sciences Publication Activity Database

    Martin-Sanchez, F.; Iakovidis, I.; Norager, S.; Maojo, V.; de Groen, P.; Van der Lei, J.; Jones, T.; Abraham-Fuchs, K.; Apweiler, R.; Babic, A.; Baud, R.; Breton, V.; Cinquin, P.; Doupi, P.; Dugas, M.; Eils, R.; Engelbrecht, R.; Ghazal, P.; Jehenson, P.; Kulikowski, C.; Lampe, K.; De Moor, G.; Orphanoudakis, S.; Rossing, N.; Sarachan, B.; Sousa, A.; Spekowius, G.; Thireos, G.; Zahlmann, G.; Zvárová, Jana; Hermosilla, I.; Vicente, F. J.

    2004-01-01

Vol. 37 (2004), pp. 30-42. ISSN 1532-0464. Institutional research plan: CEZ:AV0Z1030915. Keywords: bioinformatics * medical informatics * genomics * genomic medicine * biomedical informatics. Subject RIV: BD - Theory of Information. Impact factor: 1.013, year: 2004

  17. Bioinformatics analysis identifies several intrinsically disordered human E3 ubiquitin-protein ligases

    DEFF Research Database (Denmark)

    Boomsma, Wouter Krogh; Nielsen, Sofie Vincents; Lindorff-Larsen, Kresten

    2016-01-01

    conduct a bioinformatics analysis to examine >600 human and S. cerevisiae E3 ligases to identify enzymes that are similar to San1 in terms of function and/or mechanism of substrate recognition. An initial sequence-based database search was found to detect candidates primarily based on the homology...

  18. Strategies for Using Peer-Assisted Learning Effectively in an Undergraduate Bioinformatics Course

    Science.gov (United States)

    Shapiro, Casey; Ayon, Carlos; Moberg-Parker, Jordan; Levis-Fitzgerald, Marc; Sanders, Erin R.

    2013-01-01

    This study used a mixed methods approach to evaluate hybrid peer-assisted learning approaches incorporated into a bioinformatics tutorial for a genome annotation research project. Quantitative and qualitative data were collected from undergraduates who enrolled in a research-based laboratory course during two different academic terms at UCLA.…

  19. The Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES)

    Science.gov (United States)

    Kuwayama, Y.; Mabee, B.; Wulf Tregar, S.

    2017-12-01

National and international organizations are placing greater emphasis on the societal and economic benefits that can be derived from applications of Earth observations, yet improvements are needed to connect to the decision processes that produce actions with direct societal benefits. There is a need to substantiate the benefits of Earth science applications in socially and economically meaningful terms in order to demonstrate return on investment and to prioritize investments across data products, modeling capabilities, and information systems. However, methods and techniques for quantifying the value proposition of Earth observations are currently not fully established. Furthermore, it has been challenging to communicate the value of these investments to audiences beyond the Earth science community. The Consortium for the Valuation of Applications Benefits Linked with Earth Science (VALUABLES), a cooperative agreement between Resources for the Future (RFF) and the National Aeronautics and Space Administration (NASA), has the goal of advancing methods for the valuation and communication of the applied benefits linked with Earth observations. The VALUABLES Consortium will focus on three pillars: (a) a research pillar that will apply existing and innovative methods to quantify the socioeconomic benefits of information from Earth observations; (b) a capacity building pillar to catalyze interdisciplinary linkages between Earth scientists and social scientists; and (c) a communications pillar that will convey the value of Earth observations to stakeholders in government, universities, the NGO community, and the interested public. In this presentation, we will describe ongoing and future activities of the VALUABLES Consortium, provide a brief overview of frameworks to quantify the socioeconomic value of Earth observations, and describe how Earth scientists and social scientists can get involved in the Consortium's activities.

  20. University Research Consortium annual review meeting program

    International Nuclear Information System (INIS)

    1996-07-01

    This brochure presents the program for the first annual review meeting of the University Research Consortium (URC) of the Idaho National Engineering Laboratory (INEL). INEL is a multiprogram laboratory with a distinctive role in applied engineering. It also conducts basic science research and development, and complex facility operations. The URC program consists of a portfolio of research projects funded by INEL and conducted at universities in the United States. In this program, summaries and participant lists for each project are presented as received from the principal investigators

  1. University Research Consortium annual review meeting program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-07-01

    This brochure presents the program for the first annual review meeting of the University Research Consortium (URC) of the Idaho National Engineering Laboratory (INEL). INEL is a multiprogram laboratory with a distinctive role in applied engineering. It also conducts basic science research and development, and complex facility operations. The URC program consists of a portfolio of research projects funded by INEL and conducted at universities in the United States. In this program, summaries and participant lists for each project are presented as received from the principal investigators.

  2. Combining multiple decisions: applications to bioinformatics

    International Nuclear Information System (INIS)

    Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S

    2008-01-01

    Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods
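For readers who want to try the ECOC framework directly, scikit-learn ships a generic implementation; the sketch below uses synthetic data rather than gene expression profiles, and plain logistic regression as the base binary classifier, so it illustrates the coding scheme rather than either of the reviewed methods.

```python
# Minimal sketch of error-correcting output coding (ECOC) for a multi-class problem
# using scikit-learn's OutputCodeClassifier on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.multiclass import OutputCodeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# code_size=2.0 means each class is encoded with 2 * n_classes binary problems,
# giving the redundancy that lets the code correct individual classifier errors.
ecoc = OutputCodeClassifier(LogisticRegression(max_iter=1000),
                            code_size=2.0, random_state=0)
print(cross_val_score(ecoc, X, y, cv=5).mean())
```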

  3. Bioinformatics Analysis Reveals Genes Involved in the Pathogenesis of Ameloblastoma and Keratocystic Odontogenic Tumor.

    Science.gov (United States)

    Santos, Eliane Macedo Sobrinho; Santos, Hércules Otacílio; Dos Santos Dias, Ivoneth; Santos, Sérgio Henrique; Batista de Paula, Alfredo Maurício; Feltenberger, John David; Sena Guimarães, André Luiz; Farias, Lucyana Conceição

    2016-01-01

The pathogenesis of odontogenic tumors is not well known, so it is important to identify genetic deregulations and molecular alterations. This study aimed to investigate, through bioinformatic analysis, the possible genes involved in the pathogenesis of ameloblastoma (AM) and keratocystic odontogenic tumor (KCOT). Genes involved in the pathogenesis of AM and KCOT were identified in GeneCards. The gene list was expanded, and the gene interaction network was mapped using the STRING software. The "weighted number of links" (WNL) was calculated to identify "leader genes" (those with the highest WNL). Genes were ranked by the K-means method and the Kruskal-Wallis test was applied; review data were used to corroborate the bioinformatics results. CDK1 was identified as the leader gene for AM. In the KCOT group, the results point to PCNA and TP53. Both tumors exhibit power-law behavior. Our topological analysis suggested leader genes possibly important in the pathogenesis of AM and KCOT, based on the clustering coefficients calculated for both odontogenic tumors (0.028 for AM, zero for KCOT). The results obtained in the scatter diagram suggest an important relationship of these genes with the molecular processes involved in AM and KCOT. Ontological analysis for AM and KCOT demonstrated different mechanisms. The bioinformatics analyses were corroborated through a literature review. These results may suggest the involvement of promising genes for a better understanding of the pathogenesis of AM and KCOT.
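The two network statistics the analysis relies on, a gene's number of interaction links (WNL) and the clustering coefficient, are easy to reproduce on any interaction graph. The sketch below uses NetworkX on a toy, hand-made graph; the genes and edges are placeholders, not the STRING-derived networks from the study, and unweighted degree is used as a stand-in for the weighted link count.

```python
# Illustrative sketch of the leader-gene and clustering-coefficient calculations
# on a toy interaction graph; edges are placeholders, not curated STRING data.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("CDK1", "PCNA"), ("CDK1", "TP53"), ("PCNA", "TP53"),
                  ("CDK1", "CCNB1"), ("TP53", "MDM2")])

wnl = dict(G.degree())                  # stand-in for "weighted number of links"
leader = max(wnl, key=wnl.get)          # leader gene = node with the highest link count
print("leader gene:", leader, wnl)
print("clustering coefficients:", nx.clustering(G))
print("average clustering:", nx.average_clustering(G))
```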

  4. Secretome Analysis of Lipid-Induced Insulin Resistance in Skeletal Muscle Cells by a Combined Experimental and Bioinformatics Workflow

    DEFF Research Database (Denmark)

    Deshmukh, Atul S; Cox, Juergen; Jensen, Lars Juhl

    2015-01-01

    , in principle, allows an unbiased and comprehensive analysis of cellular secretomes; however, the distinction of bona fide secreted proteins from proteins released upon lysis of a small fraction of dying cells remains challenging. Here we applied highly sensitive MS and streamlined bioinformatics to analyze......-resistant conditions. Our study demonstrates an efficient combined experimental and bioinformatics workflow to identify putative secreted proteins from insulin-resistant skeletal muscle cells, which could easily be adapted to other cellular models....

  5. The Science of Sustaining Health Behavior Change: The Health Maintenance Consortium

    Science.gov (United States)

    Ory, Marcia G.; Smith, Matthew Lee; Mier, Nelda; Wernicke, Meghan M.

    2013-01-01

    Objective: The Health Maintenance Consortium (HMC) is a multisite Grantee Consortium funded by the National Institutes of Health from 2004 to 2009. The goal of HMC is to enhance understanding of the long-term maintenance of behavior change, as well as effective strategies for achieving sustainable health promotion and disease prevention. Methods: This introductory research synthesis prepared by the Resource Center gives context to this theme issue by providing an overview of the HMC and the articles in this journal. Results: It explores the contributions to our conceptualization of behavior change processes and intervention strategies, the trajectory of effectiveness of behavioral and social interventions, and factors influencing the long-term maintenance of behavioral and social interventions. Conclusions: Future directions for furthering the science of maintaining behavior change and reducing the gaps between research and practice are recommended. PMID:20604691

  6. Biodegradation of BOD and ammonia-free using bacterial consortium in aerated fixed film bioreactor (AF2B)

    Science.gov (United States)

    Prayitno, Rulianah, Sri; Saroso, Hadi; Meilany, Diah

    2017-06-01

    BOD and free ammonia (NH3-N) are hospital wastewater pollutants that often exceed quality standards, because the biological processes in wastewater treatment plants (WWTPs) have not been effective in degrading BOD and NH3-N. A study of the factors that influence the biodegradation of BOD and NH3-N, including the choice of bacteria used to improve the biodegradation process, is therefore required. A bacterial consortium is a collection of several types of bacteria obtained from an isolation process and is known to be more effective than a single bacterium in degrading pollutants. The AF2B, in turn, is a type of reactor used in wastewater treatment systems; it contains a filter medium with a large surface area, so the biodegradation of pollutants by microorganisms can be improved. The objective of this research was to determine the effect of starter volume and air supply on the reduction of BOD and NH3-N in hospital wastewater using a bacterial consortium in the AF2B operated as a batch process. The research was conducted in three stages: construction of the growth curve of the bacterial consortium, acclimatization of the bacterial consortium, and treatment of hospital wastewater in the AF2B in batch mode. The variables were the starter volume (65%, 75%, and 85% by volume) and the air supply (2.5, 5, and 7.5 L/min). The materials were hospital wastewater, a bacterial consortium (Pseudomonas diminuta, Pseudomonas capica, Bacillus sp., and Nitrobacter sp.), a blower, and the AF2B, a plastic basin containing a wasp-nest-shaped filter medium used as a support for growing the bacterial consortium. To construct the growth curve, a solid preparation of the bacterial consortium was dissolved in sterilized water and grown in nutrient broth (NB); the culture was shaken and sampled over time to follow the growth of the bacterial consortium. In the acclimatization process, bacterial isolates were grown using hospital wastewater as a
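
    As a purely illustrative aside (not taken from the study itself), a growth curve of the kind described for the bacterial consortium is often summarized by fitting a logistic model to optical-density readings; the sketch below uses invented time points and OD600 values.

```python
# Illustrative sketch: fitting a logistic growth curve to optical-density
# readings of the kind collected when building a consortium growth curve.
# The time points, OD values and starting guesses are invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, k, n0, r):
    """Logistic growth: population at time t given capacity k, start n0, rate r."""
    return k / (1 + ((k - n0) / n0) * np.exp(-r * t))

hours = np.array([0, 2, 4, 6, 8, 10, 12, 24], dtype=float)
od600 = np.array([0.05, 0.08, 0.15, 0.35, 0.70, 1.05, 1.25, 1.40])

params, _ = curve_fit(logistic, hours, od600, p0=[1.5, 0.05, 0.5])
k, n0, r = params
print(f"carrying capacity ~{k:.2f} OD, growth rate ~{r:.2f} per hour")
```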

  7. A Critical Analysis of Assessment Quality in Genomics and Bioinformatics Education Research

    Science.gov (United States)

    Campbell, Chad E.; Nehm, Ross H.

    2013-01-01

    The growing importance of genomics and bioinformatics methods and paradigms in biology has been accompanied by an explosion of new curricula and pedagogies. An important question to ask about these educational innovations is whether they are having a meaningful impact on students' knowledge, attitudes, or skills. Although assessments are…

  8. Inner-City Energy and Environmental Education Consortium: Inventory of existing programs. Appendix 13.5

    Energy Technology Data Exchange (ETDEWEB)

    1992-08-21

    This is the "first effort" to prepare an inventory of existing educational programs, focused primarily on inner-city youth, in operation in Washington, DC, Baltimore, and Philadelphia. The purpose of the inventory is to identify existing programs that could be augmented, adapted, or otherwise strengthened to help fulfill the mission of the Department of Energy-sponsored Inner-City Energy and Environmental Education Consortium: to recruit and retain inner-city youth to pursue careers in energy-related scientific and technical areas and in environmental restoration and waste management. The Consortium does not want to "reinvent the wheel," and all of its members need to learn what others are doing. Each of the 30 participating academic institutions was invited to submit as many program descriptions as they wished. Because of the summer holidays, or because they did not believe that they were carrying out programs relevant to the mission of the Consortium, some institutions did not submit any program descriptions. In addition, several industries, governmental agencies, and not-for-profit institutions were invited to submit program descriptions.

  9. Creating a specialist protein resource network: a meeting report for the protein bioinformatics and community resources retreat

    DEFF Research Database (Denmark)

    Babbitt, Patricia C.; Bagos, Pantelis G.; Bairoch, Amos

    2015-01-01

    During 11–12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from...... protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication...

  10. Midwest Superconductivity Consortium: 1994 Progress report

    Energy Technology Data Exchange (ETDEWEB)

    1995-01-01

    The mission of the Midwest Superconductivity Consortium, MISCON, is to advance the science and understanding of high-Tc superconductivity. During the past year, 27 projects produced over 123 talks and 139 publications. Group activities and interactions involved two MISCON group meetings (held in August and January), the second MISCON Workshop (held in August), 13 external speakers, 79 collaborations (with universities, industry, Federal laboratories, and foreign research centers), and 48 exchanges of samples and/or measurements. Research achievements this past year focused on understanding the effects of processing phenomena on structure-property interrelationships and the fundamental nature of transport properties in high-temperature superconductors.

  11. History of the Tinnitus Research Consortium.

    Science.gov (United States)

    Snow, James B

    2016-04-01

    This article describes the creation and accomplishments of the Tinnitus Research Consortium (TRC), founded and supported through philanthropy and intended to enrich the field of tinnitus research. The TRC brought together a group of distinguished auditory researchers, most of whom had not previously worked on tinnitus, and over the fifteen years of its life it developed novel research approaches and recruited a number of new investigators into the field. The purpose of this special issue is to highlight some of the significant accomplishments of the investigators supported by the TRC. This article is part of a Special Issue entitled "Tinnitus". Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Midwest Superconductivity Consortium: 1994 Progress report

    International Nuclear Information System (INIS)

    1995-01-01

    The mission of the Midwest Superconductivity Consortium, MISCON, is to advance the science and understanding of high-Tc superconductivity. During the past year, 27 projects produced over 123 talks and 139 publications. Group activities and interactions involved two MISCON group meetings (held in August and January), the second MISCON Workshop (held in August), 13 external speakers, 79 collaborations (with universities, industry, Federal laboratories, and foreign research centers), and 48 exchanges of samples and/or measurements. Research achievements this past year focused on understanding the effects of processing phenomena on structure-property interrelationships and the fundamental nature of transport properties in high-temperature superconductors.

  13. Functionality and Evolutionary History of the Chaperonins in Thermophilic Archaea. A Bioinformatical Perspective

    Science.gov (United States)

    Karlin, Samuel

    2004-01-01

    We used bioinformatics methods to study phylogenetic relations and differentiation patterns of the archaeal chaperonin 60 kDa heat-shock protein (HSP60) genes in support of the study of differential expression patterns of the three chaperonin genes encoded in Sulfolobus shibatae.
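
    A minimal sketch of the kind of distance-based phylogenetic analysis referred to above is given below, assuming Biopython and a hypothetical pre-computed HSP60 protein alignment file; it does not reproduce the original study's actual methods or data.

```python
# Hedged sketch: build a neighbour-joining tree from an HSP60 protein
# alignment with Biopython. "hsp60_archaea.aln" (Clustal format) is a
# hypothetical input file standing in for a real alignment.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("hsp60_archaea.aln", "clustal")

calculator = DistanceCalculator("blosum62")     # protein distance model
distance_matrix = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
tree = constructor.nj(distance_matrix)          # neighbour-joining tree

Phylo.draw_ascii(tree)                          # quick text rendering
```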

  14. Bioremediation and reclamation of soil contaminated with petroleum oil hydrocarbons by exogenously seeded bacterial consortium: a pilot-scale study.

    Science.gov (United States)

    Mukherjee, Ashis K; Bordoloi, Naba K

    2011-03-01

    Spillage of petroleum hydrocarbons causes significant environmental pollution, and bioremediation is an effective process for removing petroleum oil contaminants from the ecosystem. The aim of the present study was to reclaim a petroleum oil-contaminated soil, unsuitable for the cultivation of crop plants, using a petroleum hydrocarbon-degrading microbial consortium. A bacterial consortium consisting of Bacillus subtilis DM-04 and Pseudomonas aeruginosa M and NM strains was seeded into 20% (v/w) petroleum oil-contaminated soil, and the bioremediation experiment was carried out for 180 days under laboratory conditions. The kinetics of hydrocarbon degradation was analyzed using biochemical and gas chromatographic (GC) techniques. The ecotoxicity of the elutriates obtained from the petroleum oil-contaminated soil before and after treatment with the microbial consortium was tested on the germination and growth of Bengal gram (Cicer arietinum) and green gram (Phaseolus mungo) seeds. The bacterial consortium showed a significant reduction in the total petroleum hydrocarbon level in contaminated soil (76% degradation) compared to the control soil (3.6% degradation) 180 days post-inoculation. The GC analysis confirmed that the bacterial consortium was more effective in degrading the alkane fraction than the aromatic fraction of the crude petroleum hydrocarbons in soil, while the fraction of nitrogen, sulfur, and oxygen compounds was degraded the least. The reclaimed soil supported the germination and growth of the crop plants (C. arietinum and P. mungo); in contrast, seeds could not germinate in the untreated petroleum oil-contaminated soil. The present study reinforces the application of a bacterial consortium, rather than an individual bacterium, for effective bioremediation and reclamation of soil contaminated with petroleum oil.
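
    For illustration only (the values and the first-order assumption below are not from the paper), degradation kinetics of total petroleum hydrocarbons are commonly summarized by a rate constant estimated from TPH measurements over time, as sketched here.

```python
# Illustrative sketch: estimating a first-order degradation rate constant
# from total petroleum hydrocarbon (TPH) measurements over time. The sampling
# days and TPH values are invented for demonstration.
import numpy as np

days = np.array([0, 30, 60, 90, 120, 180], dtype=float)
tph_g_per_kg = np.array([50.0, 38.0, 29.0, 22.0, 17.0, 12.0])  # hypothetical

# First-order model: ln(C/C0) = -k t, so the slope of ln(C) vs t gives -k.
k = -np.polyfit(days, np.log(tph_g_per_kg), 1)[0]
removal = 1 - tph_g_per_kg[-1] / tph_g_per_kg[0]

print(f"first-order rate constant ~{k:.4f} per day")
print(f"overall removal after {days[-1]:.0f} days: {removal:.0%}")
```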

  15. Cluster Flow: A user-friendly bioinformatics workflow tool [version 1; referees: 3 approved

    Directory of Open Access Journals (Sweden)

    Philip Ewels

    2016-12-01

    Pipeline tools are becoming increasingly important within the field of bioinformatics. Using a pipeline manager to manage and run workflows comprised of multiple tools reduces workload and makes analysis results more reproducible. Existing tools require significant work to install and get running, typically needing pipeline scripts to be written from scratch before running any analysis. We present Cluster Flow, a simple and flexible bioinformatics pipeline tool designed to be quick and easy to install. Cluster Flow comes with 40 modules for common NGS processing steps, ready to work out of the box. Pipelines are assembled using these modules with a simple syntax that can be easily modified as required. Core helper functions automate many common NGS procedures, making running pipelines simple. Cluster Flow is available under a GNU GPLv3 license on GitHub. Documentation, examples and an online demo are available at http://clusterflow.io.
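
    A hedged sketch of launching a Cluster Flow run from a Python wrapper is shown below; the pipeline name, genome key, and file name are assumptions for illustration only and should be checked against the Cluster Flow documentation at http://clusterflow.io.

```python
# Hedged illustration: calling the Cluster Flow command-line tool from Python.
# The specific pipeline, genome key and input file are hypothetical.
import subprocess

command = [
    "cf",                      # Cluster Flow command-line entry point
    "--genome", "GRCh38",      # hypothetical reference genome key
    "fastq_bowtie",            # hypothetical pipeline of bundled modules
    "sample_1.fastq.gz",       # hypothetical input reads
]
subprocess.run(command, check=True)
```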

  16. Agonist Binding to Chemosensory Receptors: A Systematic Bioinformatics Analysis

    Directory of Open Access Journals (Sweden)

    Fabrizio Fierro

    2017-09-01

    Human G-protein coupled receptors (hGPCRs) constitute a large and highly pharmaceutically relevant membrane receptor superfamily. About half of the hGPCR family members are chemosensory receptors, involved in bitter taste and olfaction along with a variety of other physiological processes; hence these receptors constitute promising targets for pharmaceutical intervention. Molecular modeling has so far been the most important tool for gaining insight into agonist binding and receptor activation. Here we investigate both aspects by bioinformatics-based predictions across all bitter taste and odorant receptors for which site-directed mutagenesis data are available. First, we observe that state-of-the-art homology modeling combined with previously used docking procedures reproduces only a limited fraction of the ligand/receptor interactions inferred from experiments. This is most probably caused by the low sequence identity with available structural templates, which limits the accuracy of the protein model and in particular of the side-chain orientations. Methods that transcend the limited sampling of conformational space in docking may improve the predictions. As an example corroborating this, we review multi-scale simulations from our lab and show that, for the three complexes studied so far, they significantly enhance the predictive power of the computational approach. Second, our bioinformatics analysis supports previous claims that several residues, including those at positions 1.50, 2.50, and 7.52, are involved in receptor activation.
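
    Since the abstract singles out low sequence identity with structural templates as the main limitation, a rough way to quantify that identity is sketched below with Biopython's pairwise aligner; the two sequence fragments are invented, and the simple match-counting scheme is only one of several common identity definitions.

```python
# Hedged sketch: approximate percent sequence identity between a receptor and
# a candidate structural template. Both fragments are invented placeholders.
from Bio import Align

receptor = "MLAVENWLLVSLTIFLLGLTGNSLVIWV"   # invented receptor fragment
template = "MLGAENLLLVALTVFILGLAGNGLVIWI"   # invented template fragment

# Score 1 per identical aligned residue, 0 for mismatches and gaps, so the
# optimal alignment score equals the number of matched positions.
aligner = Align.PairwiseAligner()
aligner.match_score = 1
aligner.mismatch_score = 0
aligner.open_gap_score = 0
aligner.extend_gap_score = 0

matches = aligner.score(receptor, template)
identity = matches / min(len(receptor), len(template))
print(f"approximate sequence identity: {identity:.0%}")
```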

  17. MAPI: towards the integrated exploitation of bioinformatics Web Services.

    Science.gov (United States)

    Ramirez, Sergio; Karlsson, Johan; Trelles, Oswaldo

    2011-10-27

    Bioinformatics is commonly featured as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and their heterogeneity complicate the integrated exploitation of this data-processing capacity. To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for the uniform representation of Web Service metadata descriptors, including their management and the invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed by the client have to be installed, and that the module functionality can be extended without the need to rewrite the software client. The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation, with advanced features such as workflow composition and asynchronous calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
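
    MAPI itself is a programmatic framework in its own right; purely as a generic illustration of the metadata-then-invocation pattern it unifies, the sketch below queries a hypothetical REST registry and submits a job. All URLs and payload fields are invented.

```python
# Generic illustration (not the MAPI API): fetch a service's metadata
# descriptor, then invoke the service over HTTP. Endpoints are hypothetical.
import requests

BASE = "https://example.org/bioinfo-services"        # hypothetical registry

# 1. Fetch the metadata descriptor for a service.
meta = requests.get(f"{BASE}/services/blast", timeout=30).json()
print("service:", meta.get("name"), "inputs:", meta.get("inputs"))

# 2. Invoke the service with a request shaped by that descriptor.
job = requests.post(
    f"{BASE}/services/blast/jobs",
    json={"sequence": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"},
    timeout=30,
)
job.raise_for_status()
print("job id:", job.json().get("id"))
```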

  18. PipeCraft: Flexible open-source toolkit for bioinformatics analysis of custom high-throughput amplicon sequencing data.

    Science.gov (United States)

    Anslan, Sten; Bahram, Mohammad; Hiiesalu, Indrek; Tedersoo, Leho

    2017-11-01

    High-throughput sequencing methods have become a routine analysis tool in the environmental sciences as well as in the public and private sectors. These methods provide vast amounts of data, which need to be analysed in several steps. Although the bioinformatics may be carried out with several public tools, many analytical pipelines allow too few options for optimal analysis of more complicated or customized designs. Here, we introduce PipeCraft, a flexible and handy bioinformatics pipeline with a user-friendly graphical interface that links several public tools for analysing amplicon sequencing data. Users are able to customize the pipeline by selecting the most suitable tools and options to process raw sequences from the Illumina, Pacific Biosciences, Ion Torrent and Roche 454 sequencing platforms. We described the design and options of PipeCraft and evaluated its performance by analysing data sets from three different sequencing platforms. We demonstrated that PipeCraft is able to process large data sets within 24 hr. The graphical user interface and the automated links between various bioinformatics tools enable easy customization of the workflow. All analytical steps and options are recorded in log files and are easily traceable. © 2017 John Wiley & Sons Ltd.
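
    PipeCraft is a graphical tool, so the sketch below is not its interface; it only illustrates, under assumed file names, one typical amplicon-processing step of the kind such pipelines chain together: discarding reads whose mean Phred quality is low.

```python
# Hedged sketch of a single amplicon-processing step (quality filtering),
# not PipeCraft itself. "reads.fastq" and the threshold are assumptions.
from statistics import mean
from Bio import SeqIO

kept = []
for record in SeqIO.parse("reads.fastq", "fastq"):
    # Keep reads whose mean per-base Phred quality meets the cutoff.
    if mean(record.letter_annotations["phred_quality"]) >= 25:
        kept.append(record)

SeqIO.write(kept, "reads.filtered.fastq", "fastq")
print(f"kept {len(kept)} reads")
```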

  19. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Science.gov (United States)

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible for a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user-friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net. Contact: jorvis@users.sourceforge.net PMID:20413634

  20. The Latin American Consortium of Studies in Obesity (LASO)

    Science.gov (United States)

    Bautista, L. E.; Casas, J. P.; Herrera, V. M.; Miranda, J. J.; Perel, P.; Pichardo, R.; González, A.; Sanchez, J. R.; Ferreccio, C.; Aguilera, X.; Silva, E.; Oróstegui, M.; Gómez, L. F.; Chirinos, J. A.; Medina-Lezama, J.; Pérez, C. M.; Suárez, E.; Ortiz, A. P.; Rosero, L.; Schapochnik, N.; Ortiz, Z.; Ferrante, D.

    2009-01-01

    Summary: Current, high-quality data are needed to evaluate the health impact of the epidemic of obesity in Latin America. The Latin American Consortium of Studies of Obesity (LASO) has been established with the objectives of (i) accurately estimating the prevalence of obesity and its distribution by sociodemographic characteristics; (ii) identifying ethnic, socioeconomic and behavioural determinants of obesity; (iii) estimating the association between various anthropometric indicators of obesity and major cardiovascular risk factors; and (iv) quantifying the validity of standard definitions of the various indexes of obesity in Latin American populations. To achieve these objectives, LASO makes use of individual data from existing studies. To date, the LASO consortium includes data from 11 studies from eight countries (Argentina, Chile, Colombia, Costa Rica, Dominican Republic, Peru, Puerto Rico and Venezuela), including a total of 32 462 subjects. This article describes the overall organization of LASO, the individual studies involved and the overall strategy for data analysis. LASO will foster the development of collaborative obesity research among Latin American investigators. More importantly, results from LASO will be instrumental in informing health policies aiming to curtail the epidemic of obesity in the region. PMID:19438980