Schneider, Maria V; Walter, Peter; Blatter, Marie-Claude; Watson, James; Brazas, Michelle D; Rother, Kristian; Budd, Aidan; Via, Allegra; van Gelder, Celia W G; Jacob, Joachim; Fernandes, Pedro; Nyrönen, Tommi H; De Las Rivas, Javier; Blicher, Thomas; Jimenez, Rafael C; Loveland, Jane; McDowall, Jennifer; Jones, Phil; Vaughan, Brendan W; Lopez, Rodrigo; Attwood, Teresa K; Brooksbank, Catherine
Funding bodies are increasingly recognizing the need to provide graduates and researchers with access to short intensive courses in a variety of disciplines, in order both to improve the general skills base and to provide solid foundations on which researchers may build their careers. In response to the development of 'high-throughput biology', the need for training in the field of bioinformatics, in particular, is seeing a resurgence: it has been defined as a key priority by many institutions and research programmes and is now an important component of many grant proposals. Nevertheless, when it comes to planning and preparing to meet such training needs, tension arises between the reward structures that predominate in the scientific community, which compel individuals to publish or perish, and the time that must be devoted to the design, delivery and maintenance of high-quality training materials. Conversely, there is much relevant teaching material and training expertise available worldwide that, were it properly organized, could be exploited by anyone who needs to provide training or to set up a new course. To do this, however, the materials would have to be centralized in a database and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs, and compare it with similar initiatives and collections.
Hiew, Hong Liang; Bellgard, Matthew
Life Science research faces the constant challenge of how to effectively handle an ever-growing body of bioinformatics software and online resources. The users and developers of bioinformatics resources have a diverse set of competing demands on how these resources need to be developed and organised. Unfortunately, there does not exist an adequate community-wide framework to integrate such competing demands. The problems that arise from this include unstructured standards development, the emergence of tools that do not meet specific needs of researchers, and often a communication gap between those who use the tools and those who supply them. This paper presents an overview of the different functions and needs of bioinformatics stakeholders to determine what may be required in a community-wide framework. A Bioinformatics Reference Model is proposed as a basis for such a framework. The reference model outlines the functional relationship between research usage and technical aspects of bioinformatics resources. It separates important functions into multiple structured layers, clarifies how they relate to each other, and highlights the gaps that need to be addressed for progress towards a diverse, manageable, and sustainable body of resources. The relevance of this reference model to the bioscience research community, and its implications in progress for organising our bioinformatics resources, are discussed.
Baldi, Pierre; Brunak, Søren
, and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...
Colucci, S; Donini, F M; Di Sciascio, E
Comparison of resources is a frequent task in different bioinformatics applications, including drug-target interaction, drug repositioning and mechanism-of-action understanding, among others. This paper proposes a general method for the logical comparison of resources modeled in the Resource Description Framework (RDF) and shows its distinguishing features with reference to the comparison of drugs. In particular, the method returns a description of the commonalities between resources, rather than a numerical value estimating their similarity and/or relatedness. The approach is domain-independent and may be flexibly adapted to heterogeneous use cases, according to a process for setting parameters which is completely explicit. The paper also presents an experiment using the BioPortal dataset as a knowledge source; the experiment is fully reproducible, thanks to the elicitation of criteria and values for parameter customization.
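The key idea above, returning a description of shared properties rather than a similarity score, can be conveyed with a minimal sketch. This is not the authors' logic-based method: it uses plain triples instead of an RDF library, and the drug names and properties below are invented.

```python
# Illustrative sketch of "commonality description" between two resources,
# using plain (subject, predicate, object) triples. All names are invented.

def commonalities(triples, res_a, res_b):
    """Return the (predicate, object) pairs shared by both resources."""
    props_a = {(p, o) for s, p, o in triples if s == res_a}
    props_b = {(p, o) for s, p, o in triples if s == res_b}
    return props_a & props_b

triples = [
    ("drugA", "targets", "proteinX"),
    ("drugA", "indication", "hypertension"),
    ("drugB", "targets", "proteinX"),
    ("drugB", "indication", "migraine"),
]

# Both drugs target proteinX, so that pair is reported as a commonality.
shared = commonalities(triples, "drugA", "drugB")
print(shared)  # → {('targets', 'proteinX')}
```

A real implementation would operate over RDF graphs and exploit class hierarchies and inference rather than exact matching.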
Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc
EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…
Velankar, S; McNeil, P; Mittard-Runte, V; Suarez, A; Barrell, D; Apweiler, R; Henrick, K
The Macromolecular Structure Database (MSD) group (http://www.ebi.ac.uk/msd/) continues to enhance the quality and consistency of macromolecular structure data in the worldwide Protein Data Bank (wwPDB) and to work towards the integration of various bioinformatics data resources. One of the major obstacles to the improved integration of structural databases such as MSD and sequence databases like UniProt is the absence of up to date and well-maintained mapping between corresponding entries. We have worked closely with the UniProt group at the EBI to clean up the taxonomy and sequence cross-reference information in the MSD and UniProt databases. This information is vital for the reliable integration of the sequence family databases such as Pfam and Interpro with the structure-oriented databases of SCOP and CATH. This information has been made available to the eFamily group (http://www.efamily.org.uk/) and now forms the basis of the regular interchange of information between the member databases (MSD, UniProt, Pfam, Interpro, SCOP and CATH). This exchange of annotation information has enriched the structural information in the MSD database with annotation from wider sequence-oriented resources. This work was carried out under the 'Structure Integration with Function, Taxonomy and Sequences (SIFTS)' initiative (http://www.ebi.ac.uk/msd-srv/docs/sifts) in the MSD group.
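The taxonomy clean-up described above amounts, at its simplest, to a consistency check over cross-references between the two databases. The sketch below illustrates that idea only; the accessions and NCBI taxonomy IDs are invented, and the real SIFTS process is far more involved.

```python
# Toy consistency check: does the taxonomy recorded for a structure entry
# agree with the taxonomy of the UniProt entry it cross-references?
# All identifiers and taxon IDs below are invented examples.

structure_taxa = {"1abc": 9606, "2xyz": 10090}   # PDB id -> taxon id
sequence_taxa = {"P12345": 9606, "Q67890": 10116}  # UniProt acc -> taxon id
cross_refs = {"1abc": "P12345", "2xyz": "Q67890"}  # PDB id -> UniProt acc

mismatches = [
    (pdb, acc)
    for pdb, acc in cross_refs.items()
    if structure_taxa[pdb] != sequence_taxa[acc]
]
print(mismatches)  # → [('2xyz', 'Q67890')]
```

Entries flagged this way would then be inspected and corrected manually, as the MSD and UniProt groups did at scale.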
Fu, Zhiyan; Lin, Jing
The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.
The Influenza Research Database (IRD) is a U.S. National Institute of Allergy and Infectious Diseases (NIAID)-sponsored Bioinformatics Resource Center dedicated to providing bioinformatics support for influenza virus research. IRD facilitates the research and development of vaccines, diagnostics, an...
Suh, K. Stephen; Sarojini, Sreeja; Youssif, Maher; Nalley, Kip; Milinovikj, Natasha; Elloumi, Fathi; Russell, Steven; Pecora, Andrew; Schecter, Elyssa; Goy, Andre
Personalized medicine promises patient-tailored treatments that enhance patient care and decrease overall treatment costs by focusing on genetics and “-omics” data obtained from patient biospecimens and records to guide therapy choices that generate good clinical outcomes. The approach relies on diagnostic and prognostic use of novel biomarkers discovered through combinations of tissue banking, bioinformatics, and electronic medical records (EMRs). The analytical power of bioinformatic platforms combined with patient clinical data from EMRs can reveal potential biomarkers and clinical phenotypes that allow researchers to develop experimental strategies using selected patient biospecimens stored in tissue banks. For cancer, high-quality biospecimens collected at diagnosis, first relapse, and various treatment stages provide crucial resources for study designs. To enlarge biospecimen collections, patient education regarding the value of specimen donation is vital. One approach for increasing consent is to offer publicly available illustrations and game-like engagements demonstrating how wider sample availability facilitates development of novel therapies. The critical value of tissue bank samples, bioinformatics, and EMR in the early stages of the biomarker discovery process for personalized medicine is often overlooked. The data obtained also require cross-disciplinary collaborations to translate experimental results into clinical practice and diagnostic and prognostic use in personalized medicine. PMID:23818899
The immensely popular fields of cancer research and bioinformatics overlap in many different areas, e.g. large data repositories that allow users to analyze data from many experiments (data handling, databases, pattern mining, microarray data analysis, and interpretation of proteomics data). There are many newly available resources in these areas that may be unfamiliar to most cancer researchers wanting to incorporate bioinformatics tools and analyses into their work, and also to bioinformaticians looking for real data to develop and test algorithms. This review reveals the interdependence of cancer research and bioinformatics, and highlights the most appropriate and useful resources available to cancer researchers. These include not only public databases, but also general and specific bioinformatics tools which can be useful to the cancer researcher. The primary foci are function- and structure-prediction tools for protein-coding genes. The result is a useful reference for cancer researchers and bioinformaticians studying cancer alike.
Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo
The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatics platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished users’ tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794
Dr. M. Panneerselvam
Electronic Human Resource Management is, in essence, the revolution of human resource functions for management and employees. These functions are typically accessed via intranet and web technology. This helps the organization improve its standards, enabling documents to be reviewed and forwarded. All those documents can be viewed within a fraction of a second with the help of client and server links. The phenomenon of E-HRM deserves closer attention and has fundamental roots in HR activity. The E-HRM develops and b...
Hu, Rongdong; Liu, Guangming; Jiang, Jingfei; Wang, Lixin
Cloud computing has started to change the way bioinformatics research is carried out. Researchers who have taken advantage of this technology can process larger amounts of data and speed up scientific discovery. The variability in data volume results in variable computing requirements. Therefore, bioinformatics researchers are pursuing more reliable and efficient methods for conducting sequencing analyses. This paper proposes an automated resource provisioning method, G2LC, for bioinformatics applications in IaaS. It enables applications to output results in real time. Its main purpose is to guarantee application performance while improving resource utilization. Real sequence-searching data from BLAST are used to evaluate the effectiveness of G2LC. Experimental results show that G2LC guarantees application performance while saving up to 20.14% of resources.
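As a rough illustration of workload-driven provisioning (not the actual G2LC algorithm, whose model is described in the paper), a simple rule might size the VM pool from the pending BLAST workload; the throughput figures below are invented.

```python
# Toy provisioning rule: pick the smallest number of virtual machines that
# clears a BLAST search queue within a deadline. NOT the G2LC method;
# the throughput and deadline figures are invented for illustration.

def vms_needed(pending_queries, queries_per_vm_per_hour, deadline_hours):
    """Smallest VM count that clears the queue within the deadline."""
    capacity_per_vm = queries_per_vm_per_hour * deadline_hours
    # Ceiling division: any remainder still needs one more machine.
    return -(-pending_queries // capacity_per_vm)

# 10,000 pending searches, 400 searches/hour per VM, 2-hour target:
print(vms_needed(10_000, 400, 2))  # → 13
```

Methods like G2LC go further by predicting workload over time and reclaiming idle machines, which is where the reported resource savings come from.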
The advent of the Internet and the World Wide Web (WWW) has substantially increased the availability of information and computational resources available to experimental biologists. This review will describe the current on-line resources available, including protein and nucleic acid sequence alignment. Key words: ...
Babbitt, Patricia C; Bagos, Pantelis G; Bairoch, Amos; Bateman, Alex; Chatonnet, Arnaud; Chen, Mark Jinan; Craik, David J; Finn, Robert D; Gloriam, David; Haft, Daniel H; Henrissat, Bernard; Holliday, Gemma L; Isberg, Vignir; Kaas, Quentin; Landsman, David; Lenfant, Nicolas; Manning, Gerard; Nagano, Nozomi; Srinivasan, Narayanaswamy; O'Donovan, Claire; Pruitt, Kim D; Sowdhamini, Ramanathan; Rawlings, Neil D; Saier, Milton H; Sharman, Joanna L; Spedding, Michael; Tsirigos, Konstantinos D; Vastermark, Ake; Vriend, Gerrit
During 11-12 August 2014, a Protein Bioinformatics and Community Resources Retreat was held at the Wellcome Trust Genome Campus in Hinxton, UK. This meeting brought together the principal investigators of several specialized protein resources (such as CAZy, TCDB and MEROPS) as well as those from protein databases from the large Bioinformatics centres (including UniProt and RefSeq). The retreat was divided into five sessions: (1) key challenges, (2) the databases represented, (3) best practices for maintenance and curation, (4) information flow to and from large data centers and (5) communication and funding. An important outcome of this meeting was the creation of a Specialist Protein Resource Network that we believe will improve coordination of the activities of its member resources. We invite further protein database resources to join the network and continue the dialogue.
Schmutzer, Thomas; Bolger, Marie E; Rudd, Stephen; Chen, Jinbo; Gundlach, Heidrun; Arend, Daniel; Oppermann, Markus; Weise, Stephan; Lange, Matthias; Spannagl, Manuel; Usadel, Björn; Mayer, Klaus F X; Scholz, Uwe
Plant genetic resources are a substantial opportunity for plant breeding, preservation and maintenance of biological diversity. As part of the German Network for Bioinformatics Infrastructure (de.NBI) the German Crop BioGreenformatics Network (GCBN) focuses mainly on crop plants and provides both data and software infrastructure which are tailored to the needs of the plant research community. Our mission and key objectives include: (1) provision of transparent access to germplasm seeds, (2) the delivery of improved workflows for plant gene annotation, and (3) implementation of bioinformatics services that link genotypes and phenotypes. This review introduces the GCBN's spectrum of web-services and integrated data resources that address common research problems in the plant genomics community.
Abrams, Kimberly R.
We have now reached a tipping point at which electronic resources comprise more than half of academic library budgets. Because of the growing workload associated with the ever-increasing number of e-resources, there is a trend to distribute that work throughout the library, even in the presence of an electronic resources department. In 2013, the author…
Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599
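Once registry information is retrieved, entries can be filtered programmatically, e.g. by scientific topic. The sketch below uses a simplified stand-in record layout, not the actual bio.tools schema, and the tool names are invented.

```python
# Sketch of filtering registry-style tool records by topic. The record
# layout here is a simplified stand-in, not the real bio.tools schema.
import json

records_json = """[
  {"name": "AlignerOne", "topics": ["Sequence analysis"]},
  {"name": "TreeToolX", "topics": ["Phylogenetics"]},
  {"name": "MapperTwo", "topics": ["Sequence analysis", "Genomics"]}
]"""

def tools_for_topic(records, topic):
    """Names of all tools annotated with the given topic."""
    return [r["name"] for r in records if topic in r.get("topics", [])]

records = json.loads(records_json)
print(tools_for_topic(records, "Sequence analysis"))  # → ['AlignerOne', 'MapperTwo']
```

Consistent, structured annotation of this kind is precisely what makes such programmatic discovery possible across thousands of resources.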
Evelina Y. Basenko
FungiDB (fungidb.org) is a free online resource for data mining and functional genomics analysis for fungal and oomycete species. FungiDB is part of the Eukaryotic Pathogen Genomics Database Resource (EuPathDB, eupathdb.org) platform that integrates genomic, transcriptomic, proteomic, and phenotypic datasets, and other types of data for pathogenic and nonpathogenic, free-living and parasitic organisms. FungiDB is one of the largest EuPathDB databases, containing nearly 100 genomes obtained from GenBank, the Aspergillus Genome Database (AspGD), the Broad Institute, the Joint Genome Institute (JGI), Ensembl, and other sources. FungiDB offers a user-friendly web interface with embedded bioinformatics tools that support custom in silico experiments that leverage FungiDB-integrated data. In addition, a Galaxy-based workspace enables users to generate custom pipelines for large-scale data analysis (e.g., RNA-Seq, variant calling, etc.). This review provides an introduction to the FungiDB resources and focuses on available features, tools, and queries and how they can be used to mine data across a diverse range of integrated FungiDB datasets and records.
Although the era of big data has produced many bioinformatics tools and databases, using them effectively often requires specialized knowledge. Many groups lack bioinformatics expertise, and frequently find that software documentation is inadequate and local colleagues may be overburdened or unfamil...
Zhang, J; Feuk, L; Duggan, G E; Khaja, R; Scherer, S W
The discovery of an abundance of copy number variants (CNVs; gains and losses of DNA sequences >1 kb) and other structural variants in the human genome is influencing the way research and diagnostic analyses are being designed and interpreted. As such, comprehensive databases with the most relevant information will be critical to fully understand the results and have impact in a diverse range of disciplines ranging from molecular biology to clinical genetics. Here, we describe the development of bioinformatics resources to facilitate these studies. The Database of Genomic Variants (http://projects.tcag.ca/variation/) is a comprehensive catalogue of structural variation in the human genome. The database currently contains 1,267 regions reported to contain copy number variation or inversions in apparently healthy human cases. We describe the current contents of the database and how it can serve as a resource for interpretation of array comparative genomic hybridization (array CGH) and other DNA copy imbalance data. We also present the structure of the database, which was built using a new data modeling methodology termed Cross-Referenced Tables (XRT). This is a generic and easy-to-use platform, which is strong in handling textual data and complex relationships. Web-based presentation tools have been built allowing publication of XRT data to the web immediately along with rapid sharing of files with other databases and genome browsers. We also describe a novel tool named eFISH (electronic fluorescence in situ hybridization) (http://projects.tcag.ca/efish/), a BLAST-based program that was developed to facilitate the choice of appropriate clones for FISH and CGH experiments, as well as interpretation of results in which genomic DNA probes are used in hybridization-based experiments.
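Checking whether an array CGH finding coincides with a catalogued variant region reduces, at its core, to interval overlap. A minimal sketch, with invented coordinates:

```python
# Minimal interval-overlap check of the kind needed to compare an array
# CGH finding against catalogued CNV regions. Coordinates are invented.

def overlaps(a_start, a_end, b_start, b_end):
    """True if two half-open genomic intervals [start, end) overlap."""
    return a_start < b_end and b_start < a_end

# Known CNV regions on one chromosome (invented coordinates).
known_cnvs = [(100_000, 150_000), (400_000, 420_000)]

query = (140_000, 160_000)  # region flagged in an array CGH experiment
hits = [r for r in known_cnvs if overlaps(*query, *r)]
print(hits)  # → [(100000, 150000)]
```

A hit against the catalogue suggests the imbalance may be a benign polymorphism seen in healthy individuals rather than a disease-causing change.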
Background: The proliferation of data repositories in bioinformatics has resulted in the development of numerous interfaces that allow scientists to browse, search and analyse the data that they contain. Interfaces typically support repository access by means of web pages, but other means are also used, such as desktop applications and command line tools. Interfaces often duplicate functionality amongst each other, and this implies that associated development activities are repeated in different laboratories. Interfaces developed by public laboratories are often created with limited developer resources. In such environments, reducing the time spent on creating user interfaces allows for a better deployment of resources for specialised tasks, such as data integration or analysis. Laboratories maintaining data resources are challenged to reconcile requirements for software that is reliable, functional and flexible with limitations on software development resources. Results: This paper proposes a model-driven approach for the partial generation of user interfaces for searching and browsing bioinformatics data repositories. Inspired by the Model Driven Architecture (MDA) of the Object Management Group (OMG), we have developed a system that generates interfaces designed for use with bioinformatics resources. This approach helps laboratory domain experts decrease the amount of time they have to spend dealing with the repetitive aspects of user interface development. As a result, the amount of time they can spend on gathering requirements and helping develop specialised features increases. The resulting system is known as Pierre, and has been validated through its application to use cases in the life sciences, including the PEDRoDB proteomics database and the e-Fungi data warehouse. Conclusion: MDAs focus on generating software from models that describe aspects of service capabilities, and can be applied to support rapid development of repository
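The flavour of model-driven interface generation can be conveyed with a toy example: a declarative field model rendered to an HTML search form. Pierre's real models and templates are far richer; the field names below are invented.

```python
# Toy model-driven generation: a declarative field model is turned into
# an HTML search form. Field names are invented; this only illustrates
# the general MDA-style idea of generating interfaces from models.

FIELDS = [
    {"name": "gene_name", "label": "Gene name", "type": "text"},
    {"name": "organism", "label": "Organism", "type": "text"},
]

def render_search_form(fields):
    """Render one labelled <input> per field inside a <form>."""
    rows = [
        f'<label>{f["label"]}: <input name="{f["name"]}" type="{f["type"]}"></label>'
        for f in fields
    ]
    return "<form>\n" + "\n".join(rows) + "\n</form>"

print(render_search_form(FIELDS))
```

Because the form is derived from the model, adding a searchable attribute to the repository's model updates the interface without hand-written UI code.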
Weir, Ryan O
Informative, useful, and current, Managing Electronic Resources: A LITA Guide shows how to successfully manage time, resources, and relationships with vendors and staff to ensure personal, professional, and institutional success.
A 2010 electronic resource management survey conducted by Maria Collins of North Carolina State University and Jill E. Grogg of University of Alabama Libraries found that the top six electronic resources management priorities included workflow management, communications management, license management, statistics management, administrative…
Ramli, Rindra M.
This presentation describes the electronic resources management project undertaken by the KAUST library. The objectives of this project is to migrate information from MS Sharepoint to Millennium ERM module. One of the advantages of this migration is to consolidate all electronic resources into a single and centralized location. This would allow for better information sharing among library staff.
Wang, Daxi; Korhonen, Pasi K; Gasser, Robin B; Young, Neil D
Clonorchis sinensis (family Opisthorchiidae) is an important foodborne parasite that has a major socioeconomic impact on ~35 million people predominantly in China, Vietnam, Korea and the Russian Far East. In humans, infection with C. sinensis causes clonorchiasis, a complex hepatobiliary disease that can induce cholangiocarcinoma (CCA), a malignant cancer of the bile ducts. Central to understanding the epidemiology of this disease is knowledge of genetic variation within and among populations of this parasite. Although most published molecular studies seem to suggest that C. sinensis represents a single species, evidence of karyotypic variation within C. sinensis and cryptic species within a related opisthorchiid fluke (Opisthorchis viverrini) emphasise the importance of studying and comparing the genes and genomes of geographically distinct isolates of C. sinensis. Recently, we sequenced, assembled and characterised a draft nuclear genome of a C. sinensis isolate from Korea and compared it with a published draft genome of a Chinese isolate of this species using a bioinformatic workflow established for comparing draft genome assemblies and their gene annotations. We identified that 50.6% and 51.3% of the Korean and Chinese C. sinensis genomic scaffolds were syntenic, respectively. Within aligned syntenic blocks, the genomes had a high level of nucleotide identity (99.1%) and encoded 15 variable proteins likely to be involved in diverse biological processes. Here, we review current technical challenges of using draft genome assemblies to undertake comparative genomic analyses to quantify genetic variation between isolates of the same species. Using a workflow that overcomes these challenges, we report on a high-quality draft genome for C. sinensis from Korea and comparative genomic analyses, as a basis for future investigations of the genetic structures of C. sinensis populations, and discuss the biotechnological implications of these explorations.
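The nucleotide-identity figure reported within aligned blocks corresponds to a calculation like the following sketch, shown here on invented toy sequences; real pipelines compute this over whole-genome alignments.

```python
# Sketch of percent nucleotide identity within an aligned block, as used
# when comparing two draft assemblies. Sequences below are invented.

def percent_identity(seq_a, seq_b):
    """Identity over aligned columns, ignoring gap-gap columns."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    cols = [(a, b) for a, b in zip(seq_a, seq_b) if not (a == "-" and b == "-")]
    matches = sum(1 for a, b in cols if a == b and a != "-")
    return 100.0 * matches / len(cols)

print(percent_identity("ACGTACGTAC", "ACGTACGTTT"))  # → 80.0
```

Conventions differ between tools (e.g. whether gapped columns count toward the denominator), so reported identity values are only comparable when the same definition is used.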
Ramli, Rindra M.
This recommendation report provides an overview of the selection process for the new Electronic Resources Management System. The library has decided to move away from the Innovative Interfaces Millennium ERM module. The library reviewed three systems as potential replacements, namely Proquest 360 Resource Manager, Ex Libris Alma and the open source CORAL ERMS. After comparing and trialling the systems, it was decided to go for Proquest 360 Resource Manager.
Di Génova, Alex; Aravena, Andrés; Zapata, Luis; González, Mauricio; Maass, Alejandro; Iturra, Patricia
SalmonDB is a new multiorganism database containing EST sequences from Salmo salar, Oncorhynchus mykiss and the whole genome sequence of Danio rerio, Gasterosteus aculeatus, Tetraodon nigroviridis, Oryzias latipes and Takifugu rubripes, built with core components from the GMOD project, the GOPArc system and the BioMart project. The information provided by this resource includes Gene Ontology terms, metabolic pathways, SNP prediction, CDS prediction, orthologs prediction, several precalculated BLAST searches and domains. It also provides a BLAST server for matching user-provided sequences to any of the databases and an advanced query tool (BioMart) that allows easy browsing of EST databases with user-defined criteria. These tools make the SalmonDB database a valuable resource for researchers searching for transcripts and genomic information regarding S. salar and other salmonid species. The database is expected to grow in the near future, particularly with the S. salar genome sequencing project. Database URL: http://genomicasalmones.dim.uchile.cl/ PMID:22120661
Karimi, Kamran; Vize, Peter D
As a model organism database, Xenbase has been providing informatics and genomic data on Xenopus (Silurana) tropicalis and Xenopus laevis frogs for more than a decade. The Xenbase database contains curated, as well as community-contributed and automatically harvested literature, gene and genomic data. A GBrowse genome browser, a BLAST+ server and stock center support are available on the site. When this resource was first built, all software services and components in Xenbase ran on a single physical server, with inherent reliability, scalability and inter-dependence issues. Recent advances in networking and virtualization techniques allowed us to move Xenbase to a virtual environment, and more specifically to a private cloud. To do so we decoupled the different software services and components, such that each would run on a different virtual machine. In the process, we also upgraded many of the components. The resulting system is faster and more reliable. System maintenance is easier, as individual virtual machines can now be updated, backed up and changed independently. We are also experiencing more effective resource allocation and utilization. Database URL: www.xenbase.org. © The Author(s) 2014. Published by Oxford University Press.
Kawano, Shin; Ono, Hiromasa; Takagi, Toshihisa; Bono, Hidemasa
In recent years, biological web resources such as databases and tools have become more complex because of the enormous amounts of data generated in the field of life sciences. Traditional methods of distributing tutorials include publishing textbooks and posting web documents, but these static contents cannot adequately describe recent dynamic web services. Due to improvements in computer technology, it is now possible to create dynamic content such as video with minimal effort and low cost on most modern computers. The ease of creating and distributing video tutorials instead of static content improves accessibility for researchers, annotators and curators. This article focuses on online video repositories for educational and tutorial videos provided by resource developers and users. It also describes a project in Japan named TogoTV (http://togotv.dbcls.jp/en/) and discusses the production and distribution of high-quality tutorial videos, which would be useful to viewers, with examples. This article intends to stimulate and encourage researchers who develop and use databases and tools to distribute how-to videos as a tool to enhance product usability.
Wattam, Alice R; Brettin, Thomas; Davis, James J; Gerdes, Svetlana; Kenyon, Ronald; Machi, Dustin; Mao, Chunhong; Olson, Robert; Overbeek, Ross; Pusch, Gordon D; Shukla, Maulik P; Stevens, Rick; Vonstein, Veronika; Warren, Andrew; Xia, Fangfang; Yoo, Hyunseung
In the "big data" era, research biologists are faced with analyzing new types of data that usually require some level of computational expertise. A number of programs and pipelines exist, but acquiring the expertise to run them, and then understanding the output, can be a challenge. The Pathosystems Resource Integration Center (PATRIC, www.patricbrc.org) has created an end-to-end analysis platform that allows researchers to take their raw reads, assemble a genome, annotate it, and then use a suite of user-friendly tools to compare it to any public data that is available in the repository. With close to 113,000 bacterial and more than 1000 archaeal genomes, PATRIC creates a unique research experience with "virtual integration" of private and public data. PATRIC contains many diverse tools and functionalities to explore both genome-scale and gene expression data, but the main focus of this chapter is on assembly, annotation, and the downstream comparative analysis functionality that is freely available in the resource.
The Internet consists of a vast inhomogeneous reservoir of data. Developing software that can integrate a wide variety of different data sources is a major challenge that must be addressed for the realisation of the full potential of the Internet as a scientific research tool. This article presents a semi-automated object-oriented programming system for integrating web-based resources. We demonstrate that the current Internet standards (HTML, CGI [common gateway interface], Java, etc.) can be exploited to develop a data retrieval system that scans existing web interfaces and then uses a set of rules to generate new Java code that can automatically retrieve data from the Web. The validity of the software has been demonstrated by testing it on several biological databases. We also examine the current limitations of the Internet and discuss the need for the development of universal standards for web-based data.
Good, Benjamin M; Tennis, Joseph T; Wilkinson, Mark D
Academic social tagging systems, such as Connotea and CiteULike, provide researchers with a means to organize personal collections of online references with keywords (tags) and to share these collections with others. One of the side-effects of the operation of these systems is the generation of large, publicly accessible metadata repositories describing the resources in the collections. In light of the well-known expansion of information in the life sciences and the need for metadata to enhance its value, these repositories present a potentially valuable new resource for application developers. Here we characterize the current contents of two scientifically relevant metadata repositories created through social tagging. This investigation helps to establish how such socially constructed metadata might be used as it stands currently and to suggest ways that new social tagging systems might be designed that would yield better aggregate products. We assessed the metadata that users of CiteULike and Connotea associated with citations in PubMed with the following metrics: coverage of the document space, density of metadata (tags) per document, rates of inter-annotator agreement, and rates of agreement with MeSH indexing. CiteULike and Connotea were very similar on all of the measurements. In comparison to PubMed, document coverage and per-document metadata density were much lower for the social tagging systems. Inter-annotator agreement within the social tagging systems and the agreement between the aggregated social tagging metadata and MeSH indexing was low though the latter could be increased through voting. The most promising uses of metadata from current academic social tagging repositories will be those that find ways to utilize the novel relationships between users, tags, and documents exposed through these systems. For more traditional kinds of indexing-based applications (such as keyword-based search) to benefit substantially from socially generated metadata in
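The per-document tag density and inter-annotator agreement measures used in this study can be illustrated with a toy computation. The tag sets below are invented, not drawn from the CiteULike or Connotea data, and Jaccard overlap stands in here as one simple agreement measure; the study's exact statistic is not specified in the abstract.

```python
# Toy illustration of inter-annotator agreement on social tags,
# using Jaccard overlap of two users' tag sets. All data is hypothetical.

def jaccard(a, b):
    """Jaccard similarity between two tag sets (1.0 = identical, 0.0 = disjoint)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical tags two users assigned to the same PubMed citation
user1_tags = ["genomics", "mirna", "cancer"]
user2_tags = ["mirna", "cancer", "review"]

agreement = jaccard(user1_tags, user2_tags)
print(agreement)  # 2 shared tags out of 4 distinct -> 0.5
```

Aggregating tags from many users before comparing against MeSH indexing (the "voting" mentioned above) raises agreement because idiosyncratic tags are outvoted by common ones.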
Chen, Liang; Heikkinen, Liisa; Wang, ChangLiang; Yang, Yang; Knott, K Emily
Hundreds of bioinformatics tools have been developed for microRNA (miRNA) investigations, including those used for identification, target prediction, structure and expression profile analysis. However, finding the correct tool for a specific application requires the tedious and laborious process of locating, downloading, testing and validating the appropriate tool from a group of nearly a thousand. In order to facilitate this process, we developed a novel database portal named miRToolsGallery. We constructed the portal by manually curating >950 miRNA analysis tools and resources. In the portal, a query to locate the appropriate tool is expedited by being searchable, filterable and rankable. The ranking feature is vital for quickly separating the more useful tools from the obscure ones. Tools are ranked via different criteria, including the PageRank algorithm, date of publication, number of citations, average of votes and number of publications. miRToolsGallery provides links and data for the comprehensive collection of currently available miRNA tools, with a ranking function that can be adjusted using different criteria according to specific requirements. Database URL: http://www.mirtoolsgallery.org
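A ranking that mixes several criteria, as miRToolsGallery does, can be sketched as a weighted sum of min-max-normalized scores. The tool names, numbers and weights below are invented; the portal's actual scoring details are not given in the abstract.

```python
# Sketch of multi-criteria tool ranking: each tool's score is a weighted
# mix of min-max-normalized criteria (e.g. citations, votes).
# All tool data and weights are hypothetical.

def rank_tools(tools, weights):
    """Return tool names sorted by a weighted sum of min-max-normalized criteria."""
    def normalize(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    criteria = list(weights)
    columns = {c: normalize([t[c] for t in tools]) for c in criteria}
    scores = {
        t["name"]: sum(weights[c] * columns[c][i] for c in criteria)
        for i, t in enumerate(tools)
    }
    return sorted(scores, key=scores.get, reverse=True)

tools = [
    {"name": "toolA", "citations": 900, "votes": 10},
    {"name": "toolB", "citations": 100, "votes": 50},
    {"name": "toolC", "citations": 500, "votes": 30},
]
print(rank_tools(tools, {"citations": 0.7, "votes": 0.3}))
# -> ['toolA', 'toolC', 'toolB']
```

Adjusting the weights is what lets a user re-rank the same collection for different needs, e.g. favoring recency over citation count.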
This paper discusses the role of policy for proper and efficient library services in the electronic era. It points out ... New approaches in acquisition, accessing, selection, preservation and choices on whether to operate digital, or combine traditional print and digital resources in the library have to be worked out and adopted.
electronic resources, electronic books, electronic learning, electronic journals, as well as electronic archive among others is intensely powerful and has permeated all segments and sectors of the society. Electronic information resources (EIRS) as reported by Meitz (2004), are "Library materials produced in electronic format.
This work addresses the computation of ratings for electronic resources (ER). A model is proposed for calculating an ER's rating from its incoming and outgoing links, based on PageRank, the algorithm widely used to rank web pages in the Google search engine. Incoming links contribute to a resource's rating, while the rating carried by outgoing links is distributed equally among the resources an ER points to. The calculation of ER ratings among kin...
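A minimal power-iteration sketch of the PageRank-style scheme this abstract describes, in which a resource's rating flows in through incoming links and is distributed equally over its outgoing links. The link graph below is hypothetical.

```python
# Minimal PageRank power iteration over a toy link graph of electronic
# resources: each node's rating is fed by incoming links and distributed
# equally among its outgoing links. The graph is hypothetical.

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if outs:  # distribute n's rating equally among its out-links
                share = damping * rank[n] / len(outs)
                for m in outs:
                    new[m] += share
            else:     # dangling node: spread its rating over all nodes
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # "c" receives the most incoming weight
```

The damping factor (0.85 is Google's published default) keeps the iteration from trapping all weight in link cycles.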
Blansit, B D; Connor, E
Changes in the practice of medicine and technological developments offer librarians unprecedented opportunities to select and organize electronic resources, use the Web to deliver content throughout the organization, and improve knowledge at the point of need. The confusing array of available products, access routes, and pricing plans makes it difficult to anticipate the needs of users, identify the top resources, budget effectively, make sound collection management decisions, and organize the resources effectively and seamlessly. The electronic resource marketplace requires much vigilance, considerable patience, and continuous evaluation. There are several strategies that librarians can employ to stay ahead of the electronic resource curve, including taking advantage of free trials from publishers; marketing free trials and involving users in evaluating new products; watching and testing products marketed to the clientele; agreeing to beta test new products and services; working with aggregators or republishers; joining vendor advisory boards; benchmarking institutional resources against five to eight competitors; and forming or joining a consortium for group negotiating and purchasing. This article provides a brief snapshot of leading biomedical resources; showcases several libraries that have excelled in identifying, acquiring, and organizing electronic resources; and discusses strategies and trends of potential interest to biomedical librarians, especially those working in hospital settings.
Anderson, Elsa K
To get to the bottom of a successful approach to Electronic Resource Management (ERM), Anderson interviewed staff at 11 institutions about their ERM implementations. Among her conclusions, presented in this issue of Library Technology Reports, is that grasping the intricacies of your workflow at the outset, analyzing each step to reveal the gaps and problems, is crucial to selecting and implementing an ERM. Whether the system will be used to fill a gap, aggregate critical data, or replace a tedious manual process, the best solution for your library depends on factors such as your current soft
Andrew B. Kinghorn
Aptamers are short nucleic acid sequences capable of specific, high-affinity molecular binding. They are isolated via SELEX (Systematic Evolution of Ligands by Exponential Enrichment), an evolutionary process that involves iterative rounds of selection and amplification before sequencing and aptamer characterization. As aptamers are genetic in nature, bioinformatic approaches have been used to improve both aptamers and their selection. This review will discuss the advancements made in several enclaves of aptamer bioinformatics, including simulation of aptamer selection, fragment-based aptamer design, patterning of libraries, identification of lead aptamers from high-throughput sequencing (HTS) data and in silico aptamer optimization.
... printer, and audio-visuals are equally available. Students have unlimited accessibility in the utilization of electronic resources, and students frequently utilized electronic information resources in Ramat Library. It is recommended, among others, that registered students should utilize and access electronic information resources ...
Results indicated that use of electronic resources had a positive impact on students' academic performance. Based on the findings of this study, it is recommended that more emphasis should be laid on the acquisition of electronic resources so as to give room for wider and multiple access to information resources in order to ...
The study examined awareness and constraints in the use of electronic resources by lecturers and students of Ajayi Crowther University, Oyo, Nigeria. It aimed at justifying the resources expended in the provision of electronic resources in terms of awareness, patronage and factors that may be affecting awareness and use ...
Davis, Trisha L.
Selection of electronic resources--CD-ROMs, dial access databases, electronic journals, and World Wide Web products--requires a more extensive set of criteria than do print resources. Discusses two factors influencing collection development of electronic products: technology options and licensing issues, and outlines how traditional selection…
In the last ten years, electronic resources have played an increasingly important role in library acquisitions: there has been a clear shift in libraries away from purely print holdings towards ever larger e-only collections. The steadily growing volume and heterogeneity of e-resources confront libraries with the challenge of managing them efficiently. Not only libraries, but also the institutions that negotiate consortial and alliance licences, need a suitable instrument for managing licence information that meets the complex requirements of modern e-resources. The Deutsche Forschungsgemeinschaft (DFG) is supporting a project of the Hochschulbibliothekszentrum des Landes Nordrhein-Westfalen (hbz), the Universitätsbibliothek Freiburg, the Verbundzentrale des Gemeinsamen Bibliotheksverbundes (GBV) and the Universitätsbibliothek Frankfurt to build a nationally available Electronic Resource Management System (ERMS). Based on a central knowledge base, such an ERMS is intended to enable uniform use of licence-management data for electronic resources at the local, regional and national level. Statistical analyses, rights management for all participating libraries, cooperative data maintenance and data exchange via standardized interfaces are as much a focus of the requirements work as the development of a data and functional model.
Electronic resources access and usage among the postgraduates of a Nigerian University of Technology. ... by postgraduates in using e-resources include: it takes too much time to find resources, e-resources are not always accessible, lack of supporting structures (connection, downloading, printing limits) and too many resources.
The study investigated the utilization of Electronic Information resources by the academic staff of Makerere University in Uganda. It examined the academic staff awareness of the resources available, the types of resources provided by the Makerere University Library, the factors affecting resource utilization. The study was ...
Brazas, Michelle D.; Ouellette, B. F. Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...
Misra, Namrata; Panda, Prasanna Kumar; Parida, Bikram Kumar
Microalgal biofuels offer great promise in contributing to the growing global demand for alternative sources of renewable energy. However, to make algae-based fuels cost competitive with petroleum, lipid production capabilities of microalgae need to improve substantially. Recent progress in algal genomics, in conjunction with other "omic" approaches, has accelerated the ability to identify metabolic pathways and genes that are potential targets in the development of genetically engineered microalgal strains with optimum lipid content. In this review, we summarize the current bioeconomic status of global biofuel feedstocks with particular reference to the role of "omics" in optimizing sustainable biofuel production. We also provide an overview of the various databases and bioinformatics resources available to gain a more complete understanding of lipid metabolism across algal species, along with the recent contributions of "omic" approaches in the metabolic pathway studies for microalgal biofuel production.
This paper examines the impact of the use of electronic information resources on research output in the universities in Tanzania. Research for this paper was conducted in five public universities in Tanzania with varied levels of access to electronic information resources. The selection of the sample universities was ...
Chang, Chen-Chi; Jong, Ay; Huang, Fu-Chang
Students acquire skills in problem solving and critical thinking through the process as well as team work on problem-based learning courses. Many courses have started to involve the online learning environment and integrate these courses with electronic resources. Teachers use electronic resources in their classes. To overcome the problem of the…
Pomerantz, Sarah B.
With the ongoing shift to electronic formats for library resources, acquisitions librarians, like the rest of the profession, must adapt to the rapidly changing landscape of electronic resources by keeping up with trends and mastering new skills related to digital publishing, technology, and licensing. The author sought to know what roles…
The purpose of this study is to know the extent of use of electronic resources and identify the type of electronic resources used by undergraduates in universities in Nigeria. Questionnaire was used for data collection. The study population includes all undergraduate students in the faculty of engineering in Niger Delta ...
Beck, Jimmy B; Tieder, Joel S
There is little research on pediatric hospitalists' use of evidence-based resources. The aim of this study was to determine the electronic resources that pediatric hospitalists prefer. Using a web-based survey, the authors determined hospitalists' preferred electronic resources, as well as their attitudes toward lifelong learning, practice, and experience characteristics. One hundred sixteen hospitalists completed the survey. The most preferred resource for general information, patient handouts, and treatment was UpToDate. Online search engines were ranked second for general information and patient handouts. Pediatric hospitalists tend to utilize less rigorous electronic resources such as UpToDate and Google. These results can set a platform for discussing the quality of resources that pediatric hospitalists use.
Good, Benjamin M; Su, Andrew I
Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.
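For the large-volume microtask class described above, the standard aggregation step, combining redundant worker answers by majority vote, can be sketched as follows. The task and worker labels are invented for illustration.

```python
# A common pattern in microtask crowdsourcing: collect redundant labels
# from several workers and aggregate them by majority vote.
# The worker answers below are hypothetical.

from collections import Counter

def majority_vote(labels):
    """Return the most common label and the fraction of workers who chose it."""
    counts = Counter(labels)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(labels)

# Three hypothetical workers annotate whether a sentence mentions a gene
answers = ["gene", "gene", "not-gene"]
label, confidence = majority_vote(answers)
print(label, round(confidence, 2))  # gene 0.67
```

The agreement fraction doubles as a cheap confidence score: low-agreement items can be routed to more workers or to an expert, which is how microtask pipelines trade redundancy for quality.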
Frandsen, Tove Faber; Tibyampansha, Dativa; Ibrahim, Glory
of implementing training programmes to encourage the use of the e-library. Findings: Training sessions increase the usage of library e-resources significantly; however, the effect seems to be short-lived and training sessions alone may not increase the overall long-term usage. Originality/value: The present paper...
Herrera, Gail; Aldana, Lynda
Describes a project at the University of Mississippi Libraries to catalog purchased electronic resources so that access to these resources is available only via the Web-based library catalog. Discusses collaboration between cataloging and systems personnel; and describes the MARC catalog record field that contains the information needed to locate…
This study assesses the use of information resources, specifically electronic databases, by lecturers/teachers in Universities and Colleges of Education in South Western Nigeria. Information resources are central to teachers' education. They provide lecturers/teachers with access to information that enhances research and ...
The major holdings of the broadcast libraries of the Nigerian Television Authority (NTA) are electronic information resources; therefore, providing safe places for the general management of these resources has aroused interest in the industry in Nigeria for some time. The need to study the preservation and conservation of ...
Huser, Vojtech; Del Fiol, Guilherme; Rocha, Roberto A.
Provision of access to reference electronic resources to clinicians is becoming increasingly important. We have created a framework for librarians to manage access to these resources at an enterprise level, rather than at the individual hospital libraries. We describe initial project requirements, implementation details, and some preliminary results.
The Euler Project. Karlsruhe
The European Libraries and Electronic Resources (EULER) Project in Mathematical Sciences provides the EulerService site for searching out mathematical resources such as books, pre-prints, web pages, abstracts, proceedings, serials, technical reports and (via NetLab) Internet resources. This outstanding engine is capable of simple, full, and refined searches. It also offers a browse option, which responds to entries in the author, keyword, and title fields. Further information about the Project is provided at the EULER homepage.
Lee, Stuart D
This practical book guides information professionals step-by-step through building and managing an electronic resource collection. It outlines the range of electronic products currently available in abstracting and indexing, bibliographic, and other services and then describes how to effectively select, evaluate and purchase them.
d'Acierno, Antonio; Facchiano, Angelo; Marabotti, Anna
We describe the GALT-Prot database and its related web-based application, which have been developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes structures of over 100 single-point mutants simulated by means of a computational procedure, and the analysis of each mutant was made with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee a high integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.
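Missense mutations like those GALT-Prot collects are conventionally written as protein-level codes such as Q188R (wild-type residue, sequence position, mutant residue). A small parser for that convention, as a sketch; GALT-Prot's actual record format is not specified in the abstract.

```python
# Parse the conventional one-letter missense notation, e.g. "Q188R"
# (wild-type residue Q, position 188, mutant residue R).
# Record handling here is illustrative, not GALT-Prot's actual schema.

import re

MISSENSE = re.compile(r"^([A-Z])(\d+)([A-Z])$")

def parse_missense(code):
    """Split a code like 'Q188R' into (wild_type, position, mutant)."""
    m = MISSENSE.match(code)
    if not m:
        raise ValueError(f"not a missense code: {code}")
    wt, pos, mut = m.groups()
    return wt, int(pos), mut

print(parse_missense("Q188R"))  # ('Q', 188, 'R')
```

Normalizing mutation codes this way is what makes it possible to key simulated mutant structures and analysis results to the same entry across programs.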
Ирина Карловна Войтович
The article examines the experience of the Udmurt State University in conducting competitions of educational publications and electronic resources. The purpose of such competitions is to provide methodological support to the educational process. The main focus is on the competition of electronic educational resources. The technology of such contests is discussed through a detailed analysis of the main stages of the contest. It is noted that the main task of the preparatory stage of the competition is related to the development of regulations on the competition and the definition of criteria for selection of the submitted works. The paper also proposes a system of evaluation criteria for electronic educational resources developed by members of the contest organizing committee and jury members. The article emphasizes the importance not only of the preparatory stages of the competition, but also of measures for its completion, aimed at training teachers to create quality e-learning resources.
Nelson, Rex T; Avraham, Shulamit; Shoemaker, Randy C; May, Gregory D; Ware, Doreen; Gessler, Damian Dg
Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). The need for semantic integration technologies has preceded available solutions. We report the feasibility of mapping high
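The RDF/XML documents that semantic web services like these exchange can be approximated with the standard library. The vocabulary URIs and service description below are hypothetical placeholders, not the actual SSWAP ontology.

```python
# Sketch of emitting an RDF/XML description for a web service, in the
# spirit of the SSWAP approach. The "ex" vocabulary and service details
# are hypothetical, not SSWAP's real terms.

import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
EX = "http://example.org/terms#"  # placeholder vocabulary

ET.register_namespace("rdf", RDF)
ET.register_namespace("ex", EX)

root = ET.Element(f"{{{RDF}}}RDF")
svc = ET.SubElement(root, f"{{{EX}}}Service",
                    {f"{{{RDF}}}about": "http://example.org/services/qtl-lookup"})
ET.SubElement(svc, f"{{{EX}}}input").text = "QTL identifier"
ET.SubElement(svc, f"{{{EX}}}output").text = "genomic map position"

print(ET.tostring(root, encoding="unicode"))
```

Describing a service's inputs and outputs in a machine-readable document like this is what lets a discovery server match a user's query ("what takes a QTL identifier?") to registered services without human mediation.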
Karikari, Thomas K
Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.
The widespread introduction of electronic educational resources into the educational process requires the development of a scientific basis for all aspects of their creation and use. These modern tools are designed not just to convey the required course material to learners, but also to create conditions for its most effective study, which is possible only when the presentation of educational material on screen is well reasoned. The article considers the problem of presenting educational material in electronic educational resources. Visuals are a powerful didactic tool that enhances the perception and understanding of educational information, and particular attention is paid to the use of an especially powerful medium, video. The article investigates the role and importance of video in the learning process, its educational opportunities and benefits; shows the types of video and their use in electronic educational resources; grounds the requirements for training videos; and gives recommendations on using video in combination with other media in electronic educational resources. A real electronic multimedia educational resource is adduced as an example to show the possibilities of using video.
Weber, Tilmann; Kim, Hyun Uk
… In this context, this review gives a summary of tools and databases that are currently available to mine, identify and characterize natural product biosynthesis pathways and their producers based on 'omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP, at http://www.secondarymetabolites.org) is introduced to provide a one-stop catalog and links to these bioinformatics resources. In addition, an outlook is presented on how the existing tools and those to be developed will influence synthetic biology approaches in the natural products field.
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications.
A complete overview of library activity implies a complete and reliable measurement of the use of both electronic resources and printed materials. This measurement is based on three sets of definitions: document types, use types and user types. There is a common model of definitions for printed materials, but many questions and technical issues remain for electronic resources. In 2006 a French national working group studied these questions. It relied on the COUNTER standard, but found it insufficient and pointed out the need for local tools such as web markers and deep analysis of proxy logs. Within the French national consortium COUPERIN, a new working group is testing ERMS, the SUSHI standard, and Shibboleth authentication, along with COUNTER standards, to improve the counting of electronic resource use. At this stage this counting is insufficient, and its improvement will be a European challenge for the future.
With the advent of the internet, the importance of electronic resources is growing. Because electronic resources are increasingly expensive, university libraries, which normally receive annual budgets from their parent institutions, need effective and systematic methods for deciding which electronic resources to purchase or re-subscribe to. However, there are difficulties in practice: first, libraries are unable to obtain user records; second, COUNTER statistics do not include details about users and their affiliations. As a result, one cannot conduct advanced analysis based on the usage of individual users, institutions, and departments. To overcome these difficulties, this study presents a feasible model for analyzing electronic resource usage effectively and flexibly. We set up a proxy server to collect actual usage raw data. By analyzing items in internet browsing records, associated with the original library automation system, this study explores effective ways to analyze large volumes of website log data. We also propose the process by which the original data are transformed, cleaned, integrated, and presented. This study adopted a medical university library and its subscriptions to medical electronic resources as a case. Our data analysis includes (1) year of subscription, (2) journal title, (3) affiliation, (4) subject, and (5) specific journal requirements. The findings contribute to further understanding in policy making and user behavior analysis. The integrated data provide multiple applications in informatics research, information behavior, and bibliomining, presenting diverse views and extended issues for further discussion.
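The core step this abstract describes, joining proxy-log records against a user directory so usage can be aggregated by department and resource, can be sketched as follows. The "user_id resource_id" line format and the department table are illustrative assumptions, not the cited system's actual schema.

```python
# Sketch: aggregate e-resource usage from proxy-server logs, joined with
# a (hypothetical) user directory so counts can be broken down by department.
from collections import Counter

USERS = {"u1": "Medicine", "u2": "Nursing", "u3": "Medicine"}  # user -> dept

def usage_by_department(log_lines):
    """Count accesses per (department, resource) pair."""
    counts = Counter()
    for line in log_lines:
        user, resource = line.split()
        dept = USERS.get(user, "Unknown")  # users missing from the directory
        counts[(dept, resource)] += 1
    return counts

logs = ["u1 jama", "u3 jama", "u2 lancet", "u1 jama"]
print(usage_by_department(logs))
```

Real proxy logs would of course need parsing of timestamps, URLs, and session identifiers, plus the cleaning and integration steps the study proposes; the Counter-based join is only the aggregation kernel.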
Research into access to electronic resources by visually impaired people undertaken by the Centre for Research in Library and Information Management has not only explored the accessibility of websites and levels of awareness in providing websites that adhere to design for all principles, but has sought to enhance understanding of information seeking behaviour of blind and visually impaired people when using digital resources.
Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing, and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of, and contributes to the development of, techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results. Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating), to use dimensionality reduction to aid gating, and to find populations automatically in higher-dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods. Open standards, data …
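Two of the preprocessing steps named above, spectral-overlap compensation and transformation onto an analysis-friendly scale, can be sketched for a toy two-channel case. The spillover percentages are made up, and real pipelines solve the linear system for a full n-channel spillover matrix; the arcsinh cofactor of 150 is a common rule-of-thumb choice, not a fixed standard.

```python
# Sketch of flow-cytometry preprocessing: undo spectral overlap by solving
# SPILL @ true = observed for each event, then arcsinh-transform intensities.
import math

SPILL = [[1.00, 0.10],   # 10% of channel-2 signal bleeds into channel 1
         [0.05, 1.00]]   # 5% of channel-1 signal bleeds into channel 2

def compensate(event):
    """Recover true 2-channel intensities from observed ones (Cramer's rule)."""
    a, b = SPILL[0]
    c, d = SPILL[1]
    det = a * d - b * c
    x, y = event
    return ((d * x - b * y) / det, (a * y - c * x) / det)

def arcsinh_transform(value, cofactor=150.0):
    """Compress high intensities while staying near-linear around zero."""
    return math.asinh(value / cofactor)

# An event whose true signal is (100, 100) is observed as (110, 105):
print(compensate((110.0, 105.0)))
```

Note that row/column conventions for spillover matrices differ between tools, so the direction of the solve is itself an assumption to check against whichever software generated the matrix.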
From full-text article databases to digitized collections of primary source materials, newly emerging electronic resources have radically impacted how research in the humanities is conducted and discovered. This book, covering high-quality, up-to-date electronic resources for the humanities, is an easy-to-use annotated guide for the librarian, student, and scholar alike. It covers online databases, indexes, archives, and many other critical tools in key humanities disciplines including philosophy, religion, languages and literature, and performing and visual arts. Succinct overviews of key eme
Brazas, Michelle D; Ouellette, B F Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.
Gulledge, Thomas R.; Sommer, Rainer; Tarimcilar, M. Murat
Electronic Commerce Resource Centers focus on transferring emerging technologies to small businesses through university/industry partnerships. Successful implementation hinges on a strategic operating plan, creation of measurable value for customers, investment in customer-targeted training, and measurement of performance outputs. (SK)
This paper examines the use of printed and electronic resources by agricultural science students in three Nigerian universities. A two-part questionnaire was designed to elicit necessary information from the respondents selected for the study. One thousand three hundred (1300) respondents from faculties of Agriculture in ...
The study explored the state of electronic information resource sharing among university libraries in Southern part of Nigeria, highlighting the prospects and the challenges. The study was an empirical research which adopted the descriptive survey as the design. The questionnaire was used to collect data from the ...
undergraduate students use electronic resources such as NUC virtual library, HINARI, E- journals, CD-ROMs, AGORA, and ... to finance and geographical location. Furthermore, in developed countries like United Kingdom, students get access to .... databases, web sources and audio-video tapes. Furthermore, studies also ...
Modern ICT Tools: Online Electronic Resources Sharing Using Web 2.0 and Its Implications For Library And Information Practice In Nigeria. ... The PDF file you selected should load here if your Web browser has a PDF reader plug-in installed (for example, a recent version of Adobe Acrobat Reader). If you would like more ...
Users satisfaction with electronic information resources and services in A.B.U & UNIBEN MTN Net Libraries. ... Lastly, management of the MTN Net Libraries should conduct user studies annually in order to have feedback from users on how well the library is meeting their information needs. The results of the survey should ...
Olena Yu. Balalaieva
The article investigates the current state of development of e-learning content for the Latin language. It is noted that the introduction of ICT into the educational space has expanded the possibilities for studying Latin, opened access to digital library resources, and made it possible to draw on the scientific and educational potential and best practices in teaching Latin of the world's leading universities. A review of foreign and Ukrainian information resources and electronic editions for the study of Latin is given. Much attention is paid to the didactic potential of local and online multimedia Latin courses, electronic textbooks, workbooks of interactive tests and exercises, various dictionaries and software translators, databases and digital libraries. Based on an analysis of the world market of educational services and products, the main trends in the development of information resources and electronic books are examined. It was found that multimedia courses with interactive exercises, workbooks with interactive tests, and online dictionaries and translators are the most widely available and in demand. A noticeable quantitative and qualitative lag of Ukrainian education and computational linguistics in this area is established; an obvious drawback of existing Ukrainian resources and electronic editions for the study of Latin is their non-interactive nature. The prospects for e-learning content in Latin in Ukraine are outlined.
This article is based on an empirical study that examined the association between gender and the use of electronic information resources among postgraduate students at the University of Dar es salaam, Tanzania. The study was conducted in December 2005 and integrated both qualitative and quantitative research ...
The study aimed at finding out the use of electronic information resources among undergraduate students in the Federal University of Technology, Akure. The study is based on descriptive survey design method and the population consists of 16,962 undergraduate students across different schools at the Federal University ...
This study investigated the adoption and use of electronic information resources by medical science students of the University of Benin. The descriptive survey research design was adopted for the study and 390 students provided the data. Data collected were analysed with descriptive Statistics(Simple percentage and ...
Holley, Robert P.; Powell, Ronald R.
This paper reports the results of a survey of student satisfaction with electronic library resources other than the online catalog at Wayne State University. Undertaken in Fall Term 2000 as a class project for a marketing course, a student team designed, administered, and analyzed a survey of a random sample of students. Almost 40% of the…
This article explores whether technical communicator is a useful model for electronic resources (ER) librarians. The fields of ER librarianship and technical communication (TC) originated and continue to develop in relation to evolving technologies. A review of the literature reveals four common themes for ER librarianship and TC. While the…
The paper discusses access to electronic information resources by students of Federal Colleges of Education in Eha-Amufu and Umunze. Descriptive survey design was used to investigate sample of 526 students. Sampling technique used was a Multi sampling technique. Data for the study were generated using ...
Haft, Daniel H
Background Enzymes in the radical SAM (rSAM) domain family serve in a wide variety of biological processes, including RNA modification, enzyme activation, bacteriocin core peptide maturation, and cofactor biosynthesis. Evolutionary pressures and relationships to other cellular constituents impose recognizable grammars on each class of rSAM-containing system, shaping patterns in results obtained through various comparative genomics analyses. Results An uncharacterized gene cluster found in many Actinobacteria and sporadically in Firmicutes, Chloroflexi, Deltaproteobacteria, and one Archaeal plasmid contains a PqqE-like rSAM protein family that includes Rv0693 from Mycobacterium tuberculosis. Members occur clustered with a strikingly well-conserved small polypeptide we designate "mycofactocin," similar in size to bacteriocins and PqqA, the precursor of pyrroloquinoline quinone (PQQ). Partial Phylogenetic Profiling (PPP) based on the distribution of these markers identifies the mycofactocin cluster, but also a second tier of high-scoring proteins. This tier, strikingly, is filled with up to thirty-one members per genome from three variant subfamilies that occur, one each, in three unrelated classes of nicotinoproteins. The pattern suggests these variant enzymes require not only NAD(P), but also the novel gene cluster. Further study was conducted using SIMBAL, a PPP-like tool, to search these nicotinoproteins for subsequences best correlated across multiple genomes to the presence of mycofactocin. For both the short-chain dehydrogenase/reductase (SDR) and iron-containing dehydrogenase families, aligning SIMBAL's top-scoring sequences to homologous solved crystal structures shows signals centered over NAD(P)-binding sites rather than over substrate-binding or active-site residues. Previous studies on some of these proteins have revealed a non-exchangeable NAD cofactor, such that enzymatic activity in vitro requires an artificial electron acceptor such …
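The intuition behind phylogenetic profiling, which both PPP and SIMBAL build on, is that functionally linked genes tend to be present or absent in the same genomes. A minimal sketch, with toy genome sets standing in for real presence/absence profiles, and a simple agreement fraction in place of PPP's actual statistical score:

```python
# Sketch of phylogenetic-profile comparison: score candidate gene families
# by how well their presence/absence across genomes tracks a marker gene.
# Genome and family names are illustrative toy data.

GENOMES = ["g1", "g2", "g3", "g4", "g5"]
PROFILES = {
    "mycofactocin": {"g1", "g2", "g4"},
    "variant_SDR":  {"g1", "g2", "g4"},            # co-occurs: high score
    "housekeeping": {"g1", "g2", "g3", "g4", "g5"},  # everywhere: low score
}

def agreement(marker, candidate):
    """Fraction of genomes where marker and candidate agree (both present
    or both absent)."""
    m, c = PROFILES[marker], PROFILES[candidate]
    agree = sum(1 for g in GENOMES if (g in m) == (g in c))
    return agree / len(GENOMES)

print(agreement("mycofactocin", "variant_SDR"))   # perfect co-occurrence
print(agreement("mycofactocin", "housekeeping"))  # ubiquitous gene scores lower
```

PPP itself weights hits statistically rather than counting agreements, and SIMBAL applies the same idea to subsequences within a protein, but the profile-matching core is the same.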
Saparova, Dinara; Nolan, Nathanial S
Current US medical students have begun to rely on electronic information repositories, such as UpToDate, AccessMedicine, and Wikipedia, for their pre-clerkship medical education. However, it is unclear whether these resources are appropriate for this level of learning due to factors involving information quality, level of evidence, and the requisite knowledge base. This study evaluated the appropriateness of electronic information resources from a novel perspective: the amount of mental effort learners invest in interactions with these resources and the effects of the experienced mental effort on learning. Eighteen first-year medical students read about three unstudied diseases in the above-mentioned resources (a total of fifty-four observations). Their eye movement characteristics (i.e., fixation duration, fixation count, visit duration, and task-evoked pupillary response) were recorded and used as psychophysiological indicators of the experienced mental effort. After reading, students' learning was assessed with multiple-choice tests. Eye metrics and test results constituted quantitative data analyzed according to the repeated Latin square design. Students' perceptions of interacting with the information resources were also collected. Participants' feedback during semi-structured interviews constituted qualitative data and was reviewed, transcribed, and open coded for emergent themes. Compared to AccessMedicine and Wikipedia, UpToDate was associated with significantly higher values of eye metrics, suggesting learners experienced higher mental effort. No statistically significant relationship between the amount of mental effort and learning outcomes was found. Moreover, descriptive statistical analysis of the knowledge test scores suggested similar levels of learning regardless of the information resource used. Judging by the learning outcomes, all three information resources were found appropriate for learning. UpToDate, however, when used alone, may be less appropriate for first …
Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 µM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human ortholog. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology.
Tilton, Susan C
Background MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 µM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human ortholog. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology. The miRNA workflow in BRM allows for efficient processing of multiple miRNA and mRNA datasets in a single …
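The cross-species integration step described in this abstract, intersecting predicted miRNA targets with differentially expressed mRNAs and reporting hits by their human orthologs, can be sketched with set operations. The gene symbols and the ortholog table below are illustrative toy data, not BRM's actual databases.

```python
# Sketch of a BRM-style workflow: find predicted miRNA targets that are also
# differentially expressed, then map zebrafish genes to human orthologs.

ORTHOLOGS = {"nrxn1a": "NRXN1", "dlg2": "DLG2", "gap43": "GAP43"}  # toy table

mirna_targets = {"nrxn1a", "dlg2", "sox2"}     # predicted targets of one miRNA
diff_expressed = {"dlg2", "gap43", "nrxn1a"}   # mRNAs altered at p < 0.05

# Candidate genes: targets that are also differentially expressed, reported
# by their human ortholog where a mapping exists (zebrafish symbol otherwise).
candidates = sorted(ORTHOLOGS.get(g, g) for g in mirna_targets & diff_expressed)
print(candidates)  # ['DLG2', 'NRXN1']
```

In BRM the target predictions, expression calls, and ortholog mappings each come from queried databases; the set intersection and dictionary lookup here only illustrate how those retrievals combine into a candidate list.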
Svitlana G. Lytvynova
The article deals with scientific and methodological approaches to the examination of the quality of electronic educational resources (EER) for secondary schools. The conceptual apparatus is defined, the object of examination described, and certain aspects of the functions of examination clarified; the basic tasks of expertise are determined, the principles of expertise (scientific character, personalization, active involvement in the learning process) summarized, and the requirements for the participants described...
Long, Jie; Hulse, Nathan C; Tao, Cui
Infobuttons provide context-aware educational materials to both providers and patients and are becoming an important element in modern electronic health records (EHR) and patient health records (PHR). However, the content from different electronic resources (e-resources) returned in responses from the infobutton manager has not been fully analyzed and evaluated. In this paper, we propose a method for automatically analyzing responses from the infobutton manager, and implement a tool to retrieve and analyze them. To test the tool, we extracted and sampled common and uncommon concepts from EHR usage data in Intermountain Healthcare's enterprise data warehouse. From the output of the tool, we evaluate infobutton performance across multiple categories: for the most and least commonly used concepts, grouped by different modules in the patient portal, by different e-resources, and by type of access (standardized Health Level Seven (HL7) versus not). Based on the results of our evaluation, we offer suggestions for further enhancement of the current infobutton implementation, including suggested access priorities for e-resources and encouraging use of the HL7 standard.
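The kind of response analysis this abstract proposes, tallying infobutton manager responses by e-resource and by whether the request used the HL7 context-aware standard, reduces to simple grouping. The response records below are illustrative assumptions, not the actual tool's data model.

```python
# Sketch: categorize infobutton-manager responses by e-resource and by
# HL7 vs non-HL7 access. Field names and resource names are hypothetical.
from collections import Counter

responses = [
    {"resource": "UpToDate", "hl7": True},
    {"resource": "MedlinePlus", "hl7": True},
    {"resource": "UpToDate", "hl7": False},
]

by_resource = Counter(r["resource"] for r in responses)
hl7_share = sum(r["hl7"] for r in responses) / len(responses)

print(by_resource.most_common())  # responses per e-resource
print(hl7_share)                  # fraction using the HL7 standard
```

A fuller evaluation would also group by portal module and by concept frequency tier, as the paper describes, but each of those is the same Counter pattern over a different key.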
Jay M Margolis,1 Elizabeth T Masters,2 Joseph C Cappelleri,3 David M Smith,1 Steven Faulkner4 1Truven Health Analytics, Life Sciences, Outcomes Research, Bethesda, MD, 2Pfizer Inc, Outcomes & Evidence, New York, NY, 3Pfizer Inc, Statistics, Groton, CT, 4Pfizer Inc, North American Medical Affairs, Medical Outcomes Specialists, St Louis, MO, USA Objective: The management of fibromyalgia (FM), a chronic musculoskeletal disease, remains challenging, and patients with FM are often characterized by high health care resource utilization. This study sought to explore potential drivers of all-cause health care resource utilization and other factors associated with high resource use, using a large electronic health records (EHR) database to explore data from patients diagnosed with FM. Methods: This was a retrospective analysis of de-identified EHR data from the Humedica database. Adults (≥18 years) with FM were identified based on ≥2 International Classification of Diseases, Ninth Revision codes for FM (729.1) ≥30 days apart between January 1, 2008 and December 31, 2012 and were required to have evidence of ≥12 months continuous care pre- and post-index; the first FM diagnosis was the index event, with 12-month pre- and post-index reporting periods. Multivariable analysis evaluated relationships between variables and resource utilization. Results: Patients were predominantly female (81.4%) and Caucasian (87.7%), with a mean (standard deviation) age of 54.4 (14.8) years. The highest health care resource utilization was observed for the categories of “medication orders” and “physician office visits,” with 12-month post-index means of 21.2 (21.5) drug orders/patient and 15.1 (18.1) office visits/patient; the latter accounted for 73.3% of all health care visits. Opioids were the most common prescription medication, prescribed for 44.3% of all patients. The chance of high resource use was significantly increased (P<0.001) by 26% among African-Americans vs Caucasians and for patients …
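The cohort-selection rule in the Methods section, at least two FM diagnosis codes (ICD-9 729.1) recorded at least 30 days apart, is a concrete algorithm worth making explicit. The record layout below is an illustrative assumption, not the Humedica schema.

```python
# Sketch of the case-finding rule: a patient qualifies with >= 2 diagnosis
# dates at least `min_gap_days` apart (here, ICD-9 729.1 codes, 30 days).
from datetime import date

def qualifies(diagnosis_dates, min_gap_days=30):
    """True if any two diagnosis dates are >= min_gap_days apart.
    Checking the earliest against the latest date suffices."""
    if len(diagnosis_dates) < 2:
        return False
    ds = sorted(diagnosis_dates)
    return (ds[-1] - ds[0]).days >= min_gap_days

print(qualifies([date(2010, 1, 5), date(2010, 3, 1)]))   # 55 days apart
print(qualifies([date(2010, 1, 5), date(2010, 1, 20)]))  # only 15 days
```

The study's further requirements (diagnoses within the 2008–2012 window, 12 months of continuous care pre- and post-index) would layer additional date filters onto this core test.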
Schneider, Maria Victoria; Watson, James; Attwood, Teresa; Rother, Kristian; Budd, Aidan; McDowall, Jennifer; Via, Allegra; Fernandes, Pedro; Nyronen, Tommy; Blicher, Thomas; Jones, Phil; Blatter, Marie-Claude; De Las Rivas, Javier; Judge, David Phillip; van der Gool, Wouter; Brooksbank, Cath
As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first Trainer Networking Session held under the auspices of the EU-funded SLING Integrating Activity, which took place in November 2009.
Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen
The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.
Parajuly, Keshav; Habib, Komal; Cimpan, Ciprian
Integrating product design with appropriate end-of-life (EoL) processing is widely recognized to have huge potentials in improving resource recovery from electronic products. In this study, we investigate both the product characteristics and EoL processing of robotic vacuum cleaner (RVC), as a case......-case scenario, only 47% of the total materials in RVCs are ultimately recycled. While this low material recovery is mainly due to the lower plastic recycling rate, other market realities and the complex material flows in the recycling chain also contribute to it. The study provides a robust methodological...... approach for assessing the EoL performance based on the knowledge of a product and its complex recycling chain. The lessons learned can be used to support both the design and EoL processing of products with similar features, which carry a high potential for resource recovery, especially at the initial...
The paper discusses the influence of electronic commerce on enterprise human resources management, and proposes and substantiates the human resources management strategies that an electronic commerce enterprise should adopt, from recruitment and training strategies to talent-retention strategies and other measures.
Bhukuvhani, Crispen; Chiparausha, Blessing; Zuvalinyenga, Dorcas
Lecturers use various electronic resources at different frequencies. The university library's information literacy skills workshops and seminars are the main sources of knowledge of accessing electronic resources. The use of electronic resources can be said to have positively affected lecturers' pedagogical practices and their work in general. The…
Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.
This case study serves as an exemplar of what can go wrong with the implementation of an electronic document management system. Knowledge agility and knowledge as capital are outlined against the backdrop of the information society and knowledge economy. The importance of electronic document management and control is sketched thereafter. The literature review concludes with the impact of human resource management on knowledge agility, including references to the learning organisation and complexity theory. The intervention methodology, comprising three phases, follows next, and the results of the three phases are then presented. Partial success was achieved in improving the human efficacy of electronic document management; however, the client opted to discontinue the system in use.
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press.
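The review's architecture taxonomy can be made concrete with a tiny sketch. Purely as an illustration — the sequence, motif and fixed filter weights below are invented, and real models learn their filters — the following pure-Python snippet applies a single 1-D convolutional filter to a one-hot-encoded DNA sequence, the typical first layer of a CNN on omics data:

```python
# One-hot encode DNA and slide a motif filter over it: a pure-Python
# sketch of the first convolutional layer of a CNN on sequence data.
# The sequence, motif and fixed filter are invented for illustration;
# real models learn their filter weights.
IDX = {"A": 0, "C": 1, "G": 2, "T": 3}

def one_hot(seq):
    """Encode a DNA string as a list of [A, C, G, T] indicator rows."""
    return [[1.0 if IDX[b] == j else 0.0 for j in range(4)] for b in seq]

def conv1d(x, w):
    """Valid 1-D convolution of a k x 4 filter w over an n x 4 input x."""
    k = len(w)
    return [
        sum(x[i + a][j] * w[a][j] for a in range(k) for j in range(4))
        for i in range(len(x) - k + 1)
    ]

motif_filter = one_hot("TATA")       # filter that scores matches to TATA
scores = conv1d(one_hot("GGTATACG"), motif_filter)
print(scores)                        # peak of 4.0 where TATA starts
print(scores.index(max(scores)))     # -> 2
```

The score at each position counts matching bases under the filter, so the maximum (4.0 at offset 2) marks where the motif occurs; a trained network would stack many such learned filters with nonlinearities.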
Svitlana G. Lytvynova
The article deals with scientific and methodological approaches to the examination of the quality of electronic educational resources (EER) for secondary schools. The conceptual apparatus is defined, the object of examination is described, certain aspects of the functions of examination are clarified, the basic tasks of the expertise are determined, and its principles are summarized (scientific rigour, personalization, active involvement in the learning process). The requirements for the participants in EER expertise are described, the accordance of EER with didactic and methodological requirements is grounded, and an algorithm for preparing the object of examination to determine its compliance with didactic requirements is described. It is established that the assessment aims to obtain relevant data from the experts and, on this basis, to make competent decisions about the expedience of using the EER in general educational establishments.
Wang, Yijun; Lu, Wenjie; Deng, Dexiang
Diverse bioinformatic resources have been developed for plant transcription factor (TF) research. This review presents the bioinformatic resources and methodologies for the elucidation of plant TF-mediated biological events. Such information is helpful to dissect the transcriptional regulatory systems in the three reference plants Arabidopsis, rice, and maize and translation to other plants. Transcription factors (TFs) orchestrate diverse biological programs by the modulation of spatiotemporal patterns of gene expression via binding cis-regulatory elements. Advanced sequencing platforms accompanied by emerging bioinformatic tools revolutionize the scope and extent of TF research. The system-level integration of bioinformatic resources is beneficial to the decoding of TF-involved networks. Herein, we first briefly introduce general and specialized databases for TF research in three reference plants Arabidopsis, rice, and maize. Then, as proof of concept, we identified and characterized heat shock transcription factor (HSF) members through the TF databases. Finally, we present how the integration of bioinformatic resources at -omics layers can aid the dissection of TF-mediated pathways. We also suggest ways forward to improve the bioinformatic resources of plant TFs. Leveraging these bioinformatic resources and methodologies opens new avenues for the elucidation of transcriptional regulatory systems in the three model systems and translation to other plants.
Barilo, Nick F.
The Pacific Northwest National Laboratory (PNNL) Hydrogen Safety Program conducted a planning session in Los Angeles, CA on April 1, 2014 to consider what electronic safety tools would benefit the next phase of hydrogen and fuel cell commercialization. A diverse, 20-person team led by an experienced facilitator considered the question as it applied to the eight most relevant user groups. The results and subsequent evaluation activities revealed several possible resource tools that could greatly benefit users. The tool identified as having the greatest potential for impact is a hydrogen safety portal, which can be the central location for integrating and disseminating safety information (including most of the tools identified in this report). Such a tool can provide credible and reliable information from a trustworthy source. Other impactful tools identified include a codes and standards wizard to guide users through a series of questions relating to application and specific features of the requirements; a scenario-based virtual reality training for first responders; peer networking tools to bring users from focused groups together to discuss and collaborate on hydrogen safety issues; and a focused tool for training inspectors. Table ES.1 provides results of the planning session, including proposed new tools and changes to existing tools.
Mihalas, George I; Tudor, Anca; Paralescu, Sorin; Andor, Minodora; Stoicu-Tivadar, Lacramioara
The paper describes our methodology and experience in establishing the content of the bioinformatics course introduced into the master-level programme 'Information Systems in Healthcare' (SIIS). The syllabi of both the lectures and the laboratory work are presented and discussed.
Smith, Fred Hewitt
Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes pre-determining an association of the restricted computer resource and computer-resource-proximal environmental information. Indicia of user-proximal environmental information are received from a user requesting access to the restricted computer resource. Received indicia of user-proximal environmental information are compared to associated computer-resource-proximal environmental information. User access to the restricted computer resource is selectively granted responsive to a favorable comparison in which the user-proximal environmental information is sufficiently similar to the computer-resource-proximal environmental information. In at least some embodiments, the process further includes obtaining a user-supplied biometric measure and comparing it with a predetermined association of at least one biometric measure of an authorized user. Access to the restricted computer resource is granted in response to a favorable comparison.
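The comparison-and-grant step described in the abstract can be sketched as a small routine. This is a hypothetical illustration, not the patented method: the environmental field names, tolerances and similarity threshold are all invented.

```python
# Hypothetical sketch of the comparison-and-grant step described above:
# access is granted only when user-proximal environmental readings are
# sufficiently similar to those associated with the resource.
# Field names, tolerances and the threshold are invented for illustration.
def similarity(resource_env, user_env):
    """Fraction of resource-proximal fields the user's readings match."""
    matches = sum(
        1
        for key, (value, tol) in resource_env.items()
        if key in user_env and abs(user_env[key] - value) <= tol
    )
    return matches / len(resource_env)

def grant_access(resource_env, user_env, threshold=0.8):
    """Favorable comparison: similarity meets or exceeds the threshold."""
    return similarity(resource_env, user_env) >= threshold

resource_env = {
    "wifi_rssi_dbm": (-40.0, 5.0),   # (expected value, tolerance)
    "temperature_c": (21.0, 2.0),
    "pressure_hpa": (1013.0, 3.0),
}
print(grant_access(resource_env, {"wifi_rssi_dbm": -42.0,
                                  "temperature_c": 20.5,
                                  "pressure_hpa": 1012.0}))  # -> True
print(grant_access(resource_env, {"wifi_rssi_dbm": -70.0,
                                  "temperature_c": 30.0,
                                  "pressure_hpa": 1012.0}))  # -> False
```

A real implementation would also fold in the biometric comparison the abstract mentions; the threshold here simply stands in for "sufficiently similar".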
Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.
Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…
Egle, Jonathan P; Smeenge, David M; Kassem, Kamal M; Mittal, Vijay K
Electronic sources of medical information are plentiful, and numerous studies have demonstrated the use of the Internet by patients and the variable reliability of these sources. Studies have investigated neither the use of web-based resources by residents, nor the reliability of the information available on these websites. A web-based survey was distributed to surgical residents in Michigan and third- and fourth-year medical students at an American allopathic and osteopathic medical school and a Caribbean allopathic school regarding their preferred sources of medical information in various situations. A set of 254 queries simulating those faced by medical trainees on rounds, on a written examination, or during patient care was developed. The top 5 electronic resources cited by the trainees were evaluated for their ability to answer these questions accurately, using standard textbooks as the point of reference. The respondents reported a wide variety of overall preferred resources. Most of the 73 responding medical trainees favored textbooks or board review books for prolonged studying, but electronic resources are frequently used for quick studying, clinical decision-making questions, and medication queries. The most commonly used electronic resources were UpToDate, Google, Medscape, Wikipedia, and Epocrates. UpToDate and Epocrates had the highest percentage of correct answers (47%) and Wikipedia had the lowest (26%). Epocrates also had the highest percentage of wrong answers (30%), whereas Google had the lowest percentage (18%). All resources had a significant number of questions that they were unable to answer. Though hardcopy books have not been completely replaced by electronic resources, more than half of medical students and nearly half of residents prefer web-based sources of information. For quick questions and studying, both groups prefer Internet sources. However, the most commonly used electronic resources fail to answer clinical queries more than half the time.
Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem
Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for
Johnson, Kathy A.
For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.
Wei, Dongqing; Zhao, Tangzhen; Dai, Hao
This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform…
Kachaluba, Sarah Buck; Brady, Jessica Evans; Critten, Jessica
This article is based on quantitative and qualitative research examining humanities scholars' understandings of the advantages and disadvantages of print versus electronic information resources. It explores how humanities' faculty members at Florida State University (FSU) use print and electronic resources, as well as how they perceive these…
England, Lenore; Fu, Li; Miller, Stephen
Organization of electronic resources workflow is critical in the increasingly complicated and complex world of library management. A simple organizational tool that can be readily applied to electronic resources management (ERM) is the use of checklists. Based on the principles discussed in The Checklist Manifesto: How to Get Things Right, the…
Burr, Tom L [Los Alamos National Laboratory]
Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
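As a toy illustration of the kind of computation underlying tree estimation — not a substitute for the probabilistic substitution models the review discusses — the following computes pairwise p-distances between aligned sequences and greedily joins the closest clusters, a bare-bones UPGMA-style sketch. The OTU names and sequences are invented:

```python
# Toy tree estimation: pairwise p-distances plus greedy closest-pair
# joining (a bare-bones UPGMA-style sketch; sequences are invented and
# no nucleotide substitution model is used, unlike the methods reviewed).
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def build_tree(seqs):
    """Repeatedly join the closest pair of clusters into nested tuples."""
    clusters = {name: [name] for name in seqs}   # cluster -> member OTUs
    labels = {name: name for name in seqs}       # cluster -> subtree so far

    def cluster_dist(c1, c2):                    # average linkage
        pairs = [(m, n) for m in clusters[c1] for n in clusters[c2]]
        return sum(p_distance(seqs[m], seqs[n]) for m, n in pairs) / len(pairs)

    while len(clusters) > 1:
        c1, c2 = min(
            ((a, b) for a in clusters for b in clusters if a < b),
            key=lambda pair: cluster_dist(*pair),
        )
        merged = c1 + "+" + c2
        clusters[merged] = clusters.pop(c1) + clusters.pop(c2)
        labels[merged] = (labels.pop(c1), labels.pop(c2))
    return labels.popitem()[1]

seqs = {"otu1": "ACGTACGT", "otu2": "ACGTACGA", "otu3": "TCGAACGA"}
print(build_tree(seqs))   # -> (('otu1', 'otu2'), 'otu3')
```

Even this greedy sketch hints at why the problem is demanding: the number of possible topologies grows super-exponentially with the number of OTUs, so real methods search the tree space rather than enumerate it.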
A significant shift is taking place in libraries, with the purchase of e-resources accounting for the bulk of materials spending. Electronic Resource Management makes the case that technical services workflows need to make a corresponding shift toward e-centric models and highlights the increasing variety of e-formats that are forcing new developments in the field. Six chapters cover key topics, including: technical services models, both past and emerging; staffing and workflow in electronic resource management; implementation and transformation of electronic resource management systems; the ro…
Findings indicate that the study group has regular access to the internet, and preferred using free online resources from Google and Wikipedia to institutionally subscribed academic online resources in databases such as HINARI, EBSCO Host, Questia, JSTOR and High Beam. This shows that technology alone cannot help…
Hulseberg, Anna; Monson, Sarah
Electronic resources, the tools we use to manage them, and the needs and expectations of our users are constantly evolving; at the same time, the roles, responsibilities, and workflow of the library staff who manage e-resources are also in flux. Recognizing a need to be more intentional and proactive about how we manage e-resources, the…
The rapid growth in the creation and dissemination of electronic information has emphasized the digital environment's speed and ease of dissemination with little regard for its long-term preservation and access...
Smith, Fred Hewitt
Described herein are devices and techniques for remotely controlling user access to a restricted computer resource. The process includes obtaining an image from a communication device of a user. An individual and a landmark are identified within the image. Determinations are made that the individual is the user and that the landmark is a predetermined landmark. Access to a restricted computing resource is granted based on the determining that the individual is the user and that the landmark is the predetermined landmark. Other embodiments are disclosed.
reported using free Internet resources including the search engines, while only a small proportion uses scholarly databases. For example, 22% of researchers reported using the African Journals Online (AJOL) while 7% use Gale databases (see Table 2 for details). Additionally, the frequency of use also varied significantly…
The aim of the paper is to identify challenges associated with the cataloguing of e-resources in some selected university libraries in south-south Nigeria. The descriptive survey design, involving the use of a questionnaire as the research instrument, was adopted. The population comprised cataloguers in five selected…
Michael R Clay
Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to do so, institutional support, educational resources, and adequate faculty expertise.
The study also revealed that the majority of the university libraries have adequate basic infrastructure for effective electronic information services. … acquired by the library are put into maximal use by the library clientele, thereby ensuring the achievement of the library's objective, which is satisfying the users' information needs.
Dugdale, David; Dugdale, Christine
Describes the development of the ResIDe Electronic Library at the University of the West of England, Bristol. Analyzes potential of the system to increase economy, efficiency and effectiveness in library services and relates it to how the needs of sponsors and students can be met. (Author/LRW)
Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles…
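Pwrake workflows themselves are written as Ruby rakefiles. As a language-neutral sketch of the underlying idea — named tasks with declared prerequisites, resolved depth-first before the task body runs — the following minimal Python task runner is illustrative only and assumes nothing about Pwrake's actual API; the task names are invented:

```python
# A minimal Rake-style task runner: tasks declare prerequisites, and
# invoking a task first runs any prerequisites that have not yet run.
# This is a hypothetical sketch of the rakefile idea, not Pwrake's API;
# the task names are invented.
TASKS, DONE, LOG = {}, set(), []

def task(name, deps=()):
    """Register a function as a named task with prerequisite tasks."""
    def register(fn):
        TASKS[name] = (deps, fn)
        return fn
    return register

def invoke(name):
    """Run a task after its prerequisites, each at most once."""
    if name in DONE:
        return
    deps, fn = TASKS[name]
    for dep in deps:
        invoke(dep)
    fn()
    DONE.add(name)

@task("align")
def align():
    LOG.append("align")              # e.g. map reads to a reference

@task("call_variants", deps=("align",))
def call_variants():
    LOG.append("call_variants")      # e.g. variant calling on the alignment

invoke("call_variants")
print(LOG)                           # -> ['align', 'call_variants']
```

Separating the task graph (the `task` declarations) from the task bodies mirrors the two-phase development the authors observe: the workflow-definition phase edits the graph, while the parameter-adjustment phase only touches the bodies.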
H. M. Kravtsov
Results are presented on modeling a quality management system for electronic information resources, based on an analysis of the functioning of its elements using integrated and differentiated approaches. The application of such a model is illustrated with an example: calculating and optimizing the parameters of a quality management system when organizing the coordinated work of the services for monitoring, quality assessment and support of electronic learning resources.
Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...
Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B
The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention. © 2011 Blackwell Publishing Ltd.
Tolvanen, Martti; Vihinen, Mauno
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.
Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…
This study aimed to improve the current state of electronic resource evaluation in libraries. While the use of Web DB, e-book, e-journal, and other e-resources such as CD-ROM, DVD, and micro materials is increasing in libraries, their use is not comprehensively factored into the general evaluation of libraries and may diminish the reliability of…
Kent State University has developed a centralized system that manages the communication and work related to the review and selection of commercially available electronic resources. It is an automated system that tracks the review process, provides selectors with price and trial information, and compiles reviewers' feedback about the resource. It…
The current study aimed to investigate language students' use of print and electronic resources for their research papers required in research techniques class, focusing on which reading strategies they used while reading these resources. The participants of the study were 90 sophomore students enrolled in the research techniques class offered at…
With the development of the Internet and the growth of online resources, bioinformatics training for wet-lab biologists became necessary as a part of their education. This article describes a one-semester course 'Applied Bioinformatics Course' (ABC, http://abc.cbi.pku.edu.cn/) that the author has been teaching to biological graduate students at the Peking University and the Chinese Academy of Agricultural Sciences for the past 13 years. ABC is a hands-on practical course to teach students to use online bioinformatics resources to solve biological problems related to their ongoing research projects in molecular biology. With a brief introduction to the background of the course, detailed information about the teaching strategies of the course are outlined in the 'How to teach' section. The contents of the course are briefly described in the 'What to teach' section with some real examples. The author wishes to share his teaching experiences and the online teaching materials with colleagues working in bioinformatics education both in local and international universities. © The Author 2013. Published by Oxford University Press.
Student use of electronic books has become an accepted supplement to traditional resources. Student use and satisfaction were monitored through an online course discussion board. Increased use of electronic books indicates that this service is an accepted supplement to the print book collection.
The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering biomedical research into the era of 'omics'. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of system pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines. © 2015 The British Pharmacological Society.
Revote, Jerico; Watson-Haigh, Nathan S.; Quenette, Steve; Bethwaite, Blair; McGrath, Annette
Abstract The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. PMID:27084333
Pallen, Mark J
Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
While academic libraries in most countries are struggling to negotiate with publishers and vendors individually or collaboratively via consortia, a few countries have experimented with a different model: national site licensing (NSL). Because NSL often involves government and large-scale collaboration, it has the potential to solve many problems in the complex licensing world. However, not many nations have adopted it. This study uses a historical research approach and the comparative case study method to explore the seemingly low level of adoption. The cases include the Canadian National Site Licensing Project (CNSLP), the United Kingdom's National Electronic Site Licensing Initiative (NESLI), and the United States, which has not adopted NSL. The theoretical framework guiding the research design and data collection is W. Richard Scott's institutional theory, which uses three supporting pillars (regulative, normative, and cultural-cognitive) to analyze institutional processes. In this study, the regulative and normative pillars of NSL adoption, an institutional construction and change, are examined. Data were collected from monographs, research articles, government documents, and relevant websites. Based on the analysis of these cases, a preliminary model is proposed for the adoption of NSL. The factors that support a country's adoption of NSL include the need for new institutions, a centralized educational policy-making and funding system, supportive political trends, and a tradition of cooperation. The factors that may prevent a country from adopting NSL include decentralized educational policy and funding, the diversity and large number of institutions, concern about the "Big Deal," and concern about monopoly.
Lawlor, Brendan; Walsh, Paul
There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.
Luke, Stephen; Fountain, John S; Reith, David M; Braitberg, George; Cruickshank, Jaycen
ED staff use a range of poisons information resources of varying type and quality. The present study aims to identify those resources utilised in the state of Victoria, Australia, and assess opinion of the most used electronic products. A previously validated self-administered survey was conducted in 15 EDs, with 10 questionnaires sent to each. The survey was then repeated following the provision of a 4-month period of access to Toxinz™, an Internet poisons information product novel to the region. The study was conducted from December 2010 to August 2011. There were 117 (78%) and 48 (32%) responses received from the first and second surveys, respectively, a 55% overall response rate. No statistically significant differences in professional group, numbers of poisoned patients seen or resource type accessed were identified between studies. The electronic resource most used in the first survey was Poisindex® (48.68%) and Toxinz™ (64.1%) in the second. There were statistically significant (P poisons information but would do so if a reputable product was available. The order of poisons information sources most utilised was: consultation with a colleague, in-house protocols and electronic resources. There was a significant difference in satisfaction with electronic poisons information resources and a movement away from existing sources when choice was provided. Interest in increased use of mobile solutions was identified. © 2014 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P
We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development.
Kamali, Amir Hossein; Giannoulatou, Eleni; Chen, Tsong Yueh; Charleston, Michael A; McEwan, Alistair L; Ho, Joshua W K
Bioinformatics is the application of computational, mathematical and statistical techniques to solve problems in biology and medicine. Bioinformatics programs developed for computational simulation and large-scale data analysis are widely used in almost all areas of biophysics. The appropriate choice of algorithms and correct implementation of these algorithms are critical for obtaining reliable computational results. Nonetheless, it is often very difficult to systematically test these programs as it is often hard to verify the correctness of the output, and to effectively generate failure-revealing test cases. Software testing is an important process of verification and validation of scientific software, but very few studies have directly dealt with the issues of bioinformatics software testing. In this work, we review important concepts and state-of-the-art methods in the field of software testing. We also discuss recent reports on adapting and implementing software testing methodologies in the bioinformatics field, with specific examples drawn from systems biology and genomic medicine.
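The oracle problem described above (it is hard to verify whether a program's output is correct) is often addressed with metamorphic testing, one of the oracle-free techniques from the software-testing literature. A minimal sketch in Python, using a hypothetical `gc_content` function: the GC content of a DNA sequence must be invariant under reverse complement, so a relation between two runs of the program can be checked even when no single expected output is known.

```python
import random


def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (hypothetical program under test)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)


def reverse_complement(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))


def check_metamorphic_relation(seq: str) -> bool:
    # Metamorphic relation: GC content is invariant under reverse complement,
    # because every G maps to a C and vice versa. No expected-value oracle is needed;
    # only the relation between the two outputs is checked.
    return abs(gc_content(seq) - gc_content(reverse_complement(seq))) < 1e-12


# Apply the relation to many randomly generated inputs rather than hand-picked cases.
random.seed(0)
for _ in range(100):
    s = "".join(random.choice("ACGT") for _ in range(50))
    assert check_metamorphic_relation(s)
```

A failure of the relation reveals a bug in `gc_content` (or `reverse_complement`) without anyone having to know the correct GC content of any particular sequence.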
Bruhn, Russel Elton; Burton, Philip John
Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schema.
de Jong, Anne; van Heel, Auke J.; Kuipers, Oscar P.
Bioinformatic tools can greatly improve the efficiency of bacteriocin screening efforts by limiting the number of strains to be screened. Different classes of bacteriocins can be detected in genomes by looking at different features. Finding small bacteriocins can be especially challenging because of low homology and because small open reading frames (ORFs) are often omitted from annotations. In this chapter, several bioinformatic tools and strategies to identify bacteriocins in genomes are discussed.
Research and innovation are listed as the key success factors for the future development of Finnish prosperity and the Finnish economy. The Finnish libraries have developed a scenario to support this vision. University, polytechnic and research institute libraries as well as public libraries have defined the core electronic resources necessary to improve access to information in Finland. The primary aim of this work has been to provide information and justification for central funding for electronic resources to support the national goals. The secondary aim is to help with the reallocation of existing central funds to better support access to information.
The objective of this study is to determine the extent and purpose of e-resource use by scientists at pharmacopoeial libraries in India. Among other things, the study examined the scientists' preferences between printed books and journals and electronic information resources, and their patterns of e-resource use. Non-probability sampling, specifically accidental and purposive techniques, was applied in the collection of primary data through a user questionnaire. The sample respondents chosen for the study were principal scientific officers, senior scientific officers, scientific officers, and scientific assistants from different divisions of the laboratories, namely research and development, pharmaceutical chemistry, pharmacovigilance, pharmacology, pharmacognosy, and microbiology. The findings of the study reveal the personal experiences and perceptions the respondents have had in practice and research activity using e-resources. The major findings indicate that 78% of the participants perceived themselves as able to use computers for electronic information resources. The data analysis shows that all the scientists at the pharmacopoeial libraries used electronic information resources to address issues relating to drug indexes and compendia, monographs, drugs covered in online databases, e-journals, and Internet sources, especially policies by regulatory agencies, contacts, drug promotional literature, and standards.
Woodruff, Allison; Aoki, Paul M.; Grinter, Rebecca E.; Hurst, Amy; Szymanski, Margaret H.; Thornton, James D.
We describe an electronic guidebook, Sotto Voce, that enables visitors to share audio information by eavesdropping on each other's guidebook activity. We have conducted three studies of visitors using electronic guidebooks in a historic house: one study with open air audio played through speakers and two studies with eavesdropped audio. An analysis of visitor interaction in these studies suggests that eavesdropped audio provides more social and interactive learning resources than open air audio.
For many years, library users relied solely on printed media to obtain the information they needed. Today, with the widespread use of the Web and the addition of electronic information resources to library collections, information in electronic form is used alongside printed media. Over time, information resources such as electronic journals, electronic books, electronic encyclopedias, electronic dictionaries and electronic theses have been added to library collections. In this study, selection criteria that can be used for electronic information resources are discussed, and suggestions are provided for libraries that seek to select electronic information resources for their collections.
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology,...
Koehn, Shona L.; Hawamdeh, Suliman
As library collections increasingly become digital, libraries are faced with many challenges regarding the acquisition and management of electronic resources. Some of these challenges include copyright and fair use, the first-sale doctrine, licensing versus ownership, digital preservation, long-term archiving, and, most important, the issue of…
This study looks into the use of electronic resources by the faculty members of College of Technology Education, Kumasi of the University of Education, Winneba, Ghana. Sixty-two copies of a questionnaire were sent to the entire faculty and 31 were returned which gave a response rate of 50%. The responses showed very ...
van Kampen, Antoine H C; Moerland, Perry D
Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. Not only does the concept presented in this paper give a useful framework to design MEA-based estimators, but it is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
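The principle of maximum expected accuracy (MEA) underlying these estimators can be stated compactly. A sketch, assuming a posterior distribution $p(\theta \mid D)$ over hidden configurations given data $D$, and an accuracy measure $A$:

```latex
\hat{y} \;=\; \operatorname*{arg\,max}_{y \,\in\, \{0,1\}^{n}} \;
\mathbb{E}_{\theta \sim p(\theta \mid D)}\!\left[\, A(y,\theta) \,\right]
```

Here $y$ ranges over the high-dimensional binary space of candidate predictions; different choices of $A$ (sensitivity, PPV, MCC, F-score, or trade-offs between them) recover different estimators in this class.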
Anton M. Avramchuk
The design of multimedia electronic educational resources for language disciplines in Moodle is an important problem today. Moodle offers many different, powerful resources and plugins that facilitate students' learning of language disciplines. This article presents an overview and comparative analysis of five Moodle plugins for designing multimedia electronic educational resources for language disciplines. Their key features and functionality are considered in order to choose the best option for studying language disciplines in Moodle. The plugins are compared by a group of experts according to three criteria: efficiency, functionality and ease of use. The analytic hierarchy process is used for the comparative analysis of the plugins.
Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson
This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…
Chux Gervase Iwu
This study set out to examine the effect of e-HRM systems in assisting human resource practitioners to execute their duties and responsibilities. In comparison to the developed economies of the world, information technology adoption in sub-Saharan Africa has not been without glitches; some of the factors responsible include poor needs identification, unsustainable funding, and insufficient skills. Beyond these, there is also the issue of change management, with users sticking to what they already know. Although these factors seem negative, there is strong evidence that information systems such as electronic human resource management (e-HRM) bring benefits to an organization. A dual research approach was utilized: the literature informed both the conceptual framework on which the study hinged and the development of the questionnaire items, and an interview checklist was used to guide the participants. The findings reveal a mix of responses indicating that, while there are gains in adopting e-HRM systems, it is wiser to consider supporting resources and to articulate the university's needs better before any investment is made.
Howard Lopes Ribeiro Junior
The objective of this study was to apply and evaluate theoretical and practical bioinformatics content with students of the Biological Sciences degree enrolled in the General Genetics and Molecular Biology courses at the State University of Ceará in 2010. The theoretical approach, previously tested (RIBEIRO JUNIOR, 2011), consisted of a presentation of historical, basic and specific concepts up to current advances in molecular biology research. The practical activity, 'Building a Molecular Phylogeny in Silico', was designed to put the concepts presented above into practice, using the database of the National Center for Biotechnology Information (NCBI) and its sequence alignment tool, BLASTp (Basic Local Alignment Search Tool, Protein-Protein). The positive results obtained from the lecture 'Introduction to Bioinformatics' and the practical activities were highlighted by the molecular phylogenies that the students built from the proposed hypothetical sequences and by the students' own statements. These activities were seen as essential for students to gain, step by step, a better understanding of an emerging area of the life sciences: bioinformatics.
Gerbekov, Kh. A.
Today, tools for maintaining training courses based on the capabilities of information and communication technologies are well developed. Electronic textbooks and self-study manuals have been created for practically all areas of study and all subjects. Nevertheless, the industry of computer-based educational and methodological materials continues to develop actively and to reach new areas of development and deployment. In this regard, the development of electronic educational resources that meet modern educational requirements is an increasingly urgent problem. Creating and organizing training courses using electronic educational resources, in particular on the basis of Internet technologies, remains a difficult methodological task. This article considers questions connected with the development of electronic educational resources for studying the content area "Information technologies" in a school informatics course, in particular for studying spreadsheets. It also analyses the content of the school course and of the unified state examination from the point of view of how tasks corresponding to this content area are represented, covering information-processing technology in spreadsheets and data-visualization methods using charts and graphs.
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
This book chapter describes the current Big Data problem in bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools and techniques that a non-computational researcher would need to learn to perform reproducible computational research.
Vesth, Tammi Camilla; Rasmussen, Jane Lind Nybo; Theobald, Sebastian
with the Joint Genome Institute. The Aspergillus Mine is not intended as a genomic data sharing service but instead focuses on creating an environment where the results of bioinformatic analysis are made available for inspection. The data and code are public upon request, and figures can be obtained directly from...
Vaez Barzani, Ahmad
In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping.
Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I
Computational intelligence (CI) is a well-established paradigm with current systems having many of the characteristics of biological computers and capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitate intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state-of-the-art in CI applications to bioinformatics and motivate research in new trend-setting directions. In this article, we present an overview of the CI techniques in bioinformatics. We will show how CI techniques including neural networks, restricted Boltzmann machine, deep belief network, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines, could be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and protein function prediction and its structure. We discuss some representative methods to provide inspiring examples to illustrate how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented and an extensive bibliography is included. Copyright © 2013 Elsevier Ltd. All rights reserved.
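As a concrete (toy) illustration of one of the CI techniques surveyed, the following sketch evolves a bit-string with a simple genetic algorithm; the bit-string can be read as, say, a gene-selection mask scored by a fitness function. All names, parameters and the fitness function are illustrative assumptions, not taken from the article.

```python
import random

random.seed(42)

# Hypothetical GA parameters: genome length, population size,
# number of generations, and per-bit mutation probability.
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 30, 60, 0.02


def fitness(genome):
    # Toy objective: maximize the number of selected "genes" (all-ones target).
    return sum(genome)


def mutate(genome):
    # Flip each bit independently with probability MUT_RATE.
    return [1 - g if random.random() < MUT_RATE else g for g in genome]


def crossover(a, b):
    # One-point crossover between two parent genomes.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]


def evolve():
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]  # truncation selection; parents survive (elitism)
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POP_SIZE - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)


best = evolve()
```

Real applications in the survey (DNA fragment assembly, multiple sequence alignment, gene selection) replace the toy fitness with a domain-specific objective; the selection/crossover/mutation loop stays the same.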
Bazhenova, Svetlana Anatolyevna
The article is devoted to questions of using electronic resources in teaching computer science at a teacher training college. Principles of the pedagogical expediency of using electronic resources in teaching computer science are specified, and positive aspects of such use for different forms of student and teacher work are identified.
Amusa, Oyintola Isiaka; Atinmo, Morayo
(Purpose) This study surveyed the level of availability, use and constraints to use of electronic resources among law lecturers in Nigeria. (Methodology) Five hundred and fifty-two law lecturers were surveyed and four hundred and forty-two responded. (Results) Data analysis revealed that the level of availability of electronic resources for the…
Karikari, Thomas K; Quansah, Emmanuel; Mohamed, Wael M Y
Research in bioinformatics has a central role in helping to advance biomedical research. However, its introduction to Africa has been met with some challenges (such as inadequate infrastructure, training opportunities, research funding, human resources, biorepositories and databases) that have contributed to the slow pace of development in this field across the continent. Fortunately, recent improvements in areas such as research funding, infrastructural support and capacity building are helping to develop bioinformatics into an important discipline in Africa. These contributions are leading to the establishment of world-class research facilities, biorepositories, training programmes, scientific networks and funding schemes to improve studies into disease and health in Africa. With increased contribution from all stakeholders, these developments could be further enhanced. Here, we discuss how the recent developments are contributing to the advancement of bioinformatics in Africa.
Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle D; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.
van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly.
Wood, Louisa; Gebhardt, Philipp
Since 2010, the European Molecular Biology Laboratory's (EMBL) Heidelberg laboratory and the European Bioinformatics Institute (EMBL-EBI) have jointly run bioinformatics training courses developed specifically for secondary school science teachers within Europe and EMBL member states. These courses focus on introducing bioinformatics, databases, and data-intensive biology, allowing participants to explore resources and providing classroom-ready materials to support them in sharing this new knowledge with their students. In this article, we chart our progress made in creating and running three bioinformatics training courses, including how the course resources are received by participants and how these, and bioinformatics in general, are subsequently used in the classroom. We assess the strengths and challenges of our approach, and share what we have learned through our interactions with European science teachers.
Wren, Jonathan D
To analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts in the top 20 most cited papers annually for the past two decades. When defining bioinformatics papers as encompassing both those that provide software for data analysis and those describing the methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, which is approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet, the average 5-year JIF of the top 20 bioinformatics papers was 7.7, whereas the average JIF for the top 20 non-bioinformatics papers was 25.8, significantly higher. Bioinformatics journals tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet fewer of intermediate success.
The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...
Oliver, Jeffrey C
Health sciences research is increasingly focusing on big data applications, such as genomic technologies and precision medicine, to address key issues in human health. These approaches rely on biological data repositories and bioinformatic analyses, both of which are growing rapidly in size and scope. Libraries play a key role in supporting researchers in navigating these and other information resources. With the goal of supporting bioinformatics research in the health sciences, the University of Arizona Health Sciences Library established a Bioinformation program. To shape the support provided by the library, I developed and administered a needs assessment survey to the University of Arizona Health Sciences campus in Tucson, Arizona. The survey was designed to identify the training topics of interest to health sciences researchers and the preferred modes of training. Survey respondents expressed an interest in a broad array of potential training topics, including "traditional" information seeking as well as interest in analytical training. Of particular interest were training in transcriptomic tools and the use of databases linking genotypes and phenotypes. Staff were most interested in bioinformatics training topics, while faculty were the least interested. Hands-on workshops were significantly preferred over any other mode of training. The University of Arizona Health Sciences Library is meeting those needs through internal programming and external partnerships. The results of the survey demonstrate a keen interest in a variety of bioinformatic resources; the challenge to the library is how to address those training needs. The mode of support depends largely on library staff expertise in the numerous subject-specific databases and tools. Librarian-led bioinformatic training sessions provide opportunities for engagement with researchers at multiple points of the research life cycle. When training needs exceed library capacity, partnering with intramural and
Wright, Victoria Ann; Vaughan, Brendan W; Laurent, Thomas; Lopez, Rodrigo; Brooksbank, Cath; Schneider, Maria Victoria
Today's molecular life scientists are well educated in the emerging experimental tools of their trade, but when it comes to training on the myriad of resources and tools for dealing with biological data, a less ideal situation emerges. Often bioinformatics users receive no formal training on how to make the most of the bioinformatics resources and tools available in the public domain. The European Bioinformatics Institute, which is part of the European Molecular Biology Laboratory (EMBL-EBI), holds the world's most comprehensive collection of molecular data, and training the research community to exploit this information is embedded in the EBI's mission. We have evaluated eLearning, in parallel with face-to-face courses, as a means of training users of our data resources and tools. We anticipate that eLearning will become an increasingly important vehicle for delivering training to our growing user base, so we have undertaken an extensive review of Learning Content Management Systems (LCMSs). Here, we describe the process that we used, which considered the requirements of trainees, trainers and systems administrators, as well as taking into account our organizational values and needs. This review describes the literature survey, user discussions and scripted platform testing that we performed to narrow down our choice of platform from 36 to a single platform. We hope that it will serve as guidance for others who are seeking to incorporate eLearning into their bioinformatics training programmes.
McHenry, Megan S; Fischer, Lydia J; Chun, Yeona; Vreeman, Rachel C
The objective of this study is to conduct a systematic review of the literature of how portable electronic technologies with offline functionality are perceived and used to provide health education in resource-limited settings. Three reviewers evaluated articles and performed a bibliography search to identify studies describing health education delivered by portable electronic device with offline functionality in low- or middle-income countries. Data extracted included: study population; study design and type of analysis; type of technology used; method of use; setting of technology use; impact on caregivers, patients, or overall health outcomes; and reported limitations. Searches yielded 5514 unique titles. Out of 75 critically reviewed full-text articles, 10 met inclusion criteria. Study locations included Botswana, Peru, Kenya, Thailand, Nigeria, India, Ghana, and Tanzania. Topics addressed included: development of healthcare worker training modules, clinical decision support tools, patient education tools, perceptions and usability of portable electronic technology, and comparisons of technologies and/or mobile applications. Studies primarily looked at the assessment of developed educational modules on trainee health knowledge, perceptions and usability of technology, and comparisons of technologies. Overall, studies reported positive results for portable electronic device-based health education, frequently reporting increased provider/patient knowledge, improved patient outcomes in both quality of care and management, increased provider comfort level with technology, and an environment characterized by increased levels of technology-based, informal learning situations. Negative assessments included high investment costs, lack of technical support, and fear of device theft. While the research is limited, portable electronic educational resources present promising avenues to increase access to effective health education in resource-limited settings, contingent
Kastin, S; Wexler, J
During the past 30 years, there has been an explosion in the volume of published medical information. As this volume has increased, so has the need for efficient methods for searching the data. MEDLINE, the primary medical database, is currently limited to abstracts of the medical literature. MEDLINE searches use AND/OR/NOT logical searching for keywords that have been assigned to each article and for textwords included in article abstracts. Recently, the complete text of some scientific journals, including figures and tables, has become accessible electronically. Keyword and textword searches can provide an overwhelming number of results. Search engines that use phrase searching, or searches that limit the number of words between two finds, improve the precision of search engines. The development of the Internet as a vehicle for worldwide communication, and the emergence of the World Wide Web (WWW) as a common vehicle for communication have made instantaneous access to much of the entire body of medical information an exciting possibility. There is more than one way to search the WWW for information. At the present time, two broad strategies have emerged for cataloging the WWW: directories and search engines. These allow more efficient searching of the WWW. Directories catalog WWW information by creating categories and subcategories of information and then publishing pointers to information within the category listings. Directories are analogous to yellow pages of the phone book. Search engines make no attempt to categorize information. They automatically scour the WWW looking for words and then automatically create an index of those words. When a specific search engine is used, its index is searched for a particular word. Usually, search engines are nonspecific and produce voluminous results. Use of AND/OR/NOT and "near" and "adjacent" search refinements greatly improve the results of a search. Search engines that limit their scope to specific sites, and
Feenstra, K. Anton; Abeln, Sanne
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which
Spengler, Sylvia J.
There is a well-known story about the blind man examining the elephant: the part of the elephant examined determines his perception of the whole beast. Perhaps bioinformatics--the shotgun marriage between biology and mathematics, computer science, and engineering--is like an elephant that occupies a large chair in the scientific living room. Given the demand for and shortage of researchers with the computer skills to handle large volumes of biological data, where exactly does the bioinformatics elephant sit? There are probably many biologists who feel that a major product of this bioinformatics elephant is large piles of waste material. If you have tried to plow through Web sites and software packages in search of a specific tool for analyzing and collating large amounts of research data, you may well feel the same way. But there has been progress with major initiatives to develop more computing power, educate biologists about computers, increase funding, and set standards. For our purposes, bioinformatics is not simply a biologically inclined rehash of information theory (1) nor is it a hodgepodge of computer science techniques for building, updating, and accessing biological data. Rather bioinformatics incorporates both of these capabilities into a broad interdisciplinary science that involves both conceptual and practical tools for the understanding, generation, processing, and propagation of biological information. As such, bioinformatics is the sine qua non of 21st-century biology. Analyzing gene expression using cDNA microarrays immobilized on slides or other solid supports (gene chips) is set to revolutionize biology and medicine and, in so doing, generate vast quantities of data that have to be accurately interpreted (Fig. 1). As discussed at a meeting a few months ago (Microarray Algorithms and Statistical Analysis: Methods and Standards; Tahoe City, California; 9-12 November 1999), experiments with cDNA arrays must be subjected to quality control
Stockinger, Heinz; Altenhoff, Adrian M; Arnold, Konstantin; Bairoch, Amos; Bastian, Frederic; Bergmann, Sven; Bougueleret, Lydie; Bucher, Philipp; Delorenzi, Mauro; Lane, Lydie; Le Mercier, Philippe; Lisacek, Frédérique; Michielin, Olivier; Palagi, Patricia M; Rougemont, Jacques; Schwede, Torsten; von Mering, Christian; van Nimwegen, Erik; Walther, Daniel; Xenarios, Ioannis; Zavolan, Mihaela; Zdobnov, Evgeny M; Zoete, Vincent; Appel, Ron D
The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) was created in 1998 as an institution to foster excellence in bioinformatics. It is renowned worldwide for its databases and software tools, such as UniProtKB/Swiss-Prot, PROSITE, SWISS-MODEL and STRING, all accessible on ExPASy.org, SIB's Bioinformatics Resource Portal. This article provides an overview of the scientific and training resources SIB has consistently been offering to the life science community for more than 15 years.
A. V. Loban
Purpose of the article: to improve the scientific and methodological basis of the theory of variability e-learning. Methods used: conceptual and logical modelling of the variability e-learning process with a new-generation electronic educational resource, and system analysis of the interconnections among the studied subject area, methods, didactic approaches and means of information and communication technology. Results: a formalized complex model of variability e-learning with a new-generation electronic educational resource is developed, conditionally decomposed into three basic components: a formalization model of the course in the form of a thesaurus-classifier ("Author of e-resource"), a model of learning as management ("Coordination. Consultation. Control"), and a learning model with the thesaurus-classifier ("Student"). The "Author of e-resource" model allows the student to achieve completeness, a high degree of didactic elaboration and structuring of the studied material in triples of variants: modules of educational information, practical tasks and control tasks; the result of the student's (the author's of the e-resource) activity is the thesaurus-classifier. The model of learning as management is based on the principle of personal orientation of learning in a computer environment and determines the logic of interaction between the lecturer and the student when determining the triple of variants individually for each student; the organization of a dialogue between the lecturer and the student for consulting purposes; and personal control of the student's success (report generation) with iterative search for the concept of the class assignment in the thesaurus-classifier until the required level of training is acquired. The "Student" model makes it possible to concretize the learning tasks in relation to the personality of the student and to the training level achieved; the assumption of the lecturer about the level of training of a
Gront, Dominik; Kolinski, Andrzej
In this Note we present a new software library for structural bioinformatics. The library contains programs computing sequence- and profile-based alignments and a variety of structural calculations, with user-friendly handling of various data formats. The software organization is very flexible: algorithms are written in the Java language and may be used by Java programs, and the modules can also be accessed from Jython (a Python scripting language implemented in Java) scripts. Finally, the new version of BioShell delivers several utility programs that perform typical bioinformatics tasks from the command line. Availability: the software is available for download free of charge from its website: http://bioshell.chem.uw.edu.pl. The website also provides numerous examples, code snippets and API documentation.
Schweighofer, Karl; Pohorille, Andrew
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
For waste from electrical and electronic equipment, the WEEE Directive stipulates the separate collection of electrical and electronic waste. For new electrical and electronic devices, the Restriction of Hazardous Substances (RoHS) Directive bans the use of certain chemicals dangerous to humans and the environment. Since the implementation of the WEEE Directive, many unsolved problems have been documented: poor collection success, emission of dangerous substances during collection and recycling, and irretrievable loss of valuable metals, among others. As to RoHS, data from the literature show a satisfying level of success. The problems identified in the process can be reduced to some basic dilemmas at the borders between waste management, product policy and chemical safety. The objectives of the WEEE Directive and the specific targets for the use and recycling of appliances are not consistent. There is no focus on scarce resources. Extended producer responsibility is not sufficient to guarantee sustainable waste management. Waste management reaches its limits due to problems of implementation, but also due to physical laws. A holistic approach is necessary, looking at all branch points and sinks in the stream of used products and waste from electrical and electronic equipment. This may be done with respect to the general rules for the sustainable management of material streams, covering the three dimensions of sustainable policy. The relationships between the players in the field of electrical and electronic devices have to be taken into account. Most of the problems identified in the implementation process will not be solved by the current amendment of the WEEE Directive.
Gómez-Tello, V; Latour-Pérez, J; Añón Elizalde, J M; Palencia-Herrejón, E; Díaz-Alersi, R; De Lucas-García, N
To estimate knowledge and usage habits of different electronic resources in a sample of Spanish intensivists: Internet, e-mail, distribution lists, and use of portable electronic devices. Self-applied questionnaire. A 50-question questionnaire was distributed among Spanish intensivists through the hospital marketing delegates of a pharmaceutical company and through electronic forums. A total of 682 questionnaires were analyzed (participation: 74%). Ninety-six percent of those surveyed used the Internet individually; 67% admitted a training gap. The Internet was the second most used source of clinical consultations (61%), slightly behind consultation with colleagues (65%). The pages most consulted were bibliographic databases (65%) and electronic professional journals (63%), with limited use of evidence-based medicine pages (19%). Ninety percent of those surveyed used e-mail regularly in their professional practice, although 25% admitted they were not aware of its possibilities. The use of e-mail decreased significantly with increasing age. A total of 62% of the intensivists used distribution lists; of the rest, 42% were not aware of their existence and 32% admitted they had insufficient training to handle them. Twenty percent of those surveyed had portable electronic devices and 64% considered them useful, basically due to rapid consultation at the bedside. Female gender was a negative predictive factor for their use (OR 0.35; 95% CI 0.2-0.63; p=0.0002). A large majority of Spanish intensivists use the Internet and e-mail. E-mail lists and portable devices are still underused resources. There are important gaps in training and infrequent use of essential pages. There are specific groups that require targeted educational policies.
Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at the EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.
Today, organizations use information technology to perform the affairs of the human resource department; this is called electronic human resource management (EHRM). In fact, as competitive complexity increases, the need for implementing EHRM in production and service businesses increases too. This paper is written to specify the importance of implementing EHRM in production and service organizations and to evaluate its efficiency and degree of importance in these two settings. First, the literature on the topic and the most important aspects of implementing these systems are reviewed; after categorizing these views, a hierarchical model is proposed by applying the AHP method. The result of analyzing this model with the EXPERT CHOICE software shows that implementing EHRM in both kinds of organizations has the same importance; however, there is a large difference between them in the implementation aspects.
Krutova Anzhelika S.
The aim of the article is to develop the theoretical bases for the classification and coding of economic information and to scientifically justify the content of the information resources of an electronic commerce enterprise. The essence of information resources for the management of electronic business entities is investigated. It is proved that the organization of accounting in e-commerce systems is advisably built on the basis of two circuits: accounting for financial flows, and accounting associated with the transformation of business factors into products and services as a result of production activities. A sequence of accounting organization is presented that allows combining both circuits in a single information system, which provides the possibility of integrated replenishment and distributed simultaneous use of the e-commerce system by all groups of users. It is proved that the guarantee of efficient activity of the information management system of electronic commerce entities is a proper systematization of the aggregate of information resources on the economic facts and operations of an enterprise in accordance with the management tasks, by building a hierarchy of accounting nomenclatures. It is suggested that nomenclature be understood as an objective, primary information aggregate concerning a certain fact of the economic activity of an enterprise, which is characterized by minimum requisites, is entered into the database of the information system and is to be reflected in the accounting system. It is proposed to build the database of e-commerce systems as a set of directories (constants, personnel, goods/products, suppliers, buyers) and the hierarchy of accounting nomenclatures. The package of documents regulating the organization of accounting at an enterprise should include: the provision on the accounting services, the order on the accounting policy, the job descriptions, the schedules of information exchange, the report card and
Anton M. Avramchuk
The article is devoted to the problem of developing the competency of teachers of language disciplines in designing multimedia electronic educational resources in the Moodle system. The concept of "the competency of teachers of language disciplines in designing multimedia electronic educational resources in the Moodle system" is justified and defined. The components by which the levels of development of this competency should be assessed are identified and characterized. A model for the development of this competency is presented, which is based on the main scientific approaches used in adult education and consists of five blocks: target, informative, technological, diagnostic and effective.
Computer-based resources are central to much, if not most, biological and medical research. However, while there is an ever-expanding choice of bioinformatics resources to use, described within the biomedical literature, little work to date has evaluated the full range of availability or the levels of usage of database and software resources. Here we use text mining to process the PubMed Central full-text corpus, identifying mentions of databases or software within the scientific literature. We provide an audit of the resources contained within the biomedical literature, and a comparison of their relative usage, both over time and between the sub-disciplines of bioinformatics, biology and medicine. We find that trends in resource usage differ between these domains. The bioinformatics literature emphasises novel resource development, while database and software usage within biology and medicine is more stable and conservative. Many resources are only mentioned in the bioinformatics literature, with a relatively small number making it out into general biology, and fewer still into the medical literature. In addition, many resources are seeing a steady decline in their usage (e.g., BLAST, SWISS-PROT), though some are instead seeing rapid growth (e.g., the GO, R). We find a striking imbalance in resource usage, with the top 5% of resource names (133 names) accounting for 47% of total usage, and over 70% of resources extracted being mentioned only once each. While these results highlight the dynamic and creative nature of bioinformatics research, they raise questions about software reuse, choice and the sharing of bioinformatics practice. Is it acceptable that so many resources are apparently never reused? Finally, our work is a step towards the automated extraction of scientific method from text. We make the dataset generated by our study available under the CC0 license here: http://dx.doi.org/10.6084/m9.figshare.1281371.
Syzdykova, Assel; Malta, André; Zolfo, Maria; Diro, Ermias; Oliveira, José Luis
Despite the great impact of information and communication technologies on clinical practice and on the quality of health services, this trend has been almost exclusive to developed countries, whereas countries with poor resources suffer from many economic and social issues that have hindered the real benefits of electronic health (eHealth) tools. As a component of eHealth systems, electronic health records (EHRs) play a fundamental role in patient management and effective medical care services. Thus, the adoption of EHRs in regions with a lack of infrastructure, untrained staff, and ill-equipped health care providers is an important task. However, the main barrier to adopting EHR software in low- and middle-income countries is the cost of its purchase and maintenance, which highlights the open-source approach as a good solution for these underserved areas. The aim of this study was to conduct a systematic review of open-source EHR systems based on the requirements and limitations of low-resource settings. First, we reviewed existing literature on the comparison of available open-source solutions. In close collaboration with the University of Gondar Hospital, Ethiopia, we identified common limitations in poor resource environments and also the main requirements that EHRs should support. Then, we extensively evaluated the current open-source EHR solutions, discussing their strengths and weaknesses, and their appropriateness to fulfill a predefined set of features relevant for low-resource settings. The evaluation methodology allowed assessment of several key aspects of available solutions that are as follows: (1) integrated applications, (2) configurable reports, (3) custom reports, (4) custom forms, (5) interoperability, (6) coding systems, (7) authentication methods, (8) patient portal, (9) access control model, (10) cryptographic features, (11) flexible data model, (12) offline support, (13) native client, (14) Web client, (15) other clients, (16) code
Oguchi, Masahiro; Murakami, Shinsuke; Sakanakura, Hirofumi; Kida, Akiko; Kameya, Takashi
Highlights: → End-of-life electrical and electronic equipment (EEE) as secondary metal resources. → The content and the total amount of metals in specific equipment are both important. → We categorized 21 EEE types from contents and total amounts of various metals. → Important equipment types as secondary resources were listed for each metal kind. → Collectability and possible collection systems of various EEE types were discussed. - Abstract: End-of-life electrical and electronic equipment (EEE) has recently received attention as a secondary source of metals. This study examined characteristics of end-of-life EEE as secondary metal resources to consider efficient collection and metal recovery systems according to the specific metals and types of EEE. We constructed an analogy between natural resource development and metal recovery from end-of-life EEE and found that metal content and total annual amount of metal contained in each type of end-of-life EEE should be considered in secondary resource development, as well as the collectability of the end-of-life products. We then categorized 21 EEE types into five groups and discussed their potential as secondary metal resources. Refrigerators, washing machines, air conditioners, and CRT TVs were evaluated as the most important sources of common metals, and personal computers, mobile phones, and video games were evaluated as the most important sources of precious metals. Several types of small digital equipment were also identified as important sources of precious metals; however, mid-size information and communication technology (ICT) equipment (e.g., printers and fax machines) and audio/video equipment were shown to be more important as a source of a variety of less common metals. The physical collectability of each type of EEE was roughly characterized by unit size and number of end-of-life products generated annually. Current collection systems in Japan were examined and potentially appropriate collection
Papi, Ahmad; Ghazavi, Roghayeh; Moradi, Salimeh
Understanding how the medical community uses different types of information resources for quick and easy access to information is an imperative task in medical research and in the management of treatment. The present study aimed to determine the level of awareness among physicians in using various electronic information resources and the factors affecting it. This study was a descriptive survey. The data collection tool was a researcher-made questionnaire. The study population included all the physicians and specialty physicians of the teaching hospitals affiliated with Isfahan University of Medical Sciences and numbered 350. The sample size, based on Morgan's formula, was set at 180. The content validity of the tool was confirmed by library and information professionals, and the reliability was 95%. Descriptive statistics were computed using SPSS software version 19. On reviewing physicians' need to obtain information on several occasions, the need for information in conducting research was reported by the largest number of physicians (91.9%), and the usage of information resources, especially electronic resources, formed 65.4% as the highest rate with regard to meeting the information needs of the physicians. Among the electronic information databases, the maximum awareness was related to Medline, at 86.5%. Among the various electronic information resources, the highest awareness (43.3%) was related to e-journals; the highest usage (36%) was also from the same source. The studied physicians considered the most effective deterrents to the use of electronic information resources to be being too busy and lack of time. Despite the importance of electronic information resources for the physician community, there was no comprehensive knowledge of these resources. This can lead to less usage of these resources. Therefore, careful planning is necessary in hospital libraries in order to introduce the facilities and full capabilities of the
Zlamparet, Gabriel I; Tan, Quanyin; Stevels, A B; Li, Jinhui
This comparative research exemplifies better conservation of resources by reducing the amount of waste (kg) and giving it more value under the umbrella of remanufacturing. The three cases discussed expose three issues previously addressed separately in the literature. The generation of waste electrical and electronic equipment (WEEE) interacts with environmental depletion. In this article, we give examples of these issues addressed under the concept of remanufacturing: an online collection opportunity replacing classical collection, and a business-to-business (B2B) implementation for remanufactured servers and medical devices. Material reuse (recycling), component sustainability and reuse (part harvesting), and product reuse (after repair/remanufacturing) indicate the recovery potential of remanufacturing as a tool for better conservation of resources, adding more value to products. Our findings provide an overview of a new system organization for general collection, the market potential, and the technological advantages of using remanufacturing instead of recycling for WEEE or used electrical and electronic equipment. Copyright © 2017. Published by Elsevier Ltd.
Deepti D. Deobagkar
Bioinformatics software and visualisation tools have been a key factor in the rapid and phenomenal advances in genomics, proteomics, medicine, drug discovery, systems approaches and, in fact, in every area of new development. Indian scientists have also made a mark in a few specific areas. India has the advantage of an early start and an extensive, organised network in bioinformatics education and research, with substantial inputs from the Indian government. India has a strong position in computation and IT and has a pool of bright young talent, with a demographic dividend along with experienced and excellent mentors and researchers. Although small in number and scale, the bioinformatics industry also has a presence and is making its mark in India. There are a number of high-throughput and extremely useful resources available that are critical in biological data analysis and interpretation. This has made a paradigm shift in the way research can be carried out and discoveries can be made in any area of biological, biochemical and chemical research. This article summarises the current status of, and contributions from, India in the development of software and web servers for bioinformatics applications.
Dr.Paithankar Rajeev; R., Mr.Kamble V.R.
Libraries and information services have changed due to the development of information and communication technology. Electronic resources play a very important role as information repositories, supporting the use of information for various purposes such as academic work, research, and the teaching and learning process. E-resources solve problems of traditional libraries: all data are stored in digital format, and users can access the library without boundaries through the internet, so e-resources' popularity is very cont...
Oakley, Mark T; Barthel, Daniel; Bykov, Yuri; Garibaldi, Jonathan M; Burke, Edmund K; Krasnogor, Natalio; Hirst, Jonathan D
Optimisation problems pervade structural bioinformatics. In this review, we describe recent work addressing a selection of bioinformatics challenges. We begin with a discussion of research into protein structure comparison, and highlight the utility of Kolmogorov complexity as a measure of structural similarity. We then turn to research into de novo protein structure prediction, in which structures are generated from first principles. In this endeavour, there is a compromise between the detail of the model and the extent to which the conformational space of the protein can be sampled. We discuss some developments in this area, including off-lattice structure prediction using the great deluge algorithm. One strategy to reduce the size of the search space is to restrict the protein chain to sites on a regular lattice. In this context, we highlight the use of memetic algorithms, which combine genetic algorithms with local optimisation, to the study of simple protein models on the two-dimensional square lattice and the face-centred cubic lattice.
Hubbard, Joan C.; North, Alexa B.; Arjomand, H. Lari
Examines methods used to search for entry-level managerial positions and assesses how human resource and personnel directors in Georgia perceive these methods. Findings indicate that few of the directors use electronic technology to fill such positions, but they view positively those applicants who use electronic job searching methods. (RJM)
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.
Keerthikumar, Shivakumar; Gangoda, Lahiru; Gho, Yong Song; Mathivanan, Suresh
Extracellular vesicles (EVs) are a class of membranous vesicles that are released by multiple cell types into the extracellular environment. This unique class of extracellular organelles, which play a pivotal role in intercellular communication, is conserved across prokaryotes and eukaryotes. Depending upon the cell of origin and the functional state, the molecular cargo, including proteins, lipids, and RNA, within the EVs is modulated. Owing to this, EVs are considered a subrepertoire of the host cell and are rich reservoirs of disease biomarkers. In addition, the availability of EVs in multiple bodily fluids, including blood, has created significant interest in biomarker and signaling research. With the advancement of high-throughput techniques, multiple EV studies have embarked on profiling the molecular cargo. To benefit the scientific community, existing free Web-based resources, including ExoCarta, EVpedia, and Vesiclepedia, catalog multiple datasets. These resources aid in elucidating the molecular mechanisms and pathophysiology underlying different disease conditions from which EVs are isolated. Here, the existing bioinformatics tools to perform integrated analysis to identify key functional components in EV datasets are discussed.
Ramli, Rindra M.
An overview of the recommendation study and the subsequent implementation of a new Electronic Resources Management System (ERMS) in an international graduate research university in the Kingdom of Saudi Arabia. It covers the timeline, deliverables and challenges, as well as lessons learnt by the project team.
Background: There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results: Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion: Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license that sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.
Kolodziej, M.A. [Quick Test International Inc., (Canada). Canadian Technology Human Resource Board; Baker, O. [KeySpan Energy Canada, Calgary, AB (Canada)
KeySpan Energy Canada is in the process of obtaining recognition of various occupational profiles including pipeline operators, inspectors, and field and plant operators from various certifying organizations. The process of allowing individuals to obtain certification is recognized by Canadian Technology Human Resources Board as a step towards national standards for technologists and technicians. Proven competency is a must for workers in todays oil industry in response to increasingly stringent government safety regulations, environmental concerns and high public scrutiny. Quick Test international Inc. has developed a management tool in collaboration with end users at KeySpan Energy Canada. It is an electronic, Internet based competency tool for tracking personal competencies and maintaining continued competency. Response to the tool has been favourable. 2 refs., 4 figs.
Yaroslav M. Hlynsky
This article discusses the theoretical foundations, creation and implementation of electronic educational video resources (EEVR), using the example of the development and use of a collection of video tutorials on the event-driven programming theme, which is studied within the subject "Informatics" by students of many specialties. It offers some development of the existing conceptual and categorical apparatus concerning EEVR development. It is argued that video tutorials allow the process of learning to be automated, instructional time to be redistributed in favour of students' independent work, and classroom time to be freed for teaching the theoretical issues of the course, with the aim of improving the fundamental nature of training. Practical recommendations are proposed for the development of effective EEVR, which may be useful to authors of e-learning courses for students in different forms of training.
The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.
The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
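The hypercube picture above lends itself to a short sketch: a fuzzy set over a universe of n elements is simply a membership vector in [0, 1]^n, with crisp sets at the cube's corners. The similarity measure below (a Jaccard-style min/max index) and the membership values are illustrative assumptions, not taken from the paper.

```python
# A fuzzy set over n elements is a membership vector in the hypercube
# [0, 1]^n; crisp sets sit at the corners. Intersection and union become
# componentwise min and max.

def fuzzy_similarity(a, b):
    """Jaccard-style similarity: |A ∩ B| / |A ∪ B| using min/max."""
    inter = sum(min(x, y) for x, y in zip(a, b))
    union = sum(max(x, y) for x, y in zip(a, b))
    return inter / union if union else 1.0

# Hypothetical membership grades over a 4-element universe.
low_risk = [0.1, 0.3, 0.0, 0.2]
high_risk = [0.8, 0.6, 0.9, 0.4]

print(round(fuzzy_similarity(low_risk, high_risk), 3))  # → 0.222
```

Identical sets score 1.0, and the score falls toward 0 as the sets diverge; this is one common way such geometric comparisons (e.g. of genome profiles) can be made concrete.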
Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…
Heyer, Laurie J.
This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
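The dynamic-programming formulation described above can be sketched as follows; the scoring scheme (match +1, mismatch -1, gap -1) is a common textbook choice assumed here for illustration, not the article's own.

```python
# Sketch of global sequence alignment by dynamic programming
# (Needleman-Wunsch). Scoring is an illustrative assumption:
# match +1, mismatch -1, gap -1.

def align(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of strings a and b."""
    m, n = len(a), len(b)
    # dp[i][j] = best score for aligning a[:i] against b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, n + 1):
        dp[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,  # (mis)match
                           dp[i - 1][j] + gap,    # gap in b
                           dp[i][j - 1] + gap)    # gap in a
    return dp[m][n]

print(align("GATTACA", "GCATGCU"))  # classic textbook pair
```

The table fills in O(mn) time; recovering the alignment itself would require a standard traceback over the same table.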
Dare Samuel Adeleke
Availability, awareness and use of electronic resources provide authoritative, reliable, accurate and timely access to information. The use of electronic information resources (EIRs) can enable innovation in teaching and increase timeliness in the research of postgraduate students, which will eventually encourage the expected research-led enquiry in this digital age. The study adopted a descriptive survey design. A sample of 300 postgraduate students within seven out of 13 faculties was randomly selected. Data were collected using a questionnaire designed to elicit responses from respondents and were analyzed using descriptive statistics (percentages, means and standard deviations). Results indicated that the internet was ranked as the most available and used resource in the university. The low level of usage of electronic resources, in particular full-text databases, is linked to a number of constraints: interrupted power supply was ranked highest among other factors, such as the speed and capacity of computers, retrieval of records with high recall and low precision, retrieving records relevant to the information need, lack of knowledge of search techniques to retrieve information effectively, non-possession of requisite IT skills, and problems accessing the internet. The study recommended that usage of electronic resources be made compulsory, that awareness campaigns concerning availability be intensified, that training on the use of electronic resources be provided, and that the problem of power outages be addressed.
Smith, James F., III; Rhyne, Robert D., II
A fuzzy logic based expert system has been developed that automatically allocates electronic attack (EA) resources in real-time over many dissimilar platforms. The platforms can be very general, e.g., ships, planes, robots, land based facilities, etc. Potential foes the platforms deal with can also be general. This paper describes data mining activities related to development of the resource manager with a focus on genetic algorithm based optimization. A genetic algorithm requires the construction of a fitness function, a function that must be maximized to give optimal or near optimal results. The fitness functions are in general non- differentiable at many points and highly non-linear, neither property providing difficulty for a genetic algorithm. The fitness functions are constructed using insights from geometry, physics, engineering, and military doctrine. Examples are given as to how fitness functions are constructed including how the fitness function is averaged over a database of military scenarios. The use of a database of scenarios prevents the algorithm from having too narrow a range of behaviors, i.e., it creates a more robust solution.
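As a hedged sketch of the optimization approach described above (not the authors' actual resource manager), a minimal genetic algorithm that maximizes a fitness averaged over a small database of scenarios might look like this; the fitness function, scenario values, and all parameters are invented for the example.

```python
import random

# Minimal genetic algorithm: evolve a real-valued parameter vector to
# maximize a fitness averaged over several "scenarios" (invented here),
# echoing how a fitness function can be averaged over a scenario database
# to avoid overly narrow behavior.

random.seed(0)

SCENARIOS = [0.2, 0.5, 0.8]  # hypothetical scenario parameters

def fitness(genome):
    # Higher when each gene is close to the scenario values; the exact
    # form is arbitrary and stands in for a doctrine-based measure.
    return -sum((g - s) ** 2 for g in genome for s in SCENARIOS) / len(SCENARIOS)

def evolve(pop_size=30, genes=2, generations=60, mut=0.1):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes) if genes > 1 else 0
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mut:           # occasional mutation
                i = random.randrange(genes)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = parents + children                # elitism: parents survive
    return max(pop, key=fitness)

best = evolve()
print(best)  # genes should drift toward the scenario mean (0.5)
```

Note that neither differentiability nor linearity of the fitness function is needed, which is exactly the property the abstract highlights.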
Wajeeh M. Daher
Electronic resources are becoming an integral part of modern life and of the educational scene, especially the higher education scene. In this research we wanted to verify what influences first-degree university students' use of electronic resources and their opinions regarding this use. Collecting data from 202 students and analyzing it using SPSS, we found that more than one half of the participants had a high level of electronic media use and more than one third had a moderate level of electronic media use. These levels of use indicate the students' awareness of the role and benefits of electronic media use. Regarding the factors that influence students' use of electronic resources, we found that this use had significant strong positive relationships with the provision of electronic resources by the academic institution. It had significant moderate positive relationships with the resources' characteristics and the course requirements, and significant weak relationships with the instructor's support and the students' characteristics. We explained these relationships as resulting from the influence of the surrounding community. Regarding the students' opinions about the use of electronic resources, we found that their opinion of electronic resources had significant strong positive relationships with their use of electronic resources, the level of this use, the academic institution's available facilities, the students' characteristics and the resources' characteristics. It did not have significant relationships with the instructor's support or the course requirements. We explained these relationships drawing on activity theory and its integration with ecological psychology.
Abbott, Michael B.
The rapidly developing practice of encapsulating knowledge in electronic media is shown to lead necessarily to the restructuring of the knowledge itself. The consequences of this for hydraulics, hydrology and more general water-resources management are investigated in particular relation to current process-simulation, real-time control and advice-serving systems. The generic properties of the electronic knowledge encapsulator are described, and attention is drawn to the manner in which knowledge 'goes into hiding' through encapsulation. This property is traced in the simple situations of pure mathesis and in the more complex situations of taxinomia using one example each from hydraulics and hydrology. The consequences for systems architectures are explained, pointing to the need for multi-agent architectures for ecological modelling and for more general hydroinformatics systems also. The relevance of these developments is indicated by reference to ongoing projects in which they are currently being realised. In conclusion, some more general epistemological aspects are considered within the same context. As this contribution is so much concerned with the processes of signification and communication, it has been partly shaped by the theory of semiotics, as popularised by Eco ( A Theory of Semiotics, Indiana University, Bloomington, 1977).
Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC (http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.
Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy
Hong Kong's bioinformatics sector is attaining new heights in combination with its economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based and free-market economy have contributed towards a prominent position on the world map of bioinformatics. In this review, we consider the educational measures, landmark research activities, the achievements of bioinformatics companies, and the role of the Hong Kong government in the establishment of bioinformatics as a strength. However, several hurdles remain. New government policies will assist computational biologists to overcome these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.
Grisham, William; Schottler, Natalie A; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson
This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with anatomy (Mouse Brain Library), quantitative trait locus analysis (WebQTL from GeneNetwork), bioinformatics and gene expression analyses (University of California, Santa Cruz Genome Browser, National Center for Biotechnology Information's Entrez Gene, and the Allen Brain Atlas), and information resources (PubMed). Instructors can use these various websites in concert to teach genetics from the phenotypic level to the molecular level, aspects of neuroanatomy and histology, statistics, quantitative trait locus analysis, and molecular biology (including in situ hybridization and microarray analysis), and to introduce bioinformatic resources. Students use these resources to discover 1) the region(s) of chromosome(s) influencing the phenotypic trait, 2) a list of candidate genes, narrowed by expression data, 3) the in situ pattern of a given gene in the region of interest, 4) the nucleotide sequence of the candidate gene, and 5) articles describing the gene. Teaching materials such as a detailed student/instructor's manual, PowerPoints, sample exams, and links to free Web resources can be found at http://mdcune.psych.ucla.edu/modules/bioinformatics.
English in Australia, 1973
Contains seven short "resources": units, lessons, and activities on the power of observation, man and his earth, snakes, group discussion, colloquial and slang, the continuous story, and retelling a story. (DD)
Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step-by-step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
Oksana M. Melnyk
This article argues the need for a comprehensive assessment of electronic educational game resources in mathematics for primary school students, and defines "the factor-criteria model of electronic educational game resources (EEGR)". It also describes the created model, which consists of requirements for the content, methodological, and program parts of electronic resources for primary school, and identifies the factors and the criteria for each of them. It is proposed to assess the ratios within the group of factors and within each group of criteria according to an arithmetic progression. The presented model can be a convenient tool both for primary school teachers and for EEGR developers. It can also serve as the basis for a unified state comprehensive system of assessment of EEGRs.
Sheri L Lewis
Public health surveillance is undergoing a revolution driven by advances in the field of information technology. Many countries have experienced vast improvements in the collection, ingestion, analysis, visualization, and dissemination of public health data. Resource-limited countries have lagged behind due to challenges in information technology infrastructure, public health resources, and the costs of proprietary software. The Suite for Automated Global Electronic bioSurveillance (SAGES) is a collection of modular, flexible, freely-available software tools for electronic disease surveillance in resource-limited settings. One or more SAGES tools may be used in concert with existing surveillance applications, or the SAGES tools may be used en masse for an end-to-end biosurveillance capability. This flexibility allows for the development of an inexpensive, customized, and sustainable disease surveillance system. The ability to rapidly assess anomalous disease activity may lead to more efficient use of limited resources and better compliance with World Health Organization International Health Regulations.
Fatumo, Segun A.; Adoga, Moses P.; Ojo, Opeolu O.; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi
Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries. PMID:24763310
Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong
In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
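The MapReduce model introduced above can be illustrated with a single-process sketch applied to a toy bioinformatics task: counting k-mers across a set of DNA reads. The function names, the reads, and the k-mer example are illustrative only, not taken from the reviewed paper; a real deployment would distribute the map and reduce phases across a cluster (e.g. via Hadoop).

```python
# Minimal single-process sketch of the MapReduce programming model,
# applied to counting 3-mers in a set of short DNA reads.
from collections import defaultdict

def map_phase(read, k=3):
    """Map: emit (k-mer, 1) pairs for every k-mer in one read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reduce_phase(pairs):
    """Reduce: sum the counts for each k-mer key."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["ATGCG", "GCGTA"]
pairs = [pair for read in reads for pair in map_phase(read)]
print(reduce_phase(pairs))  # {'ATG': 1, 'TGC': 1, 'GCG': 2, 'CGT': 1, 'GTA': 1}
```

In a distributed setting, the framework shuffles the emitted key-value pairs so that all counts for one k-mer reach the same reducer; the sketch above collapses that into a single dictionary.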
Attwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076
Owolabi, Sola; Idowu, Oluwafemi A.; Okocha, Foluke; Ogundare, Atinuke Omotayo
The study evaluated the utilization of electronic information resources by undergraduates in the Faculties of Education and the Social Sciences at the University of Ibadan. The study adopted a descriptive survey design with a study population of 1872 undergraduates in the Faculties of Education and the Social Sciences at the University of Ibadan, from which a…
Akussah, Maxwell; Asante, Edward; Adu-Sarkodee, Rosemary
The study investigates the relationship between the impact of electronic resources and their usage in academic libraries in Ghana, with evidence from Koforidua Polytechnic and All Nations University College, Ghana. The study adopted a quantitative approach, using a questionnaire to gather data and information. A valid response rate of 58.5% was obtained. SPSS…
The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream of the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera exist, Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10^-4 to 2×10^-2 substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias; cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.
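The G + C content figures quoted above (32% to 43%) come from a straightforward base-composition calculation over each genome sequence. A minimal sketch follows; the function name and the example sequence are illustrative, not from the article.

```python
# Sketch of a G+C content calculation of the kind behind the 32%-43%
# figures quoted for coronavirus genomes.
def gc_content(seq):
    """Percentage of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

print(round(gc_content("ATGCCGTA"), 1))  # 50.0
```

For real genomes one would typically parse a FASTA file first (e.g. with Biopython) and may need to decide how to treat ambiguous bases such as N, which this sketch simply counts in the denominator.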
Melero, Juan L; Andrades, Sergi; Arola, Lluís; Romeu, Antoni
Psoriasis is an immune-mediated, inflammatory and hyperproliferative disease of the skin and joints. The cause of psoriasis is still unknown. The fundamental feature of the disease is the hyperproliferation of keratinocytes and the recruitment of cells from the immune system in the region of the affected skin, which leads to deregulation of the expression of many well-known genes. Based on data mining and bioinformatic scripting, here we show a new dimension of the effect of psoriasis at the genomic level. Using our own pipeline of scripts in Perl and MySQL and based on the freely available NCBI Gene Expression Omnibus (GEO) database: DataSet Record GDS4602 (Series GSE13355), we explore the extent of the effect of psoriasis on gene expression in the affected tissue. We give greater insight into the effects of psoriasis on the up-regulation of some genes in the cell cycle (CCNB1, CCNA2, CCNE2, CDK1) or the dynamin system (GBPs, MXs, MFN1), as well as the down-regulation of typical antioxidant genes (catalase, CAT; superoxide dismutases, SOD1-3; and glutathione reductase, GSR). We also provide a complete list of the human genes and how they respond in a state of psoriasis. Our results show that psoriasis affects all chromosomes and many biological functions. If we further consider the stable and mitotically inheritable character of the psoriasis phenotype, and the influence of environmental factors, then it seems that psoriasis has an epigenetic origin. This fits well with the strong hereditary character of the disease as well as its complex genetic background.
Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein
…cancer immunotherapies has yet to be fulfilled. The insufficient efficacy of existing treatments can be attributed to a number of biological and technical issues. In this review, we detail the current limitations of immunotherapy target selection and design, and review computational methods to streamline … therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes…
Rattanaumpawan, Pinyo; Boonyasiri, Adhiratha; Vong, Sirenda; Thamlikitkul, Visanu
Electronic surveillance of infectious diseases involves rapidly collecting, collating, and analyzing vast amounts of data from interrelated multiple databases. Although many developed countries have invested in electronic surveillance for infectious diseases, the system still presents a challenge for resource-limited health care settings. We conducted a systematic review by performing a comprehensive literature search on MEDLINE (January 2000-December 2015) to identify studies relevant to electronic surveillance of infectious diseases. Study characteristics and results were extracted and systematically reviewed by 3 infectious disease physicians. A total of 110 studies were included. Most surveillance systems were developed and implemented in high-income countries; less than one-quarter were conducted in low- or middle-income countries. Information technologies can be used to facilitate the process of obtaining laboratory, clinical, and pharmacologic data for the surveillance of infectious diseases, including antimicrobial resistance (AMR) infections. These novel systems require greater resources; however, we found that using electronic surveillance systems could result in shorter times to detect targeted infectious diseases and improvement of data collection. This study highlights a lack of resources in areas where an effective, rapid surveillance system is most needed. The availability of information technology for the electronic surveillance of infectious diseases, including AMR infections, will facilitate the prevention and containment of such emerging infectious diseases.
Tella, Adeyinka; Orim, Faith; Ibrahim, Dauda Morenikeji; Memudu, Suleiman Ajala
The use of e-resources is now commonplace among academics in tertiary educational institutions the world over. Many academics including those in the universities are exploring the opportunities of e-resources to facilitate teaching and research. As the use of e-resources is increasing particularly among academics at the University of Ilorin,…
Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer
…within convenient, integrated "workbench" environments. Resource descriptions are the core element of registry and workbench systems, which are used both to help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them …, a software component that will ease the integration of bioinformatics resources in a workbench environment, using their description provided by the existing ELIXIR Tools and Data Services Registry.
Mohammad Reza Davarpanah
The purpose of this study was to investigate the usage of electronic journals at Ferdowsi University, Iran, based on e-metrics. The paper also aimed to emphasize cost-benefit analysis and the correlation between journal impact factors and usage data. In this study, the experience of the Ferdowsi University library with licensing and usage of electronic resources was evaluated by providing a cost-benefit analysis based on the cost and usage statistics of electronic resources. Vendor-provided data were also compared with local usage data. The usage data were collected by tracking web-based access locally and by collecting vendor-provided usage data. The data sources were one year of vendor-supplied e-resource usage data, from vendors such as Ebsco, Elsevier, Proquest, Emerald, Oxford and Springer, and local usage data collected from the Ferdowsi University web server. The study found that actual usage values differ between vendor-provided data and local usage data. Elsevier had the highest usage in searches, sessions and downloads. Statistics also showed that a small number of journals satisfy a significant amount of use, while the majority of journals were used less frequently and some were never used at all. The users preferred the PDF rather than the HTML format. The data in the subject profile suggested that the provided e-resources were best suited to certain subjects. There was no correlation between IF and electronic journal use. Monitoring the usage of e-resources has gained increasing importance for acquisition policy and budget decisions. The article provides information about local metrics for the six surveyed vendors/publishers, e.g. usage trends, requests per package, and cost per use as related to the scientific specialty of the university.
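The cost-per-use metric central to the study above is simply license cost divided by recorded use for each package. The sketch below shows the calculation; the vendor names and figures are invented for illustration and are not the Ferdowsi University data.

```python
# Minimal sketch of the cost-per-use metric used in e-resource
# cost-benefit analyses: annual license cost divided by downloads.
def cost_per_use(annual_cost, downloads):
    """License cost per full-text download; None when a package is unused."""
    return None if downloads == 0 else annual_cost / downloads

# Hypothetical packages: (annual cost in USD, full-text downloads).
packages = {"VendorA": (12000, 4800), "VendorB": (9000, 150), "VendorC": (3000, 0)}
for name, (cost, uses) in packages.items():
    print(name, cost_per_use(cost, uses))
```

In practice the usage counts would come from COUNTER-style vendor reports or local web-server logs, and as the study notes, the two sources can disagree, so the ratio is only as reliable as the usage data behind it.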
Background: Over the last couple of decades, the field of bioinformatics has helped spur medical discoveries that offer a better understanding of the genetic basis of disease, which in turn improves public health and saves lives. Concomitantly, support requirements for molecular biology researchers have grown in scope and complexity, incorporating specialized resources, technologies, and techniques. Case Presentation: To address this specific need among National Institutes of Health (NIH) intramural researchers, the NIH Library hired an expert bioinformatics trainer and consultant with a PhD in biochemistry to implement a bioinformatics support program. This study traces the program from its inception in 2009 to its present form. Discussion involves the particular skills of program staff, development of content, collection of resources, associated technology, assessment, and the impact of the program on the NIH community. Conclusion: Based on quantitative and qualitative data, the bioinformatics support program has been heavily used and appreciated by researchers. Continued success will depend on filling key staff positions, building on the existing program infrastructure, and keeping abreast of developments within the field to remain relevant and in touch with the medical research community utilizing bioinformatics services.
Rahman, Md. Anisur
Joint Master Degree in Digital Library Learning (DILL) The purpose of this study is to find out the challenges faced by international students in using electronic resources in the OUC learning center. This research used a qualitative approach, with purposive sampling, a non-probability technique, used for this study. A semi-structured, face-to-face interview method was used for data collection. The interview questions were open ended, and the discourse analysis metho...
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.
Krieger, E.; Vriend, G.
MOTIVATION: Due to the steadily growing computational demands in bioinformatics and related scientific disciplines, one is forced to make optimal use of the available resources. A straightforward solution is to build a network of idle computers and let each of them work on a small piece of a
H. M. Kravtsov
Improving communication in educational processes today requires new approaches to management arrangements and to the forming of educational policy in the field of distance learning, based on the use of modern information and communication technologies. An important step in this process is continuous monitoring of the development and implementation of information technology and, in particular, of distance learning systems in higher educational establishments. The main objectives of the monitoring are: to assess the impact on the development of distance learning of state educational standards, curricula, methodical and technical equipment and other factors; to reveal the factors that influence the implementation and outcomes of distance learning; and to compare the results of the functioning of educational institutions and distance education systems in order to determine the most efficient ways of their development. The paper presents the results of an analysis of the dependence of the quality of educational services on electronic educational resources. Trends in the development of educational services were studied by comparing the influence of the quality of electronic educational resources on the quality of educational services of higher pedagogical educational institutions of Ukraine in 2009-2010 and in 2012-2013. Overall, the analysis of the survey results allows the quality of modern education services to be evaluated as satisfactory; almost 70% of the success of their future development depends on the quality of the electronic educational resources used, and of distance learning systems in particular.
Chen, Xiaoling; Chang, Jeffrey T.
Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: firstname.lastname@example.org PMID:28052928
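The backwards-chaining idea described above can be sketched in a few lines: rules state which inputs can produce each data type, and the engine recurses from the requested output back to primary inputs to assemble an ordered workflow. This is a toy illustration of the general technique, not BETSY's actual rule language; the rule names and data types are invented.

```python
# Toy backwards-chaining planner: chain from a goal data type back to
# primary inputs, emitting steps in dependency order.
RULES = {
    "aligned_reads": ["raw_reads", "reference_genome"],  # alignment step
    "variant_calls": ["aligned_reads"],                  # variant-calling step
    "raw_reads": [],                                     # primary input
    "reference_genome": [],                              # primary input
}

def plan(goal, rules=RULES):
    """Return an ordered workflow (inputs first) that produces `goal`."""
    steps = []
    def visit(node):
        for dep in rules[node]:   # satisfy prerequisites first
            visit(dep)
        if node not in steps:     # avoid duplicating shared dependencies
            steps.append(node)
    visit(goal)
    return steps

print(plan("variant_calls"))
# ['raw_reads', 'reference_genome', 'aligned_reads', 'variant_calls']
```

A production system like the one described would additionally attach attributes to each data type (file format, platform, parameters) and choose among competing rules, which is where the inference engine earns its keep.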
Habib, Komal; Parajuly, Keshav; Wenzel, Henrik
Recovery of resources, in particular metals, from waste flows is widely seen as a prioritized option to reduce their potential supply constraints in the future. The current waste electrical and electronic equipment (WEEE) treatment system is more focused on bulk metals, where the recycling rate of specialty metals, such as rare earths, is negligible compared to their increasing use in modern products, such as electronics. This study investigates the challenges in recovering these resources in the existing WEEE treatment system, illustrated by following the material flows of resources in a conventional WEEE treatment plant in Denmark. Computer hard disk drives (HDDs) containing neodymium-iron-boron (NdFeB) magnets were selected as the case product for this experiment. The resulting output fractions were tracked until their final treatment in order to estimate the recovery potential of rare earth elements (REEs) and other resources contained in HDDs. The results show that of the 244 kg of HDDs treated, 212 kg, comprising mainly aluminum and steel, can finally be recovered from the metallurgic process. The results further demonstrate the complete loss of REEs in the existing shredding-based WEEE treatment processes. Dismantling and separate processing of NdFeB magnets from their end-use products can be a preferable option over shredding. However, it remains a technological and logistic challenge for the existing system.
A. M. Gavrilenko
The object of this research is the reference information resource "The Chronicle of the Odessa I. I. Mechnikov National University: Dates, Facts, Events". The main objective of the article is to set out the methodological basis for creating this information resource. One advantage of the resource is that it can be continuously updated and supplemented with new information. Its principal aim is to systematize material on the history of the Odessa I. I. Mechnikov National University from its foundation to the present, and to provide interactive access to information on key dates and the most significant events in the life of the university. The research is based on sources on the history of the university, the chronology of its historical development, and the formation of its infrastructure, staff, and scientific research. The resource analyzes the main stages of the development, functioning, and transformation of the Odessa University, and collects information on its divisions. To create the resource, the Scientific Library developed a working method and identified the main criteria for selecting data. The resource is of practical value to anyone interested in the history of the university, including historians and researchers of the history of science and of the city of Odessa.
Lipner, Rebecca S; Brossman, Bradley G; Samonte, Kelli M; Durning, Steven J
Electronic resources are increasingly used in medical practice. Their use during high-stakes certification examinations has been advocated by many experts, but whether doing so would affect the capacity to differentiate between high and low abilities is unknown. To determine the effect of electronic resources on examination performance characteristics. Randomized controlled trial. Medical certification program. 825 physicians initially certified by the American Board of Internal Medicine (ABIM) who passed the Internal Medicine Certification examination or sat for the Internal Medicine Maintenance of Certification (IM-MOC) examination in 2012 to 2015. Participants were randomly assigned to 1 of 4 conditions: closed book using typical or additional time, or open book (that is, UpToDate [Wolters Kluwer]) using typical or additional time. All participants took the same modified version of the IM-MOC examination. Primary outcomes included item difficulty (how easy or difficult the question was), item discrimination (how well the question differentiated between high and low abilities), and average question response time. Secondary outcomes included examination dimensionality (that is, the number of factors measured) and test-taking strategy. Item response theory was used to calculate question characteristics. Analysis of variance compared differences among conditions. Closed-book conditions took significantly less time than open-book conditions (mean, 79.2 seconds [95% CI, 78.5 to 79.9 seconds] vs. 110.3 seconds [CI, 109.2 to 111.4 seconds] per question). Mean discrimination was statistically significantly higher for open-book conditions (0.34 [CI, 0.32 to 0.35] vs. 0.39 [CI, 0.37 to 0.41] per question). A strong single dimension showed that the examination measured the same factor with or without the resource. Only 1 electronic resource was evaluated. Inclusion of an electronic resource with time constraints did not adversely affect test performance and did not change what the examination measured.
Molecular discrimination of Entamoeba histolytica and Entamoeba dispar by bioinformatics resources - DOI: 10.4025/actascihealthsci.v30i2.2375
Invasive amoebiasis, caused by Entamoeba histolytica, is microscopically indistinguishable from the non-pathogenic species Entamoeba dispar. This study therefore aimed to differentiate E. histolytica from E. dispar by molecular techniques, with the aid of bioinformatics tools. The analysis was performed using the National Center for Biotechnology Information databases; based on a sequence-similarity search, the cysteine synthase gene was chosen as the target. A primer pair was designed (Web Primer program) and the restriction enzyme TaqI was selected (Web Cutter program). After enzymatic digestion, the amplified fragment was divided in two, one of 255 bp and the other of 554 bp, a pattern characteristic of E. histolytica. In the absence of a cut, the fragment remained 809 bp, corresponding to E. dispar.
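The digestion pattern this record describes is easy to reproduce in a short sketch. TaqI recognizes T^CGA and cuts after the T; the mock 809 bp amplicon below is a hypothetical placeholder (runs of A with one engineered site), not the real cysteine synthase sequence:

```python
def taqi_digest(seq):
    """Cut a DNA sequence at every TaqI site (T^CGA, cleaving between
    T and CGA) and return the resulting fragment lengths."""
    site, cut_offset = "TCGA", 1
    fragments, start, pos = [], 0, 0
    while True:
        pos = seq.find(site, pos)
        if pos == -1:
            break
        cut = pos + cut_offset       # cut point falls just after the T
        fragments.append(cut - start)
        start = cut
        pos += 1
    fragments.append(len(seq) - start)  # trailing fragment
    return fragments

# Mock 809 bp amplicon with a single TaqI site placed so the cut falls
# after base 255, mimicking the E. histolytica pattern (255 + 554 bp);
# an amplicon with no site stays 809 bp, as reported for E. dispar.
amplicon = "A" * 254 + "TCGA" + "A" * 551
print(taqi_digest(amplicon))
```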
Picardi, Ernesto; Regina, Teresa M R; Verbitskiy, Daniil; Brennicke, Axel; Quagliariello, Carla
RNA editing is a post-transcriptional molecular process whereby the information in a genetic message is modified from that in the corresponding DNA template by means of nucleotide substitutions, insertions and/or deletions. It occurs mostly in organelles by clade-specific, diverse and unrelated biochemical mechanisms. RNA editing events have been annotated in primary databases such as GenBank and, at a more sophisticated level, in the specialized databases REDIdb, dbRES and EdRNA. At present, REDIdb is the only freely available database that focuses on the organellar RNA editing process and annotates each editing modification in its biological context. Here we present an updated and upgraded release of REDIdb with a web interface refurbished with graphical and computational facilities that improve RNA editing investigations. Details of the REDIdb features and novelties are illustrated and compared to other RNA editing databases. REDIdb is freely queried at http://biologia.unical.it/py_script/REDIdb/. Copyright © 2010 Elsevier B.V. and Mitochondria Research Society. All rights reserved.
DNA sequencing is the deciphering of hereditary information. It is an indispensable prerequisite for many biotechnical applications and technologies and the continual acquisition of genomic information is very important. This opens the door not only for further research and better understanding of the architectural plan of life ...
Large-scale genomic data for peanut have only become available in the last few years, with the advent of low-cost sequencing technologies. To make the data accessible to researchers and to integrate across diverse types of data, the International Peanut Genomics Consortium funded the development of ...
The thesis focuses on two bioinformatics research topics: the development of tools for an efficient and reliable identification of single nucleotides polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapter 2, 3 and 4), and the subsequent
Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...
An integrative bioinformatics pipeline for the genomewide identification of novel porcine microRNA genes. Wei Fang, Na Zhou, Dengyun Li, Zhigang Chen, Pengfei Jiang and Deli Zhang. J. Genet. 92, 587–593. Figure 1. Primary sequence of the predicted SSc-mir-2053 precursor and locations of some terms in the secondary ...
Lelieveld, S.H.; Veltman, J.A.; Gilissen, C.F.
With the widespread adoption of next generation sequencing technologies by the genetics community and the rapid decrease in costs per base, exome sequencing has become a standard within the repertoire of genetic experiments for both research and diagnostics. Although bioinformatics now offers
Dec 6, 2013 ... The majority of miRNAs in pig (Sus scrofa), an important domestic animal, remain unknown. From this perspective, we attempted the genomewide identification of novel porcine miRNAs. Here, we propose a novel integrative bioinformatics pipeline to identify conservative and non-conservative novel ...
Thus, there is a need for appropriate strategies to introduce the basic components of this emerging scientific field to part of the African populace through the development of an online distance-education learning tool. This study involved the design of a bioinformatics online distance-education tool and an implementation of ...
reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is one of the most important steps in successful DNA sequencing. Various bioinformatics programs are available for the selection of primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the ...
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…
Nielsen, Henrik; Sperotto, Maria Maddalena
an artificial neural network (ANN)-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect...
Ondrej, Vladan; Dvorak, Petr
Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
Boyle, John A.
Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…
In this thesis, I detail my 4-year efforts in developing bioinformatics tools and algorithms to address the growing demands of current proteomics endeavors, covering a range of facets such as large-scale protein expression profiling, charting post-translation modifications as well as
Doing business according to contemporary requirements pushes companies toward the continuous use of modern managerial tools, such as a human resource information system (HRIS) and electronic recruitment (ER). Human resources have been recognized as crucial resources and the main source of competitive advantage in creating successful business performance. In order to attract and select the top employees, companies use quality information software for attracting internal candidates, and electronic recruitment for attracting the best possible external candidates. The main aim of this paper is to research the level of usage of HRIS and ER within medium-sized and large Croatian companies. An additional aim is to evaluate the relationship between the usage of these modern managerial tools and the overall success of human resource management within these companies. For the purpose of this paper, primary and secondary research was conducted to reveal the level of usage of HRIS and ER as well as the overall success of human resource management in Croatian companies. The companies' classification (HRIS and ER) was done using the non-hierarchical k-means cluster method and the nonparametric Kruskal-Wallis test. Further, the companies were ranked by the multicriteria PROMETHEE method. Relevant nonparametric tests were used to test the companies' overall HRM. Finally, a binary logistic regression was estimated, relating the binary HRM variable to HRIS development. After detailed research, it can be concluded that the majority of large Croatian companies apply HRIS (with a positive relation to HRM performance), but a certain degree of further development is still required.
What are you working on? You have certainly been asked that question many times, whether it be at a Saturday night party, during a discussion with your neighbors, or at a family gathering. Communicating with a lay audience about scientific subjects and making them attractive is a difficult task. But difficult or not, you will have to do it for many years, not only with your family and friends, but also with your colleagues and collaborators. So, better learn now! Although not usually taught, the ability to explain your work to others is an essential skill in science, where communication plays a key role. Using some examples of the French Regional Student Group activities, we discuss here (i) why it is important to have such communication skills, (ii) how you can get involved in these activities by using existing resources or working with people who have previous experience, and (iii) what you get out of this amazing experience. We aim to motivate you and provide you with tips and ideas to get involved in promoting scientific activities while getting all the benefits.
Manyam, Ganiraju; Payton, Michelle A.; Roth, Jack A.; Abruzzo, Lynne V.; Coombes, Kevin R.
With the proliferation of high-throughput technologies, genome-level data analysis has become common in molecular biology. Bioinformaticians are developing extensive resources to annotate and mine biological features from high-throughput data. The underlying database management systems for most bioinformatics software are based on a relational model. Modern non-relational databases offer an alternative that has flexibility, scalability, and a non-rigid design schema. Moreover, with an accelerated development pace, non-relational databases like CouchDB can be ideal tools to construct bioinformatics utilities. We describe CouchDB by presenting three new bioinformatics resources: (a) geneSmash, which collates data from bioinformatics resources and provides automated gene-centric annotations, (b) drugBase, a database of drug-target interactions with a web interface powered by geneSmash, and (c) HapMap-CN, which provides a web interface to query copy number variations from three SNP-chip HapMap datasets. In addition to the web sites, all three systems can be accessed programmatically via web services. PMID:22609849
Beck, Tim N; Chikwem, Adaeze J; Solanki, Nehal R; Golemis, Erica A
Bioinformatic approaches are intended to provide systems level insight into the complex biological processes that underlie serious diseases such as cancer. In this review we describe current bioinformatic resources, and illustrate how they have been used to study a clinically important example: epithelial-to-mesenchymal transition (EMT) in lung cancer. Lung cancer is the leading cause of cancer-related deaths and is often diagnosed at advanced stages, leading to limited therapeutic success. While EMT is essential during development and wound healing, pathological reactivation of this program by cancer cells contributes to metastasis and drug resistance, both major causes of death from lung cancer. Challenges of studying EMT include its transient nature, its molecular and phenotypic heterogeneity, and the complicated networks of rewired signaling cascades. Given the biology of lung cancer and the role of EMT, it is critical to better align the two in order to advance the impact of precision oncology. This task relies heavily on the application of bioinformatic resources. Besides summarizing recent work in this area, we use four EMT-associated genes, TGF-β (TGFB1), NEDD9/HEF1, β-catenin (CTNNB1) and E-cadherin (CDH1), as exemplars to demonstrate the current capacities and limitations of probing bioinformatic resources to inform hypothesis-driven studies with therapeutic goals. Copyright © 2014 the American Physiological Society.
Kovarik, Dina N.; Patterson, Davis G.; Cohen, Carolyn; Sanders, Elizabeth A.; Peterson, Karen A.; Porter, Sandra G.; Chowning, Jeanne Ting
We investigated the effects of our Bio-ITEST teacher professional development model and bioinformatics curricula on cognitive traits (awareness, engagement, self-efficacy, and relevance) in high school teachers and students that are known to accompany a developing interest in science, technology, engineering, and mathematics (STEM) careers. The program included best practices in adult education and diverse resources to empower teachers to integrate STEM career information into their classrooms. The introductory unit, Using Bioinformatics: Genetic Testing, uses bioinformatics to teach basic concepts in genetics and molecular biology, and the advanced unit, Using Bioinformatics: Genetic Research, utilizes bioinformatics to study evolution and support student research with DNA barcoding. Pre–post surveys demonstrated significant growth (n = 24) among teachers in their preparation to teach the curricula and infuse career awareness into their classes, and these gains were sustained through the end of the academic year. Introductory unit students (n = 289) showed significant gains in awareness, relevance, and self-efficacy. While these students did not show significant gains in engagement, advanced unit students (n = 41) showed gains in all four cognitive areas. Lessons learned during Bio-ITEST are explored in the context of recommendations for other programs that wish to increase student interest in STEM careers. PMID:24006393
Accardi, Luigi; Freudenberg, Wolfgang; Ohya, Masanori
Use of cryptographic ideas to interpret biological phenomena (and vice versa) / M. Regoli -- Discrete approximation to operators in white noise analysis / Si Si -- Bogoliubov type equations via infinite-dimensional equations for measures / V. V. Kozlov and O. G. Smolyanov -- Analysis of several categorical data using measure of proportional reduction in variation / K. Yamamoto ... [et al.] -- The electron reservoir hypothesis for two-dimensional electron systems / K. Yamada ... [et al.] -- On the correspondence between Newtonian and functional mechanics / E. V. Piskovskiy and I. V. Volovich -- Quantile-quantile plots: An approach for the inter-species comparison of promoter architecture in eukaryotes / K. Feldmeier ... [et al.] -- Entropy type complexities in quantum dynamical processes / N. Watanabe -- A fair sampling test for Ekert protocol / G. Adenier, A. Yu. Khrennikov and N. Watanabe -- Brownian dynamics simulation of macromolecule diffusion in a protocell / T. Ando and J. Skolnick -- Signaling network of environmental sensing and adaptation in plants: Key roles of calcium ion / K. Kuchitsu and T. Kurusu -- NetzCope: A tool for displaying and analyzing complex networks / M. J. Barber, L. Streit and O. Strogan -- Study of HIV-1 evolution by coding theory and entropic chaos degree / K. Sato -- The prediction of botulinum toxin structure based on in silico and in vitro analysis / T. Suzuki and S. Miyazaki -- On the mechanism of D-wave high T_c superconductivity by the interplay of Jahn-Teller physics and Mott physics / H. Ushio, S. Matsuno and H. Kamimura.
Abstract Background Bioinformatics is commonly presented as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. Results To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Services metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. Conclusions The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
Kane, Danielle; Schneidewind, Jeff
As part of a focused, methodical, and evaluative approach to emerging technologies, QR codes are one of many new technologies being used by the UC Irvine Libraries. QR codes provide simple connections between print and virtual resources. In summer 2010, a small task force began to investigate how QR codes could be used to provide information and…
Alem, Leila; McLean, Alistair
Community participation is central to achieving sustainable natural resource management. A prerequisite to informed participation is that community and stakeholder groups have access to different knowledge sources, are more closely attuned to the different issues and viewpoints, and are sufficiently equipped to understand and maybe resolve complex…
Blumberg, Roger B.
This paper describes a hypermedia resource, called MendelWeb, that integrates elementary biology, discrete mathematics, and the history of science. MendelWeb is constructed from Gregor Mendel's 1865 paper, "Experiments in Plant Hybridization". An English translation of Mendel's paper, which is considered to mark the birth of classical and…
The aim of this review is to discuss the importance of bioinformatics and emphasize the need to acquire bioinformatics training and skills so as to maximize its potentials for improved delivery of animal health. In this review, bioinformatics is introduced, challenges to effective animal disease diagnosis, prevention and control, ...
Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.
There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…
In April 2007, the University of Washington Libraries debuted WorldCat Local (WCL), a localized version of the WorldCat database that interoperates with a library's integrated library system and fulfillment services to provide a single-search interface for a library's physical and electronic content. This brief will describe how WCL incorporates a…
For libraries to continue to lead in this industry generally, and academic libraries in particular, deliberate effort must be made to bring IT education to every potential user of the libraries. This, however, must be done based on available data. That is what this study sought to provide: a survey of the use of electronic ...
Brown, Melvin Marlo; And Others
Some of the administrative and organizational issues in creating a gopher, specifically a library gopher for university libraries, are discussed. In 1993 the Electronic Collections Task Force of the New Mexico State University library administration began to develop a library-based gopher system that would enable users to have unlimited access to…
Woodruff, Allison; Aoki, Paul M.; Grinter, Rebecca E.; Hurst, Amy; Szymanski, Margaret H.; Thornton, James D.
This paper describes an electronic guidebook, "Sotto Voce," that enables visitors to share audio information by eavesdropping on each other's guidebook activity. The first section discusses the design and implementation of the guidebook device, key aspects of its user interface, the design goals for the audio environment, the eavesdropping…
Denaxas, Spiros C; George, Julie; Herrett, Emily; Shah, Anoop D; Kalra, Dipak; Hingorani, Aroon D; Kivimaki, Mika; Timmis, Adam D; Smeeth, Liam; Hemingway, Harry
The goal of cardiovascular disease (CVD) research using linked bespoke studies and electronic health records (CALIBER) is to provide evidence to inform health care and public health policy for CVDs across different stages of translation, from discovery, through evaluation in trials to implementation, where linkages to electronic health records provide new scientific opportunities. The initial approach of the CALIBER programme is characterized as follows: (i) Linkages of multiple electronic health record sources: examples include linkages between the longitudinal primary care data from the Clinical Practice Research Datalink, the national registry of acute coronary syndromes (Myocardial Ischaemia National Audit Project), hospitalization and procedure data from Hospital Episode Statistics and cause-specific mortality and social deprivation data from the Office for National Statistics. Current cohort analyses involve a million people in initially healthy populations and disease registries with ∼10^5 patients. (ii) Linkages of bespoke investigator-led cohort studies (e.g. UK Biobank) to registry data (e.g. Myocardial Ischaemia National Audit Project), providing new means of ascertaining, validating and phenotyping disease. (iii) A common data model in which routine electronic health record data are made research ready, and sharable, by defining and curating with meta-data >300 variables (categorical, continuous, event) on risk factors, CVDs and non-cardiovascular comorbidities. (iv) Transparency: all CALIBER studies have an analytic protocol registered in the public domain, and data are available (safe haven model) for use subject to approvals. For more information, e-mail email@example.com PMID:23220717
Kouskoumvekaki, Irene; Shublaq, Nour; Brunak, Søren
As both the amount of generated biological data and the available compute power increase, computational experimentation is no longer the exclusive domain of bioinformaticians but is moving across all biomedical domains. For bioinformatics to realize its translational potential, domain experts need access to user-friendly solutions to navigate, integrate and extract information out of biological databases, as well as to combine tools and data resources in bioinformatics workflows. In this review, we present services that assist biomedical scientists in incorporating bioinformatics tools into their research. We review recent applications of Cytoscape, BioGPS and DAVID for data visualization, integration and functional enrichment. Moreover, we illustrate the use of Taverna, Kepler, GenePattern, and Galaxy as open-access workbenches for bioinformatics workflows. Finally, we mention services ...
Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba
The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.
The human microbiome has received much attention because many studies have reported that the human gut microbiome is associated with several diseases. The very large datasets produced by these kinds of studies mean that bioinformatics approaches are crucial for their analysis. Here, we systematically reviewed bioinformatics tools that are commonly used in microbiome research, including a typical pipeline and software for sequence alignment, abundance profiling, enterotype determination, taxonomic diversity, identifying differentially abundant species/genes, gene cataloging, and functional analyses. We also summarized the algorithms and methods used to define metagenomic species and co-abundance gene groups to expand our understanding of unclassified and poorly understood gut microbes that are undocumented in the current genome databases. Additionally, we examined the methods used to identify metagenomic biomarkers based on the gut microbiome, which might help to expand the knowledge and approaches for disease detection and monitoring.
Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.
RNA bioinformatics and computational RNA biology emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure-preserving) base changes. These methods have been developed further and applied in computational screens of genomic sequence. Furthermore, a number of additional directions have emerged, including methods to search for RNA 3D structure, RNA-RNA interactions, and the design of interfering RNAs (RNAi), as well as methods for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analyses of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology.
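As a concrete starting point for the single-sequence secondary-structure prediction this record describes, here is a minimal Nussinov-style dynamic program that maximizes the number of canonical and wobble base pairs. It is the textbook baseline (with a minimum hairpin loop of 3), not any specific tool from the review:

```python
# Minimal Nussinov-style base-pair maximization for one RNA sequence.
# Canonical Watson-Crick pairs plus the G-U wobble pair are allowed.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def max_pairs(seq, min_loop=3):
    """Return the maximum number of nested base pairs in `seq`,
    requiring at least `min_loop` unpaired bases in every hairpin loop."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best for seq[i..j]
    for span in range(min_loop + 1, n):        # fill shorter spans first
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                # case 1: j stays unpaired
            for k in range(i, j - min_loop):   # case 2: j pairs with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(max_pairs("GGGAAAUCCC"))
```

On the hairpin-like sequence above, the three G-C closing pairs are found, while the A-U pair that would leave a two-base loop is correctly rejected by the loop constraint.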
Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with a query length of 511 amino acids). © 2010 IEEE.
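For reference, the recurrence that such GPU/CPU implementations accelerate can be written as a plain serial Smith-Waterman scorer. The scoring scheme below (match +2, mismatch -1, linear gap -1) is an illustrative assumption, not the parameters used in the paper.

```python
# Minimal serial Smith-Waterman local alignment score (reference CPU version).
# h[i][j] holds the best local alignment score ending at a[i-1], b[j-1];
# the zero in the max() is what makes the alignment local.

def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(smith_waterman("HEAGAWGHEE", "PAWHEAE"))
```

Every cell update is independent along anti-diagonals, which is exactly the parallelism that GPU implementations exploit.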
Samish, Ilan; Bourne, Philip E; Najmanovich, Rafael J
The field of structural bioinformatics and computational biophysics has undergone a revolution in the last 10 years, developments that are captured annually through the 3DSIG meeting, upon which this article reflects. An increase in accessible data, computational resources and methodology has resulted in an increase in the size and resolution of studied systems and in the complexity of the questions amenable to research. Concomitantly, the parameterization and efficiency of the methods have markedly improved, along with their cross-validation against other computational and experimental results. The field exhibits an ever-increasing integration with biochemistry, biophysics and other disciplines. In this article, we discuss recent achievements along with current challenges within the field. © The Author 2014. Published by Oxford University Press.
Gries, Nadja von; Wilts, Claas Henning
Critical metals are in great demand by the electrical and electronics industry, so waste electrical and electronic equipment represents a significant source of secondary raw materials. Owing to low recycling rates and the concomitant supply risks associated with critical metals, the closure of these material cycles is highly relevant to the German economy. Losses of these metals occur from collection until their material recovery, along the entire disposal chain of waste electrical and electroni...
Fang, Wai-Chi; Lue, Jaw-Chyng
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
PURPOSE: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models, where knowledge extraction and explanation of the reasoning behind the classification model are possible. METHODS: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach, where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. RESULTS: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase of accuracy in less complex, visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. CONCLUSIONS: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from the usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly...
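The central constraint in the abstract, tuning a tree by its dimensions rather than by any performance measure, can be sketched as a toy tree builder whose only stopping rule is a depth cap. This is a from-scratch illustration with an invented dataset, not the authors' environment.

```python
# Toy CART-style decision tree constrained only by max_depth (a "dimension"
# of the tree), never by a classification performance measure.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    """Return (feature, threshold, weighted_gini) of the best split, or None."""
    base, best = gini(labels), None
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[f] <= t]
            right = [y for r, y in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            w = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if base - w > 1e-12 and (best is None or w < best[2]):
                best = (f, t, w)
    return best

def build(rows, labels, max_depth):
    if max_depth == 0 or len(set(labels)) == 1 or \
            (split := best_split(rows, labels)) is None:
        return Counter(labels).most_common(1)[0][0]   # leaf: majority class
    f, t, _ = split
    li = [i for i, r in enumerate(rows) if r[f] <= t]
    ri = [i for i, r in enumerate(rows) if r[f] > t]
    return (f, t,
            build([rows[i] for i in li], [labels[i] for i in li], max_depth - 1),
            build([rows[i] for i in ri], [labels[i] for i in ri], max_depth - 1))

def predict(node, row):
    while isinstance(node, tuple):
        f, t, lo, hi = node
        node = lo if row[f] <= t else hi
    return node

X = [[1, 5], [2, 4], [3, 1], [4, 2], [5, 5], [6, 1]]
y = ["a", "a", "b", "b", "a", "b"]
tree = build(X, y, max_depth=2)
print([predict(tree, r) for r in X])
```

Shrinking `max_depth` trades accuracy for comprehensibility directly, without ever consulting a held-out score, which mirrors the bias-avoidance argument in the abstract.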
Christopher L Williams
Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds the potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework...
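A minimal sketch of the microservice pattern the communication advocates: one process, one narrowly scoped responsibility, independently deployable. The service name, route, and GC-content task are invented for illustration and are not from the communication itself.

```python
# A single-responsibility HTTP microservice using only the standard library.
# It exposes one endpoint, /gc, returning the GC content of a DNA sequence.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class GCContentService(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/gc":
            self.send_error(404)          # out of scope for this service
            return
        seq = parse_qs(url.query).get("seq", [""])[0].upper()
        gc = sum(seq.count(b) for b in "GC") / len(seq) if seq else 0.0
        body = json.dumps({"seq": seq, "gc": gc}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):         # keep the demo quiet
        pass

def serve(port=8080):
    """Each microservice owns its own process and port."""
    HTTPServer(("127.0.0.1", port), GCContentService).serve_forever()
```

Because the service's functional scope is a single computation behind a stable HTTP contract, it can be versioned, replaced, or scaled without touching the rest of a pipeline.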
Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas
Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: some minor modifications to the Galaxy system, a collection of tools and an integration into the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be integrated in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy. Also, docker images for ballaxy are available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL. © The Author 2014. Published by Oxford University Press. All rights reserved. For
Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M.; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V.; Ma'Ayan, Avi
Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated 'canned' analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also contains the indexing of 4,901 published bioinformatics software tools, and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools.
Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
Cohen, K Bretonnel; Hunter, Lawrence E
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
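The rule-based approach named in the abstract can be illustrated with a one-pattern tagger. The regular expression and example sentence are invented, and the spurious 'DNA' hit shows the ambiguity that, as the abstract notes, characterizes every level of linguistic structure.

```python
# A crude rule-based tagger for gene-symbol-like tokens: an uppercase letter
# followed by at least two more uppercase letters or digits.
import re

RULE = re.compile(r"\b[A-Z][A-Z0-9]{2,}\b")

def rule_based_genes(text):
    """Return candidate gene symbols matched by the hand-written rule."""
    return RULE.findall(text)

print(rule_based_genes("Mutations in BRCA1 and TP53 alter DNA repair."))
```

The rule correctly finds BRCA1 and TP53 but also tags DNA, a false positive a statistical (machine-learning-based) tagger would be trained to reject; hybrid systems combine both signals.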
This paper aims to determine the preference for and use of electronic information and resources by blind/visually impaired users in the leading National Capital Region (NCR) libraries of India. Survey methodology was used as the basic research tool for data collection, with the help of questionnaires. The 125 users surveyed across the five libraries were selected randomly on the basis of their willingness to participate and their experience of working in digital environments. The survey results were tabulated and analyzed with descriptive statistical methods using Excel software and Stata version 11. The findings reveal that ICT has a positive impact on the lives of people with disabilities, as it helps them to work independently and increases their level of confidence. The Internet is the most preferred medium of access to information among the majority of blind/visually impaired users. The 'complexity of content available on the net' was found to be the major challenge faced during Internet use by blind users of NCR libraries. Audio books on CDs/DVDs and DAISY books are the most preferred electronic resources among the majority of blind/visually impaired users. This study will help library professionals and organizations/institutions serving people with disabilities to develop effective library services for blind/visually impaired users in the digital environment, on the basis of its findings on information usage behavior.
Ramli, Rindra M.
An exploratory study on the KAUST library's use of LibAnswers in resolving electronic resources questions received through LibAnswers. It describes the findings from the questions received, and the author makes suggestions based on these findings to improve reference services in responding to e-resources questions.
Fortinsky, Kyle J; Fournier, Marc R; Benchimol, Eric I
Patients with inflammatory bowel disease (IBD) are increasingly turning to the Internet to research their condition and engage in discourse on their experiences. This has resulted in new dynamics in the relationship between providers and their patients, with misinformation and advertising potentially presenting barriers to the cooperative patient-provider partnership. This article addresses important issues of online IBD-related health information and social media activity, such as quality, reliability, objectivity, and privacy. We reviewed the medical literature on the quality of online information provided to IBD patients, and summarized the most commonly accessed Websites related to IBD. We also assessed the activity on popular social media sites (such as Facebook, Twitter, and YouTube), and evaluated currently available applications for use by IBD patients and providers on mobile phones and tablets. Through our review of the literature and currently available resources, we developed a list of recommended online resources to strengthen patient participation in their care by providing reliable, comprehensive educational material. Copyright © 2011 Crohn's & Colitis Foundation of America, Inc.
Kaminski, Stanislaw; Kaminski, Piotr; Kaminska, Dorota; Trzcinski, Jerzy
The use of mechanical sieves has a great impact on measurement results, because the occurrence of anisometric particles causes undercounting of the average size. Such errors can be avoided by using opto-electronic measuring devices that enable measurement of particles from 10 μm up to a few dozen millimetres in size. The results of measurement of each particle-size fraction are summed up proportionally to its weight with the use of the Elsieve software system, and a particle-size distribution can be obtained for every type of material. The software allows further statistical interpretation of the results. A beam of infrared radiation identifies the size of particles and counts them precisely; every particle is represented by an electronic impulse proportional to its size. Measurement of particles in aqueous suspension, replacing the hydrometer method, can be carried out using the IPS L analyser (range from 0.2 to 600 μm). The IPS UA analyser (range from 0.5 to 2000 μm) is designed for measurement in air. An ultrasonic adapter enables measurements of moist and aggregated particles from 0.5 to 1000 μm. The construction and software allow determination of the second dimension of a particle, its shape coefficient and specific surface area. The AWK 3D analyser (range from 0.2 to 31.5 mm) is devoted to measurement of various powdery materials with subsequent determination of particle shape. The AWK B analyser (range from 1 to 130 mm) measures materials of thick granulation and the shape of the grains. The presented method of measurement considerably accelerates and facilitates the study of granulometric composition.
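The weighted summation of per-fraction results that the abstract describes can be sketched as follows. The bin labels, fraction weights, and proportions are made-up example data, not Elsieve's actual input format.

```python
# Combine per-fraction particle-size distributions into one overall
# distribution, weighting each fraction by its mass.

def combine_fractions(fractions):
    """fractions: list of (weight_g, {bin_label: proportion}) per sieve fraction."""
    total = sum(w for w, _ in fractions)
    combined = {}
    for weight, dist in fractions:
        for bin_label, p in dist.items():
            combined[bin_label] = combined.get(bin_label, 0.0) + p * weight / total
    return combined

fine = (30.0, {"<63um": 0.8, "63-200um": 0.2})
coarse = (70.0, {"63-200um": 0.5, ">200um": 0.5})
print(combine_fractions([fine, coarse]))
```

Each fraction contributes to the overall distribution in proportion to its mass, so the combined proportions still sum to 1.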
Gurwitz, Kim T; Aron, Shaun; Panji, Sumir; Maslamoney, Suresh; Fernandes, Pedro L; Judge, David P; Ghouila, Amel; Domelevo Entfellner, Jean-Baka; Guerfali, Fatma Z; Saunders, Colleen; Mansour Alzohairy, Ahmed; Salifu, Samson P; Ahmed, Rehab; Cloete, Ruben; Kayondo, Jonathan; Ssemwanga, Deogratius; Mulder, Nicola
Africa is not unique in its need for basic bioinformatics training for individuals from a diverse range of academic backgrounds. However, particular logistical challenges in Africa, most notably access to bioinformatics expertise and internet stability, must be addressed in order to meet this need on the continent. H3ABioNet (www.h3abionet.org), the Pan African Bioinformatics Network for H3Africa, has therefore developed an innovative, free-of-charge "Introduction to Bioinformatics" course, taking these challenges into account as part of its educational efforts to provide on-site training and develop local expertise inside its network. A multiple-delivery-mode learning model was selected for this 3-month course in order to increase access to (mostly) African, expert bioinformatics trainers. The content of the course was developed to include a range of fundamental bioinformatics topics at the introductory level. For the first iteration of the course (2016), classrooms with a total of 364 enrolled participants were hosted at 20 institutions across 10 African countries. To ensure that classroom success did not depend on stable internet, trainers pre-recorded their lectures, and classrooms downloaded and watched these locally during biweekly contact sessions. The trainers were available via video conferencing to take questions during contact sessions, as well as via online "question and discussion" forums outside of contact session time. This learning model, developed for a resource-limited setting, could easily be adapted to other settings.
Adanu, Rmk; Adu-Sarkodie, Y; Opare-Sem, O; Nkyekyer, K; Donkor, P; Lawson, A; Engleberg, N C
To determine whether a group of Ghanaian students are able to use electronic learning material easily, and whether they perceive this method of learning as acceptable. SETTING: The University of Ghana Medical School (UGMS) and the School of Medical Sciences (SMS), Kwame Nkrumah University of Science and Technology (KNUST). PARTICIPANTS: One hundred and fifty third-year medical students at SMS and nineteen fifth-year medical students at UGMS. METHODS: Two e-learning materials were developed, one on the polymerase chain reaction and the other on total abdominal hysterectomy, and these were distributed to selected medical students. Two weeks after the distribution of the programmes, a one-page, self-administered questionnaire was distributed to the target groups of students at the two institutions. Ninety-three percent (139) of respondents at KNUST and 95% (18) at UG reported having access to a computer for learning purposes. All of the UG students viewed the TAH programme; 82% (130) of the KNUST students viewed the PCR animations. All students who viewed the programmes at both institutions indicated that the e-learning programmes were "more effective" in comparison to other methods of learning. Computer ownership or availability at both medical schools is sufficient to permit the distribution and viewing of e-learning materials by students, and the medical students considered both programmes to be very helpful.
Sinha, Sunil K; Crain, Barbara; Flickinger, Katie; Calkins, Hugh; Rickard, John; Cheng, Alan; Berger, Ronald; Tomaselli, Gordon; Marine, Joseph E
The feasibility and safety of postmortem cardiovascular implantable electronic device (CIED; pacemaker or defibrillator) retrieval for reuse has been shown. To date, studies indicate a low yield of reusable postmortem CIEDs (17%-30%). The purpose of this study was to test the hypothesis that a higher rate of reusable CIEDs would be identified upon postmortem retrieval when an institutional protocol for systematic and routine acquisition, interrogation, reprogramming, and manufacturer analysis was used. Over a 6-year period, all subjects referred for autopsy underwent concomitant CIED pulse generator retrieval and enrollment in the Johns Hopkins Post-Mortem CIED Registry. CIEDs were interrogated, reprogrammed, and submitted for manufacturer analysis. In total, 84 autopsies had CIEDs (37 pacemakers, 47 implantable cardioverter-defibrillators). CIEDs were implanted 2.84 ± 2.32 years before death, with 30% implanted less than 1 year before death. More than 60% of pacemakers and more than 50% of defibrillators demonstrated normal functional status, with projected longevities of more than 7 years on average. Formation of a national hospital-based "CIED donor network" would facilitate larger-scale charitable efforts in underserved countries. Copyright © 2016 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Evolution has shaped life forms for billions of years. Domestication is an accelerated process that can be used as a model for evolutionary change. The aim of this thesis project has been to carry out extensive bioinformatic analyses of whole-genome sequencing data to reveal SNPs, InDels and selective sweeps in the chicken, pig and dog genomes. Pig genome sequencing revealed loci under selection for elongation of the back and an increased number of vertebrae, associated with the NR6A1, PLAG1,...
Karimzadeh, Mehran; Hoffman, Michael M
Investing in documenting your bioinformatics software well can increase its impact and save you time. To maximize the effectiveness of your documentation, we suggest following the guidelines we propose here. We recommend providing multiple avenues for users to use your research software, including a navigable HTML interface with a quick start, useful help messages with detailed explanations, and thorough examples for each feature of your software. By following these guidelines, you can ensure that your hard work maximally benefits yourself and others. © The Author 2017. Published by Oxford University Press.
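One of these guidelines, useful help messages with detailed explanations and examples, can be followed in Python with the standard library's argparse. The tool name, options, and example invocation below are hypothetical.

```python
# A command-line interface whose --help output documents defaults,
# explains each option, and ends with a worked example.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        prog="kmercount",
        description="Count k-mers in a FASTA file (hypothetical example tool).",
        epilog="Example: kmercount -k 21 reads.fasta > counts.tsv",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,  # shows defaults
    )
    parser.add_argument("fasta", help="input FASTA file")
    parser.add_argument("-k", type=int, default=21,
                        help="k-mer length; odd values avoid palindromic k-mers")
    return parser

args = build_parser().parse_args(["reads.fasta", "-k", "31"])
print(args.k)
```

`ArgumentDefaultsHelpFormatter` puts every default into the help text automatically, so the help message and the behavior cannot drift apart.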
The general audience for these lectures is mainly physicists, computer scientists, engineers, and members of the general public wanting to know more about what's going on in the biosciences. What is bioinformatics, and why is all this fuss being made about it? What is this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.
Lue, Jaw-Chyng L.; Fang, Wai-Chi
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, is invented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
Handl, Julia; Kell, Douglas B; Knowles, Joshua
This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts" giving rise to multiple objectives; these are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
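The object at the heart of multiobjective optimization is the Pareto (non-dominated) set. A minimal sketch for two objectives to be minimized; the candidate points are invented example data.

```python
# Identify the Pareto front of a set of candidate solutions,
# where each point is a tuple of objective values to minimize.

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

candidates = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(pareto_front(candidates))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

Because no single point on the front is best in both objectives at once, a multiobjective method returns the whole front and leaves the trade-off to the analyst, which is the usage pattern the review's "contexts" formalize.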
Shi, Lizhen [Florida State Univ., Tallahassee, FL (United States); Wang, Zhong [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yu, Weikuan [Florida State Univ., Tallahassee, FL (United States); Meng, Xiandong [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
The combination of the Hadoop MapReduce programming model and cloud computing allows biological scientists to analyze next-generation sequencing (NGS) data in a timely and cost-effective manner. Cloud computing platforms remove the burden of IT facility procurement and management from end users and provide ease of access to Hadoop clusters. However, biological scientists are still expected to choose appropriate Hadoop parameters for running their jobs. More importantly, the available Hadoop tuning guidelines are either obsolete or too general to capture the particular characteristics of bioinformatics applications. In this paper, we aim to minimize the cloud computing cost spent on bioinformatics data analysis by optimizing the extracted significant Hadoop parameters. When using MapReduce-based bioinformatics tools in the cloud, the default settings often lead to resource underutilization and wasteful expenses. We choose k-mer counting, a representative application used in a large number of NGS data analysis tools, as our study case. Experimental results show that, with the fine-tuned parameters, we achieve a total of 4× speedup compared with the original performance (using the default settings). Finally, this paper presents an exemplary case for tuning MapReduce-based bioinformatics applications in the cloud, and documents the key parameters that could lead to significant performance benefits.
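K-mer counting, the study case named in the abstract, reduces to a sliding-window tally. The single-node Python sketch below shows the core operation; in the MapReduce setting, each mapper emits such per-read tallies and reducers sum the counts per k-mer (this sketch is ours, not the tool evaluated in the paper).

```python
from collections import Counter

def kmer_counts(seq, k):
    # Slide a window of width k along the read; Counter tallies occurrences.
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

counts = kmer_counts("ACGTACGTAC", 3)
print(counts["ACG"])  # → 2
```

The per-read independence of this tally is what makes k-mer counting a natural MapReduce workload, and why mapper/reducer memory and parallelism parameters dominate its cloud cost.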
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
Under third-party power intervention (TPPI), which increases uncertainty in task environments, complex channel power interplays and restructuring are indispensable among green supply chain members as they move toward sustainable collaborative relationships for increased viability and competitive advantage. From the resource dependence perspective, this work presents a novel conceptual model to investigate the influence of political and social power on channel power restructuring and induced green supply chain collaboration in brander-retailer bidirectional green supply chains of fashionable consumer electronics products (FCEPs). An FCEP refers to a consumer electronics product (e.g., personal computers, mobile phones, computer notebooks, and game consoles) with the features of a well-known brand, a short product lifecycle, timely and fashionable design fit for market trends, and quick responsiveness to variations in market demand. The proposed model is tested empirically using questionnaire data obtained from retailers in the FCEP brander-retailer distribution channels. Analytical results reveal that, as an extension of political and social power, TPPI positively affects the reciprocal interdependence of dyadic members and reduces power asymmetry, thereby enhancing the collaborative relationship of dyadic members and leading to improved green supply chain performance. Therein, reciprocal interdependence underlying the collaborative relationship is the key to reducing external environmental uncertainties in the TPPI context.
The automatic classification of GPCRs by bioinformatics methods can provide functional information for new GPCRs across the whole 'GPCR proteome', and this information is important for the development of novel drugs. Since the GPCR proteome is classified hierarchically, general approaches to GPCR function prediction are based on hierarchical classification. Various computational tools have been developed to predict GPCR functions; rather than relying on simple sequence searches, these tools use more powerful methods from protein sequence analysis, such as alignment-free methods, statistical models, and machine learning, trained on learning datasets. The first stage of hierarchical function prediction is the discrimination of GPCRs from non-GPCRs; the second stage classifies the predicted GPCR candidates at the family, subfamily, and sub-subfamily levels. Further classification is then performed according to protein-protein interaction type: the type of G-protein bound, the oligomerization partner, etc. These methods have achieved predictive accuracies of around 90%. Finally, I describe future research directions for the bioinformatic prediction of GPCR function.
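The two-stage idea — first discriminate GPCR from non-GPCR, then assign a family — can be sketched with a toy alignment-free scorer. Everything below (the composition feature, the overlap score, the profile names, the threshold) is an invented illustration of the scheme, not any of the actual tools reviewed.

```python
def composition(seq):
    # Alignment-free feature: relative amino-acid composition of a sequence.
    return {aa: seq.count(aa) / len(seq) for aa in set(seq)}

def similarity(seq, profile):
    # Overlap between the sequence's composition and a family profile (0..1).
    comp = composition(seq)
    return sum(min(comp.get(aa, 0.0), frac) for aa, frac in profile.items())

def classify(seq, family_profiles, threshold=0.5):
    # Stage 1: the best score must clear a threshold to count as a GPCR.
    # Stage 2: among GPCRs, assign the best-matching family.
    scores = {fam: similarity(seq, p) for fam, p in family_profiles.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else "non-GPCR"

# Toy profiles; real predictors use statistical models or machine learning.
profiles = {"rhodopsin-like": {"L": 0.5, "A": 0.5},
            "secretin-like": {"G": 0.5, "P": 0.5}}
print(classify("LLAA", profiles))  # → rhodopsin-like
print(classify("WWWW", profiles))  # → non-GPCR
```

Real tools replace the toy scorer at each stage with a trained model, but the control flow — reject, then descend the hierarchy — is the same.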
Elizabeth T Masters,1 Jack Mardekian,1 Birol Emir,1 Andrew Clair,1 Max Kuhn,2 Stuart L Silverman,3 1Pfizer, Inc., New York, NY; 2Pfizer, Inc., Groton, CT; 3Cedars-Sinai Medical Center, Los Angeles, CA, USA. Background: Diagnosis of fibromyalgia (FM) is often challenging. Identifying factors associated with an FM diagnosis may guide health care providers in implementing appropriate diagnostic and management strategies. Methods: This retrospective study used the de-identified Humedica electronic medical record (EMR) database to identify variables associated with an FM diagnosis. Cases (n=4,296) were subjects ≥18 years old with ≥2 International Classification of Diseases, Ninth Revision (ICD-9) codes for FM (729.1) ≥30 days apart during 2012, associated with an integrated delivery network, with ≥1 encounter with a health care provider in 2011 and 2012. Controls without FM (no-FM; n=583,665) did not have the ICD-9 codes for FM. Demographic, clinical, and health care resource utilization variables were extracted from structured EMR data. Univariate analysis identified variables showing significant differences between the cohorts based on odds ratios (ORs). Results: Consistent with FM epidemiology, FM subjects were predominantly female (78.7% vs 64.5%; P<0.0001) and slightly older (mean age 53.3 vs 52.7 years; P=0.0318). Relative to the no-FM cohort, the FM cohort was characterized by a higher prevalence of nearly all evaluated comorbidities; the ORs suggested a higher likelihood of an FM diagnosis (P<0.0001), especially for musculoskeletal and neuropathic pain conditions (OR 3.1 for each condition). Variables potentially associated with an FM diagnosis included higher levels of use of specific health care resources, including emergency-room visits, outpatient visits, hospitalizations, and medications. Units used per subject for emergency-room visits, outpatient visits, hospitalizations, and medications were also significantly higher in the FM cohort (P<0
Afgan, Enis; Sloggett, Clare; Goonasekera, Nuwan; Makunin, Igor; Benson, Derek; Crowe, Mark; Gladman, Simon; Kowsar, Yousef; Pheasant, Michael; Horst, Ron; Lonie, Andrew
Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the
Karim, Md Rezaul; Michel, Audrey; Zappa, Achille; Baranov, Pavel; Sahay, Ratnesh; Rebholz-Schuhmann, Dietrich
Data workflow systems (DWFSs) enable bioinformatics researchers to combine components for data access and data analytics, and to share the final data analytics approach with their collaborators. Increasingly, such systems have to cope with large-scale data, such as full genomes (about 200 GB each), public fact repositories (about 100 TB of data) and 3D imaging data at even larger scales. As moving the data becomes cumbersome, the DWFS needs to embed its processes into a cloud infrastructure, where the data are already hosted. As the standardized public data play an increasingly important role, the DWFS needs to comply with Semantic Web technologies. This advancement to DWFS would reduce overhead costs and accelerate the progress in bioinformatics research based on large-scale data and public resources, as researchers would require less specialized IT knowledge for the implementation. Furthermore, the high data growth rates in bioinformatics research drive the demand for parallel and distributed computing, which then imposes a need for scalability and high-throughput capabilities onto the DWFS. As a result, requirements for data sharing and access to public knowledge bases suggest that compliance of the DWFS with Semantic Web standards is necessary. In this article, we will analyze the existing DWFS with regard to their capabilities toward public open data use as well as large-scale computational and human interface requirements. We untangle the parameters for selecting a preferable solution for bioinformatics research with particular consideration to using cloud services and Semantic Web technologies. Our analysis leads to research guidelines and recommendations toward the development of future DWFS for the bioinformatics research community.
Over the past 30 years, genomic and bioinformatic analysis of human adenoviruses has been achieved using a variety of DNA sequencing methods: initially with the use of restriction enzymes and more recently with the GS FLX pyrosequencing technology. Following the conception of DNA sequencing in the 1970s, analysis of adenoviruses has evolved from 100-base-pair mRNA fragments to entire genomes. Comparative genomics of adenoviruses made its debut in 1984, when nucleotides and amino acids of coding sequences within the hexon genes of two human adenoviruses (HAdV), HAdV–C2 and HAdV–C5, were compared and analyzed. It was determined that there were three different zones (1-393, 394-1410, 1411-2910) within the hexon gene, of which HAdV–C2 and HAdV–C5 shared zones 1 and 3 with 95% and 89.5% nucleotide identity, respectively. In 1992, HAdV-C5 became the first adenovirus genome to be fully sequenced, using the Sanger method. Over the next seven years, whole-genome analysis and characterization was completed using bioinformatic tools such as blastn, tblastx, ClustalV, and FASTA, in order to determine key proteins in species HAdV-A through HAdV-F. The bioinformatic revolution was initiated with the introduction of a novel species, HAdV-G, that was typed and named by whole-genome sequencing and phylogenetics as opposed to traditional serology. HAdV bioinformatics will continue to advance as the latest sequencing technology enables scientists to add to and expand the resource databases. As a result of these advancements, the way novel HAdVs are typed has changed. Bioinformatic analysis has become the revolutionary tool that has significantly accelerated the in-depth study of HAdV microevolution through comparative genomics.
Cieślik, Marcin; Mura, Cameron
Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments, structures) and functionality (e.g., to parse/write standard file formats). PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy, and includes extensive documentation and annotated usage
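The dataflow idea — reusable components connected by data-pipes, with items flowing through successive transformations — can be sketched with plain Python generators. This is a conceptual illustration only, not the actual PaPy API; PaPy generalizes this linear chain to arbitrary directed acyclic graphs evaluated on pooled local or remote workers.

```python
def pipe(source, *stages):
    # Chain data-processing stages (each a function over an iterable)
    # into a simple linear dataflow of lazily evaluated components.
    data = source
    for stage in stages:
        data = stage(data)
    return data

def parse(lines):
    # Component 1: extract sequences from FASTA-like input.
    for line in lines:
        if not line.startswith(">"):
            yield line.strip()

def revcomp(seqs):
    # Component 2: reverse-complement each sequence.
    table = str.maketrans("ACGT", "TGCA")
    for s in seqs:
        yield s.translate(table)[::-1]

result = list(pipe([">r1", "ACGTT", ">r2", "GG"], parse, revcomp))
print(result)  # → ['AACGT', 'CC']
```

Because each stage is a generator, items stream through one at a time; buffering them in batches instead is exactly the parallelism-versus-memory trade-off the abstract describes.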
Innovative direct energy conversion systems using electronic adiabatic processes of electron fluid in solid conductors: new electrical power plants and hydrogen gas resources without environmental pollution
Kondoh, Y.; Kondo, M.; Shimoda, K.; Takahashi, T.
It is shown that, using a novel recycling process for environmental thermal energy, innovative permanently self-operating direct energy converter systems (PA-DEC systems) from environmental thermal to electrical and/or chemical potential (TE/CP) energies, abbreviated as PA-TE/CP-DEC systems, can be used for new self-operating electrical power plants and for plants producing compressible, transportable hydrogen gas resources in various regions of the world, contributing to world peace and to economic development in the southern part of the world. It is shown that the same physical mechanism, involving free electrons and an electrical potential determined by temperature in conductors (including semiconductors), underlies both the Peltier effect and the Seebeck effect. It is experimentally clarified that a long separation between the two π-type elements of the heat-absorbing section (HAS) and the heat-producing section (HPS) of a Peltier-effect circuit system, or between the higher-temperature side (HTS) and the lower-temperature side (LTS) of a Seebeck-effect circuit, does not change either effect as a whole. By using the present systems, we would not need to burn fossil fuels such as coal, oil, and natural gas, thereby reducing the greenhouse effect of the CO2 surrounding the earth. Furthermore, we would not need nuclear fission plants that leave radioactive wastes; that is, there would be no environmental pollution. The PA-TE/CP-DEC systems are applicable from kilometre-scale systems down to micro-scale ones, such as electrical power plants, compact transportable hydrogen gas resources, large heat-energy containers that can be sited far from the thermal-energy-absorbing area, refrigerators, air conditioners, home electrical appliances, and even computer elements. It is shown that the simplest PA-TE/CP-DEC system can be established using only Seebeck-effect components and water-splitting ones. It is clarified that the externally applied
… and dhurrin, which have not previously been characterized in blueberries. There are more than 44,500 spider species with distinct habitats and unique characteristics. Spiders are masters of producing silk webs to catch prey and of using venom to neutralize it. The exploration of the genetics behind these properties … japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant … has just started. We have assembled and annotated the first two spider genomes to facilitate our understanding of spiders at the molecular level. The need to analyze the large and increasing amount of sequencing data has increased the demand for efficient, user-friendly, and broadly applicable …
Surangi W. Punyasena
Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to the plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
… japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant … biology and genetics studies. We present an improved Lotus genome assembly and annotation, a catalog of natural variation based on re-sequencing of 29 accessions, and describe the involvement of small RNAs in the plant-bacteria symbiosis. Blueberries contain anthocyanins, other pigments and various … polyphenolic compounds, which have been linked to protection against diabetes, cardiovascular disease and age-related cognitive decline. We present the first genome-guided approach in blueberry to identify genes involved in the synthesis of health-protective compounds. Using RNA-Seq data from five stages …
ACADEMIC TRAINING LECTURE SERIES, 27, 28 February and 1, 2, 3 March 2006, from 11:00 to 12:00 - Auditorium, bldg. 500. Decoding the Genome: a special series of 5 lectures on recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics. The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, and the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit-inversion error with a probabilistic model, by analogy to information transmission theory. Experimental studies using various real-world datasets, including cancer classification problems, reveal that both of the new methods are superior or comparable to other multi-class classification methods.
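The ECOC framework underlying both approaches can be sketched directly: each class is assigned a codeword, each codeword column defines one binary classification task, and prediction decodes the vector of binary outputs to the nearest codeword. The code matrix below is a standard 7-column exhaustive code for 4 classes (minimum Hamming distance 4), chosen here for illustration; it corrects any single binary-classifier error.

```python
# Each class gets a 7-bit codeword; each bit position is one binary task.
CODE = {
    "A": (1, 1, 1, 1, 1, 1, 1),
    "B": (0, 0, 0, 0, 1, 1, 1),
    "C": (0, 0, 1, 1, 0, 0, 1),
    "D": (0, 1, 0, 1, 0, 1, 0),
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    # Assign the class whose codeword is nearest in Hamming distance;
    # this is where ECOC corrects individual binary-classifier errors.
    return min(CODE, key=lambda c: hamming(CODE[c], bits))

# True class "C", but the first of the seven binary classifiers erred:
print(decode((1, 0, 1, 1, 0, 0, 1)))  # → C
```

The reviewed methods refine this plain Hamming decoding: the first weights each bit's vote, and the second models each bit flip probabilistically, as in channel decoding.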
Wang, Xiran; Jiang, Leiyu; Tang, Haoru
GSTF12 has long been known as a key factor in proanthocyanin accumulation in the plant testa. Bioinformatic analysis of the nucleotide and encoded protein sequences of GSTF12 is advantageous for the study of genes related to the anthocyanin biosynthesis and accumulation pathway. We therefore chose the GSTF12 genes of 11 species, downloading their nucleotide and protein sequences from NCBI as the research objects, identified the strawberry GSTF12 gene via bioinformatic analysis, and constructed a phylogenetic tree. At the same time, we analysed the physical and chemical properties of the strawberry GSTF12 gene, its protein structure, and so on. The phylogenetic tree showed that strawberry and petunia were the most closely related. Protein prediction revealed that the protein possesses a signal peptide without obvious transmembrane regions.
The adoption of electronic medical record systems in resource-limited settings can help clinicians monitor patients' adherence to HIV antiretroviral therapy (ART) and identify patients at risk of future ART failure, allowing resources to be targeted to those most at risk. Among adult patients enrolled on ART from 2005-2013 at two large, public-sector hospitals in Haiti, ART failure was assessed after 6-12 months on treatment, based on the World Health Organization's immunologic and clinical criteria. We identified models for predicting ART failure based on ART adherence measures and other patient characteristics. We assessed the performance of candidate models using the area under the receiver operating curve, and validated results using a randomly split data sample. The selected prediction model was used to generate a risk score, and its ability to differentiate ART failure risk over a 42-month follow-up period was tested using stratified Kaplan-Meier survival curves. Among 923 patients with CD4 results available during the period 6-12 months after ART initiation, 196 (21.2%) met ART failure criteria. The pharmacy-based proportion of days covered (PDC) measure performed best among five possible ART adherence measures at predicting ART failure. Average PDC during the first 6 months on ART was 79.0% among cases of ART failure and 88.6% among cases of non-failure (p<0.01). When additional information, including sex, baseline CD4, and duration of enrollment in HIV care prior to ART initiation, was added to PDC, the risk score differentiated between those who did and did not meet failure criteria over the 42 months following ART initiation. Pharmacy data are most useful for new ART adherence alerts within iSanté. Such alerts offer the potential to help clinicians identify patients at high risk of ART failure so that they can be targeted with adherence support interventions before ART failure occurs.
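The proportion of days covered (PDC) measure used above is the fraction of days in an observation window on which the patient held dispensed medication. A minimal sketch follows; the fill dates and days-supply values are invented for illustration, and real implementations must also handle overlapping fills and drug switches.

```python
from datetime import date, timedelta

def proportion_days_covered(fills, start, end):
    # fills: list of (dispense_date, days_supply) from the pharmacy record.
    # A day counts once even if covered by overlapping fills (hence the set).
    covered = set()
    for dispensed, days in fills:
        for d in range(days):
            day = dispensed + timedelta(days=d)
            if start <= day <= end:
                covered.add(day)
    return len(covered) / ((end - start).days + 1)

# Invented example: a 180-day window with three fills totalling 150 days.
start, end = date(2012, 1, 1), date(2012, 6, 28)
fills = [(date(2012, 1, 1), 30), (date(2012, 2, 15), 30),
         (date(2012, 3, 20), 90)]
print(round(proportion_days_covered(fills, start, end), 2))  # → 0.83
```

Counting distinct covered days (rather than summing days supplied) is what keeps PDC bounded by 1 and makes it a conservative adherence estimate.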
Vamathevan, J; Birney, E
Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era.
J. Köster (Johannes)
We present Rust-Bio, the first general-purpose bioinformatics library for the innovative Rust programming language. Rust-Bio leverages the unique combination of speed, memory safety and high-level syntax offered by Rust to provide a fast and safe set of bioinformatics algorithms and data structures.
The main bottleneck in advancing genomics in present times is the lack of expertise in using bioinformatics tools and approaches for data mining in raw DNA sequences generated by modern high throughput technologies such as next generation sequencing. Although bioinformatics has been making major progress and ...
Life sciences research and development has opened up new challenges and opportunities for bioinformatics. The contribution of bioinformatics advances made possible the mapping of the entire human genome and genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...
Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Buttigieg, Pier Luigi
Using live presentation to communicate the interdisciplinary and abstract content of bioinformatics to its educationally diverse studentship is a sizeable challenge. This review collects a number of perspectives on multimedia presentation, visual communication and pedagogy. The aim is to encourage educators to reflect on the great potential of live presentation in facilitating bioinformatics education.
Bioinformatics has advanced the course of research and future veterinary vaccine development because it has provided new tools for the identification of vaccine targets from sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...
Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.
At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…
Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval, computer vision and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…
Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.
Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…
When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective on bioinformatics education is provided, and the current status of ethics within existing bioinformatics programs is analyzed. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians rapidly and effectively shape the biomedical sciences, and hence their implications for society, a redesign of bioinformatics curricula is suggested here in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians, and bioinformatics ethics as a separate field of study, are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.
... only. A limited number of selected reports, advice on product selection and safety alerts are freely available, as are a five year listing of product recalls, a listing of major consumer product...
... No. 5AB-0052, "Audit of the Management and Administration of Research Projects Funded by the Defense Advanced Research Projects Agency," will discuss the adequacy of the Defense Advanced Research...
Brooksbank, Cath; Morgan, Sarah L.; Rosenwald, Anne; Warnow, Tandy; Welch, Lonnie
Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans. PMID:29390004
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.
The study tested a hybrid model with constructs drawn from the Technology Acceptance Model (TAM) and Diffusion of Innovation (DOI) theory in order to examine the moderating effect of productivity and relative advantage (RA) on perceived usefulness (PU) vis-à-vis electronic information resources (EIR) adoption in private university libraries in Ogun and Osun States of Nigeria. The descriptive research design was adopted in the study. The population consisted of 61 (55.0%) librarians and 50 (45.0%) library officers (totaling 116, 100%) in Babcock University, Bells University, Covenant University, Bowen University, Oduduwa University, and Redeemer's University. A purposive sampling procedure was adopted, after which total enumeration was used since the total population is small. A questionnaire was used for data collection. Of the 116 copies of the questionnaire administered, 111 (95.7%) were found usable. The instrument was structured based on a 4-point Likert agreement scale of Strongly Agree, Agree, Disagree, and Strongly Disagree. Data were analyzed using descriptive statistics such as tables of frequency counts and percentages. The findings revealed that productivity and relative advantage are significant moderators of perceived usefulness of EIR adoption in private university libraries in Ogun and Osun States, Nigeria.
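The descriptive analysis the study reports (frequency counts and percentages over a 4-point Likert scale) can be sketched as follows; the response data here are invented purely for illustration.

```python
from collections import Counter

# Hypothetical responses coded on the study's 4-point scale:
# SA = Strongly Agree, A = Agree, D = Disagree, SD = Strongly Disagree
responses = ["SA", "A", "A", "D", "SA", "A", "SD", "A"]

counts = Counter(responses)
table = {level: (counts[level], 100 * counts[level] / len(responses))
         for level in ("SA", "A", "D", "SD")}
# e.g. table["A"] -> (4, 50.0): four respondents, 50% of the sample
```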
Bhunia, Gouri Sankar; Dikhit, Manas Ranjan; Kesari, Shreekant; Sahoo, Ganesh Chandra; Das, Pradeep
Visceral leishmaniasis, or kala-azar, is a potent parasitic infection causing the death of thousands of people each year. Medicinal compounds currently available for the treatment of kala-azar have serious side effects and decreased efficacy owing to the emergence of resistant strains. The type of immune reaction must also be considered in patients infected with Leishmania donovani (L. donovani). For complete eradication of this disease, high-level modern research is currently being applied both at the molecular level and at the field level. Computational approaches like remote sensing, geographical information systems (GIS) and bioinformatics are the key resources for the detection and distribution of vectors, patterns, ecological and environmental factors, and genomic and proteomic analysis. Novel approaches like GIS and bioinformatics have been more appropriately utilized in determining the causes of visceral leishmaniasis and in designing strategies for preventing the disease from spreading from one region to another.
Performing bioinformatics experiments involves intensive access to distributed services and information resources through the Internet. Although existing tools facilitate the implementation of workflow-oriented applications, they lack capabilities to integrate services beyond low-scale applications, particularly services with heterogeneous interaction patterns and at larger scale. This is particularly required to enable large-scale distributed processing of biological data generated by massive sequencing technologies. On the other hand, such integration mechanisms are provided by middleware products like Enterprise Service Buses (ESBs), which enable the integration of distributed systems following a Service-Oriented Architecture. This paper proposes an integration platform, based on enterprise middleware, to integrate bioinformatics services. It presents a multi-level reference architecture and focuses on ESB-based mechanisms to provide asynchronous communications, event-based interactions and data transformation capabilities. The paper presents a formal specification of the platform using the Event-B model.
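A toy illustration of the asynchronous, event-based interaction pattern an ESB provides: producers publish to named topics, subscribers consume via queues, and a transformation step sits in between. This is a didactic sketch under invented names, not the paper's platform.

```python
import queue

class MiniBus:
    """Minimal topic-based message bus, standing in for an ESB."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic):
        q = queue.Queue()
        self.topics.setdefault(topic, []).append(q)
        return q

    def publish(self, topic, message):
        for q in self.topics.get(topic, []):
            q.put(message)

bus = MiniBus()
inbox = bus.subscribe("sequences.annotated")

def annotate(raw_sequence):
    # Data-transformation step before forwarding to downstream subscribers
    bus.publish("sequences.annotated",
                {"seq": raw_sequence.upper(), "length": len(raw_sequence)})

annotate("acgt")
result = inbox.get_nowait()  # {'seq': 'ACGT', 'length': 4}
```

The point of the pattern is decoupling: the producer never needs to know which downstream services consume its events.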
Ten Hoopen, Petra; Pesant, Stéphane; Kottmann, Renzo; Kopf, Anna; Bicak, Mesude; Claus, Simon; Deneudt, Klaas; Borremans, Catherine; Thijsse, Peter; Dekeyzer, Stefanie; Schaap, Dick Ma; Bowler, Chris; Glöckner, Frank Oliver; Cochrane, Guy
Contextual data collected concurrently with molecular samples are critical to the use of metagenomics in the fields of marine biodiversity, bioinformatics and biotechnology. We present here Marine Microbial Biodiversity, Bioinformatics and Biotechnology (M2B3) standards for "Reporting" and "Serving" data. The M2B3 Reporting Standard (1) describes minimal mandatory and recommended contextual information for a marine microbial sample obtained in the epipelagic zone, (2) includes meaningful information for researchers in the oceanographic, biodiversity and molecular disciplines, and (3) can easily be adopted by any marine laboratory with minimum sampling resources. The M2B3 Service Standard defines a software interface through which these data can be discovered and explored in data repositories. The M2B3 Standards were developed by the European project Micro B3, funded under the 7th Framework Programme "Ocean of Tomorrow", and were first used with the Ocean Sampling Day initiative. We believe that these standards have value in broader marine science.
Suciu, Radu M; Aydin, Emir; Chen, Brian E
With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. The current resources for accessing and analyzing these data have been created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that allows any general user to access genomic information with ease and efficiency. GeneDig allows for searching and browsing genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application allows more than five times faster and more efficient access to genomic information than any currently available methods. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design focus. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.
Emery, Erin E.; Lapidos, Stan; Eisenstein, Amy R.; Ivan, Iulia I.; Golden, Robyn L.
Purpose: To demonstrate the feasibility of the BRIGHTEN Program (Bridging Resources of an Interdisciplinary Geriatric Health Team via Electronic Networking), an interdisciplinary team intervention for assessing and treating older adults for depression in outpatient primary and specialty medical clinics. The BRIGHTEN team collaborates "virtually"…
Ольга Николаевна Крылова
The article introduces some results of research devoted to evaluating teachers' readiness to introduce innovations in the content of education. The author considers the innovative potential of modules of methodical support for a system of electronic educational resources.
Hartnett, Eric; Beh, Eugenia; Resnick, Taryn; Ugaz, Ana; Tabacaru, Simona
In 2010, after two previous unsuccessful attempts at electronic resources management system (ERMS) implementation, Texas A&M University (TAMU) Libraries set out once again to find an ERMS that would fit its needs. After surveying the field, TAMU Libraries selected the University of Notre Dame Hesburgh Libraries-developed, open-source ERMS,…
This paper gives a brief overview of the electronic information resources and services offered by the J.D. Rockefeller Research Library at Egerton University and the marketing of these resources. The paper examines the various reasons for marketing electronic information resources and illustrates the marketing strategies used by the J.D. Rockefeller Research Library towards effective utilization of the available resources in supporting research, teaching and learning.
Panji, Sumir; Fernandes, Pedro L.; Judge, David P.; Ghouila, Amel; Salifu, Samson P.; Ahmed, Rehab; Kayondo, Jonathan; Ssemwanga, Deogratius
Africa is not unique in its need for basic bioinformatics training for individuals from a diverse range of academic backgrounds. However, particular logistical challenges in Africa, most notably access to bioinformatics expertise and internet stability, must be addressed in order to meet this need on the continent. H3ABioNet (www.h3abionet.org), the Pan African Bioinformatics Network for H3Africa, has therefore developed an innovative, free-of-charge “Introduction to Bioinformatics” course, taking these challenges into account as part of its educational efforts to provide on-site training and develop local expertise inside its network. A multiple-delivery–mode learning model was selected for this 3-month course in order to increase access to (mostly) African, expert bioinformatics trainers. The content of the course was developed to include a range of fundamental bioinformatics topics at the introductory level. For the first iteration of the course (2016), classrooms with a total of 364 enrolled participants were hosted at 20 institutions across 10 African countries. To ensure that classroom success did not depend on stable internet, trainers pre-recorded their lectures, and classrooms downloaded and watched these locally during biweekly contact sessions. The trainers were available via video conferencing to take questions during contact sessions, as well as via online “question and discussion” forums outside of contact session time. This learning model, developed for a resource-limited setting, could easily be adapted to other settings. PMID:28981516
Cochrane, Guy; Apweiler, Rolf; Birney, Ewan
The European Bioinformatics Institute (EMBL-EBI) supports life-science research throughout the world by providing open data, open-source software and analytical tools, and technical infrastructure (https://www.ebi.ac.uk). We accommodate an increasingly diverse range of data types and integrate them, so that biologists in all disciplines can explore life in ever-increasing detail. We maintain over 40 data resources, many of which are run collaboratively with partners in 16 countries (https://www.ebi.ac.uk/services). Submissions continue to increase exponentially: our data storage has doubled in less than two years to 120 petabytes. Recent advances in cellular imaging and single-cell sequencing techniques are generating a vast amount of high-dimensional data, bringing to light new cell types and new perspectives on anatomy. Accordingly, one of our main focus areas is integrating high-quality information from bioimaging, biobanking and other types of molecular data. This is reflected in our deep involvement in Open Targets, stewarding of plant phenotyping standards (MIAPPE) and partnership in the Human Cell Atlas data coordination platform, as well as the 2017 launch of the Omics Discovery Index. This update gives a bird's-eye view of EMBL-EBI's approach to data integration and service development as genomics begins to enter the clinic. PMID:29186510
Brazas, Michelle D; Ouellette, B F Francis
Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.
The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the time to identify and deliver a commercial health ingredient that reduces disease symptoms can be anything between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for characteristics of certain food therapeutic ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatics mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could easily be translated, using rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving higher success rates.
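At its simplest, the kind of in-silico mining discussed above reduces to scanning protein sequences for exact matches to known bioactive peptides. A naive sketch follows; the protein sequence is invented, and "VPP" and "IPP" serve only as example peptide queries.

```python
def find_bioactive_candidates(protein, known_peptides):
    """Return (peptide, start) for every exact occurrence of each known
    bioactive peptide in the protein sequence (naive substring scan)."""
    hits = []
    for pep in known_peptides:
        start = protein.find(pep)
        while start != -1:
            hits.append((pep, start))
            start = protein.find(pep, start + 1)
    return hits

toy_protein = "MKAGFVPPLEDNSTQIPPRW"  # invented sequence, for illustration only
hits = find_bioactive_candidates(toy_protein, ["VPP", "IPP"])
# hits -> [('VPP', 5), ('IPP', 15)]
```

Real mining pipelines add digestion-site modelling and fuzzy matching on top of this exact-match core, which is where the food-specific tooling the author calls for would come in.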
Basyuni, M.; Wasilah, M.; Sumardi
This study describes bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank, predicting their structure, composition, subcellular localization, similarity, and phylogeny. The physical and chemical properties of the eight mangrove genes showed variation among the genes. The percentages of secondary-structure elements in the eight mangrove actin genes followed the order alpha helix > random coil > extended chain structure for BgActl, KcActl, RsActl, and A. corniculatum Act. In contrast, the remaining actin genes followed the order random coil > extended chain structure > alpha helix. Prediction of secondary structure was therefore performed to obtain the necessary structural information. The values for chloroplast transit peptide, mitochondrial targeting and signal peptide were very small, indicating that the mangrove actin genes contain no chloroplast or mitochondrial transit peptide and no signal peptide of the secretion pathway. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first group contains B. gymnorrhiza BgAct and R. stylosa RsActl; the second cluster, the largest group, consists of five actin genes; and the last branch consists of one gene, B. sexagula Act.
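The compositional characterization reported above can be illustrated with a minimal amino-acid composition calculation; the input fragment here is hypothetical, not one of the eight actin sequences.

```python
from collections import Counter

def aa_composition(seq):
    """Percent composition of each amino acid in a protein sequence."""
    counts = Counter(seq)
    return {aa: 100 * n / len(seq) for aa, n in counts.items()}

comp = aa_composition("MDGEDVQALVI")  # hypothetical 11-residue fragment
# comp["D"] and comp["V"] are each ~18.2%; the rest ~9.1% each
```

Tools such as ProtParam compute physicochemical properties (molecular weight, isoelectric point) from exactly this kind of residue tally.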
Horbach, D.Y.; Usanov, S.A.
One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. No wonder that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, and the appearance of receptor mutants with a changed ability for protein-protein interactions or increased tyrosine kinase activity, have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations of the design and selection of drugs that can alter receptor structure or competitively bind to receptors and so display antagonistic characteristics. (authors)
Neerincx, Pieter B T; Leunissen, Jack A M
Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
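The XML-based service chaining described above can be sketched offline: parse one service's XML response, filter it, and hand the result to the next pipeline stage. The payload, element names and accessions are mock stand-ins, not any real service's schema.

```python
import xml.etree.ElementTree as ET

# Mock response standing in for a sequence-search web service (no live call made)
payload = """
<searchResult>
  <hit accession="P12345" evalue="1e-30"/>
  <hit accession="Q67890" evalue="0.002"/>
</searchResult>
"""

def significant_accessions(xml_text, cutoff=1e-5):
    """Pipeline stage: keep accessions whose e-value passes the cutoff,
    ready to hand to the next tool in the workflow."""
    root = ET.fromstring(xml_text)
    return [hit.get("accession") for hit in root.findall("hit")
            if float(hit.get("evalue")) < cutoff]

accs = significant_accessions(payload)  # ['P12345']
```

Workflow systems automate exactly this kind of composition once services expose machine-readable (XML/WSDL-style) interfaces instead of HTML forms.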
Ison, Jon; Kalaš, Matúš; Jonassen, Inge; Bolser, Dan; Uludag, Mahmut; McWilliam, Hamish; Malone, James; Lopez, Rodrigo; Pettifer, Steve; Rice, Peter
Motivation: Advancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required. Results: EDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations. Availability: The latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl. Contact: firstname.lastname@example.org PMID:23479348
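A sketch of how EDAM-style annotation enables finding suitable tools: tag registry entries with operation concepts and filter on them. The registry and tool names are invented, and the two EDAM identifiers are quoted from memory; verify them against http://edamontology.org before relying on them.

```python
# Invented registry entries annotated with EDAM operation concepts
tools = [
    {"name": "alignerA", "operation": "operation_0292"},  # assumed: sequence alignment
    {"name": "searchB",  "operation": "operation_0346"},  # assumed: similarity search
    {"name": "alignerC", "operation": "operation_0292"},
]

def find_by_operation(registry, concept_id):
    """Return names of tools annotated with the given EDAM operation concept."""
    return [t["name"] for t in registry if t["operation"] == concept_id]

aligners = find_by_operation(tools, "operation_0292")  # ['alignerA', 'alignerC']
```

In practice such queries run over the ontology's concept hierarchy (a search for "alignment" also matching its sub-operations), which is what makes a controlled vocabulary more powerful than free-text tags.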
Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. According to the relative conservation of homologous gene, a bioinformatics strategy was applied to clone Fusarium ...
Diaz Acosta, B.
The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative comprises two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework, initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.
BASIC QUALIFICATIONS To be considered for this position, you must minimally meet the knowledge, skills, and abilities listed below: Bachelor’s degree in a life science/bioinformatics/math/physics/computer-related field from an accredited college or university according to the Council for Higher Education Accreditation (CHEA). (Additional qualifying experience may be substituted for the required education.) Foreign degrees must be evaluated for U.S. equivalency. In addition to the educational requirements, a minimum of five (5) years of progressively responsible relevant experience. Must be able to obtain and maintain a security clearance. PREFERRED QUALIFICATIONS Candidates with these desired skills will be given preferential consideration: A Master’s or PhD degree in any quantitative science is preferred. Commitment to solving biological problems and communicating these solutions. Ability to multi-task across projects. Experience in submitting data sets to public repositories. Management of large genomic data sets, including integration with data available from public sources. Prior customer-facing role. Record of scientific achievements, including journal publications and conference presentations. Expected Competencies: Deep understanding of and experience in processing high-throughput biomedical data: data cleaning, normalization, analysis, interpretation and visualization. Ability to understand and analyze data from complex experimental designs. Proficiency in at least two of the following programming languages: Perl, Python, R, Java and C/C++. Experience in at least two of the following areas: metagenomics, ChIPSeq, RNASeq, ExomeSeq, DHS-Seq, microarray analysis. Familiarity with public databases: NCBI, Ensembl, TCGA, cBioPortal, Broad FireHose. Knowledge of working in a cluster environment.
Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A
Today researchers can choose from many bioinformatics protocols for all types of life-sciences research, computational environments and coding languages. Although the majority of these are open source, few of them possess all the virtues needed to maximize reuse and promote reproducible science. Wikipedia has proven a great tool to disseminate information and enhance collaboration between users with varying expertise and backgrounds, authoring qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki in which each article is both the implementation and the documentation of a bioinformatics computational protocol in the Python language. Hyperlinks within the wiki can be used to compose complex workflows and promote reuse. A RESTful API enables code execution outside the wiki. The initial content of PyPedia contains articles for population statistics, bioinformatics format conversions and genotype imputation. Use of the easy-to-learn wiki syntax effectively lowers the barriers to bringing expert programmers and less computer-savvy researchers onto the same page. PyPedia demonstrates how a wiki can provide a collaborative development, sharing and even execution environment for biologists and bioinformaticians that complements existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that centralizes most of the tools behind an easy interface for input and output. Web services, owing to their universal nature and widely known interfaces, are a very good option for achieving this goal. Bioinformatics Open Web Services (BOWS) is a system based on generic web services, built to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results back to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web-service requests, and automatically creates a web page that lists the registered applications and clients. Applications registered with BOWS can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
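The front-end/back-end pattern described for BOWS can be sketched with an in-memory job queue standing in for the real web services. The class and method names below are invented for illustration; the actual BOWS API differs.

```python
from collections import deque

class Broker:
    """Mediates between submitting clients (front-end) and HPC workers (back-end)."""
    def __init__(self):
        self.pending = deque()
        self.results = {}

    # Front-end service: simple computers submit jobs and read results
    def submit(self, job_id, params):
        self.pending.append((job_id, params))

    def read_result(self, job_id):
        return self.results.get(job_id)

    # Back-end service: HPC-side code polls for new jobs and posts results
    def next_job(self):
        return self.pending.popleft() if self.pending else None

    def post_result(self, job_id, result):
        self.results[job_id] = result

broker = Broker()
broker.submit("job-1", {"tool": "blast", "query": "ATGC"})

# On the cluster side, a worker loop would poll and run the registered tool:
job = broker.next_job()
if job:
    job_id, params = job
    broker.post_result(job_id, f"ran {params['tool']} on {params['query']}")

print(broker.read_result("job-1"))  # ran blast on ATGC
```

Because the worker pulls jobs rather than being called directly, the cluster can sit behind a firewall and the tool can be written in any language.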
Gentleman, R.C.; Carey, V.J.; Bates, D.M.
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Gasparoviča-Asīte, M; Aleksejeva, L; Gersons, V
This article studies the possibilities of the BEXA family of classification algorithms – BEXA, FuzzyBexa and FuzzyBexa II – for data classification, especially of bioinformatics data. Three different types of data sets were used in the study: data sets often used in the literature (like the Iris data set), real-life data sets from the UCI data repository (like the breast cancer data set), and real bioinformatics data sets with a specific character – a large number of attributes (several thousand) and a small number...
Background: Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software-testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs, which greatly hinders systematic software testing. Results: We propose to use a novel software-testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conforms to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied it to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular, we show that MT is simple to implement and is effective in detecting faults in a real-life program and in some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion: This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy.
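A minimal metamorphic test can be sketched with a toy GC-content function as the program under test (this example is ours, not from the paper). We cannot easily verify that any single output is "correct", but relations between paired runs must hold: GC content is unchanged by reversing a sequence, and by complementing it (G and C swap with each other, as do A and T).

```python
def gc_content(seq: str) -> float:
    """Program under test: fraction of G/C bases in a DNA sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

def complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))

def metamorphic_test(seq: str) -> None:
    original = gc_content(seq)
    # MR1: the follow-up input (reversed sequence) must give an equal output
    assert gc_content(seq[::-1]) == original
    # MR2: complementing swaps G<->C and A<->T, so GC content is preserved
    assert gc_content(complement(seq)) == original

for s in ["ATGC", "GGGG", "ATATCG"]:
    metamorphic_test(s)
print("all metamorphic relations hold")
```

Each seed input yields several checked executions without an oracle for the "true" answer, which is exactly what makes MT attractive for hard-to-verify bioinformatics programs.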
Kalas, Matús; Puntervoll, Pål; Joseph, Alexandre; Bartaseviciūte, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge
The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of these resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source code in common programming languages, an updated list of compatible web services and tools, and a repository of feature requests from the community.
Kalaš, Matúš; Puntervoll, Pæl; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge
Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of these resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source code in common programming languages, an updated list of compatible web services and tools, and a repository of feature requests from the community. Contact: email@example.com; firstname.lastname@example.org; email@example.com PMID:20823319
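The value of a canonical XML exchange format like BioXSD is that every producing and consuming service serializes and parses the same way. The element names in this construction are simplified assumptions for the sketch; the authoritative syntax is the BioXSD 1.0 XML Schema itself.

```python
import xml.etree.ElementTree as ET

# Build a BioXSD-style record: a sequence plus one positional annotation.
doc = ET.Element("sequenceRecord")
seq = ET.SubElement(doc, "sequence")
seq.text = "MKVLAA"
ann = ET.SubElement(doc, "sequenceAnnotation", {"start": "1", "end": "3"})
ann.text = "signal-peptide-like region (hypothetical)"

payload = ET.tostring(doc, encoding="unicode")

# Any downstream service parses the shared syntax identically:
parsed = ET.fromstring(payload)
print(parsed.find("sequence").text)  # MKVLAA
```

Because syntax is fixed by the schema and semantics by EDAM annotation, a workflow engine can validate and route such payloads without tool-specific glue code.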
As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSNs) into biosensors and other tools, and applied them to areas such as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is a prerequisite for effective utilization of the information. It is greatly influenced by factors such as quality, benefits, service, timeliness and stability; some of these are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. This paper therefore introduces 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics involving multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of the different influencing factors, which can be considered attributes in the assessment of a WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
Gerstein, Mark; Greenbaum, Dov; Cheung, Kei; Miller, Perry L
Computational biology and bioinformatics (CBB), the terms often used interchangeably, represent a rapidly evolving biological discipline. With the clear potential for discovery and innovation, and the need to deal with the deluge of biological data, many academic institutions are committing significant resources to develop CBB research and training programs. Yale formally established an interdepartmental Ph.D. program in CBB in May 2003. This paper describes Yale's program, discussing the scope of the field, the program's goals and curriculum, as well as a number of issues that arose in implementing the program. (Further updated information is available from the program's website, www.cbb.yale.edu.)
Zargaran, Eiman; Schuurman, Nadine; Nicol, Andrew J; Matzopoulos, Richard; Cinnamon, Jonathan; Taulu, Tracey; Ricker, Britta; Garbutt Brown, David Ross; Navsaria, Pradeep; Hameed, S Morad
Ninety percent of global trauma deaths occur in under-resourced or remote environments, with little or no capacity for injury surveillance. We hypothesized that emerging electronic and web-based technologies could enable design of a tablet-based application, the electronic Trauma Health Record (eTHR), used by front-line clinicians to inform trauma care and acquire injury surveillance data for injury control and health policy development. The study was conducted in 3 phases: 1. Design of an electronic application capable of supporting clinical care and injury surveillance; 2. Preliminary feasibility testing of eTHR in a low-resource, high-volume trauma center; and 3. Qualitative usability testing with 22 trauma clinicians from a spectrum of high- and low-resource and urban and remote settings including Vancouver General Hospital, Whitehorse General Hospital, British Columbia Mobile Medical Unit, and Groote Schuur Hospital in Cape Town, South Africa. The eTHR was designed with 3 key sections (admission note, operative note, discharge summary), and 3 key capabilities (clinical checklist creation, injury severity scoring, wireless data transfer to electronic registries). Clinician-driven registry data collection proved to be feasible, with some limitations, in a busy South African trauma center. In pilot testing at a level I trauma center in Cape Town, use of eTHR as a clinical tool allowed for creation of a real-time, self-populating trauma database. Usability assessments with traumatologists in various settings revealed the need for unique eTHR adaptations according to environments of intended use. In all settings, eTHR was found to be user-friendly and have ready appeal for frontline clinicians. The eTHR has potential to be used as an electronic medical record, guiding clinical care while providing data for injury surveillance, without significantly hindering hospital workflow in various health-care settings. Copyright © 2014 American College of Surgeons. Published
Becerra, José María
We propose the founding of a Natural History bioinformatics framework, which would solve one of the main problems in Natural History: data scattered across many incompatible systems (not only computer systems, but also paper ones). This framework consists of computer resources (hardware and software), methodologies that ease the circulation of data, and staff expert in dealing with computers, who will develop software solutions to the problems encountered by naturalists. The system is organized in three layers: acquisition, data and analysis. Each layer is described, and an account is given of the elements that constitute it.
Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E
A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly
Vassilev, D.; Leunissen, J.; Atanassov, A.; Nenov, A.; Dimov, G.
The goal of plant genomics is to understand the genetic and molecular basis of all biological processes in plants that are relevant to the species. This understanding is fundamental to allow efficient exploitation of plants as biological resources in the development of new cultivars with improved
Information and Communication Technologies (ICTs) are capable of expanding access to quality education and educational resources, and of providing teachers with new skills. Nevertheless, a majority of rural public schools have limited ICTs, mainly due...
Chan, Landon L; Jiang, Peiyong
The discovery of cell-free DNA molecules in plasma has opened up numerous opportunities in noninvasive diagnosis. Cell-free DNA molecules have become increasingly recognized as promising biomarkers for detection and management of many diseases. The advent of next generation sequencing has provided unprecedented opportunities to scrutinize the characteristics of cell-free DNA molecules in plasma in a genome-wide fashion and at single-base resolution. Consequently, clinical applications of circulating cell-free DNA analysis have not only revolutionized noninvasive prenatal diagnosis but also facilitated cancer detection and monitoring toward an era of blood-based personalized medicine. With the remarkably increasing throughput and lowering cost of next generation sequencing, bioinformatics analysis becomes increasingly demanding to understand the large amount of data generated by these sequencing platforms. In this Review, we highlight the major bioinformatics algorithms involved in the analysis of cell-free DNA sequencing data. Firstly, we briefly describe the biological properties of these molecules and provide an overview of the general bioinformatics approach for the analysis of cell-free DNA. Then, we discuss the specific upstream bioinformatics considerations concerning the analysis of sequencing data of circulating cell-free DNA, followed by further detailed elaboration on each key clinical situation in noninvasive prenatal diagnosis and cancer management where downstream bioinformatics analysis is heavily involved. We also discuss bioinformatics analysis as well as clinical applications of the newly developed massively parallel bisulfite sequencing of cell-free DNA. Finally, we offer our perspectives on the future development of bioinformatics in noninvasive diagnosis. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Grey literature Web resources in the field of accelerator science and its allied subjects are collected for the scientists and engineers of RRCAT (Raja Ramanna Centre for Advanced Technology). For definition purposes the different types of grey literature are described. The Web resources collected and compiled in this article (with an overview and link for each) specifically focus on technical reports, preprints or e-prints, which meet the main information needs of RRCAT users.
Indonesia has a huge amount of biodiversity, which may contain many biomaterials for pharmaceutical application. The potential of these resources should be explored to discover new drugs for human welfare. However, bioactivity screening using conventional methods is very expensive and time-consuming. Therefore, we developed a methodology for screening the potential of natural resources based on bioinformatics. The method builds on the fact that organisms in the same taxon will have similar genes, metabolism and secondary-metabolite products. We employ bioinformatics to explore the potential of biomaterials from Indonesian biodiversity by comparing species with well-known taxa containing active compounds, through published papers or chemical databases. We then analyze the drug-likeness, bioactivity and target proteins of the active compound based on its molecular structure. The target protein is examined for its interactions with other proteins in the cell to determine the mechanism of action of the active compound at the cellular level, as well as to predict its side effects and toxicity. Using this method, we succeeded in screening anti-cancer, immunomodulatory and anti-inflammatory agents from Indonesian biodiversity. For example, we found an anticancer agent from a marine invertebrate by employing the method. The anticancer activity was explored based on the isolated compounds of the marine invertebrate from published articles and databases, followed by identification of the protein target and molecular-pathway analysis. The data suggested that the active compound of the invertebrate is able to kill cancer cells. Further, we collected and extracted the active compound from the invertebrate and then examined its activity on a cancer cell line (MCF7). The MTT result showed that the methanol extract of the marine invertebrate was highly potent in killing MCF7 cells. We therefore conclude that bioinformatics is a cheap and robust way to explore bioactives from Indonesian biodiversity as a source of drugs and other...
U.S. Department of Health & Human Services — EuPathDB Bioinformatics Resource Center for Biodefense and Emerging/Re-emerging Infectious Diseases is a portal for accessing genomic-scale datasets associated with...
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open-source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce framework-based applications that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. © The Author 2013. Published by Oxford University Press. For Permissions, please email: firstname.lastname@example.org.
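The MapReduce model this review discusses can be illustrated with a toy k-mer counting job: mappers emit (k-mer, 1) pairs, a shuffle phase groups pairs by key, and reducers sum the counts. On Hadoop the same two functions would run distributed across a cluster; here we simulate the phases locally, and the read data is invented.

```python
from collections import defaultdict

def mapper(read: str, k: int = 3):
    """Map phase: emit (k-mer, 1) for every k-length window in a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reducer(key, values):
    """Reduce phase: sum the counts emitted for one k-mer."""
    return key, sum(values)

reads = ["ATGATG", "TGATG"]

# Shuffle/sort phase: group intermediate pairs by key
groups = defaultdict(list)
for read in reads:
    for kmer, one in mapper(read):
        groups[kmer].append(one)

counts = dict(reducer(k, v) for k, v in groups.items())
print(counts["ATG"])  # 3
```

Because the mapper sees one read at a time and the reducer one key at a time, the framework can partition both phases across machines, which is what makes the model attractive for sequencing-scale data.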
Tao, Ying; Liu, Yang; Friedman, Carol
Information visualization techniques, which take advantage of the bandwidth of human vision, are powerful tools for organizing and analyzing a large amount of data. In the postgenomic era, information visualization tools are indispensable for biomedical research. This paper aims to present an overview of current applications of information visualization techniques in bioinformatics for visualizing different types of biological data, such as from genomics, proteomics, expression profiling and structural studies. Finally, we discuss the challenges of information visualization in bioinformatics related to dealing with more complex biological information in the emerging fields of systems biology and systems medicine. PMID:20976032
Phillips, J. C.
Allosteric (long-range) interactions can be surprisingly strong in proteins of biomedical interest. Here we use bioinformatic scaling to connect prior results on nonsteroidal anti-inflammatory drugs to promising new drugs that inhibit cancer cell metabolism. Many parallel features are apparent, which explain how even one amino acid mutation, remote from active sites, can alter medical results. The enzyme twins involved are cyclooxygenase (aspirin) and isocitrate dehydrogenase (IDH). The IDH results are accurate to 1% and are overdetermined by adjusting a single bioinformatic scaling parameter. It appears that the final stage in optimizing protein functionality may involve leveling of the hydrophobic limits of the arms of conformational hydrophilic hinges.
Leclère, Valérie; Weber, Tilmann; Jacques, Philippe
This chapter helps in the use of bioinformatics tools relevant to the discovery of new nonribosomal peptides (NRPs) produced by microorganisms. The strategy described can be applied to draft or fully assembled genome sequences. It relies on the identification of the synthetase genes and the deciphering...
de Miranda Antonio B
Background: BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computationally intensive repetitive tasks can easily be accomplished in the open-source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Results: Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault-tolerance and crash-recovery system against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than with a single computer. Conclusion: Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation with a pre-configured example.
Manchester Metropolitan Univ. (England).
This issues paper, sixth in a series of eight, is intended to distill formative evaluation questions on topics that are central to the development of the higher and further education information environment in the United Kingdom. In undertaking formative evaluation studies, the Formative Evaluation of the Distributed National Electronic Resource…
Fatima, Anam; Abbas, Asad; Ming, Wan; Zaheer, Ahmad Nawaz; Akhtar, Masood-ul-Hassan
Technology plays a vital role in every field of life, especially in business and education. Electronic commerce (EC) began in 1991, right after the internet was opened to commercial use. E-commerce development is addressed in the 12th Five-Year Plan (2011 to 2015) of the Chinese Ministry of Industry and Information Technology. The main "objective"…
Clinician‐selected Electronic Information Resources do not Guarantee Accuracy in Answering Primary Care Physicians' Information Needs. A review of: McKibbon, K. Ann, and Douglas B. Fridsma. "Effectiveness of Clinician‐selected Electronic Information Resources for Answering Primary Care Physicians' Information Needs." Journal of the American Medical Informatics Association 13.6 (2006): 653‐9.
Martha Ingrid Preddie
Objective – To determine if electronic information resources selected by primary care physicians improve their ability to answer simulated clinical questions. Design – An observational study utilizing hour‐long interviews and think‐aloud protocols. Setting – The offices and clinics of primary care physicians in Canada and the United States. Subjects – Twenty‐five primary care physicians, of whom 4 were women, 17 were from Canada, 22 were family physicians, and 24 were board certified. Methods – Participants provided responses to 23 multiple‐choice questions. Each physician then chose two questions and looked for the answers utilizing information resources of their own choice. The search processes, chosen resources and search times were noted. These were analyzed along with data on the accuracy of the answers and certainties related to the answer to each clinical question prior to the search. Main results – Twenty‐three physicians sought answers to 46 simulated clinical questions. Utilizing only electronic information resources, physicians spent a mean of 13.0 (SD 5.5) minutes searching for answers to the questions, an average of 7.3 (SD 4.0) minutes for the first question and 5.8 (SD 2.2) minutes for the second. On average, 1.8 resources were utilized per question. Resources that summarized information, such as the Cochrane Database of Systematic Reviews, UpToDate and Clinical Evidence, were favored 39.2% of the time, MEDLINE (Ovid) and PubMed 35.7%, and Internet resources including Google 22.6%. Almost 50% of the search and retrieval strategies were keyword‐based, while MeSH, subheadings and limiting were used less frequently. On average, before searching physicians answered 10 of 23 (43.5%) questions accurately. For questions that were searched using clinician‐selected electronic resources, 18 (39.1%) of the 46 answers were accurate before searching, while 19 (42.1%) were accurate after searching. The difference of …
Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.
Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics
Bhuvaneshwar, Krithika; Belouali, Anas; Singh, Varun; Johnson, Robert M; Song, Lei; Alaoui, Adil; Harris, Michael A; Clarke, Robert; Weiner, Louis M; Gusev, Yuriy; Madhavan, Subha
G-DOC Plus is a data integration and bioinformatics platform that uses cloud computing and other advanced computational tools to handle a variety of biomedical big data, including gene expression arrays, NGS and medical images, so that they can be analyzed in the full context of other omics and clinical information. G-DOC Plus currently holds data from over 10,000 patients selected from private and public resources including Gene Expression Omnibus (GEO), The Cancer Genome Atlas (TCGA) and the recently added datasets from the REpository for Molecular BRAin Neoplasia DaTa (REMBRANDT), caArray studies of lung and colon cancer, ImmPort and the 1000 Genomes data sets. The system allows researchers to explore clinical-omic data one sample at a time, as a cohort of samples, or at the level of a population, providing the user with a comprehensive view of the data. G-DOC Plus tools have been leveraged in cancer and non-cancer studies for hypothesis generation and validation; biomarker discovery and multi-omics analysis, to explore somatic mutations and cancer MRI images; as well as for training and graduate education in bioinformatics, data and computational sciences. Several of these use cases are described in this paper to demonstrate its multifaceted usability. G-DOC Plus can be used to support a variety of user groups in multiple domains to enable hypothesis generation for precision medicine research. The long-term vision of G-DOC Plus is to extend this translational bioinformatics platform to stay current with emerging omics technologies and analysis methods to continue supporting novel hypothesis generation, analysis and validation for integrative biomedical research. By integrating several aspects of the disease and exposing various data elements, such as outpatient lab workup, pathology, radiology, current treatments, molecular signatures and expected outcomes over a web interface, G-DOC Plus will continue to strengthen precision medicine research. G-DOC Plus is available
Recently, the emergence of the novel 2009 influenza A (H1N1) virus and the SARS coronavirus has demonstrated how rapidly pathogens can spread … standards in both minimum data sets for disease surveillance and routine diagnosis and care. Analysis & Visualization: as previously discussed, … for pandemic influenza as well as … [SAGES Electronic Disease Surveillance, PLoS ONE 6(5): e19750, May 2011]
Lorna M. Campbell
The Scottish electronic Staff Development Library (http://www.sesdl.scotcit.ac.uk) is an ongoing collaborative project involving the Universities of Edinburgh, Paisley and Strathclyde, which has been funded by SHEFC as part of their current ScotCIT Programme (http://www.scotcit.ac.uk). This project is being developed in response to the increasing demand for flexible, high-quality staff development materials.
Structural bioinformatics is concerned with the molecular structure of biomacromolecules on a genomic scale, using computational methods. Classic problems in structural bioinformatics include the prediction of protein and RNA structure from sequence, the design of artificial proteins or enzymes, and the automated analysis and comparison of biomacromolecules in atomic detail. The determination of macromolecular structure from experimental data (for example coming from nuclear magnetic resonance, X-ray crystallography or small angle X-ray scattering) has close ties with the field of structural bioinformatics. Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis and experimental determination of macromolecular structure that are based on such methods. These developments include generative models of protein structure, the estimation of the parameters of energy functions that are used in structure prediction, the superposition of macromolecules and structure determination methods that are based on inference. Although this review is not exhaustive, I believe the selected topics give a good impression of the exciting new, probabilistic road the field of structural bioinformatics is taking.
Jungck, John R; Weisstein, Anton E
The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes, the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software, the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, with the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
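The tree-enumeration mathematics mentioned in this abstract is a good illustration of why algorithmic understanding matters at scale: the number of distinct unrooted binary topologies for n taxa is the double factorial (2n-5)!!, which grows super-exponentially and makes exhaustive tree search infeasible beyond small n. A short Python sketch of the standard counting formulas:

```python
def double_factorial(k):
    """k!! = k * (k-2) * (k-4) * ... down to 1 or 2."""
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def unrooted_trees(n):
    """Number of distinct unrooted binary tree topologies for n labeled
    taxa: (2n-5)!! for n >= 3."""
    if n < 3:
        return 1
    return double_factorial(2 * n - 5)

def rooted_trees(n):
    """Rooted binary topologies: (2n-3)!! for n >= 2."""
    if n < 2:
        return 1
    return double_factorial(2 * n - 3)
```

For example, 4 taxa give 3 unrooted topologies, 10 taxa already give 2,027,025, and 50 taxa exceed the number of atoms in the observable universe; hence the heuristic search strategies that phylogenetics software actually uses.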
A Refresher Course on 'Bioinformatics in Modern Biology' for graduate and postgraduate college/university teachers will be held at the School of Life Sciences, Manipal University, Manipal, for two weeks from 5 to 17 May 2014. The objective of this Course is to improve teaching methodologies by incorporating online teaching …
Cazals, Frédéric; Dreyfus, Tom
Software in structural bioinformatics has mainly been application driven. To favor practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is based on a modular design offering a rich and versatile framework allowing the development of novel applications requiring well specified complex operations, without compromising robustness and performance. The SBL involves four software components (1-4 hereafter). For end-users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macromolecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to develop novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a Bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Contact: Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: email@example.com
Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas
"Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB).
van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures
Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
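As a first-pass illustration of the kind of check a primer-design tool performs, the Wallace rule estimates the melting temperature of a short primer as Tm = 2(A+T) + 4(G+C). A small Python sketch follows; this is a rough rule of thumb for short oligos, not a substitute for nearest-neighbor thermodynamic models:

```python
def wallace_tm(primer):
    """Approximate melting temperature (degrees C) by the Wallace rule:
    Tm = 2*(A+T) + 4*(G+C). Reasonable only for short primers."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

def gc_content(primer):
    """Percent G+C, another routine primer-quality check."""
    p = primer.upper()
    return 100.0 * (p.count("G") + p.count("C")) / len(p)
```

A design pipeline would typically reject candidate primers whose Tm or GC content falls outside a target window, alongside checks for hairpins and primer-dimer formation that this sketch omits.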
A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier, David J. Dix and Stephen A. Krawetz. Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...
Suplatov, Dmitry; Voevodin, Vladimir; Švedas, Vytas
The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatic-driven strategies that are used to predict structural changes that can be applied to wild type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots - key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies and to link them to function, as well as to provide knowledge-based predictions for experimental evaluation. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Science Academies' Refresher Course on Bioinformatics in Modern Biology. Information and Announcements. Resonance – Journal of Science Education, Volume 19, Issue 2, February 2014, pp 192-192.
Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter
We have developed "GLYCANthrope" - CROSSWORKS for glycans: a bioinformatics tool which assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...
Lima, Andre O. S.; Garces, Sergio P. S.
Bioinformatics is one of the fastest growing scientific areas over the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of their importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…
Gelbart, Hadas; Yarden, Anat
Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…
The kappa casein (CSN3) gene encodes a milk protein that is highly conserved in mammalian species. Genetic variations in the CSN3 gene of six mammalian livestock species were investigated using a bioinformatics approach. A total of twenty-seven CSN3 gene sequences with corresponding amino acids belonging to the six ...
Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul
Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…
Jun 26, 2013 … Bioinformatics and Biotechnology, DES, FBAS, International Islamic University, Islamabad, Pakistan. Accepted 26 April, 2013. The Tp73 … New discoveries about the control and function of p73 are still in progress and it is … modern research for diagnostics and evolutionary history of p73.
The convergence and wealth of informatics, bioinformatics and genomics methods and associated resources allow a comprehensive and rapid approach for the surveillance and detection of bacterial and viral organisms. Coupled with the continuing race for the fastest, most cost-efficient and highest-quality DNA sequencing technology, that is, "next-generation sequencing", the detection of biological threat agents by 'cheaper and faster' means is possible. With the application of improved bioinformatic tools for the understanding of these genomes and for parsing unique pathogen genome signatures, along with 'state-of-the-art' informatics, which include faster computational methods, equipment and databases, it is feasible to apply new algorithms to biothreat agent detection. Two such methods are high-throughput DNA sequencing-based and resequencing microarray-based identification. These are illustrated and validated by two examples involving human adenoviruses, both from real-world test beds.
Stringer-Calvert David WJ
Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the
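The application example in this abstract, finding the fraction of EC-numbered activities with no known sequence, becomes a simple anti-join once the source databases share one relational schema. A toy sketch with SQLite; the table and column names here are invented for illustration and do not reflect BioWarehouse's actual schema:

```python
import sqlite3

# Two "source databases" loaded into one schema, then queried together.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE enzyme_activity (ec TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE protein_seq (id INTEGER PRIMARY KEY, ec TEXT, seq TEXT);
    INSERT INTO enzyme_activity VALUES
        ('1.1.1.1', 'alcohol dehydrogenase'),
        ('2.7.1.1', 'hexokinase'),
        ('9.9.9.9', 'toy orphan activity');
    INSERT INTO protein_seq (ec, seq) VALUES
        ('1.1.1.1', 'MSTAGKVI'),
        ('2.7.1.1', 'MVHLTPEE');
""")

def orphan_fraction(conn):
    """Fraction of EC activities with no sequence anywhere in the
    warehouse, computed as an anti-join across the integrated tables."""
    (orphans,) = conn.execute("""
        SELECT COUNT(*) FROM enzyme_activity a
        WHERE NOT EXISTS (SELECT 1 FROM protein_seq s WHERE s.ec = a.ec)
    """).fetchone()
    (total,) = conn.execute("SELECT COUNT(*) FROM enzyme_activity").fetchone()
    return orphans / total
```

The value of the warehouse approach is that this one SQL statement spans what were originally separate resources; without integration, the same question requires ad hoc cross-referencing between incompatible flat files.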
deRiel, E; Puttkammer, N; Hyppolite, N; Diallo, J; Wagner, S; Honoré, J G; Balan, J G; Celestin, N; Vallès, J S; Duval, N; Thimothé, G; Boncy, J; Coq, N R L; Barnhart, S
Electronic health information systems, including electronic medical records (EMRs), have the potential to improve access to information and quality of care, among other things. Success factors and challenges for novel EMR implementations in low-resource settings have increasingly been studied, although less is known about maturing systems and sustainability. One systematic review identified seven categories of implementation success factors: ethical, financial, functionality, organizational, political, technical and training. This case study applies this framework to iSanté, Haiti's national EMR in use in more than 100 sites and housing records for more than 750 000 patients. The author group, consisting of representatives of different agencies within the Haitian Ministry of Health (MSPP), funding partner the Centers for Disease Control and Prevention (CDC) Haiti, and implementing partner the International Training and Education Center for Health (I-TECH), identify successes and lessons learned according to the seven identified categories, and propose an additional cross-cutting category, sustainability. Factors important for long-term implementation success of complex information systems are balancing investments in hardware and software infrastructure upkeep, user capacity and data quality control; designing and building a system within the context of the greater eHealth ecosystem with a plan for interoperability and data exchange; establishing system governance and strong leadership to support local system ownership and planning for system financing to ensure sustainability. Lessons learned from 10 years of implementation of the iSanté EMR system are relevant to sustainability of a full range of increasingly interrelated information systems (e.g. for laboratory, supply chain, pharmacy and human resources) in the health sector in low-resource settings. © The Author 2017. Published by Oxford University Press in association with The London School of Hygiene
Nehm, Ross H.; Budd, Ann F.
NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …
Arrais, Joel P; Rosa, Nuno; Melo, José; Coelho, Edgar D; Amaral, Diana; Correia, Maria José; Barros, Marlene; Oliveira, José Luís
The molecular complexity of the human oral cavity can only be clarified through identification of components that participate within it. However, current proteomic techniques produce high volumes of information that are dispersed over several online databases. Collecting all of this data and using an integrative approach capable of identifying unknown associations is still an unsolved problem. This is the main motivation for this work. We present the online bioinformatic tool OralCard, which comprises results from 55 manually curated articles reflecting the oral molecular ecosystem (OralPhysiOme). It comprises experimental information available from the oral proteome both of human (OralOme) and microbial origin (MicroOralOme) structured in protein, disease and organism. This tool is a key resource for researchers to understand the molecular foundations implicated in biology and disease mechanisms of the oral cavity. The usefulness of this tool is illustrated with the analysis of the oral proteome associated with diabetes mellitus type 2. OralCard is available at http://bioinformatics.ua.pt/oralcard. Copyright © 2013 Elsevier Ltd. All rights reserved.
Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other 'omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on 'omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http://www.secondarymetabolites.org) is introduced to provide a one-stop catalog and links to these bioinformatics resources. In addition, an outlook is presented how the existing tools and those to be developed will influence synthetic biology approaches in the natural products field.
Dennehy, Patricia; White, Mary P; Hamilton, Andrew; Pohl, Joanne M; Tanner, Clare; Onifade, Tiffiani J; Zheng, Kai
To present a partnership-based and community-oriented approach designed to ease provider anxiety and facilitate the implementation of electronic health records (EHR) in resource-limited primary care settings. The approach, referred to as partnership model, was developed and iteratively refined through the research team's previous work on implementing health information technology (HIT) in over 30 safety net practices. This paper uses two case studies to illustrate how the model was applied to help two nurse-managed health centers (NMHC), a particularly vulnerable primary care setting, implement EHR and get prepared to meet the meaningful use criteria. The strong focus of the model on continuous quality improvement led to eventual implementation success at both sites, despite difficulties encountered during the initial stages of the project. There has been a lack of research, particularly in resource-limited primary care settings, on strategies for abating provider anxiety and preparing them to manage complex changes associated with EHR uptake. The partnership model described in this paper may provide useful insights into the work shepherded by HIT regional extension centers dedicated to supporting resource-limited communities disproportionally affected by EHR adoption barriers. NMHC, similar to other primary care settings, are often poorly resourced, understaffed, and lack the necessary expertise to deploy EHR and integrate its use into their day-to-day practice. This study demonstrates that implementation of EHR, a prerequisite to meaningful use, can be successfully achieved in this setting, and partnership efforts extending far beyond the initial software deployment stage may be the key.
Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany
We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…
Shachak, Aviv; Ophir, Ron; Rubin, Eitan
The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…
Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori
/ H. Kamimura -- Massive collection of full-length complementary DNA clones and microarray analyses: keys to rice transcriptome analysis / S. Kikuchi -- Changes of influenza A(H5) viruses by means of entropic chaos degree / K. Sato and M. Ohya -- Basics of genome sequence analysis in bioinformatics - its fundamental ideas and problems / T. Suzuki and S. Miyazaki -- A basic introduction to gene expression studies using microarray expression data analysis / D. Wanke and J. Kilian -- Integrating biological perspectives: a quantum leap for microarray expression analysis / D. Wanke ... [et al.].
Vetrivel, Umashankar; Pilla, Kalabharath
Historically, live Linux distributions for bioinformatics have paved the way for a portable, platform-independent bioinformatics workbench. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and lack data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks such as molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced, customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
López, Vivian F.; Aguilar, Ramiro; Alonso, Luis; Moreno, María N.; Corchado, Juan M.
In this paper we describe both theoretical and practical results of a novel data mining process that combines hybrid association-analysis techniques with classical genomic sequencing algorithms to generate grammatical structures of a specific language. We used an application of a compiler generator system that allows the development of a practical application within the area of grammarware, where the concepts of language analysis are applied to other disciplines, such as bioinformatics. The tool allows the complexity of the obtained grammar to be measured automatically from textual data. A technique for incremental discovery of sequential patterns is presented to obtain simplified production rules, which are compacted with bioinformatics criteria to make up a grammar.
Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan
This volume compiles accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress (CCBCOL), after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances that have taken place in the Biological Sciences and their integration with the Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data, which need to be organized, analyzed and stored to understand phenomena associated with living organisms related to their evolution, their behavior in different ecosystems, and the development of applications that can be derived from this analysis.
Li, Xiao; Zhang, Yizheng
It is widely recognized that the exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
Varma, B Sharat Chandra; Balakrishnan, M
This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.
Overby, Casey Lynnette; Tarczy-Hornoch, Peter
Personalized medicine can be defined broadly as a model of healthcare that is predictive, personalized, preventive and participatory. Two US President's Council of Advisors on Science and Technology reports illustrate challenges in personalized medicine (in a 2008 report) and in the use of health information technology (in a 2010 report). Translational bioinformatics is a field that can help address these challenges and is defined by the American Medical Informatics Association as "the development of storage, analytic and interpretive methods to optimize the transformation of increasingly voluminous biomedical data into proactive, predictive, preventative and participatory health." This article discusses barriers to implementing genomics applications and current progress toward overcoming those barriers, describes lessons learned from the early experiences of institutions engaged in personalized medicine and provides example areas for translational bioinformatics research inquiry.
At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both the biological and computational sciences. Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.
Phua, J; See, K C; Khalizah, H J; Low, S P; Lim, T K
Clinical questions often arise at daily hospital bedside rounds. Yet, little information exists on how the search for answers may be facilitated. The aim of this prospective study was, therefore, to evaluate the overall utility, including the feasibility and usefulness of incorporating searches of UpToDate, a popular online information resource, into rounds. Doctors searched UpToDate for any unresolved clinical questions during rounds for patients in general medicine and respiratory wards, and in the medical intensive care unit of a tertiary teaching hospital. The nature of the questions and the results of the searches were recorded. Searches were deemed feasible if they were completed during the rounds and useful if they provided a satisfactory answer. A total of 157 UpToDate searches were performed during the study period. Questions were raised by all ranks of clinicians from junior doctors to consultants. The searches were feasible and performed immediately during rounds 44% of the time. Each search took a median of three minutes (first quartile: two minutes, third quartile: five minutes). UpToDate provided a useful and satisfactory answer 75% of the time, a partial answer 17% of the time and no answer 9% of the time. It led to a change in investigations, diagnosis or management 37% of the time, confirmed what was originally known or planned 38% of the time and had no effect 25% of the time. Incorporating UpToDate searches into daily bedside rounds was feasible and useful in clinical decision-making.
James F. Aiton
The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of 'global hypermedia' and 'universal document readership' realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.
Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas
“Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371
Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo
Nearly 10 years have passed since the first mobile apps appeared. Given the fact that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transit from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. This client is able to browse, select, automatically compose interface parameters, invoke services and monitor the execution of Web Services using the service metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in the App Store by Apple and the Play Store by Google. The software will be available for at least 2 years. firstname.lastname@example.org. Source code, the final web-app, training material and documentation are available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.
Bencharit, Sompop; Border, Michael B; Edelmann, Alex; Byrd, Warren C
The 3rd International Conference on Proteomics & Bioinformatics (Proteomics 2013) Philadelphia, PA, USA, 15-17 July 2013 The Third International Conference on Proteomics & Bioinformatics (Proteomics 2013) was sponsored by the OMICS group and was organized in order to strengthen the future of proteomics science by bringing together professionals, researchers and scholars from leading universities across the globe. The main topics of this conference included the integration of novel platforms in data analysis, the use of a systems biology approach, different novel mass spectrometry platforms and biomarker discovery methods. The conference was divided into proteomic methods and research interests. Among these two categories, interactions between methods in proteomics and bioinformatics, as well as other research methodologies, were discussed. Exceptional topics from the keynote forum, oral presentations and the poster session have been highlighted. The topics range from new techniques for analyzing proteomics data, to new models designed to help better understand genetic variations to the differences in the salivary proteomes of HIV-infected patients.
Bioinformatics web-based services are rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC/Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
Fourment, Mathieu; Gillings, Michael R
The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat
In this study, we performed bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to identify genes encoding gelatinase. L. sphaericus was isolated from soil and is a gelatinase-producing bacterium specific to porcine and bovine gelatin. This bacterium offers the possibility of producing enzymes specific to each species of meat. The main focus of this research was to identify the gelatinase-encoding gene within L. sphaericus using bioinformatics analysis of a partially sequenced genome. Three candidate genes were identified: first, gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; second, gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and third, gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1, R1, F2, R2, F3 and R3, were designed to target short cDNA sequences by PCR. The amplifications reliably yielded products of 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. The bioinformatics analysis of L. sphaericus thus identified genes encoding gelatinase.
Hussain, Hanaa M; Benkrid, Khaled; Seker, Huseyin
Bioinformatics data tend to be highly dimensional in nature and thus impose significant computational demands. To resolve the limitations of conventional computing methods, several alternative high performance computing solutions have been proposed by scientists, such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). The latter have been shown to be efficient and high in performance. In recent years, FPGAs have been benefiting from the dynamic partial reconfiguration (DPR) feature for adding flexibility to alter specific regions within the chip. This work proposes combining the use of FPGAs and DPR to build a dynamic multi-classifier architecture that can be used in processing bioinformatics data. In bioinformatics, applying different classification algorithms to the same dataset is desirable in order to obtain comparable, more reliable and consensus decisions, but it can consume a long time when performed on a conventional PC. The DPR implementations of two common classifiers, namely support vector machines (SVMs) and K-nearest neighbor (KNN), are combined to form a multi-classifier FPGA architecture which can utilize a specific region of the FPGA to work as either an SVM or a KNN classifier. This multi-classifier DPR implementation achieved at least ~8x reduction in reconfiguration time over the single non-DPR classifier implementation, and occupied less space and fewer hardware resources than having both classifiers. The proposed architecture can be extended to work as an ensemble classifier.
Tilahun, Binyam; Fritz, Fleur
Electronic medical record (EMR) systems are increasingly being implemented in hospitals of developing countries to improve patient care and clinical service. However, only limited evaluation studies are available concerning the level of adoption and the determinant factors of success in those settings. The objective of this study was to assess the usage pattern, user satisfaction level, and determinants of health professionals' satisfaction towards a comprehensive EMR system implemented in Ethiopia, where parallel documentation using the EMR and the paper-based medical records is in practice. A quantitative, cross-sectional study design was used to assess the usage pattern, user satisfaction level, and determinant factors of an EMR system implemented in Ethiopia based on the DeLone and McLean model of information system success. Descriptive statistical methods were applied to analyze the data and a binary logistic regression model was used to identify determinant factors. Health professionals (N=422) from five hospitals were approached and 406 responded to the survey (96.2% response rate). Of the respondents, 76.1% (309/406) started to use the system immediately after implementation and user training, but only 31.7% (98/309) of the professionals reported using the EMR during the study (after 3 years of implementation). Of the 12 core EMR functions, 3 were never used by most respondents, and they were also unaware of 4 of the core EMR functions. It was found that 61.4% (190/309) of the health professionals reported overall dissatisfaction with the EMR (median=4, interquartile range (IQR)=1) on a 5-level Likert scale. Physicians were more dissatisfied (median=5, IQR=1) when compared to nurses (median=4, IQR=1) and the health management information system (HMIS) staff (median=2, IQR=1). Of all the participants, 64.4% (199/309) believed that the EMR had no positive impact on the quality of care. The participants indicated an agreement with the system and information
Supreet Kaur Gill
Clinical research works toward the promotion and wellbeing of the health status of the people. There is a rapid increase in the number and severity of diseases like cancer, hepatitis and HIV, resulting in high morbidity and mortality. Clinical research involves drug discovery and development, whereas clinical trials are performed to establish the safety and efficacy of drugs. Drug discovery is a long process starting with target identification, validation and lead optimization. This is followed by preclinical trials, intensive clinical trials and eventually post-marketing vigilance for drug safety. Software and bioinformatics tools play a great role not only in drug discovery but also in drug development. This involves the use of informatics in the development of new knowledge pertaining to health and disease, data management during clinical trials, and the use of clinical data for secondary research. In addition, new technologies like molecular docking, molecular dynamics simulation, proteomics and quantitative structure-activity relationships make the drug discovery process faster and easier. During preclinical trials, software is used for randomization to remove bias and to plan study design. In clinical trials, software like electronic data capture, remote data capture and electronic case report forms (eCRF) is used to store the data. eClinical and Oracle Clinical are software used for clinical data management and for statistical analysis of the data. After a drug is marketed, its safety can be monitored by drug safety software like Oracle Argus or ARISg. Therefore, software is used from the very early stages of drug design, through drug development and clinical trials, and during pharmacovigilance. This review describes different aspects of the application of computers and bioinformatics in drug design, discovery and development, formulation design and clinical research.
This paper gives a brief overview of the electronic information resources and services offered by the J.D. Rockefeller Research Library at Egerton University and the marketing of these resources. The paper examines the various reasons for marketing electronic information resources and illustrates the marketing strategies used by the J.D. Rockefeller Research Library for effective utilization of the available resources in supporting research, teaching and learning. These strategies include the use of posters, notices, brochures, telephone calls, Current Awareness Services (CAS), workshops and seminars, and decentralization of services, among others. It concludes with a discussion of the cost-effective use of these strategies in research and teaching.
Niels Rathlev, MD
Introduction: There is a paucity of literature supporting the use of electronic alerts for patients with high-frequency emergency department (ED) use. We sought to measure changes in opioid prescribing and administration practices, total charges and other resource utilization using electronic alerts to notify providers of an opioid-use care plan for high-frequency ED patients. Methods: This was a randomized, non-blinded, two-group parallel design study of patients who had (1) opioid use disorder and (2) high-frequency ED use. Three affiliated hospitals with identical electronic health records participated. Patients were randomized into “Care Plan” versus “Usual Care” groups. Between the years before and after randomization, we compared as primary outcomes the following: (1) opioids (morphine mg equivalents) prescribed to patients upon discharge and administered to ED and inpatients; (2) total medical charges; and the numbers of (3) ED visits, (4) ED visits with advanced radiologic imaging (computed tomography [CT] or magnetic resonance imaging [MRI]) studies, and (5) inpatient admissions. Results: A total of 40 patients were enrolled. For ED and inpatients in the “Usual Care” group, the proportion of morphine mg equivalents received in the post-period compared with the pre-period was 15.7%, while in the “Care Plan” group the proportion received in the post-period compared with the pre-period was 4.5% (ratio=0.29, 95% CI [0.07-1.12]; p=0.07). For discharged patients in the “Usual Care” group, the proportion of morphine mg equivalents prescribed in the post-period compared with the pre-period was 25.7%, while in the “Care Plan” group the proportion prescribed in the post-period compared to the pre-period was 2.9%. The “Care Plan” group showed an 89% greater proportional change over the periods compared with the “Usual Care” group (ratio=0.11, 95% CI [0.01-0.092]; p=0.04). Care plans did not change the total charges, or, the numbers
Bansal Arvind K
The revolutionary growth in computation speed and memory storage capability has fueled a new era in the analysis of biological data. Hundreds of microbial genomes and many eukaryotic genomes, including a cleaner draft of the human genome, have been sequenced, raising the expectation of better control of microorganisms. The goals are as lofty as the development of rational drugs and antimicrobial agents, the development of new enhanced bacterial strains for bioremediation and pollution control, the development of better and easy-to-administer vaccines, the development of protein biomarkers for various bacterial diseases, and a better understanding of host-bacteria interaction to prevent bacterial infections. In the last decade, the development of many new bioinformatics techniques and integrated databases has facilitated the realization of these goals. Current research in bioinformatics can be classified into: (i) genomics – sequencing and comparative study of genomes to identify gene and genome functionality; (ii) proteomics – identification and characterization of protein-related properties and reconstruction of metabolic and regulatory pathways; (iii) cell visualization and simulation to study and model cell behavior; and (iv) application to the development of drugs and anti-microbial agents. In this article, we will focus on the techniques and their limitations in genomics and proteomics. Bioinformatics research can be classified under three major approaches: (1) analysis based upon the available experimental wet-lab data, (2) the use of mathematical modeling to derive new information, and (3) an integrated approach that combines search techniques with mathematical modeling. The major impact of bioinformatics research has been to automate genome sequencing, the development of integrated genomics and proteomics databases, genome comparisons to identify genome function, the derivation of metabolic pathways, gene
Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue
Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating that the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
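The third recipe above reduces to counting colony pixels per frame and tracking the change over time. CL-Quant recipes are proprietary, so the following is only a minimal, hypothetical sketch of that step, with toy 3x3 grayscale frames standing in for real video data and an assumed intensity threshold separating colony from background:

```python
# Minimal sketch of the pixel-counting step: given grayscale frames
# (2-D lists of 0-255 intensities), count pixels above a threshold
# that separates colony from background, then estimate growth rate.
def colony_area(frame, threshold=128):
    """Number of pixels classified as colony in one frame."""
    return sum(1 for row in frame for px in row if px >= threshold)

def growth_rate(frames, interval_h=1.0, threshold=128):
    """Average change in colony area (pixels/hour) across a time-lapse."""
    areas = [colony_area(f, threshold) for f in frames]
    return (areas[-1] - areas[0]) / (interval_h * (len(frames) - 1))

# Two toy frames: the "colony" (bright pixels) grows from 2 to 4 pixels.
f0 = [[200, 50, 50], [200, 50, 50], [50, 50, 50]]
f1 = [[200, 200, 50], [200, 200, 50], [50, 50, 50]]
print(growth_rate([f0, f1]))  # 2.0 pixels/hour
```

A real pipeline would of course segment and enhance the frames first (the first two recipes) before counting.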
There is a Relationship between Resource Expenditures and Reference Transactions in Academic Libraries. A Review of: Dubnjakovic, A. (2012). Electronic resource expenditure and the decline in reference transaction statistics in academic libraries. Journal of Academic Librarianship, 38(2), 94-100. doi:10.1016/j.acalib.2012.01.001
Annie M. Hughes
Objective – To provide an analysis of the impact of expenditures on electronic resources and gate counts on the increase or decrease in reference transactions. Design – Analysis of results of existing survey data from the National Center for Education Statistics (NCES) 2006 Academic Library Survey (ALS). Setting – Academic libraries in the United States. Subjects – 3,925 academic library respondents. Methods – The author chose to use survey data collected from the 2006 ALS conducted by the NCES. The survey included data on various topics related to academic libraries, but in the case of this study, the author chose to analyze three of the 193 variables included. The three variables – electronic books expenditure, computer hardware and software, and expenditures on bibliographic utilities – were combined into one variable called electronic resource expenditure. Gate counts were also considered as a variable. Electronic resource expenditure was also split as a variable into three groups: low, medium, and high. Multiple regression analysis and general linear modeling, along with tests of reliability, were employed. Main Results – The author determined that low, medium, and high spenders with regard to electronic resources exhibited differences in gate counts, and gate counts have an effect on reference transactions in any given week. Gate counts tend to have little effect on reference transactions for the higher spenders, and higher spenders tend to have a higher number of reference transactions overall. Low spenders have lower gate counts and also a lower number of reference transactions. Conclusion – The findings from this study show that academic libraries spending more on electronic resources also tend to see an increase in reference transactions. The author also concludes that library spaces are no longer the determining factor with regard to the number of reference transactions. Spending more on electronic resources is
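The core of the analysis above is regressing reference transactions on predictors such as gate count. The ordinary-least-squares fit underlying such a model can be sketched with the standard library alone; the numbers below are purely illustrative, not NCES survey data:

```python
# Stdlib-only ordinary least squares for a single predictor:
# weekly reference transactions modeled on gate counts.
def ols(x, y):
    """Return (slope, intercept) of the least-squares line y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

gate_counts = [100, 200, 300, 400]          # illustrative weekly gate counts
transactions = [12, 22, 31, 42]             # illustrative reference transactions
slope, intercept = ols(gate_counts, transactions)
print(round(slope, 3), round(intercept, 3))  # 0.099 2.0
```

The published study used multiple regression with several predictors and grouped spenders; this sketch shows only the single-predictor machinery.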
Lopez, Rodrigo; Silventoinen, Ville; Robinson, Stephen; Kibria, Asif; Gish, Warren
Since 1995, the WU-BLAST programs (http://blast.wustl.edu) have provided a fast, flexible and reliable method for similarity searching of biological sequence databases. The software is in use at many locales and web sites. The European Bioinformatics Institute's WU-Blast2 (http://www.ebi.ac.uk/blast2/) server has been providing free access to these search services since 1997 and today supports many features that both enhance the usability and expand on the scope of the software. PMID:12824421
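For flavor, the exact dynamic-programming score that seed-and-extend heuristics such as WU-BLAST approximate for speed is small enough to write out directly. This is a generic Smith-Waterman local-alignment sketch with arbitrary scoring parameters, not WU-BLAST's actual algorithm or defaults:

```python
# Smith-Waterman local alignment score: the O(mn) dynamic program whose
# result fast similarity-search tools approximate with seeded heuristics.
def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    rows = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = rows[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores never drop below zero.
            rows[i][j] = max(0, diag, rows[i-1][j] + gap, rows[i][j-1] + gap)
            best = max(best, rows[i][j])
    return best

print(sw_score("GATTACA", "ATTAC"))  # 10: "ATTAC" aligns exactly inside
```

A database search repeats this (heuristically pruned) against every library sequence, which is why the speed of the heuristic matters so much.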
Wiwanitkit, Somsri; Wiwanitkit, Viroj
The role of microRNA in the pathogenesis of pulmonary tuberculosis is an interesting topic in chest medicine at present. Recently, it was proposed that microRNA can be a useful biomarker for monitoring pulmonary tuberculosis and might be an important part of the pathogenesis of the disease. Here, the authors perform a bioinformatics study to assess the microRNA within known tuberculosis RNA. The microRNA part can be detected, and this can be important key information in further study of the p...
The signature feature of Cellular Automata is the realization that "simple rules can give rise to complex behavior", and in particular that fixed "rock-bottom" simple rules can give rise to multiple levels of organization. Here we describe Multilevel Cellular Automata, in which the microscopic entities (states) and their transition rules are themselves adjusted by the mesoscale patterns that they generate. Thus we study the feedback of higher levels of organization on the lower levels. Such an approach is preeminently important for studying bioinformatic systems. We focus here on an evolutionary approach to formalizing such Multilevel Cellular Automata, and review examples of studies that use them.
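The downward feedback described above can be illustrated with a toy one-dimensional automaton in which the cells' transition rule is re-tuned each step by the mesoscale pattern they produce. This is only an illustrative sketch of the idea, not the formalism from the article:

```python
# Toy multilevel cellular automaton: cells update by a neighborhood-sum
# rule, but the rule's threshold is itself adjusted each step by the
# mesoscale pattern the cells generate (here, their global density).
def step(cells, threshold):
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= threshold
            else 0 for i in range(n)]

def run(cells, steps):
    history = [cells]
    for _ in range(steps):
        density = sum(cells) / len(cells)
        # Feedback from the higher level: dense patterns tighten the rule
        # (survival needs a full neighborhood), sparse patterns relax it.
        threshold = 3 if density > 0.5 else 1
        cells = step(cells, threshold)
        history.append(cells)
    return history

for row in run([0, 0, 0, 1, 0, 0, 0, 0], 4):
    print("".join(map(str, row)))
```

With this feedback a single seed grows while sparse, then is pruned back once it dominates, settling into an oscillation; with a fixed rule it would simply grow or die.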
Pharmacogenetics refers to the study of the individual pharmacological response based on the genotype. Its objective is to optimize treatment on an individual basis, thereby creating a more efficient and safe personalized therapy. In the second part of this review, the molecular methods of study in pharmacogenetics, including microarray technology or DNA chips, are discussed. Among them we highlight the microarrays used to determine gene expression, which detect specific RNA sequences, and the microarrays employed to determine the genotype, which detect specific DNA sequences, including polymorphisms, particularly single nucleotide polymorphisms (SNPs). The relationship between pharmacogenetics, bioinformatics and ethical concerns is reviewed.
Rezig, Slim; Sakhri, Saber
Salmonella species are the agents mainly responsible for frequent food-borne gastrointestinal diseases. Their detection using classical methods is laborious, and the results take a long time to obtain. In this context, we tried to set up a technique revealing the invA virulence gene, found in the majority of Salmonella species. After amplification with PCR using specific primers designed and verified with bioinformatics programs, two pairs of primers were established, and they appeared to be very specific and sensitive for the detection of the invA gene. (Author)
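The in-silico primer verification mentioned above amounts to checking that each primer has exactly one binding site on the target. A minimal, hypothetical sketch of that check follows; the template and primer sequences are toy data, not the actual invA primers:

```python
# Hypothetical in-silico primer-specificity check: a primer pair is
# specific if the forward primer and the reverse primer's reverse
# complement each match the template at exactly one position.
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_sites(template, primer):
    """0-based positions where the primer matches the template exactly."""
    return [i for i in range(len(template) - len(primer) + 1)
            if template[i:i + len(primer)] == primer]

template = "ATGCGTACGTTAGCCGATCGGATCCATAGGCTA"  # toy target sequence
fwd = "ATGCGTACG"
rev = "TAGCCTATG"  # anneals to the reverse strand
f_sites = find_sites(template, fwd)
r_sites = find_sites(template, revcomp(rev))
print(f_sites, r_sites)  # [0] [24] -> one site each: specific pair
```

Real primer design also checks melting temperature, hairpins, and off-target genomes, which this sketch omits.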
Soualmia, L F; Lecroq, T
To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application in the health domain and clinical care. We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As last year, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor evaluated the set of 1,594 articles separately, and the evaluation results were merged to retain 15 articles for peer review. The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine that are mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six well-performing tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome using high-resolution mass spectrometry, the authors highlight the need for using mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support: the authors present data-mining methods for large-scale datasets of past transplants, with the objective of identifying chances of survival. Current research activities continue to attest to the convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for clinicians in their daily practice. All the recent research and
Commercial success or failure of innovation in bioinformatics and in-silico biology requires the appropriate use of legal tools for protecting and exploiting intellectual property. These tools include patents, copyrights, trademarks, design rights, and limiting information in the form of 'trade secrets'. Potentially patentable components of bioinformatics programmes include lines of code, algorithms, data content, data structure and user interfaces. In both the US and the European Union, copyright protection is granted for software as a literary work, and most other major industrial countries have adopted similar rules. Nonetheless, the grant of software patents remains controversial and is being challenged in some countries. Current debate extends to aspects such as whether patents can claim not only the apparatus and methods but also the data signals and/or products, such as a CD-ROM, on which the programme is stored. The patentability of substances discovered using in-silico methods is a separate debate that is unlikely to be resolved in the near future.
Liu, Yao-Yuan; Harbison, SallyAnn
Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.
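One of the most basic tasks these bioinformatic pipelines perform is calling an STR allele from a sequenced read, i.e., counting how many times the repeat motif occurs tandemly. A toy sketch of that single step (not a production allele caller, and with made-up read data) might look like:

```python
# Count the longest tandem run of an STR motif in a read. Real MPS
# callers also handle sequencing errors, stutter, and flanking regions.
import re

def str_allele(read, motif):
    """Length (in repeat units) of the longest tandem run of `motif`."""
    runs = re.findall(f"(?:{motif})+", read)
    return max((len(r) // len(motif) for r in runs), default=0)

read = "TTGGAGATAGATAGATAGATAGATACC"  # toy read: 5 tandem AGAT units
print(str_allele(read, "AGAT"))  # 5
```

Sequence-level calling like this is precisely what MPS adds over length-based capillary electrophoresis: two alleles of the same length can differ in internal sequence.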
Human G-protein coupled receptors (hGPCRs) constitute a large and highly pharmaceutically relevant membrane receptor superfamily. About half of the hGPCR family members are chemosensory receptors, involved in bitter taste and olfaction, along with a variety of other physiological processes. Hence these receptors constitute promising targets for pharmaceutical intervention. Molecular modeling has so far been the most important tool for gaining insights into agonist binding and receptor activation. Here we investigate both aspects by bioinformatics-based predictions across all bitter taste and odorant receptors for which site-directed mutagenesis data are available. First, we observe that state-of-the-art homology modeling combined with previously used docking procedures reproduces only a limited fraction of the ligand/receptor interactions inferred by experiments. This is most probably caused by the low sequence identity with available structural templates, which limits the accuracy of the protein model and in particular of the side-chain orientations. Methods that transcend the limited sampling of the conformational space of docking may improve the predictions. As an example corroborating this, we review here multi-scale simulations from our lab and show that, for the three complexes studied so far, they significantly enhance the predictive power of the computational approach. Second, our bioinformatics analysis provides support to previous claims that several residues, including those at positions 1.50, 2.50, and 7.52, are involved in receptor activation.
Our previous study demonstrated that the human KIAA0100 gene is a novel acute monocytic leukemia-associated antigen (MLAA) gene, but the functional characterization of the human KIAA0100 gene has remained unknown to date. Here, firstly, bioinformatic prediction of the human KIAA0100 gene was carried out using online software; secondly, human KIAA0100 gene expression was downregulated using the clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated 9 (Cas9) system in U937 cells. Cell proliferation and apoptosis were next evaluated in KIAA0100-knockdown U937 cells. The bioinformatic prediction showed that the human KIAA0100 gene is located on 17q11.2, and the human KIAA0100 protein is located in the secretory pathway. Besides, the human KIAA0100 protein contains a signal peptide, a transmembrane region, three types of secondary structure (alpha helix, extended strand, and random coil), and four domains from mitochondrial protein 27 (FMP27). The observations on the functional characterization of the human KIAA0100 gene revealed that its downregulation inhibited cell proliferation and promoted cell apoptosis in U937 cells. To summarize, these results suggest that the human KIAA0100 gene possibly comes within the mitochondrial genome; moreover, it is a novel anti-apoptotic factor related to carcinogenesis or progression in acute monocytic leukemia, and may be a potential target for immunotherapy against acute monocytic leukemia.
Hande, Sneha; Goswami, Kalyan; Sharma, Richa; Bhoj, Priyanka; Jena, Lingaraj; Reddy, Maryada Venkata Rami
Lymphatic filariasis, commonly called elephantiasis, poses an estimated burden of 5.09 million disability-adjusted life years. The limitations of its sole drug, diethylcarbamazine (DEC), drive the exploration of effective filarial targets. A few plant extracts containing polyphenolic ingredients, as well as some synthetic compounds, possess a potential dihydrofolate reductase (DHFR) inhibitory effect. Here, we postulated a plausible link between folates and polyphenolics based on their common precursor in shikimate metabolism. Considering its implication in antagonism based on structural resemblance, we have attempted to validate the parasitic DHFR protein as a target. In the absence of a crystal structure of the proposed target, a bioinformatics approach was used for validation and for virtual docking with suitably tested compounds, which showed remarkably lower thermodynamic parameters as opposed to the positive control. A comparative docking analysis between human and Brugia malayi DHFR also showed effective binding parameters, with lower inhibition constants for these ligands with the parasitic target but not with the human counterpart, highlighting safety and efficacy. This study suggests that DHFR could be a valid drug target for lymphatic filariasis, and further reveals that bioinformatics may be an effective tool in a reverse pharmacological approach to drug design.
Pauling, Josch; Klipp, Edda
Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner, and simultaneously with considerable structural detail. However, doing so may produce thousands of mass spectra in a single experiment, which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics, but there are many (combinatorial) challenges when it comes to the structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside the analytical, biochemistry, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.
Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun
While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because of the predicted structural information that could uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage its utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data- and task-level parallelism and manages large variation in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable to small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
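The pilot-based approach decouples task submission from resource management so that per-sequence jobs of very different cost can be absorbed by a pool of workers. A minimal local stand-in for that pattern is a worker pool over independent tasks; `thread_sequence` below is a hypothetical placeholder for a compute-intensive eThread job, not its real interface:

```python
# Task-level parallelism over independent per-sequence jobs: the pool
# absorbs variation in workload, the analogue (locally) of a pilot job
# absorbing heterogeneous tasks on cloud resources.
from concurrent.futures import ThreadPoolExecutor

def thread_sequence(seq):
    # Stand-in for meta-threading one protein sequence; here the "result"
    # is just the sequence length.
    return seq, len(seq)

sequences = ["MKTAYIAKQR", "MSDNE", "MAHHHHHH", "MKV"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(thread_sequence, sequences))
print(results)
```

On real infrastructure the executor would be replaced by a pilot framework that also provisions and selects EC2 resources per task requirements.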
Political Unrest and Educational Electronic Resource Usage in a Conflict Zone, Kashmir (Indian Administered Kashmir): Log Analysis as Politico Analytical Tool=Hindistan Tarafından Yönetilen Keşmir Anlaşmazlık Bölgesi’nde Siyasi Karışıklık ve Eğitimle İlgili Elektronik Kaynakların Kullanımı: Siyasi Analiz Aracı Olarak Log Analizleri
Sumeer Gul; Samrin Nabi; Samina Mushtaq; Tariq Ahmad Shah; Suhail Ahmad
Electronic resource usage has proved to be one of the best decision-making tools in library setups. Electronic resource usage in relation to political disturbance can act as one of the tools to highlight the impact of political disturbance on educational setups in general and on electronic resource usage in particular. The study takes a serious look at electronic resource usage in Kashmir and the impact of unrest on it. The paper highlights a relational platform between educat...
We propose the formation of an International Psycho-Social and Cultural Bioinformatics Project (IPCBP) to explore the research foundations of Integrative Medical Insights (IMI) on all levels, from the molecular-genomic to the psychological, cultural, social, and spiritual. Just as the Human Genome Project identified the molecular foundations of modern medicine with the new technology of sequencing DNA during the past decade, the IPCBP would extend and integrate this neuroscience knowledge base with the technology of gene expression via DNA/proteomic microarray research and brain imaging in development, stress, healing, rehabilitation, and the psychotherapeutic facilitation of existential wellness. We anticipate that the IPCBP will require a unique international collaboration of academic institutions, researchers, and clinical practitioners for the creation of a new neuroscience of mind-body communication, brain plasticity, memory, learning, and creative processing during optimal experiential states of art, beauty, and truth. We illustrate this emerging integration of bioinformatics with medicine with a videotape of the classical 4-stage creative process in a neuroscience approach to psychotherapy.
He, Yongqun; Xiang, Zuoshuang
Brucella spp. are Gram-negative, facultative intracellular bacteria that cause brucellosis, one of the commonest zoonotic diseases found worldwide in humans and a variety of animal species. While several animal vaccines are available, there is no effective and safe vaccine for prevention of brucellosis in humans. VIOLIN (http://www.violinet.org) is a web-based vaccine database and analysis system that curates, stores, and analyzes published data of commercialized vaccines, and vaccines in clinical trials or in research. VIOLIN contains information for 454 vaccines or vaccine candidates for 73 pathogens. VIOLIN also contains many bioinformatics tools for vaccine data analysis, data integration, and vaccine target prediction. To demonstrate the applicability of VIOLIN for vaccine research, VIOLIN was used for bioinformatics analysis of existing Brucella vaccines and prediction of new Brucella vaccine targets. VIOLIN contains many literature mining programs (e.g., Vaxmesh) that provide in-depth analysis of Brucella vaccine literature. As a result of manual literature curation, VIOLIN contains information for 38 Brucella vaccines or vaccine candidates, 14 protective Brucella antigens, and 68 host response studies to Brucella vaccines from 97 peer-reviewed articles. These Brucella vaccines are classified in the Vaccine Ontology (VO) system and used for different ontological applications. The web-based VIOLIN vaccine target prediction program Vaxign was used to predict new Brucella vaccine targets. Vaxign identified 14 outer membrane proteins that are conserved in six virulent strains from B. abortus, B. melitensis, and B. suis that are pathogenic in humans. Of the 14 membrane proteins, two proteins (Omp2b and Omp31-1) are not present in B. ovis, a Brucella species that is not pathogenic in humans. Brucella vaccine data stored in VIOLIN were compared and analyzed using the VIOLIN query system. Bioinformatics curation and ontological representation of Brucella vaccines
Yuen Macaire MS
Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure, for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, the Human Protein Reference Database (HPRD), the Biomolecular Interaction Network Database (BIND), the Database of Interacting Proteins (DIP), the Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, the Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data, enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
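The architecture described, relational models queried through SQL behind a small retrieval API, can be sketched in miniature with SQLite. The schema, data, and helper below are purely illustrative stand-ins, not Atlas's actual models or API:

```python
# Miniature of the Atlas pattern: relational tables managed through SQL,
# wrapped by a toolbox-style retrieval helper.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sequence (acc TEXT PRIMARY KEY, organism TEXT)")
con.execute("CREATE TABLE interaction (acc_a TEXT, acc_b TEXT)")
con.executemany("INSERT INTO sequence VALUES (?, ?)",
                [("P1", "human"), ("P2", "human"), ("P3", "mouse")])
con.execute("INSERT INTO interaction VALUES ('P1', 'P2')")

def interactors(acc):
    """Retrieval-API-style helper: accessions interacting with `acc`."""
    rows = con.execute(
        "SELECT CASE WHEN acc_a = ? THEN acc_b ELSE acc_a END "
        "FROM interaction WHERE ? IN (acc_a, acc_b)", (acc, acc))
    return [r[0] for r in rows]

print(interactors("P1"))  # ['P2']
```

Keeping SQL behind such helpers is what lets loader and toolbox applications in multiple languages (C++, Java, Perl in Atlas's case) share one relational backbone.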
D'Souza, Mark; Sulakhe, Dinanath; Wang, Sheng; Xie, Bing; Hashemifar, Somaye; Taylor, Andrew; Dubchak, Inna; Conrad Gilliam, T; Maltsev, Natalia
Recent technological advances in genomics allow the production of biological data at unprecedented tera- and petabyte scales. Efficient mining of these vast and complex datasets for the needs of biomedical research critically depends on a seamless integration of the clinical, genomic, and experimental information with prior knowledge about genotype-phenotype relationships. Such experimental data accumulated in publicly available databases should be accessible to a variety of algorithms and analytical pipelines that drive computational analysis and data mining. We present an integrated computational platform, Lynx (Sulakhe et al., Nucleic Acids Res 44:D882-D887, 2016) (http://lynx.cri.uchicago.edu), a web-based database and knowledge extraction engine. It provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization. It gives public access to the Lynx integrated knowledge base (LynxKB) and its analytical tools via user-friendly web services and interfaces. The Lynx service-oriented architecture supports annotation and analysis of high-throughput experimental data. Lynx tools assist the user in extracting meaningful knowledge from LynxKB and experimental data, and in the generation of weighted hypotheses regarding the genes and molecular mechanisms contributing to human phenotypes or conditions of interest. The goal of this integrated platform is to support the end-to-end analytical needs of various translational projects.
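A standard core of the enrichment analysis offered by platforms of this kind is the hypergeometric test: does a gene list overlap an annotation term more than chance predicts? The following is a pure-stdlib sketch of that test, not Lynx's implementation, and the gene counts are invented:

```python
# Hypergeometric upper-tail test for gene-set enrichment.
from math import comb

def enrichment_p(population, annotated, selected, overlap):
    """P(X >= overlap) when drawing `selected` genes from `population`,
    of which `annotated` carry the term (hypergeometric upper tail)."""
    total = comb(population, selected)
    return sum(comb(annotated, k) * comb(population - annotated, selected - k)
               for k in range(overlap, min(annotated, selected) + 1)) / total

# 20,000 genes, 100 carry the term; a 50-gene list contains 5 of them.
p = enrichment_p(20000, 100, 50, 5)
print(f"{p:.2e}")  # far below 0.05: the overlap is unlikely by chance
```

Production tools additionally correct such p-values for the many terms tested (e.g., Benjamini-Hochberg), which this sketch omits.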
Uehara, Hiroshi; Iwasaki, Yuki; Wada, Chieko; Ikemura, Toshimichi; Abe, Takashi
Despite remarkable progress in metagenomic sequencing of various environmental samples, large numbers of the resulting fragment sequences have been registered in the international DNA databanks largely without information on gene function or phylotype, and thus with limited usefulness. Industrially useful biological activity is often carried out by a set of genes, such as those constituting an operon. Here metagenomic approaches have a weakness: because the sequences obtained are fragmented into 1-kb or much shorter segments, such gene sets are usually split up. Therefore, even when a set of genes responsible for an industrially useful function is found in one metagenome library, it is usually difficult to know whether a single genome harbors the entire gene set or whether different genomes carry the individual genes. By modifying the Self-Organizing Map (SOM), we previously developed BLSOM (batch-learning SOM) for oligonucleotide composition, which allows classification (self-organization) of sequence fragments according to genome. Because BLSOM can reassociate genomic fragments by genome of origin, it may ameliorate this weakness of metagenome analyses. Here, we have developed a strategy for clustering metagenomic sequences according to phylotype and genome, using as a test case a gene set contributing to environmental preservation.
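The core idea — that fragments from the same genome share a characteristic oligonucleotide composition, so a self-organizing map trained on k-mer frequency vectors groups them back together — can be illustrated with a small online 1-D SOM over trinucleotide composition. This is a didactic sketch, not the authors' BLSOM (which uses batch learning and much larger maps); all parameters below are arbitrary choices for illustration.

```python
import math
import random
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]  # 64 trinucleotides

def kmer_freqs(seq):
    """Trinucleotide composition vector of a DNA fragment (sums to 1)."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - 2):
        mer = seq[i:i + 3]
        if mer in counts:
            counts[mer] += 1
    total = sum(counts.values()) or 1
    return [counts[m] / total for m in KMERS]

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_som(vectors, n_nodes=8, epochs=40, lr0=0.5, radius0=2.0, seed=0):
    """Train a 1-D self-organizing map; returns the node weight vectors."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    nodes = [[rng.random() * 0.02 for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)                     # decaying learning rate
        radius = max(0.5, radius0 * (1 - epoch / epochs))   # shrinking neighborhood
        for v in vectors:
            # best-matching unit (BMU): node closest to the input vector
            bmu = min(range(n_nodes), key=lambda j: sq_dist(nodes[j], v))
            for j in range(n_nodes):
                # pull the BMU and its map neighbors toward the input
                h = math.exp(-((j - bmu) ** 2) / (2 * radius ** 2))
                nodes[j] = [w + lr * h * (x - w) for w, x in zip(nodes[j], v)]
    return nodes

def assign(vectors, nodes):
    """Map each fragment vector to the index of its best-matching node."""
    return [min(range(len(nodes)), key=lambda j: sq_dist(nodes[j], v))
            for v in vectors]
```

Trained on fragments from compositionally distinct genomes, the map places fragments from the same genome on the same or neighboring nodes, which is the reassociation property the abstract relies on.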
Ison, J.; Rapacki, K.; Menager, H.; Kalas, M.; Rydza, E.; Chmura, P.; Anthon, C.; Beard, N.; Berka, K.; Bolser, D.; Booth, T.; Bretaudeau, A.; Brezovsky, J.; Casadio, R.; Cesareni, G.; Coppens, F.; Cornell, M.; Cuccuru, G.; Davidsen, K.; Vedova, G.D.; Dogan, T.; Doppelt-Azeroual, O.; Emery, L.; Gasteiger, E.; Gatter, T.; Goldberg, T.; Grosjean, M.; Gruning, B.; Helmer-Citterich, M.; Ienasescu, H.; Ioannidis, V.; Jespersen, M.C.; Jimenez, R.; Juty, N.; Juvan, P.; Koch, M.; Laibe, C.; Li, J.W.; Licata, L.; Mareuil, F.; Micetic, I.; Friborg, R.M.; Moretti, S.; Morris, C.; Moller, S.; Nenadic, A.; Peterson, H.; Profiti, G.; Rice, P.; Romano, P.; Roncaglia, P.; Saidi, R.; Schafferhans, A.; Schwammle, V.; Smith, C.; Sperotto, M.M.; Stockinger, H.; Varekova, R.S.; Tosatto, S.C.; Torre, V.; Uva, P.; Via, A.; Yachdav, G.; Zambelli, F.; Vriend, G.; Rost, B.; Parkinson, H.; Longreen, P.; Brunak, S.
The life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvements in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a
Wightman, Bruce; Hark, Amy T
The development of fields such as bioinformatics and genomics has created new challenges and opportunities for undergraduate biology curricula. Students preparing for careers in science, technology, and medicine need more intensive study of bioinformatics and more sophisticated training in the mathematics on which this field is based. In this study, we deliberately integrated bioinformatics instruction at multiple course levels into an existing biology curriculum. Students in an introductory biology course, intermediate lab courses, and advanced project-oriented courses all participated in new course components designed to sequentially introduce bioinformatics skills and knowledge, as well as computational approaches that are common to many bioinformatics applications. In each course, bioinformatics learning was embedded in an existing disciplinary instructional sequence, as opposed to having a single course where all bioinformatics learning occurs. We designed direct and indirect assessment tools to follow student progress through the course sequence. Our data show significant gains in both student confidence and ability in bioinformatics during individual courses and as course level increases. Despite evidence of substantial student learning in both bioinformatics and mathematics, students were skeptical about the link between learning bioinformatics and learning mathematics. While our approach resulted in substantial learning gains, student "buy-in" and engagement might be better in longer project-based activities that demand application of skills to research problems. Nevertheless, in situations where a concentrated focus on project-oriented bioinformatics is not possible or desirable, our approach of integrating multiple smaller components into an existing curriculum provides an alternative. Copyright © 2012 Wiley Periodicals, Inc.
Wefer, Stephen H.
The proliferation of bioinformatics in modern biology marks a new revolution in science, which promises to influence science education at all levels. This thesis examined state standards for content that articulated bioinformatics, and explored secondary students' affective and cognitive perceptions of, and performance in, a bioinformatics mini-unit. The results are presented as three studies. The first study analyzed the secondary science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics at the introductory high school biology level. The bioinformatics content of each state's biology standards was categorized into nine areas and the prevalence of each area documented. The nine areas were: The Human Genome Project, Forensics, Evolution, Classification, Nucleotide Variations, Medicine, Computer Use, Agriculture/Food Technology, and Science, Technology and Society/Socioscientific Issues (STS/SSI). Findings indicated a generally low representation of bioinformatics-related content, which varied substantially across the different areas. Recommendations are made for reworking existing standards to incorporate bioinformatics and to facilitate the goal of promoting science literacy in this emerging field among secondary school students. The second study examined thirty-two students' affective responses to, and content mastery of, a two-week bioinformatics mini-unit. The findings indicate that the students were generally positive about their interest level, the usefulness of the lessons, and the difficulty level of the lessons, were likely to engage in additional bioinformatics, and were overall successful on the assessments. A discussion of the results and their significance is followed by suggestions for future research and implementation for transferability. The third study presents a case study of individual differences among ten secondary school students, whose cognitive and affective percepts were