Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald
Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created the Early Detection Research Network (EDRN), a network of collaborating institutions focused on the discovery and validation of cancer biomarkers. Informatics plays a key role in enabling a virtual knowledge environment that provides scientists with real-time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper discusses the EDRN knowledge system architecture, its objectives and its accomplishments.
Baldi, Pierre; Brunak, Søren
… and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged…
Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer
The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially… a software component that will ease the integration of bioinformatics resources in a workbench environment, using the descriptions provided by the existing ELIXIR Tools and Data Services Registry…
Chimusa, Emile R; Mbiyavanga, Mamana; Masilela, Velaphi; Kumuthini, Judit
A shortage of practical skills and relevant expertise is possibly the primary obstacle to social upliftment and sustainable development in Africa. The "omics" fields, especially genomics, are increasingly dependent on the effective interpretation of large and complex sets of data. Despite abundant natural resources and population sizes comparable with many first-world countries from which talent could be drawn, countries in Africa still lag far behind the rest of the world in terms of specialized skills development. Moreover, there are serious concerns about disparities between countries within the continent. The multidisciplinary nature of the bioinformatics field, coupled with rare and depleting expertise, is a critical problem for the advancement of bioinformatics in Africa. We propose a formalized matchmaking system, which is aimed at reversing this trend, by introducing the Knowledge Transfer Programme (KTP). Instead of individual researchers travelling to other labs to learn, researchers with desirable skills are invited to join African research groups for six weeks to six months. Visiting researchers or trainers will pass on their expertise to multiple people simultaneously in their local environments, thus increasing the efficiency of knowledge transference. In return, visiting researchers have the opportunity to develop professional contacts, gain industry work experience, work with novel datasets, and strengthen and support their ongoing research. The KTP develops a network with a centralized hub through which groups and individuals are put into contact with one another and exchanges are facilitated by connecting both parties with potential funding sources. This is part of the PLOS Computational Biology Education collection.
Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.
Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.
Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A
Today researchers can choose from many bioinformatics protocols for all types of life-sciences research, computational environments and coding languages. Although the majority of these are open source, few of them possess all the virtues needed to maximize reuse and promote reproducible science. Wikipedia has proven a great tool to disseminate information and enhance collaboration between users with varying expertise and backgrounds, who author qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki where each article is both the implementation and the documentation of a bioinformatics computational protocol in the Python language. Hyperlinks within the wiki can be used to compose complex workflows and encourage reuse. A RESTful API enables code execution outside the wiki. The initial content of PyPedia contains articles for population statistics, bioinformatics format conversions and genotype imputation. Use of the easy-to-learn wiki syntax effectively lowers the barriers to bringing expert programmers and less computer-savvy researchers onto the same page. PyPedia demonstrates how a wiki can provide a collaborative development, sharing and even execution environment for biologists and bioinformaticians that complements existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.
Thiele, Herbert; Glandorf, Jörg; Hufnagel, Peter
With the large variety of Proteomics workflows, as well as the large variety of instruments and data-analysis software available, researchers today face major challenges validating and comparing their Proteomics data. Here we present a new generation of the ProteinScape bioinformatics platform, now enabling researchers to manage Proteomics data from generation and data warehousing to a central data repository, with a strong focus on the improved accuracy, reproducibility and comparability demanded by many researchers in the field. It addresses scientists' current needs in proteomics identification, quantification and validation. But producing large protein lists is not the end point in Proteomics, where one ultimately aims to answer specific questions about the biological condition or disease model of the analyzed sample. In this context, a new tool has been developed at the Spanish Centro Nacional de Biotecnologia Proteomics Facility termed PIKE (Protein Information and Knowledge Extractor) that allows researchers to control, filter and access specific information from genomics and proteomic databases, to understand the role and relationships of the proteins identified in the experiments. Additionally, an EU-funded project, ProDac, has coordinated systematic data collection in public standards-compliant repositories like PRIDE. This will cover all aspects from generating MS data in the laboratory to assembling the whole annotation information and storing it together with identifications in a standardised format.
Chun-Hung Richard Lin
Bioinformatics has advanced from in-house computing infrastructure to cloud computing to tackle the vast quantity of biological data. This advance enables large numbers of collaborative researchers to share their work around the world. Consequently, retrieving biological data over the internet becomes more and more difficult because of its explosive growth and frequent changes. Various efforts have been made to address the problems of data discovery and delivery in the cloud framework, but most of them are hindered by reliance on a MapReduce master server to track all available data. In this paper, we propose an alternative approach, called PRKad, which exploits a Peer-to-Peer (P2P) model to achieve efficient data discovery and delivery. PRKad is a Kademlia-based implementation with Round-Trip Time (RTT) as the associated key; it locates data using a Distributed Hash Table (DHT) and the XOR metric. The simulation results show that PRKad retrieves data with low link latency. As an interdisciplinary application of P2P computing for bioinformatics, PRKad also provides good scalability for serving a greater number of users in dynamic cloud environments.
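As a minimal sketch of the XOR metric that Kademlia-based systems such as PRKad use to locate data (the node IDs and key below are made-up toy values, not from the paper; real Kademlia IDs are 160-bit):

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: bitwise XOR of two node/key identifiers."""
    return a ^ b

# Hypothetical peer IDs and a data key, shown as small integers for readability.
peers = [0b1010, 0b0111, 0b1100, 0b0011]
key = 0b1000

# A key is stored on (and looked up from) the peers closest to it under XOR.
closest = min(peers, key=lambda p: xor_distance(p, key))
```

The XOR metric is what makes DHT lookups logarithmic: each routing step can at least halve the distance to the target ID.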
The book brings together facts collated and processed on the subject of ''Man-Environment-Knowledge'' within the scope of a tutorial of the same name organised and held by the ETH (the Swiss Federal Institute of Technology in Zurich). Starting from important fundamental facts that capture the current state of our planet in the form of data, it goes on to illustrate the interconnections and processes that are important for understanding the world in which we live. figs., tabs., 800 refs.
Suciu, Radu M; Aydin, Emir; Chen, Brian E
With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. Current resources for accessing and analyzing these data were created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that gives any general user access to genomic information with ease and efficiency. GeneDig allows searching and browsing of genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application provides access to genomic information more than five times faster than currently available methods. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design principle. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.
Zhang, Jiang; Rossello Busquet, Ana; Soler, José
This paper makes three contributions to help households control their home devices easily and to simplify software installation and configuration across multi-vendor environments. First, a Home Environment Service Knowledge Management System is proposed, which is based on knowledge implemented as an ontology and uses the inference function of a reasoner to find available software services according to household requests. Second, the paper provides a concrete methodology for acquiring conflict-free information from ontology knowledge by using a reasoner. Finally, a strategy for calculating the sequence of the service dependency hierarchy is proposed.
The objective of this task is to assess the potential for using the software support environment (SSE) workstations and associated software for design knowledge capture (DKC) tasks. This assessment will include the identification of required capabilities for DKC and hardware/software modifications needed to support DKC. Several approaches to achieving this objective are discussed and interim results are provided: (1) research into the problem of knowledge engineering in a traditional computer-aided software engineering (CASE) environment, like the SSE; (2) research into the problem of applying SSE CASE tools to develop knowledge based systems; and (3) direct utilization of SSE workstations to support a DKC activity.
Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behaviors can be revealed from the analysis of known native structures of proteins. Many knowledge-based potentials for proteins have been proposed. In contrast to most existing review articles, which mainly describe technical details and applications of various potential models, the main foci of the discussion here are the ideas and concepts involved in the construction of potentials, including the relation between free energy and energy, the additivity of potentials of mean force, and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).
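Knowledge-based potentials of the kind this review discusses are commonly derived from observed structural statistics via the inverse Boltzmann relation; a standard textbook form (not quoted from this article) is:

```latex
E(r) \;=\; -\,k_B T \,\ln \frac{P_{\mathrm{obs}}(r)}{P_{\mathrm{ref}}(r)}
```

where $P_{\mathrm{obs}}(r)$ is the frequency of a structural feature $r$ (e.g. a residue-pair distance) in known native structures and $P_{\mathrm{ref}}(r)$ its frequency in a reference state; the choice of reference state is one of the key construction issues such reviews examine.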
Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu
MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. The MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. The MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar-compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using Human Phenotype Ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources.
In this preliminary report we present ongoing research on intelligent knowledge management (KM) environments supporting communication in a virtual environment. An agent community handles the interaction between knowledge sources of different degrees of formality and knowledge users and
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.
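As a minimal, generic illustration of the data-preparation step shared by many deep-learning-on-omics studies of the kind surveyed above, the sketch below one-hot encodes a DNA string into the matrix form typically fed to a convolutional network (the function name and example sequence are illustrative, not from the review):

```python
import numpy as np

def one_hot_dna(seq: str) -> np.ndarray:
    """One-hot encode a DNA string into a (length, 4) float matrix.

    Columns correspond to A, C, G, T; unknown bases (e.g. N) are left all-zero.
    """
    alphabet = "ACGT"
    mat = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        idx = alphabet.find(base)
        if idx >= 0:
            mat[i, idx] = 1.0
    return mat

# Example: a 5-base sequence, last base unknown.
x = one_hot_dna("ACGTN")
```

A convolutional layer sliding over the length axis of such a matrix then learns motif-like filters, which is the usual starting point for sequence-based deep models.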
Gelbart, Hadas; Yarden, Anat
Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…
Jagadeesh Chandra Bose, R.P.; Aalst, van der W.M.P.; Nurcan, S.
Process mining techniques can be used to extract non-trivial process related knowledge and thus generate interesting insights from event logs. Similarly, bioinformatics aims at increasing the understanding of biological processes through the analysis of information associated with biological
Full text: Knowledge, as the most valuable asset in an organization, should be managed properly and carefully. When trying to manage knowledge, we should not intend to create big knowledge repositories that capture everything anybody ever knew. It is better to follow the people who have knowledge and to develop the culture and technology that will help them share knowledge and experience. The key elements for successful knowledge management are people, processes and technology. Technology should be standard and reliable to facilitate knowledge sharing. KM processes should be defined to simplify the creation, sharing and use of knowledge. People are the most valuable source of organizational knowledge because they can create new knowledge, share it around the organization and use it to achieve the best performance. Technology and processes are powerful together, but without people there is a high risk that efforts to change something in an organization will not succeed. People are the factor that can make or break any KM initiative. The situation is even more critical in nuclear knowledge management. How to develop organizational culture and individual behavior in the nuclear field will be described. (author)
Hiraoka, Satoshi; Yang, Ching-Chia; Iwasaki, Wataru
Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives.
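As a toy sketch of the alignment-free similarity-search principle underlying the routine analyses described above (all names and sequences are hypothetical; real metagenomic pipelines use tools such as BLAST or k-mer sketching against curated reference databases):

```python
def kmer_set(seq: str, k: int = 4) -> set:
    """All overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_similarity(a: str, b: str, k: int = 4) -> float:
    """Jaccard index of two sequences' k-mer sets: a crude,
    alignment-free stand-in for a reference-database similarity search."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

# Rank hypothetical reference sequences by similarity to a query read.
references = {"refA": "ACGTACGTTTGA", "refB": "GGGGCCCCAAAA"}
query = "ACGTACGTAAGA"
best = max(references, key=lambda r: jaccard_similarity(query, references[r]))
```

K-mer sketches of this kind scale to huge datasets far better than full alignment, which is one reason the field leans so heavily on similarity searches against reference databases.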
Briggs, Hugh C.
The Aerospace and Defense industry is experiencing an increasing loss of knowledge through workforce reductions associated with business consolidation and retirement of senior personnel. Significant effort is being placed on process definition as part of ISO certification and, more recently, CMMI certification. The process knowledge in these efforts represents the simplest of engineering knowledge and many organizations are trying to get senior engineers to write more significant guidelines, best practices and design manuals. A new generation of design software, known as Product Lifecycle Management systems, has many mechanisms for capturing and deploying a wider variety of engineering knowledge than simple process definitions. These hold the promise of significant improvements through reuse of prior designs, codification of practices in workflows, and placement of detailed how-tos at the point of application.
The concept of organisational knowledge as a valuable strategic asset has become quite popular recently. Increased competition, globalisation and the emergence of new organisational models built on process-based organisational structures require organisations to create, capture, share and apply
in education systems and national development plans. … modern science and highlights the social origins of abstract thinking (Young, 2008). … of discipline knowledge suggests that because 'the actual universe is deeply alien to our default' … the carbon cycle, solar energy and ozone (Summers, Kruger & Childs, 2001).
Alexander O. Karpov
A cognitively active, research-type learning environment is a fundamental component of the education system for the knowledge society. The purpose of the research is to develop the conceptual bases and a constructional model of a cognitively active learning environment that stimulates the creation of new knowledge and its socio-economic application. Research methods include epistemic-didactic analysis of empirical material collected during the study of research environments at school…
Johnson, Kathy A.
For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.
Bakar, Muhamad Shahbani Abu; Jalil, Dzulkafli
The growth of the knowledge economy has made human capital the vital asset of 21st-century business organizations. Arguably, due to its white-collar nature, knowledge-based industry is more favorable than traditional manufacturing business. However, over-dependency on human capital is also a major challenge, as workers inevitably leave the company or retire. This situation can create a knowledge gap that impacts the business continuity of the enterprise. Knowledge retention in the corporate environment has attracted much research interest. A Learning Management System (LMS) is a system that provides the delivery, assessment and management tools for an organization to handle its knowledge repository. Drawing on a proven LMS implemented in an academic environment, this paper proposes an LMS model that enables peer-to-peer knowledge capture and sharing in a knowledge-based organization. Cloud Enterprise Resource Planning (ERP), an ERP solution hosted in the internet cloud, was chosen as the knowledge domain. The complexity of the Cloud ERP business and its knowledge makes it very vulnerable to the knowledge retention problem. This paper discusses how a company's essential knowledge can be retained by carrying the LMS model over from the academic environment into the corporate one.
This report first presents the nuclear and physical-chemical properties of tritium and addresses the notions of bioaccumulation, biomagnification and remanence. It describes and comments on the natural and anthropic origins of tritium (natural production, quantities released into the environment in France by nuclear tests, nuclear plants, nuclear fuel processing plants and research centres). It describes how tritium is measured as a free element (sampling, liquid scintillation, proportional counting, enrichment method) or bound to organic matter (combustion, oxidation, helium-3-based measurement). It discusses tritium concentrations observed in different parts of the environment (soils, continental waters, sea). It describes how tritium is transferred to ecosystems (transfer of atmospheric tritium to terrestrial ecosystems and to freshwater ecosystems). It discusses existing models that describe the behaviour of tritium in ecosystems. It finally describes and comments on the toxic effects of tritium on living terrestrial and aquatic organisms.
The process of modern design in a distributed resource environment can be interpreted as a process of knowledge flow and integration. As the acquisition of new knowledge strongly depends on resources, knowledge flow can be influenced by technical, economic, and social factors, among others. To achieve greater efficiency of knowledge flow and make the product more competitive, the root causes behind these factors should first be identified. In this paper, the authors attempt to reveal the nature of design knowledge flow from the perspectives of fluid dynamics and energy. The knowledge field effect and the knowledge agglomeration effect are analyzed: a knowledge field effect model considering a single task node and a single knowledge energy model in the knowledge flow are established, and a general expression of knowledge energy conservation, accounting for the kinetic and potential energy of knowledge, is built. The knowledge flow rules and their influencing factors, including complete and incomplete transfer of design knowledge, are then studied. Finally, the coupling knowledge flows in a knowledge service platform for modern design are analyzed to demonstrate the feasibility of the research.
Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert Lykke
To obtain unpredictable social interaction between autonomous agents in real-time environments, we present a simple method for logic-based knowledge exchange: a method able to form new knowledge rather than merely exchange particular rules found in predetermined rule sets.
Good, Benjamin M; Su, Andrew I
Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.
The paper outlines the experience to date in developing mature knowledge management within the UK's nuclear regulatory body, the Office for Nuclear Regulation (ONR). In 2010, concerns over the loss of knowledge due to the age profile within the organization prompted a review of knowledge management and the development of a knowledge management initiative. Activities initially focused on knowledge capture, but in order to move to through-life knowledge transfer, knowledge management was then aligned with organizational resilience initiatives. A review of progress highlighted the need to engage the whole organization better in order to achieve the desired level of maturity for knowledge management. Knowledge management activities now cover organizational culture and environment and all aspects of organizational resilience. Benefits to date include a clear understanding of core knowledge requirements, better specifications for recruitment and training, and the ability to deploy new regulatory approaches. While implementing the knowledge management programme, ONR underwent several organizational changes in becoming a separate statutory body, and the UK nuclear industry was in a period of increased activity that included the planning of new nuclear reactors. The challenges this dynamic environment posed for embedding knowledge management within ONR are discussed in the paper. (author)
Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M
In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.
Boeck, H.; Villa, M.
In this work, the authors present the maintenance of nuclear knowledge in an antinuclear environment in Austria. The participation of the TRIGA Mark II research reactor at the Atominstitut in various courses, research projects and education is presented.
Baloian, Nelson; Zurita, Gustavo
Knowledge management is a critical activity for any organization. It is said to be a differentiating factor and an important source of competitiveness when knowledge is constructed and shared among an organization's members, thus creating a learning organization. Knowledge construction is critical for any collaborative organizational learning environment. Nowadays, workers must perform knowledge creation tasks while in motion, not just in static physical locations; knowledge construction activities must therefore also be performed in ubiquitous scenarios and supported by mobile and pervasive computational systems. Such systems should help people inside or outside organizations convert their tacit knowledge into explicit knowledge, thus supporting the knowledge construction process. We therefore consider it highly relevant that undergraduate university students learn about the knowledge construction process supported by mobile and ubiquitous computing, an issue that has been little explored in this field. This paper presents the design, implementation and evaluation of a system called MCKC (Mobile Collaborative Knowledge Construction), which supports collaborative face-to-face tacit knowledge construction and sharing in ubiquitous scenarios. MCKC can be used by undergraduate students to learn how to construct knowledge, allowing them, anytime and anywhere, to create, make explicit and share their knowledge with their co-learners, using visual metaphors, gestures and sketches in the human-computer interface of mobile devices (PDAs).
Hellström, Tomas; Husted, Kenneth
This paper argues that knowledge mapping may provide a fruitful avenue for intellectual capital management in academic environments such as university departments. However, while some research has been conducted on knowledge mapping and intellectual capital management in the public sector, the university has so far not been directly considered for this type of management. The paper initially reviews the functions and techniques of knowledge mapping and assesses these in the light of academic demands. Secondly, the results of a focus group study are presented, in which academic leaders were asked to reflect on the uses of knowledge mapping at their departments and institutes. Finally, a number of suggestions are made as to the rationale and conduct of knowledge mapping in academe. Keywords: knowledge mapping, academic, intellectual capital management, focus group, research management
Natália Chaves Lessa Schots
Background: Process performance analysis is a key step in implementing continuous improvement in software organizations. However, the knowledge needed to execute such analysis is not trivial, and the person responsible for executing it must be given appropriate support. Aim: This paper presents a knowledge-based environment, named SPEAKER, proposed for supporting software organizations during the execution of process performance analysis. SPEAKER comprises a body of knowledge and a set of activities and tasks for software process performance analysis, along with supporting tools for executing these activities and tasks. Method: We conducted an informal literature review and a systematic mapping study, which provided the basic requirements for the proposed environment. We then implemented the SPEAKER environment, integrating supporting tools for the execution of performance analysis activities and tasks with the knowledge necessary to execute them, in order to accommodate the variability presented by the characteristics of these activities. Results: In this paper, we describe each SPEAKER module and the individual evaluations of these modules, and present an example of use showing how the environment can guide the user through a specific performance analysis activity. Conclusion: Although we have only conducted individual evaluations of SPEAKER's modules, the example of use indicates the feasibility of the proposed environment. The environment as a whole will therefore be further evaluated to verify whether it attains its goal of assisting non-specialists in the execution of process performance analysis.
Verona, G.; Prandelli, E.; Sawhney, M.
The authors examine the implications of virtual customer environments for supporting the innovation process. Building on the literature on knowledge brokers, they introduce the concept of virtual knowledge brokers: actors who leverage the internet to support third parties' innovation activities. These actors enable firms to extend their reach in engaging with customers, and their perceived neutrality also allows firms to have a richer dialogue with customers. Consequently...
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
Purpose: Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models, where knowledge extraction and explanation of the reasoning behind the classification model are possible. Methods: This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach, where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. Results: The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy for the less complex, visually tuned decision trees. In contrast to the classical machine learning benchmarking datasets, we observe higher accuracy gains in the bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method than for manual tuning of the decision tree. Conclusions: The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one achieves not only good comprehensibility but also very good classification performance that does not differ from that of the usually more complex models built using the default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly...
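The size-constraint idea at the heart of this approach, limiting a tree by its dimensions rather than tuning it against a classification performance measure, can be sketched in miniature. The splitting rule, toy dataset and depth limit below are illustrative assumptions, not the authors' implementation:

```python
# Minimal decision-tree sketch (pure Python): the tree is constrained only
# by max_depth, standing in for the paper's visual boundaries; no accuracy
# measure is consulted while "tuning". Toy data, not the paper's datasets.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for c in labels:
        counts[c] = counts.get(c, 0) + 1
    return 1.0 - sum((k / n) ** 2 for k in counts.values())

def best_split(rows, labels):
    """Exhaustively find the (feature, threshold) split minimising weighted Gini."""
    best = None  # (impurity, feature index, threshold)
    n = len(rows)
    for f in range(len(rows[0])):
        for t in sorted({r[f] for r in rows}):
            left = [lab for r, lab in zip(rows, labels) if r[f] <= t]
            right = [lab for r, lab in zip(rows, labels) if r[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(rows, labels, max_depth):
    """Grow a tree no deeper than max_depth; leaves predict the majority class."""
    majority = max(set(labels), key=labels.count)
    if max_depth == 0 or gini(labels) == 0.0:
        return majority  # leaf node
    split = best_split(rows, labels)
    if split is None:
        return majority
    _, f, t = split
    lpairs = [(r, lab) for r, lab in zip(rows, labels) if r[f] <= t]
    rpairs = [(r, lab) for r, lab in zip(rows, labels) if r[f] > t]
    return (f, t,
            build_tree([r for r, _ in lpairs], [lab for _, lab in lpairs], max_depth - 1),
            build_tree([r for r, _ in rpairs], [lab for _, lab in rpairs], max_depth - 1))

def predict(tree, row):
    """Walk internal (feature, threshold, left, right) nodes down to a leaf."""
    while isinstance(tree, tuple):
        f, t, left, right = tree
        tree = left if row[f] <= t else right
    return tree

# Toy data: two features, two classes.
X = [[1, 5], [2, 4], [3, 7], [6, 1], [7, 2], [8, 3]]
y = ["a", "a", "a", "b", "b", "b"]
tree = build_tree(X, y, max_depth=2)  # the size constraint
print([predict(tree, r) for r in X])
```

The depth bound caps model complexity up front, so comprehensibility is guaranteed by construction rather than recovered by post-hoc pruning.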
Nielsen, Jørgen Lerche; Meyer, Kirsten
Do the knowledge sharing and creation processes in collaborating groups benefit from the use of new information environments, or are the environments rather inhibitive to the development of these processes? A number of different studies have shown quite varied results when it comes to appraising the importance and value of using new information technology in knowledge sharing and creation processes. In this paper we will try to unveil the patterns appearing in the use of new information environments and the users' understanding of the significance of using information technology in knowledge sharing and creation processes. The aim is to obtain a deeper comprehension of which factors determine whether the use of information technology becomes a success or a failure in relation to knowledge sharing and creation. The paper is based on three previous studies investigating the use of information technology...
Mackenzie Owen, John
An important characteristic of most computer-supported work environments is the distribution of work over individuals or teams in different locations. This leads to what we nowadays call 'virtual' environments. In these environments, communication between actors is to a large degree mediated, i.e. established through communications media (telephone, fax, computer networks) rather than in a face-to-face way. Unfortunately, mediated communication limits the effectiveness of knowledge exchange in virt...
Alexander O. Karpov
A cognitively active, research-type learning environment is a fundamental component of the education system for the knowledge society. The purpose of the research is the development of the conceptual basis and a constructional model of a cognitively active learning environment that stimulates the creation of new knowledge and its socio-economic application. The research methods include epistemic-didactic analysis of empirical material collected through the study of research environments at schools and universities, together with the conceptualization and theoretical modeling of a cognitively active environment that provides the infrastructure of a research-type cognitive process. The empirical material summarized in this work was collected in the research-cognitive space of the “Step into the Future” program, one of the most powerful systems of research education in present-day Russia. The article presents the key points of the author's concept of generative learning environments and a model of a learning and scientific innovation environment implemented at Russian schools and universities.
The article underscores the process of knowledge sharing in a multicultural organisational environment. Generally, multiculturalism emanates from being influenced by different contexts that provide the potential for human diversity. It results in disparate behavioural patterns and bodies of knowledge, which lead to variance in terms of racial, sexual, age and cultural orientations. The process of sharing knowledge is complex and is susceptible to multicultural variances. Considering that knowledge sharing processes and probable multicultural influences are contextual, the purpose of the article is to establish the extent of knowledge flows in the Department of Information Science at the University of South Africa. In particular, the article seeks to give an overall view of how knowledge is shared across intergenerational, cultural and interracial lines in the Department. The qualitative approach was considered appropriate for this study because it focuses on observing events from the perspectives of those who are involved and is aimed at understanding the attitudes, behaviour and opinions of those individuals (Powell & Connaway 2004). A basic interpretive qualitative research design was used for this study. Data were collected through interviews and document analysis, then inductively analysed, and the findings are presented and discussed with reference to the literature that informed the study.
Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy
Hong Kong's bioinformatics sector is attaining new heights, in combination with the city's economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based, free-market economy have contributed to a prominent position on the world map of bioinformatics. In this review, we consider the educational measures, landmark research activities and achievements of bioinformatics companies, and the role of the Hong Kong government in establishing bioinformatics as a strength. Several hurdles nevertheless remain. New government policies will assist computational biologists in overcoming these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.
Jones, Anna Marie
The nutrition environment in schools can influence the risk of childhood overweight and obesity, which in turn can have life-long implications for the risk of chronic disease. This dissertation aimed to examine the nutrition environment in primary public schools in California with regard to the amount of nutrition education provided in the classroom, the nutrition knowledge of teachers, and the training needs of school nutrition personnel. To determine the nutrition knowledge of teachers, a valid and reliable questionnaire was developed. The systematic process involved cognitive interviews, a mail-based pretest that used a random sample of addresses in California, and validity and reliability testing in a sample of university students. Results indicated that the questionnaire had adequate construct validity, internal consistency reliability, and test-retest reliability. Following validation, the questionnaire was used in a study of public school teachers in California to determine the relationship between demographic and classroom characteristics and nutrition knowledge, in addition to barriers to nutrition education and the resources used to plan nutrition lessons. Nutrition knowledge was not found to be associated with teaching nutrition in the classroom; however, it was associated with gender, identifying as Hispanic or Latino, and the grade-level grouping taught. The most common barriers to nutrition education were time and unrelated subject matter. The most commonly used resources for planning nutrition lessons were Dairy Council of California educational materials. The school nutrition program was the second area of the school nutrition environment examined, with the primary focus of determining the perceived training needs of California school nutrition personnel. Respondents indicated a need for training in topics related to: program management; the Healthy, Hunger-Free Kids Act of 2010; nutrition, health and
Muhammad Kamarul Kabilan; Tuti Zalina Mohamed Ernes Zahar
This study investigates the effectiveness of using Facebook in enhancing vocabulary knowledge among Community College students. Thirty-three (33) Community College students are exposed to the use of Facebook as an environment for learning and enhancing their English vocabulary. They are given a pre-test and a post-test, and the findings indicate that students perform significantly better in the post-test than in the pre-test. It appears that Facebook could be considered a supplementary learning environment, learning platform or learning tool, with meaningful and engaging activities that require students to collaborate, network and function as a community of practice, particularly for introverted students with low proficiency levels and low self-esteem.
Schweighofer, Karl; Pohorille, Andrew
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Aim/Purpose: The purpose of the study is to provide foundational research exemplifying how knowledge construction takes place in microblogging-based learning environments, to understand the learner interaction that represents the knowledge construction process, and to analyze learner perception, thereby suggesting a model of delivery for microblogging. Background: Up-and-coming digital-native learners crave the real-time, multimedia, global interconnectedness of microblogging, yet there has been limited research that specifically proposes a working model of Twitter's classroom integration for designers and practitioners without bundling it with other social media tools. Methodology: This semester-long study utilized a case-study research design via a multi-dimensional approach in a hybrid classroom with both face-to-face and online environments. Tweets were collected from four types of activities and coded based on content within their contextual setting. Twenty-four college students participated in the study. Contribution: The findings shed light on the process of knowledge construction in microblogging and reveal key types of knowledge manifested during learning activities. The study also proposes a model for delivering microblogging to formal learning environments that is applicable to various contexts for designers and practitioners. Findings: There are distinct learner interaction patterns representing the process of knowledge construction in microblogging activities, ranging from low-order to high-order cognitive tasks. Students were generally in favor of the Twitter integration in this study. Recommendations for Practitioners: The three central activities (exploring hashtags, discussing topics, and participating in live chats), along with the backchannel activity, formulate a working model that represents the sequential process of Twitter integration into classrooms. Impact on Society: Microblogging allows learners omnichannel access while hashtags
Freedman, Vicky L.; Lansing, Carina S.; Porter, Ellen A.; Schuchardt, Karen L.; Guillen, Zoe C.; Sivaramakrishnan, Chandrika; Gorton, Ian
In scientific simulation, scientists use measured data to create numerical models, execute simulations and analyze results from advanced simulators running on high performance computing platforms. This process usually requires a team of scientists collaborating on data collection, model creation and analysis, and on authorship of publications and data. This paper shows that scientific teams can benefit from a user environment called Akuna, which permits subsurface scientists in disparate locations to collaborate on numerical modeling and analysis projects. The Akuna user environment is built on the Velo framework, which provides both a rich client environment for conducting and analyzing simulations and a Web environment for data sharing and annotation. Akuna is an extensible toolset that integrates with Velo and is designed to support any type of simulator. This is achieved through data-driven user interface generation, use of a customizable knowledge management platform, and an extensible framework for simulation execution, monitoring and analysis. This paper describes how the customized Velo content management system and the Akuna toolset are used to integrate and enhance an effective collaborative research and application environment. The extensible architecture of Akuna is also described, and its use is demonstrated through the creation and execution of a 3D subsurface simulation.
Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen
The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.
Schönbach, Christian; Li, Jinyan; Ma, Lan; Horton, Paul; Sjaugi, Muhammad Farhan; Ranganathan, Shoba
The 16th International Conference on Bioinformatics (InCoB) was held at Tsinghua University, Shenzhen from September 20 to 22, 2017. The annual conference of the Asia-Pacific Bioinformatics Network featured six keynotes, two invited talks, a panel discussion on big data driven bioinformatics and precision medicine, and 66 oral presentations of accepted research articles or posters. Fifty-seven articles comprising a topic assortment of algorithms, biomolecular networks, cancer and disease informatics, drug-target interactions and drug efficacy, gene regulation and expression, imaging, immunoinformatics, metagenomics, next generation sequencing for genomics and transcriptomics, ontologies, post-translational modification, and structural bioinformatics are the subject of this editorial for the InCoB2017 supplement issues in BMC Genomics, BMC Bioinformatics, BMC Systems Biology and BMC Medical Genomics. New Delhi will be the location of InCoB2018, scheduled for September 26-28, 2018.
Tilton, Susan C
Background: MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein-coding genes. Recent studies have shown that miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for the analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results: BRM v2.3 can query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 uM nicotine from 6 to 48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets, with example retrievals for zebrafish, including pathway annotation and mapping to human orthologs. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions: BRM provides the ability to mine complex data for the identification of candidate miRNAs or pathways that drive phenotypic outcomes and is therefore a useful hypothesis generation tool for systems biology. The miRNA workflow in BRM allows for efficient processing of multiple miRNA and mRNA datasets in a single...
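The integration step described in the abstract above, joining experimentally derived miRNA and mRNA measurements through predicted target relationships, can be sketched in outline. All identifiers, fold-change values and the significance cutoff below are invented for illustration; they are not BRM's actual schema or the zebrafish study's data:

```python
# Sketch of miRNA/mRNA dataset integration via predicted targets.
# Identifiers, fold changes, p-values and the alpha cutoff are all
# illustrative assumptions, not data from BRM or the nicotine study.

# Experimentally derived measurements: id -> (fold change, p-value).
mirna_data = {"dre-mir-9": (2.1, 0.01), "dre-mir-133": (-1.8, 0.03), "dre-mir-21": (1.2, 0.40)}
mrna_data = {"neurod1": (-2.4, 0.004), "sox2": (-1.9, 0.02), "actb1": (1.1, 0.60)}

# Predicted miRNA -> target-gene links, as a target database would supply them.
predicted_targets = {
    "dre-mir-9": ["neurod1", "sox2"],
    "dre-mir-133": ["actb1"],
    "dre-mir-21": ["neurod1"],
}

def integrate(mirnas, mrnas, targets, alpha=0.05):
    """Keep (miRNA, gene) pairs where both measurements are significant and
    the fold changes have opposite signs: consistent with miRNA repression."""
    hits = []
    for mir, genes in targets.items():
        m_fc, m_p = mirnas.get(mir, (0.0, 1.0))
        if m_p >= alpha:
            continue  # miRNA itself not significantly altered
        for g in genes:
            g_fc, g_p = mrnas.get(g, (0.0, 1.0))
            if g_p < alpha and m_fc * g_fc < 0:
                hits.append((mir, g))
    return sorted(hits)

print(integrate(mirna_data, mrna_data, predicted_targets))
```

The surviving pairs are candidate regulatory relationships, which is the sense in which such a join serves as a hypothesis generation step rather than a result in itself.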
Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi
Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.
Mudasser Fraz Wyne
Bioinformatics is a new field that is poorly served by any of the traditional science programs in Biology, Computer science or Biochemistry. Known to be a rapidly evolving discipline, Bioinformatics has emerged from experimental molecular biology and biochemistry as well as from the artificial intelligence, database, pattern recognition, and algorithms disciplines of computer science. While institutions are responding to this increased demand by establishing graduate programs in bioinformatics, entrance barriers for these programs are high, largely due to the significant prerequisite knowledge which is required, both in the fields of biochemistry and computer science. Although many schools currently have or are proposing graduate programs in bioinformatics, few are actually developing new undergraduate programs. In this paper I explore the blend of a multidisciplinary approach, discuss the response of academia and highlight challenges faced by this emerging field.
Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.
Huang, Hsiu-Mei; Liaw, Shu-Sheng
In today's competitive global economy characterized by knowledge acquisition, the concept of knowledge management has become increasingly prevalent in academic and business practices. Knowledge creation is an important factor and remains a source of competitive advantage over knowledge management. Information technology facilitates knowledge…
Tee, Meng Yew; Karney, Dennis
Research on knowledge cultivation often focuses on explicit forms of knowledge. However, knowledge can also take a tacit form--a form that is often difficult or impossible to tease out, even when it is considered critical in an educational context. A review of the literature revealed that few studies have examined tacit knowledge issues in online…
Morales, Hernán F; Giovambattista, Guillermo
We have developed BioSmalltalk, a new environment system for pure object-oriented bioinformatics programming. Adaptive end-user programming systems tend to become more important for discovering biological knowledge, as is demonstrated by the emergence of open-source programming toolkits for bioinformatics in the past years. Our software is intended to bridge the gap between bioscientists and rapid software prototyping while preserving the possibility of scaling to whole-system biology applications. BioSmalltalk performs better in terms of execution time and memory usage than Biopython and BioPerl for some classical situations. BioSmalltalk is cross-platform and freely available (MIT license) through Google Project Hosting at http://code.google.com/p/biosmalltalk. Contact: firstname.lastname@example.org. Supplementary data are available at Bioinformatics online.
You, Lan; Lin, Hui
VGE geographic knowledge refers to the abstract and repeatable geo-information which is related to the geo-science problem, geographical phenomena and geographical laws supported by VGE. That includes expert experiences, evolution rule, simulation processes and prediction results in VGE. This paper proposes a conceptual framework for VGE knowledge engineering in order to effectively manage and use geographic knowledge in VGE. Our approach relies on previous well established theories on knowledge engineering and VGE. The main contribution of this report is following: (1) The concepts of VGE knowledge and VGE knowledge engineering which are defined clearly; (2) features about VGE knowledge different with common knowledge; (3) geographic knowledge evolution process that help users rapidly acquire knowledge in VGE; and (4) a conceptual framework for VGE knowledge engineering providing the supporting methodologies system for building an intelligent VGE. This conceptual framework systematically describes the related VGE knowledge theories and key technologies. That will promote the rapid transformation from geodata to geographic knowledge, and furtherly reduce the gap between the data explosion and knowledge absence.
Khalid Abdul Wahid
The purpose of this research is to investigate the impact of organizational knowledge factors and market knowledge factors on knowledge creation among Thai innovative companies. 464 questionnaires were distributed to Thai innovative companies registered under the National Innovation Agency (NIA) and 217 were returned. Structural Equation Modelling (SEM) is used to determine the effect of two sets of knowledge creation sources: organizational knowledge (social interaction, organizational routines and information systems) and market knowledge (customer orientation, competitor orientation and supplier orientation) on knowledge creation (product and service outcome, process outcome and market outcome). The results indicated that the integration of organizational knowledge and market knowledge is the main driver of knowledge creation. Furthermore, the findings suggest that social interaction and customer orientation are the most significant predictors of knowledge creation. This study provides an empirical analysis of the importance of different sources of knowledge in the knowledge creation process in SMEs and its impact on companies' innovative knowledge outcomes.
Chen, Xiaoling; Chang, Jeffrey T.
Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: email@example.com PMID:28052928
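The backwards-chaining idea can be sketched compactly: start from a goal data type and recurse through rules until only available inputs remain, emitting the rule applications in order. This is a minimal illustration of the technique, far simpler than BETSY itself; the rule and data-type names are invented.

```python
# Minimal sketch of backwards chaining over a knowledge base of analysis
# rules: each goal data type maps to the data types it is derived from.
# Names are hypothetical, for illustration only.

RULES = {
    "aligned_reads":    ["fastq_reads", "reference_genome"],
    "expression_table": ["aligned_reads", "gene_annotation"],
    "de_genes":         ["expression_table"],
}
AVAILABLE = {"fastq_reads", "reference_genome", "gene_annotation"}

def plan(goal, available, rules, steps=None):
    """Backwards-chain from `goal` to the available inputs, returning an
    ordered list of data types to produce (i.e. a workflow)."""
    if steps is None:
        steps = []
    if goal in available:
        return steps
    for prereq in rules[goal]:
        plan(prereq, available, rules, steps)
    if goal not in steps:
        steps.append(goal)
    return steps

workflow = plan("de_genes", AVAILABLE, RULES)
# workflow == ["aligned_reads", "expression_table", "de_genes"]
```

A real system would additionally score alternative rules, attach software and parameters to each step, and handle unsatisfiable goals; this sketch shows only the chaining order.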
Liu, Hai-Ying; Bartonova, Alena; Neofytou, Panagiotis; Yang, Aileen; Kobernus, Michael J; Negrenti, Emanuele; Housiadas, Christos
The HENVINET Health and Environment Network aimed to enhance the use of scientific knowledge in environmental health for policy making. One of the goals was to identify and evaluate Decision Support Tools (DST) in current use. Special attention was paid to four "priority" health issues: asthma and allergies, cancer, neurodevelopment disorders, and endocrine disruptors. We identified a variety of tools that are used for decision making at various levels and by various stakeholders. We developed a common framework for information acquisition about DSTs, translated this to a database structure and collected the information in an online Metadata Base (MDB). The primary product is an open access web-based MDB currently filled with 67 DSTs, accessible through the HENVINET networking portal http://www.henvinet.eu and http://henvinet.nilu.no. Quality assurance and control of the entries and evaluation of requirements to use the DSTs were also a focus of the work. The HENVINET DST MDB is an open product that enables the public to get basic information about the DSTs, and to search the DSTs using pre-designed attributes or free text. Registered users are able to 1) review and comment on existing DSTs; 2) evaluate each DST's functionalities, and 3) add new DSTs, or change the entry for their own DSTs. Assessment of the available 67 DSTs showed: 1) more than 25% of the DSTs address only one pollution source; 2) 25% of the DSTs address only one environmental stressor; 3) almost 50% of the DSTs are only applied to one disease; 4) 41% of the DSTs can only be applied to one decision making area; 5) 60% of the DSTs' results are used only by national authority and/or municipality/urban level administration; 6) almost half of the DSTs are used only by environmental professionals and researchers. This indicates that there is a need to develop DSTs covering an increasing number of pollution sources, environmental stressors and health end points, and considering links to other 'Driving
Kampf, Constance; Islas Sedano, Carolina
To be effective, knowledge management systems need to encompass both social processes and technical components (McDermott, 2000). On the other hand, knowledge communication as a concept has emerged not from the inspiration of technology, but partly from the socio-technical challenge of dealing with technology in knowledge management systems. So, is knowledge communication a process that can be technologically enabled? In this presentation, we explore the possibilities of socio-technical interaction for knowledge communication through the use of a mobile phone game as a knowledge communication tool ... this mobile phone game to help next years' students navigate the CampusNet system in order to study for the exam. The CampusNet system can be seen as a knowledge management technology situated within the social context of the Project Management course, and so the examples offered, in effect, demonstrate...
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
Sharif, Amir M
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Representing knowledge as information content alone is insufficient in providing us with an understanding of the world around us. A combination of context as well as reasoning of the information content is fundamental to representing knowledge in an information system. Knowledge Representation is typically concerned with providing structures and theories that are used as a basis for intellige...
A. E. Zhukov
Knowledge, along with skills and abilities, forms the concept of professional qualification. Knowledge management, as a function and type of enterprise management activity, includes making knowledge more practically valuable and creating an active learning environment. The acquisition and assimilation of new knowledge include the steps discussed in the article: definition, acquisition, selection, storage, distribution, use, development and implementation.
Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: Collect statistics from biological data. Build a computational model. Solve a computational modeling problem. Test and evaluate a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data is usually represented as matrices and analysis of microarray data mostly involves statistics analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs and graph theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.
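One of the classical sequence-analysis problems named above, pairwise alignment, can be illustrated with the standard Needleman-Wunsch dynamic program. This is a textbook sketch with illustrative scoring parameters (+1 match, -1 mismatch, -1 gap), not tuned values from any particular tool.

```python
# Compact sketch of global pairwise alignment scoring (Needleman-Wunsch),
# one of the classical sequence-analysis problems. Scoring parameters are
# illustrative defaults, not biologically calibrated values.

def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix of `a` to nothing
        dp[i][0] = i * gap
    for j in range(1, cols):          # aligning a prefix of `b` to nothing
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[-1][-1]

score = nw_score("GATTACA", "GCATGCU")
```

Recovering the alignment itself requires a traceback over the same table; only the score computation is shown here.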
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.
Lecocq, R; Gauvin, M
...) practices, namely on Knowledge Creation, Learning and Collaboration, the present work performs a detailed comparison of the states of these practices between the different military environments...
...) in training dismounted soldiers. This experiment investigated the effects of different VE parameters on spatial knowledge acquisition by comparing learning in advanced VE, restricted VE, and standard map training...
text search in retrieval systems are not only highlighted but also addressed in order to improve efficiency of retrieval systems in general and precision in particular. The paper finally explores new ways of driving access to knowledge through ...
Jones, Anna Marie
The nutrition environment in schools can influence the risk for childhood overweight and obesity, which in turn can have life-long implications for risk of chronic disease. This dissertation aimed to examine the nutrition environment in primary public schools in California with regards to the amount of nutrition education provided in the…
Wei, Dongqing; Zhao, Tangzhen; Dai, Hao
This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform
Jason H. Sharp
The Malcolm Baldrige National Quality Award (MBNQA) is well known for assessing quality and business processes in a variety of sectors, including government. In this study, we investigate the relationship between aspects of the MBNQA's leadership triad and knowledge management in an e-government context. Specifically, we survey 1,100 employees of a medium-sized city government in the United States to investigate the relationship between the leadership triad components (leadership, strategic planning, and customer/market focus) and knowledge management. Our results show that these components are significantly related to knowledge management and are important in the delivery of e-government applications to the citizenry.
Lovisolo, G.A.; Marino, C.
This report summarizes various studies for the characterization and quantification of air quality in the urban environment. Information on electromagnetic field effects, acoustic pollution and health effects is also reported.
Burr, Tom L [Los Alamos National Laboratory
Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length, and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
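A first step in distance-based tree estimation, mentioned above as one of the available method families, is computing pairwise evolutionary distances under a substitution model. The sketch below uses the Jukes-Cantor correction on invented toy sequences; real analyses rely on dedicated phylogenetics packages.

```python
import math

# Illustrative sketch: Jukes-Cantor corrected distance between two aligned
# sequences, the input to distance-based tree building. Sequences are toy
# examples, not real data.

def jc_distance(s1, s2):
    """Jukes-Cantor distance d = -3/4 * ln(1 - 4p/3), where p is the
    proportion of differing sites between two equal-length sequences."""
    assert len(s1) == len(s2), "sequences must be aligned"
    p = sum(x != y for x, y in zip(s1, s2)) / len(s1)
    return -0.75 * math.log(1.0 - 4.0 * p / 3.0)

d = jc_distance("AACGT", "AAGGT")  # one difference in five sites, p = 0.2
```

Given all pairwise distances, a clustering method such as neighbor-joining then produces the tree topology and branch lengths.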
This article explores knowledge-building in an online distance-learning environment. The research examines how knowledge-building principles can be translated into online classroom practice for graduate students. Specifically, how do the course components and the online learning environments created in two online graduate courses contribute to…
Data Mining for Bioinformatics Applications provides valuable information on data mining methods that have been widely used for solving real bioinformatics problems, covering problem definition, data collection, data preprocessing, modeling, and validation. The text uses an example-based method to illustrate how to apply data mining techniques to solve real bioinformatics problems, containing 45 bioinformatics problems that have been investigated in recent research. For each example, the entire data mining process is described, ranging from data preprocessing to modeling and result validation.
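The preprocess-model-validate workflow described above can be illustrated with the smallest possible classifier: 1-nearest-neighbour over invented expression profiles. This is a toy sketch of the technique, not an example from the book; real studies would use established machine-learning libraries.

```python
# Toy sketch of a data-mining step: classify a sample by its nearest
# training profile (1-NN). The expression values and labels are invented.

def euclid(u, v):
    """Euclidean distance between two equal-length profiles."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def knn1(train, sample):
    """Return the label of the training profile nearest to `sample`."""
    return min(train, key=lambda item: euclid(item[0], sample))[1]

train = [((0.1, 0.2), "normal"), ((0.9, 0.8), "tumour")]
label = knn1(train, (0.85, 0.9))
# label == "tumour"
```

Validation would then hold out labelled samples and measure how often the predicted label matches the true one.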
This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing steps) from the image processing requests made to the JPL Multimission Image Processing Laboratory.
Antonsson, Ann-Beth; Hasle, Peter
Background One of the main obstacles identified for small companies' improvement of the working environment is lack of knowledge. Aim To discuss what kind of knowledge is required by small companies if they are to be able to improve their working environment, and the pros and cons of different kinds ... towards knowledge-based behaviour. Additionally the time required increases when moving from skill- to knowledge-based behaviour. On the other hand, skill-based behaviour lacks the ability to solve problems and adapt to new situations. In the working environment, risk assessment as well as the development of management routines are typically knowledge-based activities, whereas the application of good practice is more skill- or rule-based. For small companies, time as well as knowledge is an important constraint for the work environment management. Therefore the conclusion could be to focus on and provide skill...
Evens, Marie; Larmuseau, Charlotte; Dewaele, Katrien; Van Craesbeek, Leen; Elen, Jan; Depaepe, Fien
This study examines the effects of an online learning environment on preservice teachers' pedagogical content knowledge (PCK), content knowledge (CK) (related to French in primary teacher education), and pedagogical knowledge (PK) in a quasi-experimental design. More specifically, the following research question is addressed: Is a systematically…
This article discusses the understanding and behavior of entrepreneurs of incubated software companies (information systems development, information technology services in hardware and software, and consulting on the implementation of administrative management systems) in relation to knowledge: how it is obtained, how its use is facilitated, and the availability of media. The study first establishes a fundamental concept as its basis and contextualizes the issue, connecting it with the academic and business views of knowledge. An analysis was then conducted, based on research applied to executives, of their understanding of the concept of knowledge and the way in which it operates (professionally and personally), concluding with the importance of knowledge for business growth starting from the person.
The proposed goal-oriented knowledge acquisition and assessment are based on a flexible educational model and make it possible to implement adaptive control of the enhanced learning process according to the requirements of the student's knowledge level, state of cognition and subject learning history. The enhanced learner knowledge model specifies how the cognition state of the user will be achieved step by step. The use-case actions definition is the starting point of the specification, which depends on different levels of learning scenarios and user cognition subgoals. The use-case actions specification is used as a basis to set the requirements for the service software specification and the attributes of learning objects, respectively. The paper presents the enhanced architecture of the student self-evaluation and online assessment system TestTool. The system is explored as an assessment engine capable of supporting and improving an individualized, intelligent, goal-oriented, self-instructional and simulation-based mode of learning, grounded on the GRID distributed service architecture.
The topic of this paper is play-like learning as it occurs when technology based learning environments is invited into the classroom. Observations of 5th grade classes playing with Lego Robolab, is used to illustrate that different ways of learning becomes visible when digital technology...
Reefman, R.J.B.; Van Nederveen, G.A.
Organisations and / or disciplines in Building and Construction projects are usually working in their own design and engineering environments and using their own Building Information Models (BIM). The discipline models are merged into a project BIM which is mainly used to check for interferences or
The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…
Education for Library and Information professionals in managing the digital environment has been a key topic for discussion within the LIS environment for some time. However, before designing and implementing a program for digital library education, it is prudent to ensure that the skills and knowledge required to work in this environment are…
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
Abdulganiyu Abdu Yusuf; Zahraddeen Sufyanu; Kabir Yusuf Mamman; Abubakar Umar Suleiman
Bioinformatics is the application of computational tools to capture and interpret biological data. It has wide applications in drug development, crop improvement, agricultural biotechnology and forensic DNA analysis. There are various databases available to researchers in bioinformatics. These databases are customized for a specific need and are ranged in size, scope, and purpose. The main drawbacks of bioinformatics databases include redundant information, constant change, data spread over m...
Venomics is a modern approach that combines transcriptomics and proteomics to explore the toxin content of venoms. This review will give an overview of computational approaches that have been created to classify and consolidate venomics data, as well as algorithms that have helped discovery and analysis of toxin nucleic acid and protein sequences, toxin three-dimensional structures and toxin functions. Bioinformatics is used to tackle specific challenges associated with the identification and annotations of toxins. Recognizing toxin transcript sequences among second generation sequencing data cannot rely only on basic sequence similarity because toxins are highly divergent. Mass spectrometry sequencing of mature toxins is challenging because toxins can display a large number of post-translational modifications. Identifying the mature toxin region in toxin precursor sequences requires the prediction of the cleavage sites of proprotein convertases, most of which are unknown or not well characterized. Tracing the evolutionary relationships between toxins should consider specific mechanisms of rapid evolution as well as interactions between predatory animals and prey. Rapidly determining the activity of toxins is the main bottleneck in venomics discovery, but some recent bioinformatics and molecular modeling approaches give hope that accurate predictions of toxin specificity could be made in the near future.
Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J
Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education.
Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Watanuki, Keiichi; Kojima, Kazuyuki
The environment in which Japanese industry has achieved great respect is changing tremendously due to the globalization of world economies, while Asian countries are undergoing economic and technical development as well as benefiting from the advances in information technology. For example, in the design of custom-made casting products, a designer who lacks knowledge of casting may not be able to produce a good design. In order to obtain a good design and manufacturing result, it is necessary to equip the designer and manufacturer with a support system related to casting design, or a so-called knowledge transfer and creation system. This paper proposes a new virtual reality based knowledge acquisition and job training system for casting design, which is composed of the explicit and tacit knowledge transfer systems using synchronized multimedia and the knowledge internalization system using portable virtual environment. In our proposed system, the education content is displayed in the immersive virtual environment, whereby a trainee may experience work in the virtual site operation. Provided that the trainee has gained explicit and tacit knowledge of casting through the multimedia-based knowledge transfer system, the immersive virtual environment catalyzes the internalization of knowledge and also enables the trainee to gain tacit knowledge before undergoing on-the-job training at a real-time operation site.
Tilchin, Oleg; Kittany, Mohamed
In this paper we propose an adaptive approach to managing the development of students' knowledge in the comprehensive project-based learning (PBL) environment. Subject study is realized by two-stage PBL. It shapes adaptive knowledge management (KM) process and promotes the correct balance between personalized and collaborative learning. The…
Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...
Heitzler, Magnus; Kiertscher, Simon; Lang, Ulrich; Nocke, Thomas; Wahnes, Jens; Winkelmann, Volker
The C3Grid-INAD project aims to provide a common grid infrastructure for the climate science community to improve access to climate related data and domain workflows via the Internet. To make sense of the heterogeneous, often large-sized or even dynamically generated and modified files originating from C3Grid, a highly flexible and user-friendly analysis software is needed to run on different high-performance computing nodes within the grid environment, when requested by a user. Because visual analysis tools directly address human visual perception and therefore are being considered to be highly intuitive, two distinct visualization workflows have been integrated in C3Grid-INAD, targeting different application backgrounds. First, a GrADS-based workflow enables the ad-hoc visualization of selected datasets in respect to data source, temporal and spatial extent, as well as variables of interest. Being low in resource demands, this workflow allows for users to gain fast insights through basic spatial visualization. For more advanced visual analysis purposes, a second workflow enables the user to start a visualization session via Virtual Network Computing (VNC) and VirtualGL to access high-performance computing nodes on which a wide variety of different visual analysis tools are provided. These are made available using the easy-to-use software system SimEnvVis. Considering metadata as well as user preferences and analysis goals, SimEnvVis evaluates the attached tools and launches the selected visual analysis tool by providing a dynamically parameterized template. This approach facilitates the selection of the most suitable tools, and at the same time eases the process of familiarization with them. Because of a higher demand for computational resources, SimEnvVis-sessions are restricted to a smaller set of users at a time. This architecture enables climate scientists not only to remotely access, but also to visually analyze highly heterogeneous data originating from C3
Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequences from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast; (b) detection of functional motifs and domains; (c) analysis of data from protein-protein interaction (PPI) databases; (d) matching the query protein sequence to 3D databases (i.e., algorithms such as PISITE); (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics (PPI) databases has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations (it requires the existence of multialigned family protein sequences), but can suggest how the evolutionary process of second function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.
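Approach (a), the remote homology search, can be sketched as a PSI-BLAST invocation. This is a minimal illustration, not the authors' exact pipeline: the query file, database name, and thresholds below are placeholder assumptions, and NCBI BLAST+ is assumed to be installed.

```python
# Hedged sketch of a PSI-BLAST remote homology search for a candidate
# moonlighting protein. File names and parameter values are illustrative.
import shutil
import subprocess

cmd = [
    "psiblast",
    "-query", "candidate.fasta",       # protein sequence under study (placeholder)
    "-db", "swissprot",                # any locally formatted protein database
    "-num_iterations", "3",            # iterated profile search for remote homologs
    "-evalue", "0.005",                # inclusion threshold for hits
    "-outfmt", "6",                    # tabular output for downstream parsing
    "-out", "candidate_psiblast.tsv",
]

if shutil.which("psiblast"):           # run only if BLAST+ is on the PATH
    subprocess.run(cmd, check=True)
else:
    print("psiblast not found; command would be:", " ".join(cmd))
```

Hits accepted across several iterations, cross-checked against PPI data as the abstract suggests, are the candidates for a second, non-canonical function.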
Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.
Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…
Tolvanen, Martti; Vihinen, Mauno
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
T.-C. Liu (Tzu-Chien); Y.-C. Lin (Yi-Chun); G.W.C. Paas (Fred)
Two experiments examined the effects of prior knowledge on learning from different compositions of multiple representations in a mobile learning environment on plant leaf morphology for primary school students. Experiment 1 compared the learning effects of a mobile learning environment
Knowledge-based organizations are intelligent collective actors of the informational society and are decisive in affirming it as a knowledge society. Belonging to contemporary reality both as a professional and managerial environment and as an object of scientific research and strategic projects, they mark the convergence between two phenomena that define human nature, knowledge and organization, in a symbolic social construction for the ideas of collective competence, intelligent action and sustainable performance.
Background: Law firms in Botswana offer a particularly interesting context to explore the effects of transition in the knowledge economy. Acquiring and leveraging knowledge effectively in law firms through knowledge management can result in competitive advantage; yet the adoption of this approach remains in its infancy. Objectives: This article investigates the factors that will motivate the adoption of knowledge management in law firms in Botswana, and creates an awareness of the potential benefits of knowledge management in these firms. Method: The article uses both quantitative and qualitative research methods and the survey research design. A survey was performed on all 115 registered law firms and 217 lawyers in Botswana. Interviews were conducted with selected lawyers for more insight. Results: Several changes in the legal environment have motivated law firms to adopt knowledge management. Furthermore, lawyers appreciate the potential benefits of knowledge management. Conclusion: With the rise of the knowledge-based economy, coupled with the pressures faced by the legal industry in recent years, law firms in Botswana can no longer afford to rely on the traditional methods of managing knowledge. Knowledge management will, therefore, enhance the cost effectiveness of these firms. Strategic knowledge management certainly helps to prepare law firms in Botswana to be alive to the fact that the systematic harnessing of legal knowledge is no longer a luxury, but an absolute necessity in the knowledge economy. It will also provide an enabling business environment for private sector development and growth and, therefore, facilitate Botswana’s drive towards the knowledge-based economy.
Tabbakh, Tamara; Freeland-Graves, Jean H
The objective of this research was to assess adherence to the Healthy Eating Index-2010 of mothers and their adolescents (11-14 years old) and to examine the role of the home environment as a mediator of maternal nutrition knowledge and adolescent diet quality. It is hypothesized that mothers with greater knowledge impact the diet quality of their adolescents by creating healthier home environments. A sample of 206 mother-adolescent dyads separately completed the Multidimensional Home Environment Scale, a Food Frequency Questionnaire, and a Nutrition Knowledge Scale. Body mass index-for-age percentiles were derived from weight and height measurements obtained by the researcher; diet quality was estimated via the Healthy Eating Index (HEI)-2010. Percentages of the maximum score on nutrition knowledge for both mothers and adolescents were poor, with the lowest scores on recommendations for healthy eating and physical activity (48% and 19%, respectively). A model of maternal nutrition knowledge (independent variable) and adolescent diet quality (dependent variable) indicated that greater knowledge was associated with higher scores on total fruit (p = 0.02), whole grains (p = 0.05), seafood and plant proteins (p = 0.01), overall diet quality, and empty calories (p = 0.01). Inclusion of the home environment as a mediator yielded significant estimates of the indirect effect (β = 0.61, 95% CI: 0.3-1.0). Within the home environment, psychological (β = 0.46), social (β = 0.23), and environmental (β = 0.65) variables were all significant mediators of nutrition knowledge on diet quality. These results emphasize the importance of maternal nutrition knowledge and the mediating effect of the home environment on the diet quality of adolescents. Copyright © 2016 Elsevier Ltd. All rights reserved.
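The mediation logic in this abstract (knowledge → home environment → diet quality) follows the standard product-of-coefficients approach: the indirect effect is the slope of knowledge on the mediator times the slope of the mediator on the outcome. A minimal sketch with synthetic numbers (not the study's data):

```python
# Product-of-coefficients mediation sketch. x = maternal nutrition knowledge,
# m = home environment score, y = adolescent diet quality. Data are invented.

def mean(v):
    return sum(v) / len(v)

def s(u, v):  # centered sum of cross-products
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v))

x = [1, 2, 3, 4, 5, 6, 7, 8]
m = [2.1, 4.0, 6.2, 7.9, 10.1, 12.0, 14.2, 15.9]
y = [7, 12, 19, 24, 31, 36, 43, 48]

a = s(x, m) / s(x, x)                            # path a: x -> m
det = s(x, x) * s(m, m) - s(x, m) ** 2           # OLS of y on x and m jointly
c_prime = (s(x, y) * s(m, m) - s(m, y) * s(x, m)) / det  # direct effect of x
b = (s(m, y) * s(x, x) - s(x, y) * s(x, m)) / det        # path b: m -> y
c = s(x, y) / s(x, x)                            # total effect of x on y

indirect = a * b                                 # effect transmitted via m
print(round(indirect, 3))
```

For linear OLS the decomposition is exact: the total effect equals the direct effect plus the indirect effect (c = c' + a·b), which is what the abstract's significant indirect estimate quantifies.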
Pallen, Mark J
Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
In complex environments with hybrid terrain, different regions may have different terrain. Path planning for robots in such environments is an open NP-complete problem that lacks effective methods. The paper develops a novel global path planning method based on common sense and evolution knowledge by adopting a dual evolution structure in culture algorithms. Common sense describes terrain information and the feasibility of the environment, and is used to evaluate and select paths. Evolution knowledge describes the angle relationship between the path and the obstacles, or the common segments of paths, and is used to judge and repair infeasible individuals. Taking two types of environments with different obstacles and terrain as examples, simulation results indicate that the algorithm can effectively solve the path planning problem in complex environments and decrease the computational complexity of judging and repairing infeasible individuals. It can also improve the convergence speed and has better computational stability.
Lawlor, Brendan; Walsh, Paul
There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.
Kim, Dong Hoon; Song, Jun Yeob; Lee, Jong Hyun; Cha, Suk Keun
In the near future, the foreseen improvement in machine tools will be in the form of a knowledge evolution-based intelligent device. The goal of this study is to develop intelligent machine tools having knowledge-evolution capability in a Machine to Machine (M2M) wired and wireless environment. The knowledge evolution-based intelligent machine tools are expected to be capable of gathering knowledge autonomously, producing knowledge, understanding knowledge, applying reasoning to knowledge, making new decisions, dialoguing with other machines, etc. The concept of the knowledge-evolution intelligent machine originated from the process of machine control operation by the sense, dialogue and decision of a human expert. The structure of knowledge evolution in M2M and the scheme for a dialogue agent among agent-based modules such as a sensory agent, a dialogue agent and an expert system (decision support agent) are presented in this paper, and work-offset compensation for thermal change and recommendation of cutting conditions are performed on-line to verify knowledge evolution.
Robson da Silva Lopes
Bioinformatics and other well-established sciences, such as molecular biology, genetics, and biochemistry, provide a scientific approach for the analysis of data generated through “omics” projects that may be used in studies of chronobiology. The results of studies that apply these techniques demonstrate how they significantly aided the understanding of chronobiology. However, bioinformatics tools alone cannot eliminate the need for an understanding of the field of research or the data to be considered, nor can such tools replace analysts and researchers. It is often necessary to conduct an evaluation of the results of a data mining effort to determine the degree of reliability. To this end, familiarity with the field of investigation is necessary. It is evident that the knowledge that has been accumulated through chronobiology and the use of tools derived from bioinformatics has contributed to the recognition and understanding of the patterns and biological rhythms found in living organisms. The current work aims to develop new and important applications in the near future through chronobiology research.
Cohen, K Bretonnel; Hunter, Lawrence E
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
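The rule-based versus statistical distinction drawn in this abstract can be made concrete with a toy word-sense example. Everything below is invented for illustration (it is not from the paper): both approaches decide whether the ambiguous word "cold" denotes the illness or a temperature, one via a hand-written pattern, the other via counts from a tiny labeled corpus.

```python
# Toy contrast of the two text mining approaches: rule-based (knowledge-based)
# vs. statistical (machine-learning-based) disambiguation of "cold".
from collections import Counter

# --- rule-based: a hand-written pattern encodes domain knowledge ---
def rule_based(sentence):
    s = sentence.lower()
    return "illness" if "common cold" in s or "caught a cold" in s else "temperature"

# --- statistical: label whose training vocabulary best overlaps the input ---
TRAIN = [
    ("the patient caught a cold last week", "illness"),
    ("symptoms of the common cold include coughing", "illness"),
    ("the cold weather froze the pipes", "temperature"),
    ("a cold wind blew from siberia", "temperature"),
]

def train(data):
    counts = {"illness": Counter(), "temperature": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def statistical(sentence, counts):
    scores = {lab: sum(c[w] for w in sentence.lower().split())
              for lab, c in counts.items()}
    return max(scores, key=scores.get)

model = train(TRAIN)
sentence = "grandmother caught a cold at the clinic"
print(rule_based(sentence), statistical(sentence, model))
```

Real systems are far richer, and, as the abstract notes, are often hybrids of the two approaches; the sketch only shows where the knowledge lives in each case (in the rule, or in the labeled data).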
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
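The computer-to-computer exchange underlying the proposed web-services pipeline can be sketched with a self-contained example: one process exposes a small JSON service and another consumes it, so outputs can feed further analyses. The endpoint, gene, and annotation below are invented placeholders, not part of the SSC proposal.

```python
# Minimal computer-to-computer data exchange: a local JSON web service and a
# client that consumes it. All data values are illustrative.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AnnotationService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"gene": "BRCA1", "go_terms": ["GO:0006281"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), AnnotationService)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/annotate?gene=BRCA1"
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)          # machine-readable result, ready for reuse
server.shutdown()

print(record["gene"], record["go_terms"])
```

Because the payload is structured JSON rather than a human-facing page, a downstream service can consume `record` directly, which is the pipeline property the report emphasizes.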
Bruhn, Russel Elton; Burton, Philip John
Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schema.
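A minimal side-by-side fragment illustrates the DTD/XML Schema distinction the paper describes; the element names are invented for the example. The DTD constrains structure only, while the XML Schema adds data types and a named, reusable complex type, which is where the object-oriented flavor enters.

```xml
<!-- DTD: element structure only, no data types -->
<!ELEMENT sequence (accession, length)>
<!ELEMENT accession (#PCDATA)>
<!ELEMENT length (#PCDATA)>

<!-- Equivalent XML Schema: typed content, and the named complexType can be
     reused or extended, echoing Object-Oriented Modeling -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:complexType name="SequenceType">
    <xs:sequence>
      <xs:element name="accession" type="xs:string"/>
      <xs:element name="length" type="xs:positiveInteger"/>
    </xs:sequence>
  </xs:complexType>
  <xs:element name="sequence" type="SequenceType"/>
</xs:schema>
```

In the schema version a validator will reject a non-numeric `length`, a check the DTD cannot express.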
Ruiz, F.; Gonzalez, J.; Delgado, J.L.
Technology, the social nature of learning and generational learning styles are shaping new models of training that are changing the roles of instructors, the channels of communication and the learning content of the knowledge to be transferred. New training methodologies are being used in primary and secondary education, and “vintage” classroom learning does not meet the educational requirements of these methodologies; it is therefore necessary to incorporate them into the Knowledge Management processes used in the nuclear industry. This paper describes a practical approach to an enriched learning environment with the purpose of creating and transferring nuclear knowledge.
Shanahan, Hugh P.; Owen, Anne M.; Harrison, Andrew P.
We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk-through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development.
Murphy, Glen; Salomone, Sonia
While highly cohesive groups are potentially advantageous they are also often correlated with the emergence of knowledge and information silos based around those same functional or occupational clusters. Consequently, an essential challenge for engineering organisations wishing to overcome informational silos is to implement mechanisms that facilitate, encourage and sustain interactions between otherwise disconnected groups. This paper acts as a primer for those seeking to gain an understanding of the design, functionality and utility of a suite of software tools generically termed social media technologies in the context of optimising the management of tacit engineering knowledge. Underpinned by knowledge management theory and using detailed case examples, this paper explores how social media technologies achieve such goals, allowing for the transfer of knowledge by tapping into the tacit and explicit knowledge of disparate groups in complex engineering environments.
As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information thus acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSNs) to build biosensors and other tools, and applied them to such areas as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is a prerequisite to effective utilization of information. It is greatly influenced by factors such as quality, benefits, service, timeliness and stability, some of which are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. As a result, this paper introduces 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics that involve multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of different influencing factors, which can be considered as attributes in the assessment of the WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
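The 2-tuple linguistic model mentioned here represents an assessment as a label plus a symbolic translation, so aggregation loses no information. A minimal sketch in the style of the standard Herrera-Martínez formulation; the label set, expert ratings, and weights below are invented examples, not the paper's data.

```python
# 2-tuple linguistic representation: a value beta in [0, g] maps to
# (label index i, symbolic translation alpha) with alpha in [-0.5, 0.5).
LABELS = ["none", "low", "medium", "high", "perfect"]  # s_0 .. s_4, so g = 4

def to_two_tuple(beta):
    """Delta: numeric value -> (nearest label index, residual translation)."""
    i = round(beta)
    return i, beta - i

def from_two_tuple(i, alpha):
    """Delta^-1: recover the numeric value without loss."""
    return i + alpha

def weighted_aggregate(tuples, weights):
    """Weighted mean computed through Delta^-1, then mapped back via Delta."""
    beta = sum(w * from_two_tuple(i, a) for (i, a), w in zip(tuples, weights))
    return to_two_tuple(beta / sum(weights))

# Three experts rate the reliability of one WSN on the label scale.
ratings = [to_two_tuple(3.0), to_two_tuple(2.4), to_two_tuple(3.6)]
weights = [0.5, 0.25, 0.25]   # assumed relative importance of the experts

i, alpha = weighted_aggregate(ratings, weights)
print(LABELS[i], round(alpha, 2))   # aggregated linguistic assessment
```

The residual `alpha` is what distinguishes this from ordinary label averaging: a result such as ("high", -0.1) says "slightly below high" instead of forcing a rounded label.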
Ahi, Berat; Alisinanoglu, Fatma
The purpose of the research is to evaluate pre-service preschool teachers' knowledge about environment by analyzing their drawings about it. 70 first grade, 99 second grade, 56 third grade and 44 fourth grade, with a total of 269 students have been evaluated in this research. This qualitative research was made with social structuralism vision.…
Previous studies have shown that spatial knowledge acquisition differs across individuals in both real and virtual environments. For example, in a real environment, Ishikawa & Montello (2006) showed that some participants had almost perfect configural knowledge of the environment after one or two learning trials, whereas others performed at chance even after repeated learning trials. Using a virtual version of Ishikawa & Montello's layouts, we measured eye movements while participants were learning the layout of a route, as eye movements are shown to be closely linked to performance in spatial navigation tasks. We prepared three different layouts of a route depicted in a desktop virtual environment, along with the locations of four landmarks on that route. After learning each of the routes, we administered three different measures of spatial knowledge: numbering the landmark order, estimation of direction, and map sketching. Self-reported sense of direction (SDQ-S) was also measured. Behavioral analyses showed positive correlations across the routes in the estimation of direction. However, consistent correlations were not observed between eye movements and performance on the estimation of direction in each route. These results suggest that eye movements do not predict individual differences in spatial knowledge acquisition.
Sawyer, Brook E.; Justice, Laura M.; Guo, Ying; Logan, Jessica A. R.; Petrill, Stephen A.; Glenn-Applegate, Katherine; Kaderavek, Joan N.; Pentimonti, Jill M.
To contribute to the modest body of work examining the home literacy environment (HLE) and emergent literacy outcomes for children with disabilities, this study addressed two aims: (a) to determine the unique contributions of the HLE on print knowledge of preschool children with language impairment and (b) to identify whether specific child…
Akarasriworn, Chatchada; Ku, Heng-Yu
This study investigated 28 graduate students' knowledge construction and attitudes toward online synchronous videoconferencing collaborative learning environments. These students took an online course, self-selected 3 or 4 group members to form groups, and worked on projects across 16 weeks. Each group utilized Elluminate "Live!" for the…
Phillips, Beth M.; Morse, Erika E.
This paper presents findings from a stratified-random survey of family child care providers' backgrounds, caregiving environments, practices, attitudes, and knowledge related to language, literacy, and mathematics development for preschool children. Descriptive results are consistent with prior studies suggesting that home-based providers are…
Akman, Ozkan; Alagoz, Bulent
The purpose of education should be to raise people who are researchers and developers, who investigate what they find, apply their knowledge in their behavior, and can interpret and build new things upon it. When children are being educated, the experience should come before the story. First, good and bad environments should be shown,…
Bredeweg, Bert; Liem, Jochem; Beek, Wouter; Linnebank, Floris; Gracia, Jorge; Lozano, Esther; Wißner, Michael; Bühling, René; Salles, Paulo; Noble, Richard; Zitek, Andreas; Borisova, Petya; Mioduser, David
Articulating thought in computer-based media is a powerful means for humans to develop their understanding of phenomena. We have created DynaLearn, an intelligent learning environment that allows learners to acquire conceptual knowledge by constructing and simulating qualitative models of how systems…
Ali, Taqdir; Hussain, Maqbool; Ali Khan, Wajahat; Afzal, Muhammad; Hussain, Jamil; Ali, Rahman; Hassan, Waseem; Jamshed, Arif; Kang, Byeong Ho; Lee, Sungyoung
Technologically integrated healthcare environments can be realized if physicians are encouraged to use smart systems for the creation and sharing of knowledge used in clinical decision support systems (CDSS). While CDSSs are heading toward smart environments, they lack support for abstraction of technology-oriented knowledge from physicians. Therefore, abstraction in the form of a user-friendly and flexible authoring environment is required in order for physicians to create shareable and interoperable knowledge for CDSS workflows. Our proposed system provides a user-friendly authoring environment for creating Arden Syntax MLMs (Medical Logic Modules) as shareable knowledge rules for intelligent decision-making by CDSS. Existing systems are not physician-friendly and lack interoperability and shareability of knowledge. In this paper, we propose the Intelligent-Knowledge Authoring Tool (I-KAT), a knowledge authoring environment that overcomes the above-mentioned limitations. Shareability is achieved by creating a knowledge base from MLMs using Arden Syntax. Interoperability is enhanced using standard data models and terminologies. However, creation of shareable and interoperable knowledge using Arden Syntax without abstraction increases complexity, which ultimately makes it difficult for physicians to use the authoring environment. Therefore, physician-friendliness is provided by abstraction at the application layer to reduce complexity. This abstraction is regulated by mappings created between legacy-system concepts, which are modeled as a domain clinical model (DCM), and decision-support standards such as the virtual medical record (vMR) and the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We represent these mappings with a semantic reconciliation model (SRM). The objective of the study is the creation of shareable and interoperable knowledge using a user-friendly and flexible I-KAT. We therefore evaluated our system for completeness and user satisfaction…
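The mapping layer described above, in which local DCM concepts are reconciled to vMR classes and SNOMED CT codes through an SRM, can be pictured as a simple lookup table. This is an illustrative sketch only: the term names and codes below are hypothetical placeholders, not real SNOMED CT identifiers or vMR paths.

```python
# Hypothetical semantic reconciliation map (SRM): legacy-system DCM
# terms on the left, standard (data model class, terminology code)
# pairs on the right. All identifiers are placeholders.
SRM = {
    "pt_temp_c": ("vMR:ObservationResult", "SCT-PLACEHOLDER-TEMP"),
    "pt_bp_sys": ("vMR:ObservationResult", "SCT-PLACEHOLDER-SBP"),
}

def reconcile(local_term):
    """Translate a legacy-system concept into its standard
    (data model class, terminology code) pair, or raise KeyError."""
    try:
        return SRM[local_term]
    except KeyError:
        raise KeyError(f"no SRM mapping for {local_term!r}")
```

A resolver like this lets the authoring layer stay in the physician's local vocabulary while the generated MLMs reference only standard identifiers.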
The present paper has the synergic dual purpose of bringing a psychological and neuroscience-related perspective to decision making and of diagnosing knowledge creation within the frame of Knowledge Management. The conceptual model is built by means of Cognitive-Emotional and Explicit-Tacit knowledge dyads and structured on the Analytic Hierarchy Process (AHP), according to the hypothesis that designates the first dyad as an accessing mechanism for knowledge stored in the second dyad. Given the well-acknowledged need for new advanced decision-making instruments and enhanced knowledge creation processes in the field of technical space projects, marked by a high level of complexity, the study also tries to prove the relevance of the proposed conceptual diagnosis model in the Systems Engineering (SE) methodology, which in turn foresees concurrent engineering within interdisciplinary working environments. The theoretical model, entitled DiagnoSE, has the potential to provide practical implications for the space and space-related business sector, but not merely; it may also trigger and inspire other knowledge-management-related research for refining and testing the proposed instrument in SE or other similar decision-making-based working environments.
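Where the abstract mentions structuring the model on the Analytic Hierarchy Process (AHP), the core computation is deriving priority weights from a pairwise-comparison matrix. A minimal sketch using the row geometric-mean approximation; the matrix values are illustrative, not taken from the paper:

```python
from math import prod

def ahp_weights(M):
    """Approximate AHP priority weights from a pairwise-comparison
    matrix via the row geometric-mean method: take the geometric
    mean of each row, then normalize to sum to 1."""
    n = len(M)
    gm = [prod(row) ** (1.0 / n) for row in M]
    total = sum(gm)
    return [g / total for g in gm]
```

For a 2x2 matrix saying criterion A is judged 3 times as important as B, `ahp_weights([[1, 3], [1/3, 1]])` yields weights of 0.75 and 0.25.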
Mulder, Nicola; Schwartz, Russell; Brazas, Michelle D; Brooksbank, Cath; Gaeta, Bruno; Morgan, Sarah L; Pauley, Mark A; Rosenwald, Anne; Rustici, Gabriella; Sierk, Michael; Warnow, Tandy; Welch, Lonnie
Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans.
van Kampen, Antoine H C; Moerland, Perry D
Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
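The network-module idea described in the chapter, constructing gene networks from omics data and decomposing them into smaller modules, can be sketched minimally: correlate expression profiles, connect genes whose absolute correlation passes a cutoff, and take connected components as modules. This is a generic illustration under assumed data and cutoff, not the authors' specific method.

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length expression vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sqrt(sum((a - mx) ** 2 for a in x))
    vy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0

def modules(profiles, cutoff=0.8):
    """profiles: gene -> expression vector. Returns the connected
    components of the graph with an edge wherever |r| >= cutoff."""
    genes = list(profiles)
    parent = {g: g for g in genes}   # union-find forest
    def find(g):
        while parent[g] != g:
            parent[g] = parent[parent[g]]  # path halving
            g = parent[g]
        return g
    for a, b in combinations(genes, 2):
        if abs(pearson(profiles[a], profiles[b])) >= cutoff:
            parent[find(a)] = find(b)      # union correlated genes
    comps = {}
    for g in genes:
        comps.setdefault(find(g), set()).add(g)
    return sorted(map(sorted, comps.values()))
```

The resulting modules are the units one would then test for enrichment or use as hypotheses about shared regulation, as the chapter describes.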
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suited to those accuracy measures. However, there is often a discrepancy between the employed estimator and the given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework for designing MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
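The maximum expected accuracy principle mentioned above can be illustrated with a toy case: under a gain function that rewards each true positive γ times as much as it penalizes a false positive, and assuming independent per-site posterior probabilities, the expected-gain-maximizing prediction reduces to thresholding each posterior at 1/(γ+1) (the γ-centroid form). This is a hedged sketch of the general idea, not the paper's full estimator class.

```python
def mea_estimate(posteriors, gamma=1.0):
    """Binary MEA-style estimator: maximize the expected value of
    gamma * TP - FP given independent posteriors p_i. Predicting 1 at
    site i has expected gain gamma*p - (1 - p), which is positive
    exactly when p > 1 / (gamma + 1)."""
    threshold = 1.0 / (gamma + 1.0)
    return [1 if p > threshold else 0 for p in posteriors]
```

Raising γ lowers the threshold, trading precision for sensitivity; for example, `mea_estimate([0.9, 0.4, 0.2], gamma=4.0)` accepts the 0.4 site that `gamma=1.0` rejects.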
In the BIM implementation process, the low level of co-opetition among project organizations results in incomplete knowledge sharing, so the value of BIM applications to the project is not fully realized; establishing a sound knowledge-sharing and benefit-sharing mechanism within the project organization can effectively solve this problem and achieve overall benefit. This paper defines five co-opetition modes according to the degree of competition between organizations across different BIM applications, ranging from imperfect competition through competition and co-opetition to full cooperation, and puts forward a conceptual model and related hypotheses. Through surveys and empirical results, it analyzes the effect paths among BIM application mode, the concurrence of knowledge sharing, sharing efficiency within the organization, and the overall benefit of the project, and accordingly proposes implementation paths for three kinds of knowledge-sharing mechanisms: contracts, benefit distribution, and work practices.
In this paper, we present an ontology-based visualization support system which provides a meaningful learning environment to help e-book learners effectively construct their knowledge frameworks. In this personalized visualization support system, learners are encouraged to actively locate new knowledge in their own knowledge framework and check the logical consistency of their ideas to clear up misunderstandings; on the other hand, instructors are able to decide the group distribution for collaborative learning activities based on the knowledge structure of learners. To facilitate these visualization supports, we present a method to semi-automatically construct a course-centered ontology that describes the required information in a map structure. To automatically manipulate this course-centered ontology and provide visualization learning supports, a prototype system is designed and developed.
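One way to realize the consistency check described above, flagging places where a learner has located new knowledge without its prerequisites, is a prerequisite lookup over the course-centered ontology. The concepts and prerequisite edges below are hypothetical examples, not taken from the paper:

```python
# Hypothetical course-centered ontology: concept -> prerequisite concepts.
COURSE_ONTOLOGY = {
    "recursion": {"functions"},
    "functions": {"variables"},
    "variables": set(),
}

def missing_prerequisites(known):
    """Return, for each concept the learner has placed in their map,
    the prerequisite concepts that are not yet present. An empty dict
    means the framework is consistent with the ontology."""
    known = set(known)
    return {c: COURSE_ONTOLOGY[c] - known
            for c in known
            if COURSE_ONTOLOGY.get(c, set()) - known}
```

A check like this is what would drive the "clear up misunderstandings" feedback: a learner who adds "recursion" without "functions" gets that gap highlighted.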
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).
Rajkumar, E; Julious, S; Salome, A; Jennifer, G; John, A S; Kannan, L; Richard, J
The objective of this cross-sectional comparative study was to find the effects of environment and education on knowledge and attitude of nursing students towards leprosy. Data were collected, using a pretested questionnaire, from the first year and third year students of a School of Nursing attached to a leprosy specialty hospital and also from a comparable School of Nursing attached to a general hospital. The results showed that trainees acquired more knowledge on leprosy during training in both schools of nursing. However, those trained in leprosy hospital environment had higher knowledge and attitude scores than those trained in general hospital environment. The attitude of the trainees attached to leprosy hospital was favourable even before they had formal training in leprosy. Those trained in the general hospital showed more favourable attitude after training compared to before training. School of Nursing attached to leprosy hospital provided an atmosphere conducive to learning and understanding more about leprosy. The trainees retained what was learnt because of regular association with patients affected by leprosy. For employment in hospital or community based services or research related to leprosy, nurses trained in a leprosy hospital would have added value of knowledge and attitude.
Although the era of big data has produced many bioinformatics tools and databases, using them effectively often requires specialized knowledge. Many groups lack bioinformatics expertise, and frequently find that software documentation is inadequate and local colleagues may be overburdened or unfamil...
Likic, Vladimir A.
This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…
De Goede, Maartje; Postma, Albert
Males tend to outperform females in their knowledge of relative and absolute distances in spatial layouts and environments. It is as yet unclear to what extent these differences are innate or develop through life. The aim of the present study was to investigate whether gender differences in configurational knowledge of a natural environment might be modulated by experience. To examine this possibility, distance and directional knowledge of the city of Utrecht in the Netherlands was assessed in male and female inhabitants with different levels of familiarity with the city. Experience affected the ability to solve difficult distance-knowledge problems, but only for females. While the quality of the spatial representation of metric distances improved with more experience, this effect did not differ between males and females. In contrast, directional configurational measures did show a main gender effect but no experience modulation. In general, different configurational aspects appear to be acquired on different experiential time scales. Moreover, the results suggest that experience may be a modulating factor in the occurrence of gender differences in configurational knowledge, though this seems dependent on the type of measurement. We discuss to what extent proficiency in mental rotation and spatial working memory accounts for these differences.
Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude
and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...
A handout used in a HUB (Heidelberg Unseminars in Bioinformatics) meeting focused on career development for bioinformaticians. It describes an activity to help introduce the idea of peer mentoring, potentially acting as an opportunity to create peer-mentoring groups.
This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
Vaez Barzani, Ahmad
In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping
Gu, Peiqin; Chen, Huajun
Traditional Chinese medicine (TCM) is gaining increasing attention with the emergence of integrative medicine and personalized medicine, characterized by pattern differentiation on individual variance and treatments based on natural herbal synergism. Investigating the effectiveness and safety of the potential mechanisms of TCM and the combination principles of drug therapies will bridge the cultural gap with Western medicine and improve the development of integrative medicine. Dealing with rapidly growing amounts of biomedical data and their heterogeneous nature are two important tasks among modern biomedical communities. Bioinformatics, as an emerging interdisciplinary field of computer science and biology, has become a useful tool for easing the data deluge pressure by automating the computation processes with informatics methods. Using these methods to retrieve, store and analyze the biomedical data can effectively reveal the associated knowledge hidden in the data, and thus promote the discovery of integrated information. Recently, these techniques of bioinformatics have been used for facilitating the interactional effects of both Western medicine and TCM. The analysis of TCM data using computational technologies provides biological evidence for the basic understanding of TCM mechanisms, safety and efficacy of TCM treatments. At the same time, the carrier and targets associated with TCM remedies can inspire the rethinking of modern drug development. This review summarizes the significant achievements of applying bioinformatics techniques to many aspects of the research in TCM, such as analysis of TCM-related '-omics' data and techniques for analyzing biological processes and pharmaceutical mechanisms of TCM, which have shown certain potential of bringing new thoughts to both sides. © The Author 2013. Published by Oxford University Press.
Hill, Randall W., Jr.
The issues of knowledge representation and control in hypermedia-based training environments are discussed. The main objective is to integrate the flexible presentation capability of hypermedia with a knowledge-based approach to lesson discourse management. The instructional goals and their associated concepts are represented in a knowledge representation structure called a 'concept network'. Its functional usages are many: it is used to control the navigation through a presentation space, generate tests for student evaluation, and model the student. This architecture was implemented in HyperCLIPS, a hybrid system that creates a bridge between HyperCard, a popular hypertext-like system used for building user interfaces to data bases and other applications, and CLIPS, a highly portable government-owned expert system shell.
Florin Gheorghe FILIP
Health care practitioners continually confront a wide range of challenges, seeking to make difficult diagnoses, avoid errors, ensure the highest quality, maximize efficacy and reduce costs. Information technology has the potential to reduce clinical errors and to improve decision making in the clinical milieu. This paper presents a pilot development of a clinical decision support system (CDSS) entitled MEDIS, designed to incorporate knowledge from heterogeneous environments with the purpose of increasing the efficiency and quality of the decision-making process and reducing costs, based on advances in information technologies, especially under the impact of the transition towards the mobile space. The system aims to capture and reuse knowledge in order to provide real-time access to clinical knowledge for a variety of users, including medical personnel, patients, teachers and students.
Gillette, Brandon A.
During the last several decades, the nature of childhood has changed. There is not much nature in it anymore. Numerous studies in environmental education, environmental psychology, and conservation psychology show that the time children spend outdoors encourages healthy physical development, enriches creativity and imagination, and enhances classroom performance. Additional research shows that people's outdoor experiences as children, and adults can lead to more positive attitudes and behavior towards the environment, along with more environmental knowledge with which to guide public policy decisions. The overall purpose of this study was to examine the effect of middle childhood (age 6-11) outdoor experiences on an individual's current knowledge of the environment. This correlational study evaluated the following potential relationships: 1) The effect of "outdoorsiness" (defined as a fondness or enjoyment of the outdoors and related activities) on an individual's environmental knowledge; 2) The effect of gender on an individual's level of outdoorsiness; 3) The effect of setting (urban, suburban, rural, farm) on an individual's level of outdoorsiness and environmental knowledge; 4) The effect of formal [science] education on an individual's level of outdoorsiness and environmental knowledge; and 5) The effect of informal, free-choice learning on an individual's level of outdoorsiness and environmental knowledge. Outdoorsiness was measured using the Natural Experience Scale (NES), which was developed through a series of pilot surveys and field-tested in this research study. Participants included 382 undergraduate students at the University of Kansas with no preference or bias given to declared or undeclared majors. The information from this survey was used to analyze the question of whether outdoor experiences as children are related in some way to an adult's environmental knowledge after accounting for other factors of knowledge acquisition such as formal education
Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein
therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes...
The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...
Khan, Abdul Azeez; Khader, Sheik Abdul
E-learning or electronic learning platforms facilitate delivery of the knowledge spectrum to the learning community through information and communication technologies. The transfer of knowledge takes place from experts to learners, and externalization of the knowledge transfer is significant. In the e-learning environment, the learners seek…
Brown, James A. L.
A pedagogic intervention, in the form of an inquiry-based peer-assisted learning project (as a practical student-led bioinformatics module), was assessed for its ability to increase students' engagement, practical bioinformatic skills and process-specific knowledge. Elements assessed were process-specific knowledge following module completion,…
Marsh-Tootle, Wendy L; Funkhouser, Ellen; Frazier, Marcela G; Crenshaw, Katie; Wall, Terry C
To evaluate the knowledge, attitudes, and environment of primary care providers, and to develop a conceptual framework showing their impact on self-reported preschool vision screening (PVS) behaviors. Eligible primary care providers were individuals who filed claims with Medicaid agencies in Alabama, South Carolina, or Illinois for at least eight well-child checks for children aged 3 or 4 years during 1 year. Responses were obtained online from providers who enrolled in the intervention arm of a randomized trial to improve PVS. We calculated a summary score per provider per facet: (1) for behavior and knowledge, each correct answer was assigned a value of +1; and (2) for attitudes and environment, responses indicating support for PVS were assigned a value of +1, and other responses were assigned -1. Responses were available from 53 participants (43 of 49 enrolled pediatricians, 8 of 14 enrolled family physicians, one general physician, and one nurse practitioner). Recognizing that amblyopia often presents without outward signs was positively related to good PVS [odds ratio (OR) = 3.9; p = 0.06]. Reporting that "preschool VS interrupts patient flow" posed a significant barrier (OR = 0.2; p = 0.05). Providers with high summed scores on attitudes (OR = 6.0; p = 0.03), or on knowledge and attitudes combined (OR = 11.4), were more likely to report good PVS; high scores on attitudes or environment were also associated with "good" PVS behavior (p = 0.04). PVS is influenced by positive attitudes, especially when combined with knowledge about amblyopia. Interventions to improve PVS should target multiple facets, emphasizing that (1) asymptomatic children are at risk for amblyopia, (2) specific evidence-based tests have high testability and sensitivity for amblyopia in preschool children, and (3) new tests minimize interruptions to patient flow.
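The summary-score rule stated in the abstract (+1 per correct knowledge or behavior answer; +1 for supportive and -1 for other attitude or environment responses) can be sketched directly, assuming each response is recorded as a boolean (correct/supportive = True):

```python
def facet_score(responses, facet):
    """Summary score per provider per facet, following the survey's
    rule: behavior/knowledge items add +1 per correct answer;
    attitude/environment items score +1 if supportive, else -1."""
    if facet in ("behavior", "knowledge"):
        return sum(1 for r in responses if r)
    return sum(1 if r else -1 for r in responses)
```

So three responses of (True, False, True) score 2 on a knowledge facet but only 1 on an attitudes facet, since the unsupportive answer subtracts a point.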
Feenstra, K. Anton; Abeln, Sanne
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory-level book for the field of Structural Bioinformatics. This book aims to give an introduction to Structural Bioinformatics, which…
Lindley, Lisa C; Cozad, Melanie J
To examine the relationship between nurse knowledge, work environment, and registered nurse (RN) turnover in perinatal hospice and palliative care organizations. Using nurse intellectual capital theory, a multivariate analysis was conducted with 2007 National Home and Hospice Care Survey data. Perinatal hospice and palliative care organizations experienced a 5% turnover rate. The professional experience of advanced practice nurses (APNs) was significantly related to turnover among RNs (β = -.032, P < .05). Compared to organizations with no APN professional experience, those with clinical nurse specialists and nurse practitioners had RN turnover that was 3 percentage points lower. No other nurse knowledge or work environment variables were associated with RN turnover. Several of the control variables were also associated with RN turnover in the study: organizations serving micropolitan (β = -.041, P < .05) and rural areas (β = -.037, P < .05) had lower RN turnover compared to urban areas. Organizations with a technology climate in which nurses used electronic medical records had a higher turnover rate than those without (β = .036, P < .05). The findings revealed that advanced professional experience in the form of APNs was associated with reductions in RN turnover. This suggests that having a clinical nurse specialist or nurse practitioner on staff may provide knowledge and experience to other RNs, creating stability within the organization.
Hong, Huang-Yao; Chiu, Chieh-Hsin
This study explored how students viewed the role of ideas for knowledge work and how such a view was related to their inquiry activities. Data mainly came from students' online interaction logs, group discussion and inquiry, and a survey concerning the role of ideas for knowledge work. The findings suggest that knowledge building was conducive to…
Ranganathan, Shoba; Tammi, Martti; Gribskov, Michael; Tan, Tin Wee
In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific region. By 2002, APBioNet had gained sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-...
Effectively transitioning science knowledge to an operational environment relevant to space weather is critical to meeting civilian and defense needs, especially considering how technologies are advancing and presenting evolving susceptibilities to space weather impacts. The effort to transition scientific knowledge to a useful application is neither a research task nor an operational activity, but an effort that bridges the two. Successful transitioning must be an intentional effort with a clear goal for all parties and measurable outcomes and deliverables. This talk will present proven methodologies that have been demonstrated to be effective for terrestrial weather and disaster relief efforts, and show how those methodologies can be applied to space weather transition efforts.
In "linear documentary land", we are trained to see stories everywhere we look. As noted by Grasseni and Walter (2014), digital media affordances encourage reflection on this particular "schooling of the eye", the power relations it is embedded in, as well as the creation of counter-practices. Indeed, many artists, media practitioners and scholars advocate interactivity as a different, possibly more "authentic", representational strategy for documentary. Drawing on my ethnographic study of the Korsakow-System, this paper analyses a software system as part of a situated visual knowledge practice that challenges story as the primary organizing principle in computational networked environments.
Baker, Lisa M.
bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.
Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing, which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analyses of genomic data, and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates data integration and tracks the processing of individual samples in real time. Moreover, computational pipelines were developed to reliably identify genomic alterations and mutations from the molecular profiles of each patient. After a rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the setting up and the conduct of this trial are discussed as an illustration of PM application.
Frank, Eibe; Hall, Mark; Trigg, Len; Holmes, Geoffrey; Witten, Ian H
The Weka machine learning workbench provides a general-purpose environment for automatic classification, regression, clustering and feature selection: common data mining problems in bioinformatics research. It contains an extensive collection of machine learning algorithms and data pre-processing methods, complemented by graphical user interfaces for data exploration and for the experimental comparison of different machine learning techniques on the same problem. Weka can process data given in the form of a single relational table. Its main objectives are to (a) assist users in extracting useful information from data and (b) enable them to easily identify a suitable algorithm for generating an accurate predictive model from it. http://www.cs.waikato.ac.nz/ml/weka.
The changing priorities of production factors increasingly affect the evolution of the global economy, requiring a reorientation of development policies, both at company level and at the level of national economies, to adapt to the phenomenon called “the new economy” or the “knowledge economy”. If, for the classical economy, the ability to compete (competitiveness) depended largely on the quantity of production factors, at present the efficiency of their use has gained importance. The idea for this article arose both from the need for systematic research into the Romanian competitive environment and from the desire to address this important conference theme. We can certainly say that the current business environment is characterized by particular dynamism owing to the changes that occur within it, especially under the impact of the technical and scientific revolution, which has brought knowledge to the fore as an essential element of achieving high competitiveness. The theme proposed in this article is intended to have economic relevance for the information on, and adequacy of, the current economy in Romania.
Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan
This proceedings presents recent practical applications of Computational Biology and Bioinformatics. It contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and ever-evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis o...
Bonnal, R.J.P.; Smant, G.; Prins, J.C.P.
Biogem provides a software development environment for the Ruby programming language, which encourages community-based software development for bioinformatics while lowering the barrier to entry and encouraging best practices. Biogem, with its targeted modular and decentralized approach, software
Zrinka Ristić Dedić
The study examines two components of metacognitive knowledge in the context of inquiry learning: metatask and metastrategic. Existing work on the topic has shown that adolescents often lacked the metacognitive understanding necessary for optimal inquiry learning (Keselman & Kuhn, 2002; Kuhn, 2002a; Kuhn, Black, Keselman, & Kaplan, 2000), but demonstrated that engagement with inquiry tasks may improve it (Keselman, 2003; Kuhn & Pearsall, 1998). The aim of the study is to investigate the gains in metacognitive knowledge that occur as a result of repeated engagement with an inquiry learning task, and to examine the relationship between metacognitive knowledge and performance on the task. The participants were 34 eighth-grade pupils, who participated in a self-directed experimentation task using the FILE programme (Hulshof, Wilhelm, Beishuizen, & van Rijn, 2005). The task required pupils to design and conduct experiments and to make inferences regarding the causal structure of a multivariable system. Pupils participated in four learning sessions over the course of one month. Metacognitive knowledge was assessed by questionnaire before and after working in FILE. The results indicate that pupils improved in metacognitive knowledge following engagement with the task. However, many pupils showed insufficient metacognitive knowledge in the post-test and failed to apply newly achieved knowledge to the transfer task. Pupils who attained a higher level of metacognitive knowledge were more successful on the task than pupils who did not improve in metacognitive knowledge. A particular level of metacognitive understanding is a necessary, but not sufficient, condition for successful performance on the task.
As library and information science (LIS) becomes an increasingly technology-driven profession, particularly in the academic library environment, questions arise as to the extent of information technology (IT) knowledge and skills that LIS professionals require. The purpose of this paper is to ascertain what IT knowledge and skills are needed by…
Songhao, He; Saito, Kenji; Maeda, Takashi; Kubo, Takara
For people who live in the knowledge society which has rapidly been changing, learning in the widest sense becomes indispensable in all phases of working, living and playing. The construction of an environment, to meet the demands of people who need to acquire new knowledge and skills as the need arises, and enlighten each other regularly, is…
Pettenati, M C; Pettenati, Corrado
In this paper we highlight some important issues which will influence the redefinition of the roles and duties of libraries and librarians in a network-based educational environment. Although librarians will keep their traditional roles in faculty support, reference service and research assistance, we identify participation in the instructional design process, support in the evaluation, development and use of a proper authoring system, and the customization of information access as the domains in which libraries and librarians should mainly involve themselves in the near future, profiting from their expertise in information and knowledge organization in order to properly and effectively support their institutions in the use of information technology in education.
Stegeman, Cynthia A.
The purpose of this study was to compare the effects of a student-centered, interactive, case-based, multimedia learning environment to a traditional tutorial-based, multimedia learning environment on second-year dental hygiene students (n = 29). Surveys were administered at four points to measure attainment and retention of knowledge, attitude,…
In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.
The use of X-rays in medical fields has increased significantly in recent years, since various therapeutic procedures can be performed without the need for surgery, which presents the greatest risk to the patient. An example of this increase is the practice of cardiac catheterization; in this procedure, fluoroscopy is used for the placement of central venous catheters and temporary pacemakers, and long-term use increases the risk of X-ray exposure for the patient, the doctor and his assistants. This has been observed with concern by many researchers, since many companies did not meet radiation protection standards. This failure can lead to the exposure of professionals, patients and caregivers. It is therefore of fundamental importance to use personal protective equipment, such as lead aprons and thyroid shields, to reduce the dose produced by primary and secondary radiation. This study evaluated the knowledge of radiology professionals in Goiânia on the use of lead aprons in shared environments and the use of shields on radiosensitive parts of patients, through an information-gathering technique based on a questionnaire with closed questions focusing on the professionals' knowledge. The results showed a serious deficiency in the protection of patients' most radiosensitive organs when they are exposed to X-ray beams. (author)
ATM Emdadul Haque
Background: A clear majority of the teaching staff at UniKL-RCMP are expatriates with different cultural backgrounds, and the university is currently accepting international students with different cultural backgrounds in addition to its culturally diverse local students. Aims: The purpose was to determine the knowledge and awareness of the lecturers of the Faculty of Medicine regarding multiculturalism and its importance in the medical profession. Methods: This was a cross-sectional study. A questionnaire was developed based on the relevant demographic information and knowledge and awareness of cultural issues, and its validity was discussed with a survey expert. Results: A total of 43 teachers took part in the survey. The respondents were mostly male, expatriate, and had little experience in teaching students of different cultural backgrounds. The most important factor affecting teachers' competence was their experience in teaching students of a different culture, and teachers with experience of teaching in a multicultural environment felt more competent than those without. Gender and teaching experience did not have a significant impact on their feeling of competence. However, the teachers believed that training in a special education program might have helped them more than their educational background to develop the cultural competence of students from different cultural backgrounds. Conclusion: This study showed that teachers need more training in and experience of multicultural education programs to facilitate the development of cultural competence in students with cultural diversity, which should be taken into consideration in faculty development activities.
Brannagan, Kim B; Dellinger, Amy; Thomas, Jan; Mitchell, Denise; Lewis-Trabeaux, Shirleen; Dupre, Susan
Peer teaching has been shown to enhance student learning and levels of self-efficacy. The purpose of the current study was to examine the impact of peer teaching-learning experiences on nursing students in the roles of tutee and tutor in a clinical lab environment. This study was conducted over a three-semester period at a South Central university that provides baccalaureate nursing education. Over three semesters, 179 first-year nursing students and 51 third-year nursing students participated in the study. This mixed methods study, through concurrent use of a quantitative intervention design and qualitative survey data, examined differences during three semesters in perceptions of a clinical lab experience, self-efficacy beliefs, and clinical knowledge for two groups: those who received peer teaching-learning in addition to faculty instruction (intervention group) and those who received faculty instruction only (control group). Additionally, peer teachers' perceptions of the peer teaching-learning experience were examined. Results indicated a positive response from the peer tutors, with no statistically significant differences in knowledge acquisition and self-efficacy beliefs between the tutee intervention and control groups. In contrast to previous research, students receiving peer tutoring in conjunction with faculty instruction were statistically more anxious about performing lab skills with their peer tutor than with their instructors. Additionally, some students found instructors' feedback moderately more helpful than their peers', and reported greater gains in knowledge and responsibility for preparation and practice with instructors than with peer tutors. The findings in this study differ from previous research in that the use of peer tutors did not decrease anxiety in first-year students, and no differences were found between the intervention and control groups related to self-efficacy or cognitive improvement. These findings may indicate the need to better prepare peer
Vetrivel, Umashankar; Pilla, Kalabharath
Historically, live Linux distributions for bioinformatics have paved the way for the portability of a bioinformatics workbench in a platform-independent manner. However, most existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs, and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks such as molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced customizable configuration of Fedora, with data persistence accessible via a USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.
Bultet, Lisandra Aguilar; Aguilar Rodriguez, Jose; Ahrens, Christian H; Ahrne, Erik Lennart; Ai, Ni; Aimo, Lucila; Akalin, Altuna; Aleksiev, Tyanko; Alocci, Davide; Altenhoff, Adrian; Alves, Isabel; Ambrosini, Giovanna; Pedone, Pascale Anderle; Angelina, Paolo; Anisimova, Maria
The SIB Swiss Institute of Bioinformatics (www.isb-sib.ch) provides world-class bioinformatics databases, software tools, services and training to the international life science community in academia and industry. These solutions allow life scientists to turn the exponentially growing amount of data into knowledge. Here, we provide an overview of SIB's resources and competence areas, with a strong focus on curated databases and SIB's most popular and widely used resources. In particular, SIB'...
Background: While the evidence suggests that the way physicians provide information to patients is crucial in helping patients decide upon a course of action, the field of knowledge translation and exchange (KTE) is silent about how the physician and the patient influence each other during clinical interactions and decision-making. Consequently, based on a novel relationship-centered model, EXACKTE2 (EXploiting the clinicAl Consultation as a Knowledge Transfer and Exchange Environment), this study proposes to assess how patients and physicians influence each other in consultations. Methods: We will employ a cross-sectional study design involving 300 pairs of patients and family physicians from two primary care practice-based research networks. The consultation between patient and physician will be audio-taped and transcribed. Following the consultation, patients and physicians will complete a set of questionnaires based on the EXACKTE2 model. All questionnaires will be similar for patients and physicians. These questionnaires will assess the key concepts of our proposed model based on the essential elements of shared decision-making (SDM): definition and explanation of the problem; presentation of options; discussion of pros and cons; clarification of patient values and preferences; discussion of patient ability and self-efficacy; presentation of doctor knowledge and recommendation; and checking and clarifying understanding. Patients will be contacted by phone two weeks later and asked to complete questionnaires on decisional regret and quality of life. The analysis will be conducted to compare the key concepts in the EXACKTE2 model between patients and physicians. It will also allow the assessment of how patients and physicians influence each other in consultations. Discussion: Our proposed model, EXACKTE2, is aimed at advancing the science of KTE based on a relationship process when decision-making has to take place. It fosters a new KTE
Weber, Tilmann; Kim, Hyun Uk
Natural products are among the most important sources of lead molecules for drug discovery. With the development of affordable whole-genome sequencing technologies and other ‘omics tools, the field of natural products research is currently undergoing a shift in paradigms. While, for decades, mainly analytical and chemical methods gave access to this group of compounds, nowadays genomics-based methods offer complementary approaches to find, identify and characterize such molecules. This paradigm shift also resulted in a high demand for computational tools to assist researchers in their daily work. In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http...
James F. Aiton
The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.
Gressgård, Leif Jarle; Hansen, Kåre
Learning from failures is vital for improvement of safety performance, reliability, and resilience in organizations. In order for such learning to take place in distributed environments, knowledge has to be shared among organizational members at different locations and units. This paper reports on a study conducted in the context of drilling and well operations on the Norwegian Continental Shelf, which represents a high-risk distributed organizational environment. The study investigates the relationships between organizations' abilities to learn from failures, knowledge exchange within and between organizational units, quality of contractor relationship management, and work characteristics. The results show that knowledge exchange between units is the most important predictor of perceived ability to learn from failures. Contractor relationship management, leadership involvement, role clarity, and empowerment are also important factors for failure-based learning, both directly and through increased knowledge exchange. The results of the study enhance our understanding of how abilities to learn from failures can be improved in distributed environments where similar work processes take place at different locations and involve employees from several companies. Theoretical contributions and practical implications are discussed. Highlights: • We investigate factors affecting failure-based learning in distributed environments. • Knowledge exchange between units is the most important predictor. • Contractor relationship management is positively related to knowledge exchange. • Leadership involvement, role clarity, and empowerment are significant variables. • Respondents from an operator firm and eight contractors are included in the study.
Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, which has many different uses, for use in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…
This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…
Heyer, Laurie J.
This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
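The dynamic-programming formulation described in that abstract can be sketched in a few lines. The scoring scheme below (match +1, mismatch -1, gap -1) is an illustrative assumption, not necessarily the article's own choice:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming."""
    # dp[i][j] = best score aligning the prefix a[:i] with the prefix b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap                      # a[:i] against an empty prefix
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap                      # empty prefix against b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,   # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[len(a)][len(b)]

print(nw_score("GATTACA", "GCATGCU"))  # prints 0
```

The optimal alignment itself can be recovered by tracing back through `dp` from the bottom-right cell, recording which of the three cases produced each maximum.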
Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
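The geometrical interpretation recalled in this abstract, a fuzzy set over n elements as a point in the unit hypercube [0,1]^n, can be illustrated with a small sketch. This computes Kosko's fuzziness measure, the normalized distance to the nearest crisp vertex; the function name is our own, not from the paper:

```python
def fuzziness(mu):
    """Kosko's fuzziness of a fuzzy set, viewed as the point `mu` in the
    unit hypercube [0,1]^n: the L1 distance to the nearest crisp set
    (a hypercube vertex) divided by the distance to the farthest one.
    It is 0 for crisp sets and 1 at the hypercube midpoint."""
    nearest = [1 if m > 0.5 else 0 for m in mu]        # threshold at 0.5
    d_near = sum(abs(m - v) for m, v in zip(mu, nearest))
    d_far = sum(abs(m - (1 - v)) for m, v in zip(mu, nearest))
    return d_near / d_far

# A crisp set is a vertex; maximal vagueness sits at the cube's center.
print(fuzziness([1, 0, 1]))        # 0.0
print(fuzziness([0.5, 0.5, 0.5]))  # 1.0
```

Comparing genomes or drug-use profiles as membership vectors reduces, in this view, to geometry inside the hypercube.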
Berling, Trine Villumsen
Scientific knowledge in international relations has generally focused on an epistemological distinction between rationalism and reflectivism over the last 25 years. This chapter argues that this distinction has created a double distinction between theory/reality and theory/practice, which works as a ghost distinction structuring IR research. While reflectivist studies have emphasised the impossibility of detached, objective knowledge production through a dissolution of the theory/reality distinction, the theory/practice distinction has been left largely untouched by both rationalism and reflectivism. Bourdieu, on the contrary, lets the challenge to the theory/reality distinction spill over into a challenge to the theory/practice distinction by thrusting the scientist into the foreground as not just a factor (discourse/genre) but as an actor. In this way, studies of IR need to include a focus...
Connor, Thomas R; Loman, Nicholas J; Thompson, Simon; Smith, Andy; Southgate, Joel; Poplawski, Radoslaw; Bull, Matthew J; Richardson, Emily; Ismail, Matthew; Thompson, Simon Elwood-; Kitchen, Christine; Guest, Martyn; Bakke, Marius; Sheppard, Samuel K; Pallen, Mark J
The increasing availability and decreasing cost of high-throughput sequencing has transformed academic medical microbiology, delivering an explosion in available genomes while also driving advances in bioinformatics. However, many microbiologists are unable to exploit the resulting large genomics datasets because they do not have access to relevant computational resources and to an appropriate bioinformatics infrastructure. Here, we present the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) facility, a shared computing infrastructure that has been designed from the ground up to provide an environment where microbiologists can share and reuse methods and data.
This paper presents developments in the bioinformatics services industry value chain, based on the cloud computing paradigm. As genome sequencing costs per megabase drop exponentially, the industry needs to adapt. The paper has two parts: a theoretical analysis and a practical example of the Seven Bridges Genomics company. We focus on explaining the organizational, business and financial aspects of the new business model in bioinformatics services, rather than the technical side of the problem. In that light, we present a twofold business model fit for core bioinformatics research and Information and Communication Technology (ICT) support in the new environment, with a higher level of capital utilization and better resistance to business risks.
Pinho, Jorge; Sobral, João Luis; Rocha, Miguel
A large number of optimization problems within the field of Bioinformatics require methods able to handle their inherent complexity (e.g. NP-hard problems) and also demand increased computational effort. In this context, the use of parallel architectures is a necessity. In this work, we propose ParJECoLi, a Java-based library that offers a large set of metaheuristic methods (such as Evolutionary Algorithms) and also addresses the issue of their efficient execution on a wide range of parallel architectures. The proposed approach focuses on ease of use, making the adaptation to distinct parallel environments (multicore, cluster, grid) transparent to the user. Indeed, this work shows how the development of the optimization library can proceed independently of its adaptation for several architectures, making use of Aspect-Oriented Programming. The pluggable nature of parallelism-related modules allows the user to easily configure the environment, adding parallelism modules to the base source code when needed. The performance of the platform is validated with two case studies within biological model optimization.
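The separation the library aims for, keeping the metaheuristic code independent of whichever serial or parallel backend evaluates candidate solutions, can be sketched generically. This is not ParJECoLi's actual API; the objective function, names and parameters below are illustrative only:

```python
import random
from multiprocessing.pool import ThreadPool  # a process or cluster pool could be swapped in

def fitness(x):
    # Illustrative objective: minimize the sphere function sum(v^2).
    return sum(v * v for v in x)

def evolve(pool_map, pop_size=20, dims=5, gens=30, seed=0):
    """Toy evolutionary algorithm whose (potentially costly) fitness
    evaluations all go through `pool_map`, so switching from serial to
    parallel execution never touches the algorithm itself."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dims)] for _ in range(pop_size)]
    for _ in range(gens):
        scores = list(pool_map(fitness, pop))     # possibly parallel evaluation
        ranked = [p for _, p in sorted(zip(scores, pop))]
        parents = ranked[: pop_size // 2]         # truncation selection
        pop = parents + [
            [v + rng.gauss(0, 0.1) for v in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
    return min(pop, key=fitness)

# Same algorithm, two backends: the built-in serial map, or a pool's map.
serial_best = evolve(map)
with ThreadPool(4) as pool:
    parallel_best = evolve(pool.map)
```

A library using Aspect-Oriented Programming can weave the backend choice in without even the `pool_map` parameter appearing in user code.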
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.
Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.
Sarachan, B D; Simmons, M K; Subramanian, P; Temkin, J M
Key bioinformatics and medical informatics research areas need to be identified to advance knowledge and understanding of disease risk factors and molecular disease pathology in the 21st century, toward new diagnoses, prognoses, and treatments. Three high-impact informatics areas are identified: predictive medicine (to identify significant correlations within clinical data using statistical and artificial intelligence methods), along with pathway informatics and cellular simulations (which combine biological knowledge with advanced informatics to elucidate molecular disease pathology). Initial predictive models have been developed for a pilot study in Huntington's disease. An initial bioinformatics platform has been developed for the reconstruction and analysis of pathways, and work has begun on pathway simulation. A bioinformatics research program has been established at GE Global Research Center as an important technology toward next-generation medical diagnostics. We anticipate that 21st century medical research will be a combination of informatics tools with traditional biology wet lab research, and that this will translate to increased use of informatics techniques in the clinic.
Jang, Ki Bok; Moon, Hyun Ju; Jeong, Hyun Keun; Kim, Tae Yol [Korea Environment Institute, Seoul (Korea)]
The importance of knowledge is stressed now more than at any other time. How efficiently and effectively knowledge is created, spread, and applied is key to securing the competitiveness of individual economic units as well as to growing the national economy. For that reason, the Government has been promoting various policies to accelerate the shift to a knowledge-based economy, establishing a 'Strategy for Knowledge-Based Economic Development' at the pan-government level. Companies, too, have been actively adopting knowledge-based management as a new corporate strategy. Accordingly, not only are knowledge-based industries, including high-technology manufacturing and information/communication, growing sharply, but knowledge-based activities within individual economic activities, such as R and D, have also been expanding their share. With such a shift to a knowledge-based economy, effects are expected in many fields: society, culture, and politics, as well as the economy. With due consideration of these various effects, the strategy for knowledge-based economic development and the policies in related fields have to be promoted in a balanced way. The environmental field is no exception. However, there has not yet been a concrete examination of the environmental significance of the shift to a knowledge-based economy. The purpose of this study is to examine the effects on the environment of a shift to a knowledge-based economy and, with such problems in mind, to find countermeasures. We hope that the results and countermeasures from this study can contribute to achieving a shift to an environment-centered, knowledge-based economy. 82 refs., 30 figs., 10 tabs.
Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan
This volume compiles the accepted contributions for the 2nd edition of the Colombian Computational Biology and Bioinformatics Congress (CCBCOL), after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and computational biology are areas of knowledge that have emerged due to advances in the biological sciences and their integration with the information sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data, which need to be organized, analyzed and stored in order to understand phenomena associated with living organisms, such as their evolution and behavior in different ecosystems, and to develop applications that can be derived from this analysis.
Musmeci, Loredana; Bianchi, Fabrizio; Carere, Mario; Cori, Liliana
The study area includes the municipalities of Gela, Niscemi and Butera, located in the south of Sicily, Italy. In 1990 it was declared an Area at High Risk of Environmental Crisis. In 2000 part of it was designated as the Gela Reclamation Site of National Interest (RSNI). The site includes a private industrial area and public and marine areas, for a total of 51 km(2). Gela's population in 2008 was 77,145 (54,774 in 1961). Elevation: 46 m. Total area: 276 km(2). Grid reference: 37 degrees 4' 0" N, 14 degrees 15' 0" E. Niscemi and Butera border Gela; their populations are 26,541 and 5,063, and their elevations 332 m and 402 m, respectively. Close to the city of Gela, the industrial area, operating since 1962, includes chemical production plants, a power station and an oil refinery, one of the largest in Europe, refining 5 million tons of crude per year. Since the beginning, the workforce has decreased from 7,000 to the current 3,000 units. Over the years, these industrial activities have been a major source of environmental pollution, and extremely high levels of toxic, persistent and bio-accumulating chemical pollutants have been documented. Many relevant environmental and health data are available; prior to the studies described in the present publication, however, their use to identify environmental pressures on health was limited. Nevertheless, for several years different epidemiological studies have provided evidence of health outcomes significantly more frequent than in neighbouring areas and compared to regional data. In 2007 a Multidisciplinary Working Group was established to analyze the existing data on pollution, exposure and effects and to complete current knowledge on the cycle of pollutants, from migration in the environment to health impact. The present publication is a collection of contributions from this group of experts, supported by the following projects: Evaluation of environmental health impact and estimation of economic costs...
Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas
Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often require technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.
MIRCEA VALERIA ARINA
In the context of the knowledge-based economy and society, marketing has acquired a vital role in all fields. The social, cultural, political and economic evolution of information, and the design and conduct of marketing activities, contribute to increasing the efficiency of any institution. The evolution of marketing over time has challenged great researchers, who have tried to define the concept from their own points of view, each capturing only some aspects of this vast and important field; as the article shows, the definitions differ, but the essence is the same. In banking and finance, the role of marketing is to continually improve the quality of customer services and products by formulating appropriate marketing strategies capable of influencing consumer buying behavior. Customer focus, customer loyalty and, not least, an innovative marketing approach that starts from the client are key features today. The emphasis on innovation and ingenuity, in order to create new banking services and products, find ways to attract customers, retain existing ones, and define marketing and communication strategies, leads to appropriate strategies for maximizing the results of innovative marketing campaigns. With regard to work in the banking environment, we can say that innovation is the key to a bank's success, and that it rests on product and service innovations, process innovations, organizational innovations and, not least, marketing innovations.
Doris A. Ohnesorge; H. Peter Ohly
The report gives an overview of a conference that recognized and focused upon the fundamentals of knowledge organization, as well as asked questions and offered solutions for practice. Primary emphasis was given to the application of cooperative learning and working environments. The special value of this conference was the focus of the presentations and detailed discussions on current topics in the information sciences, although the spectrum ranged from scientific to organizational environments...
Ibrahim, Bashar; Arkhipova, Ksenia; Andeweg, Arno C; Posada-Céspedes, Susana; Enault, François; Gruber, Arthur; Koonin, Eugene V; Kupczok, Anne; Lemey, Philippe; McHardy, Alice C; McMahon, Dino P; Pickett, Brett E; Robertson, David L; Scheuermann, Richard H; Zhernakova, Alexandra; Zwart, Mark P; Schönhuth, Alexander; Dutilh, Bas E; Marz, Manja
The Second Annual Meeting of the European Virus Bioinformatics Center (EVBC), held in Utrecht, Netherlands, focused on computational approaches in virology, with topics including (but not limited to) virus discovery, diagnostics, (meta-)genomics, modeling, epidemiology, molecular structure, evolution, and viral ecology. The goals of the Second Annual Meeting were threefold: (i) to bring together virologists and bioinformaticians from across the academic, industrial, professional, and training sectors to share best practice; (ii) to provide a meaningful and interactive scientific environment to promote discussion and collaboration between students, postdoctoral fellows, and both new and established investigators; (iii) to inspire and suggest new research directions and questions. Approximately 120 researchers from around the world attended the Second Annual Meeting of the EVBC this year, including 15 renowned international speakers. This report presents an overview of new developments and novel research findings that emerged during the meeting.
This paper presents the Bioinformatics Computational Journal (BCJ), a framework for conducting and managing computational experiments in bioinformatics and computational biology. These experiments often involve series of computations, data searches, filters, and annotations which can benefit from a structured environment. Systems to manage computational experiments exist, ranging from libraries with standard data models to elaborate schemes to chain together input and output between applications. Yet, although such frameworks are available, their use is not widespread; ad hoc scripts are often required to bind applications together. The BCJ explores another solution to this problem through a computer-based environment suitable for on-site use, which builds on the traditional laboratory notebook paradigm. It provides an intuitive, extensible paradigm designed for expressive composition of applications. Extensive features facilitate sharing data, computational methods, and entire experiments. By focusing on the bioinformatics and computational biology domain, the scope of the computational framework was narrowed, permitting us to implement a capable set of features for this domain. This report discusses the features determined critical by our system and other projects, along with design issues. We illustrate the use of our implementation of the BCJ on two domain-specific examples.
Pritykin, F. N.; Nebritov, V. I.
The paper presents the configuration of the knowledge base necessary for intelligent control of an android arm mechanism's motion, taking into account different positions of certain forbidden regions. The structure of the knowledge base encodes past experience of synthesizing arm motion in the velocity vector space with due regard for known obstacles, and also specifies its intrinsic properties. Knowledge base generation is based on the study of implementations of the arm mechanism's instantaneous states. Computational experiments on the virtual control of android arm motion with known forbidden regions, using the developed knowledge base, are presented. Using the developed knowledge base to virtually control the arm motion reduces the time needed to compute test assignments. The results of the research can be used in developing control systems for autonomous android robots in environments known in advance.
Jaanus, Jörgen; Ley, Tobias
Part 7: Doctoral Student Papers; International audience; The emergence of Digital Ecosystems can be supported by creating shared conceptualizations. Knowledge Organization Systems (KOS) form a backbone of organizing knowledge. Focusing on developing KOS, with their present and future requirements in mind, would eventually support knowledge sharing and learning at the collective level. Three types of KOS are distinguished: (a) private-level KOS; (b) arbitrary KOS; (c) methodic KOS. Knowledge Maturing...
During project implementation, various forms of information and experience are generated within the organization. If this accumulated knowledge is not recorded and shared among other projects, it will be lost and no longer available to assist future projects. This may lead to increased future project costs, as resources, time and money will be wasted on redefining knowledge that once existed within the company. By not capturing and redeploying this knowledge, the quality...
Okunoye, Olusoji; Oladejo, Bolanle; Odumuyiwa, Victor
International audience; The shift from an industrial economy to a knowledge economy in today's world has revolutionized strategic planning in organizations as well as their problem-solving approaches. The focus today is on knowledge and service production, with more emphasis being laid on knowledge capital. Many organizations are investing in tools that facilitate knowledge sharing among their employees, and they are likewise promoting and encouraging collaboration among their staff in order to...
Ammar Abdullah Mahmoud Ismail
The last few years have witnessed an increased interest in moving away from traditional language instruction settings towards more hybrid and virtual learning environments. Face-to-face interaction, guided practice, and uniformity of knowledge sources and skills are all replaced by settings where multiplicity of views from different learning communities, interconnectedness, self-directedness, and self-management of knowledge and learning are increasingly emphasized. This shift from walled-classroom instruction, with its limited scope and resources, to hybrid and virtual learning environments, with their limitless provisions, requires that learners be equipped with the requisite skills and strategies to manage knowledge and handle language learning in ways commensurate with the nature and limitless possibilities of these new environments. The current study aimed at enhancing the knowledge management strategies of EFL teachers in virtual learning environments and examining the impact on their ideational flexibility and engagement in language learning settings. A knowledge management model was proposed and field-tested on a cohort of prospective EFL teachers in the Emirati context. Participants were prospective EFL teachers enrolled in the Methods of Teaching courses and doing their practicum in the Emirati EFL context. Participants' ideational flexibility was tapped via a bi-methodical approach including a contextualized task and a decontextualized one. Their engagement in virtual language learning settings was tapped via an engagement scale. Results of the study indicated that enhancing prospective EFL teachers' knowledge management strategies in virtual learning environments had a significant impact on their ideational flexibility and engagement in foreign language learning settings. Details of the instructional intervention, instruments for tapping students' ideational flexibility and engagement, and results of the study are discussed. Implications for...
Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong
In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
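The programming model this review introduces can be sketched in a few lines of plain Python: a single-process simulation of the map, shuffle, and reduce phases (real frameworks such as Hadoop distribute the same logic across machines; the k-mer example data are fabricated for illustration).

```python
from collections import defaultdict

def mapreduce(inputs, mapper, reducer):
    """Single-process simulation of the MapReduce model: map each record
    to (key, value) pairs, group values by key (the 'shuffle' phase),
    then reduce each group to a single result."""
    groups = defaultdict(list)
    for record in inputs:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy bioinformatics use: count 2-mers across a set of sequencing reads.
reads = ["ATGC", "GCAT", "ATAT"]
kmer_counts = mapreduce(
    reads,
    mapper=lambda read: [(read[i:i + 2], 1) for i in range(len(read) - 1)],
    reducer=lambda kmer, ones: sum(ones),
)
```

The mapper and reducer are pure functions of their inputs, which is exactly the property that lets a cloud framework parallelize them over very large read sets.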
Yang, Haoyu; An, Zheng; Zhou, Haotian; Hou, Yawen
Faced with the development of bioinformatics, high-throughput genomic technologies have enabled biology to enter the era of big data. Bioinformatics is an interdisciplinary field encompassing the acquisition, management, analysis, interpretation and application of biological information, and it derives from the Human Genome Project. The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets. This paper analyzes and compares various machine learning algorithms and their applications in bioinformatics.
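As a toy instance of the kind of algorithm such surveys cover, the following sketch classifies DNA sequences by nearest centroid over 2-mer frequency profiles; the training sequences and class labels are fabricated for illustration.

```python
from collections import Counter

def kmer_profile(seq, k=2):
    """Relative frequency of each k-mer in the sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

def centroid(profiles):
    """Average a list of k-mer profiles into one class centroid."""
    keys = set().union(*profiles)
    return {km: sum(p.get(km, 0.0) for p in profiles) / len(profiles)
            for km in keys}

def classify(seq, centroids, k=2):
    """Assign seq to the class whose centroid is nearest in L1 distance."""
    profile = kmer_profile(seq, k)
    def dist(c):
        keys = set(profile) | set(c)
        return sum(abs(profile.get(km, 0.0) - c.get(km, 0.0)) for km in keys)
    return min(centroids, key=lambda label: dist(centroids[label]))

# Fabricated training data: GC-rich vs AT-rich sequences.
centroids = {
    "gc_rich": centroid([kmer_profile(s) for s in ("GGCCGG", "GCGGCC")]),
    "at_rich": centroid([kmer_profile(s) for s in ("AATTAA", "ATATAT")]),
}
label = classify("GCGCGC", centroids)
```

Real bioinformatics pipelines replace the hand-rolled centroid step with trained models (SVMs, random forests, neural networks), but the feature-extraction-then-predict structure is the same.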
One area in which many environmental education programs are deficient is in reaching and involving the adult population. For senior adults in particular, the disconnect from environmental centers and other settings represents a missed opportunity for strengthening relationships, utilizing community resources and promoting civic engagement. In this sense, "intergenerational programming" could serve as an effective strategy for broadening the public's awareness of and participation in environmental activities. Although the concept of involving older adults and young people in joint environmental education experiences is compelling on several fronts, there is no body of evidence to draw upon, nor is there a blueprint to guide efforts to translate this general goal into practice. This research was therefore designed to: (1) assess the effectiveness of an intergenerational outdoor education program in enhancing participants' environmental knowledge and positive attitudes, (2) explore other program impacts on the participants and the environmental centers, and (3) learn about environmental educators' experiences and opinions in regard to utilizing senior adults in their programs. This study was conducted in two phases: (1) nonequivalent-control-group quasi-experimental research incorporated into the Outdoor School program at the Shaver's Creek Environmental Center, and (2) a statewide mail-in survey of environmental educators in Pennsylvania. According to the quantitative data, both intergenerational groups obtained higher mean scores for environmental attitudes than the monogenerational groups, although the difference relative to one of the two monogenerational groups was not statistically significant. The qualitative data showed that senior adults have certain characteristics that allowed them to make a substantial contribution toward enriching children's awareness and appreciation of the natural environment. Although the
Tropical rainforests are biologically rich ecosystems, which are threatened by a variety of different human activities. This study focuses on students' knowledge and understanding of rainforest locations, their reasons for protecting these environments and their familiarity with selected concepts about rainforest vegetation and soil. These…
Goede, M. de; Postma, A.
Males tend to outperform females in their knowledge of relative and absolute distances in spatial layouts and environments. It is as yet unclear to what extent these differences are innate or develop through life. The aim of the present study was to investigate whether gender differences in configurational...
Fufezan, Christian; Specht, Michael
High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this the Python scripting language is an optimal choice, since its philosophy is to write understandable source code. p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory, and (c) functions that allow combining (a) and (b) and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. p3d is thus well suited to the rapid development of tools for structural bioinformatics using the Python scripting language.
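The radius-style spatial queries that p3d accelerates with its BSP tree can be illustrated, independently of p3d's actual API, with a brute-force sketch in plain Python that parses the fixed-column coordinate fields of PDB ATOM records (the two sample atoms below are fabricated):

```python
import math

def parse_atoms(pdb_text):
    """Extract atom name and coordinates from PDB ATOM/HETATM records
    (fixed columns: name 13-16, x 31-38, y 39-46, z 47-54)."""
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            atoms.append({
                "name": line[12:16].strip(),
                "xyz": (float(line[30:38]),
                        float(line[38:46]),
                        float(line[46:54])),
            })
    return atoms

def atoms_within(atoms, center, radius):
    """Brute-force radius query; a BSP or k-d tree answers the same
    question without scanning every atom."""
    return [a for a in atoms if math.dist(a["xyz"], center) <= radius]

# Two fabricated CA atoms, 3.8 Angstroms apart along x.
sample = (
    "ATOM      1  CA  ALA A   1       0.000   0.000   0.000\n"
    "ATOM      2  CA  GLY A   2       3.800   0.000   0.000\n"
)
atoms = parse_atoms(sample)
near = atoms_within(atoms, (0.0, 0.0, 0.0), radius=2.0)
```

The brute-force scan is O(n) per query; p3d's contribution is precisely to replace it with tree-based lookup while keeping queries readable.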
Bioinformatics is an emerging scientific discipline that uses information ... complex biological questions ... and computer programs for various purposes of primer ... polymerase chain reaction: Human Immunodeficiency Virus 1 model studies.
Melero, Juan L; Andrades, Sergi; Arola, Lluís; Romeu, Antoni
Psoriasis is an immune-mediated, inflammatory and hyperproliferative disease of the skin and joints. The cause of psoriasis is still unknown. The fundamental feature of the disease is the hyperproliferation of keratinocytes and the recruitment of cells from the immune system in the region of the affected skin, which leads to deregulation of many well-known gene expressions. Based on data mining and bioinformatic scripting, here we show a new dimension of the effect of psoriasis at the genomic level. Using our own pipeline of scripts in Perl and MySql and based on the freely available NCBI Gene Expression Omnibus (GEO) database: DataSet Record GDS4602 (Series GSE13355), we explore the extent of the effect of psoriasis on gene expression in the affected tissue. We give greater insight into the effects of psoriasis on the up-regulation of some genes in the cell cycle (CCNB1, CCNA2, CCNE2, CDK1) or the dynamin system (GBPs, MXs, MFN1), as well as the down-regulation of typical antioxidant genes (catalase, CAT; superoxide dismutases, SOD1-3; and glutathione reductase, GSR). We also provide a complete list of the human genes and how they respond in a state of psoriasis. Our results show that psoriasis affects all chromosomes and many biological functions. If we further consider the stable and mitotically inheritable character of the psoriasis phenotype, and the influence of environmental factors, then it seems that psoriasis has an epigenetic origin. This fits well with the strong hereditary character of the disease as well as its complex genetic background. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.
RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences in order to take evolutionary information into account, such as compensating (and structure-preserving) base changes, ... as well as methods for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analysis of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology.
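The single-sequence structure prediction the authors start from can be illustrated with the classic Nussinov algorithm, a dynamic program that maximizes the number of complementary base pairs. This is a didactic simplification: practical tools minimize thermodynamic free energy rather than counting pairs.

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum number of nested base pairs in an RNA sequence, with a
    minimum hairpin loop of `min_loop` unpaired bases.
    dp[i][j] = best pairing count for the subsequence seq[i..j]."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"),
             ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):      # shorter spans first
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]              # case: j left unpaired
            for k in range(i, j - min_loop): # case: j paired with k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + dp[k + 1][j - 1] + 1)
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

For example, `nussinov_pairs("GGGAAACCC")` finds the three G-C pairs of a simple hairpin. The comparative methods discussed in the abstract extend this idea by scoring compensatory changes across an alignment rather than a single sequence.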
Chen, Liming; Wang, H.; Sterritt, Roy; Okeyo, George
Knowledge-driven activity recognition has recently attracted increasing attention but has mainly focused on simple activities. This paper extends previous work to introduce a knowledge-driven approach to the recognition of composite activities such as interleaved and concurrent activities. The approach combines ontological and temporal knowledge modelling formalisms for composite activity modelling. It exploits ontological reasoning for simple activity recognition and rule-based temporal inference to...
Moreews, François; Sallou, Olivier; Ménager, Hervé; Le Bras, Yvan; Monjeaud, Cyril; Blanchet, Christophe; Collin, Olivier
Linux container technologies, as represented by Docker, provide an alternative to the complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high-throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management that enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user-defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the ELIXIR registry to create a new description based on the BioShaDock entry metadata. This link helps users get more information on the tool, such as its EDAM operations and input and output types, and allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.
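The kind of recipe such a registry hosts can be sketched as a Dockerfile; the tool name "seqtool" and the package versions below are placeholders for illustration, not an actual BioShaDock entry.

```dockerfile
# Hypothetical packaging of a command-line bioinformatics tool.
FROM ubuntu:22.04

# Install the interpreter the tool needs; clean apt caches to keep
# the image small.
RUN apt-get update \
    && apt-get install -y --no-install-recommends python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

# "seqtool" is a placeholder package name.
RUN pip3 install seqtool

# Run the tool by default, so users can `docker run image --help`.
ENTRYPOINT ["seqtool"]
```

Built with `docker build` and published with `docker push` to a registry such as BioShaDock, the resulting image runs identically on any host with Docker installed, which is the reproducibility property the abstract emphasizes.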
Ou, C.X.J.; van Hillegersberg, J.; Spiekermann, S.
In this paper, we draw on socio-technical theory to explore how Chinese professionals engage in informal knowledge focused activities facilitated by guanxi networks in the face of restrictive corporate IT policies. Following a short review of the global and Chinese literature on knowledge management
Lingg, Myriam; Wyss, Kaspar; Durán-Arenas, Luis
In organisational theory there is an assumption that knowledge is used effectively in healthcare systems that perform well. Actors in healthcare systems focus on managing knowledge of clinical processes, for example clinical decision-making, to improve patient care. We know little about connecting that knowledge to administrative processes like high-risk medical device procurement. We analysed knowledge-related factors that influence procurement and clinical procedures for orthopaedic medical devices in Mexico. We based our qualitative study on 48 semi-structured interviews with various stakeholders in Mexico: orthopaedic specialists, government officials, and social security system managers or administrators. We took a knowledge-management perspective (i) to analyse factors in managing knowledge of clinical procedures, (ii) to assess the role of this knowledge in relation to the procurement of orthopaedic medical devices, and (iii) to determine how to improve the situation. The results of this study are primarily relevant for Mexico but may also provide impetus for other health systems with highly standardized procurement practices. We found that knowledge of clinical procedures in orthopaedics is generated inconsistently and not always efficiently managed, and that its support for procuring orthopaedic medical devices is insufficient. Identified deficiencies include: leaders who lack guidance and direction and thus use knowledge poorly; failure to share knowledge; insufficiently defined formal structures and processes for collecting information and making it available to actors in the health system; and a lack of strategies to benefit from the synergies created by information and knowledge exchange. Many factors are related directly or indirectly to technological aspects, which are insufficiently developed. The content of this manuscript is novel, as it analyses knowledge-related factors that influence the procurement of orthopaedic medical devices in Mexico. Based on our results we...
Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
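The composite-virtual-object idea described in this abstract can be sketched in a few lines. This is an illustrative toy, not the authors' Web of Object platform; every class, rule, and device name below is hypothetical:

```python
# Sketch: virtual objects expose device capabilities; a composite
# virtual object combines them with service rules, so the active
# services change automatically when the context changes.

class VirtualObject:
    """Virtual-world stand-in for a physical device's capabilities."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = dict(capabilities)  # e.g. {"smoke": 0.0}

    def sense(self, **readings):
        # Update the virtual object with fresh sensor readings.
        self.capabilities.update(readings)

class CompositeVirtualObject:
    """Combines several virtual objects with service rules."""
    def __init__(self, objects, rules):
        self.objects = {o.name: o for o in objects}
        self.rules = rules  # list of (predicate, service_name) pairs

    def context(self):
        # Merge all member readings into one context dictionary.
        ctx = {}
        for o in self.objects.values():
            ctx.update(o.capabilities)
        return ctx

    def active_services(self):
        # Context awareness: re-evaluate the rules on the current context.
        ctx = self.context()
        return [svc for pred, svc in self.rules if pred(ctx)]

# Emergency-service use case: a smoke sensor and a door lock.
smoke = VirtualObject("smoke_sensor", {"smoke": 0.0})
door = VirtualObject("door_lock", {"locked": True})
emergency = CompositeVirtualObject(
    [smoke, door],
    [(lambda c: c["smoke"] > 0.5, "fire_alarm"),
     (lambda c: c["smoke"] > 0.5 and c["locked"], "unlock_exits")],
)

smoke.sense(smoke=0.9)
print(emergency.active_services())  # → ['fire_alarm', 'unlock_exits']
```

The point of the sketch is the reconfiguration step: no service is hard-wired to a device; updating one reading changes which services the composite offers.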
Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere. PMID:17291351
De Goede, Maartje; Postma, Albert
Males tend to outperform females in their knowledge of relative and absolute distances in spatial layouts and environments. It is unclear yet in how far these differences are innate or develop through life. The aim of the present study was to investigate whether gender differences in configurational knowledge for a natural environment might be modulated by experience. In order to examine this possibility, distance as well as directional knowledge of the city of Utrecht in the Netherlands was assessed in male and female inhabitants who had different levels of familiarity with this city. Experience affected the ability to solve difficult distance knowledge problems, but only for females. While the quality of the spatial representation of metric distances improved with more experience, this effect was not different for males and females. In contrast directional configurational measures did show a main gender effect but no experience modulation. In general, it seems that we obtain different configurational aspects according to different experiential time schemes. Moreover, the results suggest that experience may be a modulating factor in the occurrence of gender differences in configurational knowledge, though this seems dependent on the type of measurement. It is discussed in how far proficiency in mental rotation ability and spatial working memory accounts for these differences. PMID:25914663
Brazas, Michelle D.; Ouellette, B. F. Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...
Leung, Anthony K L; Andersen, Jens S; Mann, Matthias
The nucleolus is a plurifunctional, nuclear organelle, which is responsible for ribosome biogenesis and many other functions in eukaryotes, including RNA processing, viral replication and tumour suppression. Our knowledge of the human nucleolar proteome has been expanded dramatically by the two r...
Introduction: The ToLigado Project--Your School Interactive Newspaper is an interactive virtual learning environment conceived, developed, implemented and supported by researchers at the School of the Future Research Laboratory of the University of Sao Paulo, Brazil. Method: This virtual learning environment aims to motivate trans-disciplinary…
Technology-enabled learning environments are beginning to come of age. Tools and frameworks are now available that have been shown to improve learning and are being deployed more widely in varied school settings. Teachers are now faced with the formidable challenge of integrating these promising new environments with the everyday context in which…
Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem
Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for
The constructivist approach to designing modelling software for teaching and learning the concepts of electric current and voltage requires drawing on several disciplinary fields in order to provide learners with training adapted to their representations. This approach therefore requires researchers to have adequate knowledge or skills in computing, didactics and science content. In this regard, several studies underline that the acquisition of the basic concepts spanning a given field of knowledge must take into account both student and scientific representations. The present research follows this perspective and aims to present interactive computer environments that take into account students' (secondary and college) and scientific representations of simple electric circuits. These computer environments will help students to analyse the functioning of electric circuits adequately.
Tsala Dimbuene, Zacharie; Kuate Defo, Barthelemy
respondents reported accurate knowledge about HIV transmission routes and prevention strategies. Findings showed that the role of the family environment as a source of accurate knowledge of HIV transmission routes and prevention strategies is of paramount significance; however, families have been poorly integrated in the design and implementation of the first generation of HIV interventions. There is an urgent need for policymakers to work together with families to improve the efficiency of these interventions. Peer influence is likely controversial because of the double effect of peer-to-peer communication on both accurate and inaccurate knowledge of HIV transmission routes.
Benilde García Cabrero
A model for analysing interaction and knowledge construction in educational environments based on computer-mediated communication (CMC) is proposed. This proposal considers: (1) the contextual factors that constitute the input and the scenario of interaction; (2) the interaction processes: types of interaction and their contents (Garrison, Anderson and Archer, 2000) as well as the discursive strategies (Lemke, 1997); and (3) learning results, which involve the quality of the knowledge constructed by the participants (Gunawardena, Lowe and Anderson, 1997). This model was applied to the analysis of the interaction among a group of participants in two web forums (with or without the presence of a teacher) during a PhD in Psychology programme. The results show evidence of the model's viability for describing the patterns of interaction and the levels of knowledge construction in web forums.
Cooke, Megan E; Meyers, Jacquelyn L; Latvala, Antti; Korhonen, Tellervo; Rose, Richard J; Kaprio, Jaakko; Salvatore, Jessica E; Dick, Danielle M
The purpose of this study was to address two methodological issues that have called into question whether previously reported gene-environment interaction (GxE) effects for adolescent alcohol use are 'real'. These issues are (1) the potential correlation between the environmental moderator and the outcome across twins and (2) non-linear transformations of the behavioral outcome. Three environments that have been previously studied (peer deviance, parental knowledge, and potentially stressful life events) were examined here. For each moderator (peer deviance, parental knowledge, and potentially stressful life events), a series of models was fit to both a raw and transformed measure of monthly adolescent alcohol use in a sample that included 825 dizygotic (DZ) and 803 monozygotic (MZ) twin pairs. The results showed that the moderating effect of peer deviance was robust to transformation, and that although the significance of moderating effects of parental knowledge and potentially stressful life events were dependent on the scale of the adolescent alcohol use outcome, the overall results were consistent across transformation. In addition, the findings did not vary across statistical models. The consistency of the peer deviance results and the shift of the parental knowledge and potentially stressful life events results between trending and significant, shed some light on why previous findings for certain moderators have been inconsistent and emphasize the importance of considering both methodological issues and previous findings when conducting and interpreting GxE analyses.
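The scale-dependence issue this study probes can be made concrete with a toy calculation (illustrative numbers only, not the study's data or model): under a purely multiplicative outcome model, a G×E interaction that is visible on the raw scale vanishes after a log transform.

```python
import math

# Hypothetical multiplicative model for an "alcohol use" outcome as a
# function of a genetic factor g and an environmental moderator e.
# Coefficients are made up for illustration.
def use_raw(g, e):
    return math.exp(0.5 * g + 0.8 * e)

def interaction(f):
    # Double difference: f(1,1) - f(1,0) - f(0,1) + f(0,0).
    # Nonzero means the effect of G depends on the level of E.
    return f(1, 1) - f(1, 0) - f(0, 1) + f(0, 0)

raw_gxe = interaction(use_raw)                                # > 0
log_gxe = interaction(lambda g, e: math.log(use_raw(g, e)))   # ~ 0
print(raw_gxe, log_gxe)
```

On the raw scale the double difference is clearly positive, while on the log scale the model is purely additive and the interaction term is (numerically) zero, which is why the abstract stresses checking results under both the raw and transformed outcome.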
Harvie, David P
... The common nemesis to successfully developing solutions in these environments is change. The challenge of any complex problem-solving process is to balance adapting to multiple changes while keeping focused on the overall desired solution...
Immersive Virtual Learning Environments (IVLEs) are extensively used in training, but few rigorous scientific investigations regarding the transfer of learning have been conducted. Measurement of learning transfer through evaluative methods is key ...
The scant knowledge existing today about environmental radiation protection is based on dose limits (10 and 1 mGy·h⁻¹) that have been determined from the literature on the effects of acute exposure by external irradiation on individuals. But these standards come from a context far removed from environmental reality. Some research directions are emerging: to complete radioecological knowledge of transfers and of biological accumulation in living organisms; to study the effects of this bioaccumulation in a multi-pollution context with chronic low-dose exposure; and to identify the characteristics of these effects (low doses in chronic multi-pollution) at the ecosystem level. (N.C.)
the Munitions Items Disposition Action System (MIDAS). Munitions constituents (MCs) can be identified through the known munitions type. The MIDAS and ... physical damage to the casing, adjacent or touching metals, and water or substrate qualities such as temperature, pH, or redox potential. The ... environments, little is known on modeling the fate of MCs in the underwater environment. One of the anticipated problems in predicting the fate and
Baartman, L. K. J.; Kilbrink, N.; de Bruijn, E.
In vocational education, students learn in different school-based and workplace-based learning environments and engage with different types of knowledge in these environments. Students are expected to integrate these experiences and make meaning of them in relation to their own professional knowledge base. This study focuses both on…
The theme of risk, as a public problem, is rarely debated in Romania. Worldwide, research on risk perception, especially environmental risk, started 59 years ago on the grounds of nuclear danger. Children's perception of environmental risk depends on prior perceptions acting as decoding filters; nonetheless it can be influenced by targeted environmental education, correcting false perceptions and helping children to form a set of perennial values and to adopt healthy behaviours. The work presents the results of a study of 446 pupils in primary classes in three schools in Cluj-Napoca, Romania, whose purpose was to encourage environmentally friendly behaviours by combining prior strategies (modifying attitudes and values towards the environment) with consecutive strategies (rewarding pro-environment behaviours). The study demonstrates the role and importance of both the school and the parents' level of education in building and consolidating environmental consciousness in children.
Papanicolaou, Alexie; Heckel, David G.
Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: email@example.com PMID:20971988
Jou, Min; Lin, Yen-Ting; Wu, Din-Wu
With the development of information technology and popularization of web applications, students nowadays have grown used to skimming through information provided through the Internet. This reading habit led them to be incapable of analyzing or integrating information they have received. Hence, knowledge management and critical thinking (CT) have,…
White, Kenneth C.
Although the literature indicates that knowledge sharing (KS) research is prevalent in the private sector, there is scant empirical research data about KS in the public sector. Moreover, organizations lack an understanding of employee KS behavior. This study investigated two research questions: First, how does the perceived importance of five…
Legare, F.; Stewart, M.; Frosch, D.; Grimshaw, J.; Labrecque, M.; Magnan, M.; Ouimet, M.; Rousseau, M.; Stacey, D.; Weijden, G.D.E.M. van der; Elwyn, G.
BACKGROUND: While the evidence suggests that the way physicians provide information to patients is crucial in helping patients decide upon a course of action, the field of knowledge translation and exchange (KTE) is silent about how the physician and the patient influence each other during
The aim of this study is to determine 9th-grade students' knowledge of the internal structures of mice and cockroaches using drawings. Drawings of the internal structures of mice and cockroaches by 122 9th-grade students at a high school in the centre of Konya were analyzed. Drawings were analyzed independently by two…
Murphy, Glen; Salomone, Sonia
While highly cohesive groups are potentially advantageous, they are also often correlated with the emergence of knowledge and information silos based around those same functional or occupational clusters. Consequently, an essential challenge for engineering organisations wishing to overcome informational silos is to implement mechanisms that…
Swan, Karen; Black, John B.
Discussion of computer programming and knowledge-based instruction focuses on three studies of elementary and secondary school students which show that five particular problem-solving strategies can be developed in students explicitly taught the strategies and given practice applying them to solve LOGO programming problems. (Contains 53…
Fraser-Abder, Pamela; Doria, John A.; Yang, Ji-Sup; De Jesus, Angela
The concept of funds of knowledge, as applied to an ethnically popular fruit, is the focus of this module. Teachers can use this concept to create contextually meaningful experiments that can contribute to a culturally relevant and more fully developed educational unit focusing on the science of nutrition and reflecting content Standards A and C.…
Shroyer, Josh; Stewart, Craig
The purpose of this study was to determine the knowledge and opinions on concussions of high school coaches from a geographically large yet rural state in the northern Rocky Mountains of the United States. Few medical issues in sport are more important, or have had as much publicity recently, as concussions. The exposure gleaned from tragic health…
Gray, C.; Turner, R.; Sutton, C.; Petersen, C.; Stevens, S.; Swain, J.; Esmond, B.; Schofield, C.; Thackeray, D.
Knowledge of research methods is regarded as crucial for the UK economy and workforce. However, research methods teaching is viewed as a challenging area for lecturers and students. The pedagogy of research methods teaching within universities has been noted as underdeveloped, with undergraduate students regularly expressing negative dispositions…
Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won
In mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members, who are employed in the cooperation group, need to share the knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues to make a right ontology. As the cost and complexity of managing knowledge increase according to the scale of the knowledge, reducing the size of ontology is one of the critical issues. In this paper, we propose a method of extracting ontology module to increase the utility of knowledge. For the given signature, this method extracts the ontology module, which is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relation of concepts. By employing this module, instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of shared knowledge.
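The signature-driven module extraction described in this abstract can be sketched, under strong simplifying assumptions, as a reachability computation over a concept graph. Real ontology modularisation (e.g. locality-based module extraction for OWL) is considerably more involved; the `onto` dictionary and all concept names below are invented for illustration only.

```python
from collections import deque

def extract_module(ontology, signature):
    """Collect every concept reachable from the signature.

    `ontology` maps each concept to the concepts it refers to (e.g. via
    subsumption or property restrictions). The returned module is the
    transitive closure of the signature -- a crude syntactic
    approximation of a semantically self-contained module.
    """
    module = set()
    queue = deque(signature)
    while queue:
        concept = queue.popleft()
        if concept in module:
            continue
        module.add(concept)
        queue.extend(ontology.get(concept, ()))
    return module

# Toy ontology: a service about "Billing" never needs "Sensor" concepts,
# so the extracted module is much smaller than the whole ontology.
onto = {
    "Billing": ["Invoice", "Customer"],
    "Invoice": ["Customer"],
    "Sensor": ["Reading"],
    "Reading": [],
    "Customer": [],
}
print(sorted(extract_module(onto, {"Billing"})))
# → ['Billing', 'Customer', 'Invoice']
```

Cooperating computing objects could then share only this subgraph instead of the full ontology, which is the load-reduction effect the abstract describes.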
Abu-Jamous, Basel; Nandi, Asoke K
Clustering techniques are increasingly being put to use in the analysis of high-throughput biological datasets. Novel computational techniques to analyse high-throughput data in the form of sequences, gene and protein expressions, pathways, and images are becoming vital for understanding diseases and future drug discovery. This book details the complete pathway of cluster analysis, from the basics of molecular biology to the generation of biological knowledge. The book also presents the latest clustering methods and clustering validation, thereby offering the reader a comprehensive review of…
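As a toy illustration of the kind of cluster analysis the book covers, here is a plain k-means pass over a few hypothetical gene-expression profiles. All data and names are invented; real analyses use validated libraries and the clustering-validation indices the book discusses.

```python
import random

def dist(a, b):
    """Squared Euclidean distance between two expression profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: returns one cluster label per point."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist(p, centres[c]))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centres[c] = tuple(sum(xs) / len(xs) for xs in zip(*members))
    return labels

# Hypothetical expression profiles (genes x 3 conditions): the two
# highly expressed genes should separate from the two low ones.
genes = [(5.1, 5.0, 4.9), (5.2, 4.8, 5.0),   # high expression
         (0.9, 1.1, 1.0), (1.0, 0.8, 1.2)]   # low expression
labels = kmeans(genes, k=2)
print(labels[0] == labels[1], labels[2] == labels[3], labels[0] != labels[2])
# → True True True
```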
Campbell, Chad E.; Nehm, Ross H.
The growing importance of genomics and bioinformatics methods and paradigms in biology has been accompanied by an explosion of new curricula and pedagogies. An important question to ask about these educational innovations is whether they are having a meaningful impact on students' knowledge, attitudes, or skills. Although assessments are…
Faisal Manzoor Arain
Construction law is a vital component of the body of knowledge that construction professionals need in order to operate successfully in the commercial world of construction. Construction law plays an important role in shaping building projects. Construction projects are complex because they involve many human and non-human factors and variables. Teaching construction law is therefore a complex issue with several dimensions. In recent years, Information Technology (IT) has become strongly established as a supporting tool for many professions, including teachers. If faculty members had a knowledge base established on similar past projects, it would assist them in presenting case studies and contractually based scenarios to students. This paper proposes the potential utilisation of a Knowledge-based System (KBS) for teaching construction law to built environment students. The KBS is primarily designed for building professionals to learn from similar past projects. The KBS is able to assist professionals by providing accurate and timely information for decision making and a user-friendly tool for analysing and selecting the suggested controls for variations in educational buildings. The wealth of knowledge available in the KBS can be very helpful in teaching construction law to built environment students. The system presents real case studies and scenarios to students to allow them to analyse and learn construction law. The KBS could be useful to students as a general research tool because the students could populate it with their own data and use it with the reported educational projects. With further generic modifications, the KBS will also be useful for built environment students to learn about the project management of building projects; thus, it will raise the overall level of professional understanding, and eventually productivity, in the construction industry.
Quail, Michelle; Brundage, Shelley B; Spitalnick, Josh; Allen, Peter J; Beilby, Janet
Advanced communication skills are vital for allied health professionals, yet students often have limited opportunities in which to develop them. The option of increasing clinical placement hours is unsustainable in a climate of constrained budgets, limited placement availability and increasing student numbers. Consequently, many educators are considering the potential of alternative training methods, such as simulation. Simulations provide safe, repeatable and standardised learning environments in which students can practise a variety of clinical skills. This study investigated students' self-rated communication skill, knowledge, confidence and empathy across simulated and traditional learning environments. Undergraduate speech pathology students were randomly allocated to one of three communication partners with whom they engaged conversationally for up to 30 min: a patient in a nursing home (n = 21); an elderly trained patient actor (n = 22); or a virtual patient (n = 19). One week prior to, and again following the conversational interaction, participants completed measures of self-reported communication skill, knowledge and confidence (developed by the authors based on the Four Habit Coding Scheme), as well as the Jefferson Scale of Empathy - Health Professionals (student version). All three groups reported significantly higher communication knowledge, skills and confidence post-placement (Median d = .58), while the degree of change did not vary as a function of group membership (Median η² …). Students thus reported increased communication skill, knowledge and confidence, though not empathy, following a brief placement in a virtual, standardised or traditional learning environment. The self-reported increases were consistent across the three placement types. It is proposed that the findings from this study provide support for the integration of more sustainable, standardised, virtual patient-based placement models into allied health training programs for the training of…
The Radiation Protection Programme of the European Communities is discussed in the context of the behaviour and control of radionuclides in the environment with reference to the aims of the programme, the results of current research activities and requirements for future studies. The summarised results of the radioecological research activities for 1976 - 1980 include the behaviour of α-emitters (Pu, Am, Cm), 99Tc, 137Cs, 144Ce, 106Ru and 125Sb in marine environments; atmospheric dispersion of radionuclides; and the transport of radionuclides in components of freshwater and terrestrial ecosystems. (U.K.)
Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C
Biomedical research has become a data intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high throughput, high complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.
ATM Emdadul Haque; Mainul Haque; Wan Putri Elena Wan Dali
Background A clear majority of teaching staff in UniKL-RCMP are expatriates with different cultural backgrounds, and the university is currently accepting international students with different cultural backgrounds in addition to the local, culturally diverse students. Aims The purpose was to determine the knowledge and awareness of the lecturers of the Faculty of Medicine regarding multiculturalism and its importance in the medical profession. Methods This was a cross-sectional study…
Nanloh S Jimam
Background: Due to increased health consciousness among the public, the use of herbal products is increasing on a daily basis. To achieve optimal benefits, there is a need for pharmacists, who are the custodians of knowledge on drugs and drug-related products, to have more understanding of and interest in herbal medicine for effective counseling on these products. The purpose of this study was to assess pharmacists' knowledge and perceptions regarding herbal medicine use. Methods: Self-administered questionnaires were distributed to 200 pharmacists working within the study areas, after which the collected data were statistically analyzed using the IBM SPSS software program, version 20. Results: 88.5% of the respondents completed the questionnaires; their mean age was 34 years, their median length of experience in practice was 8.2 years, and their areas of practice included hospital (56.1%), community (28.1%), academic (8.47%), and industry (4.52%). More than half (76.27%) of them believed that herbal products were more efficacious and safer (61.02%) than orthodox medicines, with almost all of them (94.92%) acknowledging the beneficial effects of incorporating herbal medicines into orthodox medicine practice. However, most of them (72.88%) admitted having little knowledge of herbal remedies, especially drug-herb interactions (81.36%), and their main source of information on herbs was school (56.50%). Conclusions: The results showed a poor level of pharmacists' knowledge of herbal medicine, which might result in poor patient counseling on herbal therapy, especially regarding safety and potential interactions with orthodox medicine.
Ramm, Hans Henrik
Technology and knowledge make up the knowledge capital that has been so essential to the oil and gas industry's value creation, competitiveness and internationalization. Report prepared for the Norwegian Oil Industry Association (OLF) and The Norwegian Society of Chartered Technical and Scientific Professionals (Tekna) on the Norwegian petroleum cluster as an environment for creating knowledge capital from human capital, how fiscal and other framework conditions may influence the building of knowledge capital, the long-term perspectives for the petroleum cluster, what Norwegian society can learn from the experiences in the petroleum cluster, and the importance of gaining more knowledge about the functionality of knowledge for increased value creation.
Khosravi, Hassan; Kitto, Kirsty; Cooper, Kendra
Various forms of Peer-Learning Environments are increasingly being used in post-secondary education, often to help build repositories of student generated learning objects. However, large classes can result in an extensive repository, which can make it more challenging for students to search for suitable objects that both reflect their interests…
Erhabor, Norris I.; Don, Juliet U.
Environmentally aware and empowered youths are potentially the greatest agent of change for the long term protection and stewardship of the environment. Thus environmental education which promotes such change will enable these youths to have a greater voice on environmental issue if effectively implemented in Nigeria. Hence, this study was…
Loh, Christian Sebastian
Examines how mobile computers, or personal digital assistants (PDAs), can be used in a Web-based learning environment. Topics include wireless networks on college campuses; online learning; Web-based learning technologies; synchronous and asynchronous communication via the Web; content resources; Web connections; and collaborative learning. (LRW)
…polymerase chain reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is actually one of the most important factors/steps in successful DNA sequencing. Various bioinformatics programs are available for the selection of primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the…
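As a minimal illustration of the criteria such primer-design programs automate, here is a sketch that screens a candidate primer by GC content and the Wallace-rule melting temperature. The thresholds and the example sequence are arbitrary assumptions for illustration, not taken from any particular tool.

```python
def gc_content(primer):
    """GC content of a primer, in percent."""
    p = primer.upper()
    return 100 * (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer):
    """Wallace rule: Tm = 2(A+T) + 4(G+C), a rough estimate
    commonly used for short primers (up to ~14 nt)."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def acceptable(primer, tm_range=(30, 65), gc_range=(40, 60)):
    """Crude screen of the kind primer-design tools apply; the
    ranges here are illustrative, not recommended defaults."""
    return (tm_range[0] <= wallace_tm(primer) <= tm_range[1]
            and gc_range[0] <= gc_content(primer) <= gc_range[1])

primer = "ATGGCTAGCTAG"  # hypothetical 12-mer
print(wallace_tm(primer), gc_content(primer), acceptable(primer))
# → 36 50.0 True
```

Real programs add many further checks (self-complementarity, hairpins, 3' stability, primer-pair Tm matching), which is precisely why so many tools exist.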
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…
Ondrej, Vladan; Dvorak, Petr
Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
Nielsen, Henrik; Sperotto, Maria Maddalena
…an artificial neural network (ANN)-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect…
Boyle, John A.
Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D
Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable.
Thus, there is a need for appropriate strategies for introducing the basic components of this emerging scientific field to part of the African populace through the development of an online distance education learning tool. This study involved the design of a bioinformatics online distance education tool and an implementation of ...
Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...
Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana
A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for the detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for the investigation of how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information improves the baseline detection performance by more than 60%.
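The frame-level decisions mentioned in this abstract can be illustrated, in a heavily simplified form, with a short-time energy threshold on a synthetic signal. The actual system uses class-specific spectral features and temporal post-processing; every parameter below (sampling rate, frame length, threshold, tone frequency) is invented for the sketch.

```python
import math

def frame_energies(signal, frame_len):
    """Short-time energy of each non-overlapping frame."""
    return [sum(x * x for x in signal[i:i + frame_len]) / frame_len
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def detect(signal, frame_len, threshold):
    """Frame-level decisions: 1 = alarm-like, 0 = background."""
    return [1 if e > threshold else 0
            for e in frame_energies(signal, frame_len)]

# Synthetic example: one second of silence, then a loud tone
# burst standing in for an alarm.
fs = 8000
silence = [0.0] * fs
tone = [0.5 * math.sin(2 * math.pi * 880 * t / fs) for t in range(fs)]
decisions = detect(silence + tone, frame_len=800, threshold=0.01)
print(decisions)
# → [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
```

A period-level metric, by contrast, would score whole alarm events (runs of positive frames) rather than individual frames.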
The article is devoted to the development of information material for the study of anthropogenic landscapes in Kharkiv region in the school course in Geography. Analysis of the State Standard of complete secondary education and school programs of Ukraine has shown that the coverage of the transformation of natural landscapes in school geographical education is insufficient. In present-day natural science education it is important not only to expand educational material and increase its complexity but also to deepen knowledge through the disclosure of connections and relationships. This especially applies to geography, the content of which consists of a number of knowledge systems formed within several courses. Thus, the focus should be directed to the development of ideas about the unity of nature, the indissolubility of all the components of nature, and the laws and mechanisms of anthropogenic impacts on the constituents of the biosphere, and through them on the biosphere as a whole. Formation of a holistic image of nature begins with the study of real natural objects of the native locality (city, district, region), which allows students to understand global laws and processes. Based on the informational material developed about anthropogenic landscapes in Kharkiv region, the authors offer promising areas of work with students in the form of excursions. Information about the anthropogenic landscapes of Kharkiv region is important for visual use in obtaining knowledge about them in geographical education and will attract students to practical research activities in the study of anthropogenic landscapes. This approach will allow the students to form a spatial understanding and consciously navigate the social and economic, social and political, and environmental problems of the state and its region.
Robertson, Michelle M; Huang, Yueng-Hsiang; O'Neill, Michael J; Schleifer, Lawrence M
A macroergonomics intervention consisting of flexible workspace design and ergonomics training was conducted to examine the effects on psychosocial work environment, musculoskeletal health, and work effectiveness in a computer-based office setting. Knowledge workers were assigned to one of four conditions: flexible workspace (n=121), ergonomics training (n=92), flexible workspace+ergonomics training (n=31), and a no-intervention control (n=45). Outcome measures were collected 2 months prior to the intervention and 3 and 6 months post-intervention. Overall, the study results indicated positive, significant effects on the outcome variables for the two intervention groups compared to the control group, including work-related musculoskeletal discomfort, job control, environmental satisfaction, sense of community, ergonomic climate, communication and collaboration, and business process efficiency (time and costs). However, attrition of workers in the ergonomics training condition precluded an evaluation of the effects of this intervention. This study suggests that a macroergonomics intervention is effective among knowledge workers in office settings.
Allergies and/or food intolerances are a growing problem of the modern world. Difficulties associated with the correct diagnosis of food allergies result in the need to classify the factors causing allergies and the allergens themselves. Therefore, internet databases and other bioinformatic tools play a special role in deepening knowledge of biologically important compounds. Internet repositories, as a source of information on different chemical compounds, including those related to allergy and intolerance, are increasingly being used by scientists. Bioinformatic methods play a significant role in biological and medical sciences, and their importance in food science is increasing. This study aimed at presenting selected databases and tools of bioinformatic analysis useful in research on food allergies, allergens (11 databases), epitopes (7 databases), and haptens (2 databases). It also presents examples of the application of computer methods in studies related to allergies.
Ever since Sir Francis Bacon coined the adage, scientists have believed that "knowledge is power," but this presupposes that people are willing to embrace knowledge. Today, a significant proportion of the American public rejects the scientific evidence of climate change, and many of these Americans are highly educated, so their views cannot be attributed to scientific illiteracy or misunderstanding. Historical evidence shows that resistance to scientific evidence of climate change--like the earlier resistance to the evidence of acid rain, the ozone hole, and the harms of tobacco use--is rooted in intellectual commitments to freedom, individualism, and the power of the free market to protect political freedom while delivering goods and services. Therefore, good public policy is not likely to be achieved by producing more science, better science, or communicating that science more effectively. Rather, it suggests that effective public policy must acknowledge these commitments and concerns, and offer solutions that are not perceived to threaten the American way of life.
Brazas, Michelle D; Ouellette, B F Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.
McIntyre, A.D.; Turnbull, R.G.H.
The development of the hydrocarbon resources of the North Sea has resulted in both offshore and onshore environmental repercussions, involving the existing physical attributes of the sea and seabed, the coastline and adjoining land. The social and economic repercussions of the industry were equally widespread. The dramatic and speedy impact of the exploration and exploitation of the northern North Sea resources in the early 1970s, on the physical resources of Scotland was quickly realised together with the concern that any environmental and social damage to the physical and social fabric should be kept to a minimum. To this end, a wide range of research and other activities by central and local government, and other interested agencies was undertaken to extend existing knowledge on the marine and terrestrial environments that might be affected by the oil and gas industry. The outcome of these activities is summarized in this paper. The topics covered include a survey of the marine ecosystems of the North Sea, the fishing industry, the impact of oil pollution on seabirds and fish stocks, the ecology of the Scottish coastline and the impact of the petroleum industry on a selection of particular sites. (author)
Any present-day approach to the world's most pressing environmental problems involves both scale and governance issues. After all, current local events might have long-term global consequences (the scale issue), and solving complex environmental problems requires policy makers to think and govern beyond generally used time-space scales (the governance issue). To an increasing extent, the various scientists in these fields have used concepts like social-ecological systems, hierarchies, scales and levels to understand and explain the "complex cross-scale dynamics" of issues like climate change. A large part of this work manifests a realist paradigm: the scales and levels, either in ecological processes or in governance systems, are considered as "real". However, various scholars question this position and claim that scales and levels are continuously (re)constructed in the interfaces of science, society, politics and nature. Some of these critics even prefer to adopt a non-scalar approach, doing away with notions such as hierarchy, scale and level. Here we take another route, however. We try to overcome the realist-constructionist dualism by advocating a dialogue between them on the basis of exchanging and reflecting on different knowledge claims in transdisciplinary arenas. We describe two important developments, one in the ecological scaling literature and the other in the governance literature, which we consider to provide a basis for such a dialogue. We will argue that scale issues, governance practices and their mutual interdependencies should be considered as human constructs, although dialectically related to nature's materiality, and therefore as contested processes, requiring intensive and continuous dialogue and cooperation among natural scientists, social scientists, policy makers and citizens alike. They also require critical reflection on scientists' roles and on academic practices in general. Acknowledging knowledge claims…
Yang, Jack Y; Yang, Mary Qu; Zhu, Mengxia Michelle; Arabnia, Hamid R; Deng, Youping
Bioinformatics and genomics are closely related disciplines that hold great promise for the advancement of research and development in complex biomedical systems, as well as public health, drug design, comparative genomics, personalized medicine and so on. Research and development in these two important areas are impacting science and technology. High-throughput sequencing and molecular imaging technologies marked the beginning of a new era for modern translational medicine and personalized healthcare. The impact of having the human sequence and personalized digital images in hand has also created tremendous demand for developing powerful supercomputing, statistical learning and artificial intelligence approaches to handle the massive bioinformatics and personalized healthcare data, which will obviously have a profound effect on how biomedical research will be conducted toward the improvement of human health and the prolonging of human life in the future. The International Society of Intelligent Biological Medicine (http://www.isibm.org) and its official journals, the International Journal of Functional Informatics and Personalized Medicine (http://www.inderscience.com/ijfipm) and the International Journal of Computational Biology and Drug Design (http://www.inderscience.com/ijcbdd), in collaboration with the International Conference on Bioinformatics and Computational Biology (Biocomp), touch tomorrow's bioinformatics and personalized medicine through today's efforts in promoting the research, education and awareness of the upcoming integrated inter/multidisciplinary field. The 2007 International Conference on Bioinformatics and Computational Biology (BIOCOMP07) was held in Las Vegas, the United States of America, on June 25-28, 2007. The conference attracted over 400 papers, covering broad research areas in genomics, biomedicine and bioinformatics. Biocomp 2007 provided a common platform for the cross-fertilization of ideas, and to help shape knowledge and…
Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.
There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…
Shaheen, Shabnam; Abbas, Safdar; Hussain, Javid; Mabood, Fazal; Umair, Muhammad; Ali, Maroof; Ahmad, Mushtaq; Zafar, Muhammad; Farooq, Umar; Khan, Ajmal
Medicinal plants are important treasures for the treatment of different types of diseases. The current study provides significant ethnopharmacological information, both qualitative and quantitative, on medicinal plants related to children's disorders from district Bannu, Khyber Pakhtunkhwa (KPK) province of Pakistan. The information gathered was quantitatively analyzed using the informant consensus factor, relative frequency of citation and use value methods to establish baseline data for more comprehensive investigations of the bioactive compounds of indigenous medicinal plants specifically related to children's disorders. To the best of our knowledge, this is the first attempt to document ethnobotanical information on these medicinal plants using quantitative approaches. A total of 130 informants were interviewed using a questionnaire administered during 2014–2016 to identify the preparations and uses of medicinal plants for the treatment of children's diseases. A total of 55 species of flowering plants belonging to 49 genera and 32 families were used as ethnomedicines in the study area. The largest numbers of species belong to the Leguminosae and Cucurbitaceae families (4 species each), followed by Apiaceae, Moraceae, Poaceae, Rosaceae, and Solanaceae (3 species each). Leaves and fruits were the most used parts (28%), herbs were the most used life form (47%), decoction was the most used method of preparation (27%), and oral ingestion was the main route of application (68.5%). The highest use value was reported for the species Momordica charantia and Raphanus sativus (1 each), and the highest informant consensus factor was observed for the cardiovascular and rheumatic disease categories (0.5 each). Most of the species in the present study were used to cure gastrointestinal diseases (39 species). The results of the present study reveal the importance of medicinal plant species and their significant role in the health care of the inhabitants of the area. The people of Bannu possess high traditional knowledge…
The term environment refers to the internal and external context in which organizations operate. For some scholars, environment is defined as an arrangement of political, economic, social and cultural factors existing in a given context that have an impact on organizational processes and structures. For others, environment is a generic term describing a large variety of stakeholders and how these interact and act upon organizations. Organizations and their environment are mutually interdependent, and organizational communications are highly affected by the environment. This entry examines the origin and development of organization-environment interdependence, the nature of the concept of environment and its relevance for communication scholarship and activities.
There is currently a revitalized concern about the potential impact of ionizing radiation on the environment that calls for the construction of a system ensuring adequate radioprotection of non-human biota and their associated biotopes. This paper first sets the context of the problem, both with respect to the general philosophy of environmental protection as a whole and with respect to the consideration of the environment achieved so far for the purpose of human radioprotection. The current accumulated knowledge on the effects of ionizing radiation on biota (fauna and flora) is then briefly reviewed, encompassing effects at the individual and community/ecosystem level and situations of acute and chronic exposure to high and low doses, finally leading to the identification of the most critical gaps in scientific knowledge: effects of mixed low dose rates in chronic exposure on communities and ecosystems. The most significant current international efforts towards the identification of environmental radioprotection criteria and standards are finally presented, along with some relevant national examples.
Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba
The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.
We propose the formation of an International PsychoSocial and Cultural Bioinformatics Project (IPCBP) to explore the research foundations of Integrative Medical Insights (IMI) on all levels, from the molecular-genomic to the psychological, cultural, social, and spiritual. Just as the Human Genome Project identified the molecular foundations of modern medicine with the new technology of DNA sequencing during the past decade, the IPCBP would extend and integrate this neuroscience knowledge base with the technology of gene expression via DNA/proteomic microarray research and brain imaging in development, stress, healing, rehabilitation, and the psychotherapeutic facilitation of existential wellness. We anticipate that the IPCBP will require a unique international collaboration of academic institutions, researchers, and clinical practitioners for the creation of a new neuroscience of mind-body communication, brain plasticity, memory, learning, and creative processing during optimal experiential states of art, beauty, and truth. We illustrate this emerging integration of bioinformatics with medicine with a videotape of the classical 4-stage creative process in a neuroscience approach to psychotherapy.
Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola
Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, some of the challenges of computational biology and bioinformatics education are inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers who might otherwise aspire to undertake future study in these fields. In this paper, we developed and described MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of key terms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of its users. This tool also has the capability of visualizing results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.
Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease, especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce the morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed.
Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J
Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This notwithstanding, overall solutions built using a microservices-based approach provide equal or greater levels of functionality compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds the potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework is an effective methodology for the fabrication and…
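To make the microservices idea concrete, here is a deliberately tiny, hypothetical service in the spirit the abstract describes: one narrowly scoped analysis step (here, GC content of a sequence) exposed over HTTP, easy to test, replace, or scale independently. This is an illustrative sketch, not code from the paper:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import json

def gc_content(seq):
    """Fraction of G/C bases; the single narrow concern of this service."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

class GCContentHandler(BaseHTTPRequestHandler):
    """One microservice = one limited responsibility, exposed over HTTP,
    e.g. GET /?seq=ACGT returns {"gc_content": 0.5}."""
    def do_GET(self):
        seq = parse_qs(urlparse(self.path).query).get("seq", [""])[0]
        body = json.dumps({"gc_content": gc_content(seq)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def serve(port=8000):
    # Blocking call; in practice each such service runs in its own
    # process or container behind a gateway or pipeline orchestrator.
    HTTPServer(("127.0.0.1", port), GCContentHandler).serve_forever()
```

Because the service boundary is an HTTP contract rather than a shared codebase, the implementation behind it can be rewritten or redeployed without touching the rest of the pipeline.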
Kunz, Meik; Xiao, Ke; Liang, Chunguang; Viereck, Janika; Pachel, Christina; Frantz, Stefan; Thum, Thomas; Dandekar, Thomas
MicroRNAs (miRNAs) are small ~22 nucleotide non-coding RNAs and are highly conserved among species. Moreover, miRNAs regulate gene expression of a large number of genes associated with important biological functions and signaling pathways. Recently, several miRNAs have been found to be associated with cardiovascular diseases. Thus, investigating the complex regulatory effect of miRNAs may lead to a better understanding of their functional role in the heart. To achieve this, bioinformatics approaches have to be coupled with validation and screening experiments to understand the complex interactions of miRNAs with the genome. This will boost the subsequent development of diagnostic markers and our understanding of the physiological and therapeutic role of miRNAs in cardiac remodeling. In this review, we focus on and explain different bioinformatics strategies and algorithms for the identification and analysis of miRNAs and their regulatory elements to better understand cardiac miRNA biology. Starting with the biogenesis of miRNAs, we present approaches such as LocARNA and miRBase for combining sequence and structure analysis including phylogenetic comparisons as well as detailed analysis of RNA folding patterns, functional target prediction, signaling pathway as well as functional analysis. We also show how far bioinformatics helps to tackle the unprecedented level of complexity and systemic effects by miRNA, underlining the strong therapeutic potential of miRNA and miRNA target structures in cardiovascular disease. In addition, we discuss drawbacks and limitations of bioinformatics algorithms and the necessity of experimental approaches for miRNA target identification. This article is part of a Special Issue entitled 'Non-coding RNAs'. Copyright © 2014 Elsevier Ltd. All rights reserved.
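As an illustration of the kind of target-prediction logic such bioinformatics tools build on, the sketch below scans a 3'UTR for exact reverse-complement matches to a miRNA seed (nucleotides 2-8). Real predictors of the sort reviewed use far richer models (conservation, site context, thermodynamics); this is only a toy example with made-up sequences:

```python
def revcomp(rna):
    """Reverse complement of an RNA string (A-U, G-C pairing)."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(rna))

def seed_sites(mirna, utr):
    """Return 0-based positions in `utr` that exactly match the reverse
    complement of the miRNA seed (nucleotides 2-8, i.e. a 7mer site)."""
    seed = mirna[1:8]               # seed region, 1-based positions 2..8
    site = revcomp(seed)            # sequence expected in the target UTR
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]
```

Candidate sites found this way would then be filtered by the conservation and structure criteria the review discusses.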
Ma, Shuangge; Huang, Jian
In bioinformatics studies, supervised classification with high-dimensional input variables is frequently encountered. Examples routinely arise in genomic, epigenetic and proteomic studies. Feature selection can be employed along with classifier construction to avoid over-fitting, to generate more reliable classifiers and to provide more insight into the underlying causal relationships. In this article, we provide a review of several recently developed penalized feature selection and classification…
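One common penalized approach of the kind reviewed is L1 (lasso-style) regularization, whose proximal operator sets small coefficients exactly to zero and thereby performs feature selection during classifier fitting. A minimal numpy sketch of proximal gradient descent (ISTA) for L1-penalized logistic regression; this is an illustration of the general technique, not the authors' specific methods:

```python
import numpy as np

def soft_threshold(w, t):
    """Proximal operator of the L1 penalty: shrinks coefficients toward
    zero and sets small ones exactly to zero (feature selection)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def l1_logistic(X, y, lam=0.1, lr=0.1, iters=500):
    """ISTA loop for L1-penalized logistic regression (no intercept)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))      # predicted probabilities
        grad = X.T @ (p - y) / len(y)         # logistic-loss gradient
        w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

On data where one feature is informative and another carries no signal, the uninformative coefficient stays at exactly zero, which is the "selection" behaviour the review refers to.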
This paper argues that the social sciences are fragmented in addressing the environmental challenge of increasing resource depletion. To address this problem, the paper puts forward a framework which encompasses several disciplinary approaches, and above all a long-term historical perspective and a realist sociology of science and technology which, in combination, provide a means of understanding the disruptive changes in the transformation of the environment. The paper then focuses on energy and gives an overview of the various social forces that can potentially counteract the future tensions arising from the foreseeable depletion of energy sources. It argues that only some of these countervailing forces—namely state intervention and technological innovation—provide viable potential solutions to these tensions. However, these solutions themselves face severe constraints. The paper concludes by arguing that a realistic assessment of constraints is the most useful, though limited, service that social science can contribute to our understanding of the relation between social and environmental transformation.
Brinberg, Herbert R.; Pinelli, Thomas E.; Barclay, Rebecca O.
Considerable effort has been devoted over the past 30 years to developing methods and means of assessing the value of information. Two approaches, value in exchange and value in use, dominate; however, neither approach enjoys much practical application because a validation schema for decision-making is missing. The approaches fail to measure objectively the real costs of acquiring information and the real benefits that information will yield. Moreover, these approaches collectively fail to provide economic justification to build and/or continue to support an information product or service. In addition, the impact of cyberspace adds a new dimension to the problem. A new paradigm is required to make economic sense of this revolutionary information environment. In previous work, the authors explored the various approaches to measuring the value of information and concluded that, in large measure, these methods were unworkable concepts and constructs. Instead, they proposed several axioms for valuing information. Most particularly, they concluded that the 'value of information cannot be measured in the absence of a specific task, objective, or goal.' This paper builds on those axioms and describes the circumstances under which information can be measured in objective and actionable terms. This paper also proposes a methodology for undertaking such measures and validating the results.
Schneider, M.V.; Watson, J.; Attwood, T.
As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls of providing effective training for users of bioinformatics services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first…
Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. © The Author 2015. Published by Oxford University Press.
After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area as seen from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale level implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnologies has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storage, processing and integrating physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promises in this convergent field. In this work, we will review some applications of nanobiotechnology to life sciences in generating new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.
Brown, J.E.; Borretzen, P.; Hosseini, A.; Iosjpe, M.
A review of concentration factors (CFs) for the marine environment was conducted in order to consider the relevance of existing data from the perspective of environmental protection and to identify areas of data paucity. Data have been organised in a format compatible with a reference-organism approach for selected radionuclides, and efforts have been made to identify the factors that may be of importance in the context of dosimetric and dose-effects analyses. These reference-organism categories had been previously selected by identifying organism groups that were likely to experience the highest levels of radiation exposure, owing to high uptake levels or residence in a particular habitat, for defined scenarios. Significant gaps in the CF database have been identified, notably for marine mammals and birds. Most empirical information pertains to a limited suite of radionuclides, particularly 137Cs, 210Po and 99Tc. A methodology has been developed to help bridge this information deficit, based on simple dynamic biokinetic models that mainly use parameters derived from laboratory-based studies and field observation. In some cases, allometric relationships have been employed to allow further model parameterization. Initial testing of the model, by comparing model output with empirical data sets, suggests that the models provide sensible equilibrium CFs. Furthermore, analyses of the modelling results suggest that for some radionuclides, in particular those with long effective half-lives, the time to equilibrium can be far greater than the lifetime of an organism. This clearly emphasises the limitations of applying a universal equilibrium approach. The methodology therefore has the added advantage that non-equilibrium scenarios can be considered in a more rigorous manner. Further refinements to the modelling approach might be attained by exploring the importance of various model parameters through sensitivity analyses, and by identifying those…
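Dynamic biokinetic models of the kind mentioned are often one-compartment first-order uptake/loss models: under dC/dt = k1·Cw − k2·C, the concentration factor approaches its equilibrium value k1/k2 at rate k2, so a small loss rate (long effective half-life) means equilibrium may never be reached within an organism's lifetime. A sketch under that standard assumption (rate constants are illustrative, not from the review):

```python
import math

def cf_at_time(k1, k2, t):
    """Concentration factor for one-compartment uptake/loss kinetics:
    dC/dt = k1*Cw - k2*C  =>  CF(t) = (k1/k2) * (1 - exp(-k2*t)),
    with CF defined as C/Cw and Cw held constant."""
    return (k1 / k2) * (1.0 - math.exp(-k2 * t))

def time_to_fraction(k2, fraction=0.95):
    """Time to reach the given fraction of the equilibrium CF. For small
    k2 this can exceed an organism's lifespan, which is why a universal
    equilibrium assumption can mislead."""
    return -math.log(1.0 - fraction) / k2
```

Comparing time_to_fraction(k2) with a species' lifespan gives a quick screen for radionuclide-organism pairs where the equilibrium-CF approach breaks down.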
In order to monitor, describe and understand the marine environment, many research institutions are involved in the acquisition and distribution of ocean data, both from observations and models. Scientists from these institutions are spending too much time looking for, accessing, and reformatting data: they need better tools and procedures to make the science they do more efficient. The U.S. Integrated Ocean Observing System (US-IOOS) is working on making large amounts of distributed data usable in an easy and efficient way. It is essentially a network of scientists, technicians and technologies designed to acquire, collect and disseminate observational and modelled data resulting from coastal and oceanic marine investigations to researchers, stakeholders and policy makers. In order to be successful, this effort requires standard data protocols, web services and standards-based tools. Starting from the US-IOOS approach, which is being adopted throughout much of the oceanographic and meteorological sectors, we describe here the CNR-ISMAR Venice experience in the direction of setting up a national Italian IOOS framework using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS), a middleware designed to fill the gap between data providers and data users. The TDS provides services that allow data users to find the data sets pertaining to their scientific needs, to access them, to visualize them and to use them in an easy way, without downloading files to the local workspace. In order to achieve this, it is necessary that the data providers make their data available in a standard form that the TDS understands, and with sufficient metadata to allow the data to be read and searched in a standard way. The core idea is then to utilize a Common Data Model (CDM), a unified conceptual model that describes different datatypes within each dataset. More specifically, Unidata (www.unidata.ucar.edu) has developed the CDM…
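A TDS exposes datasets through, among other services, the OPeNDAP protocol, where a subset is requested by appending a constraint expression of hyperslabs ([start:stride:stop] per dimension) to the dataset URL, so users pull only the slice they need instead of whole files. A small helper illustrating the URL form (the server URL and variable name below are hypothetical):

```python
def dap_subset_url(base_url, var, slices):
    """Build an OPeNDAP constraint-expression URL for a hyperslab subset,
    e.g. sst[0:1:10][5:1:8] with start:stride:stop per dimension."""
    constraint = var + "".join(f"[{a}:{s}:{b}]" for a, s, b in slices)
    return f"{base_url}?{constraint}"
```

A client library (e.g. one built on the CDM) would then fetch and decode the response; the point here is only that subsetting happens server-side, encoded in the URL.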
Linux container technologies, as represented by Docker, provide an alternative to the complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high-throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, which enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user-defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the ELIXIR registry to create a new description based on the BioShaDock entry metadata. This link will help users get more information on the tool, such as its EDAM operations and input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.
Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy
This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
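Localization tasks of this kind are typically scored by the Euclidean distance between the user's cursor placement and the true landmark position in the tracked coordinate frame; a trivial sketch (units assumed consistent, e.g. millimetres; not the authors' actual scoring code):

```python
import math

def localization_error(placed, target):
    """Euclidean distance between the 3D cursor placement and the true
    anatomical landmark; lower is better."""
    return math.dist(placed, target)
```

Aggregating this error over a set of landmark trials gives a simple per-user spatial-reasoning score.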
Goto, Naohisa; Prins, Pjotr; Nakao, Mitsuteru; Bonnal, Raoul; Aerts, Jan; Katayama, Toshiaki
The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it supports many widely used data formats and provides easy access to databases, external programs and public web services, including BLAST, KEGG, GenBank, MEDLINE and GO. BioRuby comes with a tutorial, documentation and an interactive environment, which can be used in the shell and in the web browser. BioRuby is free and open source software, made available under the Ruby license. BioRuby runs on all platforms that support Ruby, including Linux, Mac OS X and Windows; with JRuby, BioRuby also runs on the Java Virtual Machine. The source code is available from http://www.bioruby.org/.
Vakser, Ilya A
Background: Computational approaches to protein-protein docking typically include scoring aimed at improving the rank of the near-native structure relative to false-positive matches. Knowledge-based potentials improve modeling of protein complexes by taking advantage of the rapidly increasing amount of experimentally derived information on protein-protein association. An essential element of knowledge-based potentials is defining the reference state for an optimal description of the residue-residue (or atom-atom) pairs in the non-interacting state. Results: The study presents a new Distance- and Environment-dependent, Coarse-grained, Knowledge-based (DECK) potential for scoring protein-protein docking predictions. Training sets of protein-protein matches were generated based on bound and unbound forms of proteins taken from the DOCKGROUND resource. Each residue was represented by a pseudo-atom at the geometric center of the side chain. To capture long-range and multi-body interactions, residues in different secondary structure elements at protein-protein interfaces were considered as different residue types. Five reference states for the potentials were defined and tested. The optimal reference state was selected and the cutoff effect on the distance-dependent potentials investigated. The potentials were validated on docking decoy sets, showing better performance than existing potentials used in scoring protein-protein docking results. Conclusions: A novel residue-based statistical potential for protein-protein docking was developed and validated on docking decoy sets. The results show that the scoring function DECK can successfully identify near-native protein-protein matches and is thus useful in protein docking. In addition to the practical application of the potentials, the study provides insights into the relative utility of the reference states, the scope of the distance dependence, and the coarse-graining of…
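A generic sketch of how a knowledge-based pair potential of this family works: energies are log-odds of observed versus reference-state pair frequencies, and a docking pose is scored by summing them over interface contacts. The choice of reference counts is exactly the "reference state" question the abstract highlights. This is an illustrative simplification with toy counts, not the actual DECK formulation or data:

```python
import math

def pair_energies(observed, reference):
    """Knowledge-based pair potential: e(key) = -ln(f_obs / f_ref), where
    keys identify a (residue type, residue type, distance bin) combination
    and f_* are normalized counts. Negative energy = favorable pair."""
    n_obs, n_ref = sum(observed.values()), sum(reference.values())
    return {k: -math.log((observed[k] / n_obs) / (reference[k] / n_ref))
            for k in observed if k in reference}

def score(contacts, energies):
    """Score a docking pose by summing pair energies over its contacts;
    unseen pair types contribute zero in this toy version."""
    return sum(energies.get(c, 0.0) for c in contacts)
```

Pairs over-represented at real interfaces relative to the reference state get negative energies, so lower total scores should favor near-native poses.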
The realisation of the advantages offered by e-learning, accompanied by the use of various emerging information technologies, has resulted in a noticeable shift by academia towards e-learning. This study analysed the use, knowledge and adoption of emerging technologies by academics in an Open Distance Learning (ODL) environment at the University of South Africa (UNISA). The aim of the study was to evaluate the use, knowledge and adoption of emerging e-learning technologies by academics from selected schools. Academics in the Schools of Arts, Computing and Science were purposively selected in order to draw on the views of academics from different teaching and educational backgrounds. Questionnaires were distributed both electronically and manually. The results showed that academics in all the Schools were competent in the use of information technology tools and applications such as email, word processing, the Internet, myUnisa (UNISA's online teaching platform), and Microsoft PowerPoint and Excel. An evaluation of awareness of different emerging technological tools showed that most academics were aware of Open Access Technologies, Social Networking Sites, Blogs, Video Games and Microblogging Platforms. While the level of awareness of these technologies was high, their use by the academics was low. At least 62.3% of the academics indicated willingness to migrate completely to online teaching and also indicated the need for further training on new technologies. A comparison of the different schools showed no statistically significant difference in the use, knowledge and willingness to adopt technology amongst the academics.
Handl, Julia; Kell, Douglas B; Knowles, Joshua
This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts" that give rise to multiple objectives; these are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
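The core concept underlying all such work is Pareto dominance and the non-dominated (Pareto) front. A minimal sketch, assuming a minimisation convention for all objectives:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A multiobjective optimizer returns the whole front rather than a single solution, leaving the trade-off decision (e.g. model fit versus model complexity) to the analyst.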
Lue, Jaw-Chyng L.; Fang, Wai-Chi
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. The system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, has been devised. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed to calculate the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip were designed, fabricated and characterized.
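The abstract does not give the exact form of the sigmoid-logarithmic transfer function, but the general idea of compressing the input logarithmically before the sigmoid, so that weak (low-fluorescence) signals are resolved on a finer scale, can be sketched as follows. Both the function and its derivative (needed by EBP) are illustrative stand-ins, not the published definition:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_sigmoid(x):
    """Hypothetical sigmoid-logarithmic transfer: a sign-preserving
    logarithmic compression of the input feeds a standard sigmoid,
    expanding the dynamic range near zero where weak fluorescence
    signals live."""
    z = math.copysign(math.log1p(abs(x)), x)
    return sigmoid(z)

def log_sigmoid_grad(x):
    """Derivative of log_sigmoid, via the chain rule through both the
    sigmoid and the log compression; this is what backpropagation uses."""
    z = math.copysign(math.log1p(abs(x)), x)
    s = sigmoid(z)
    return s * (1.0 - s) / (1.0 + abs(x))
```

Because the compression steepens the response near zero relative to large inputs, gradient updates remain informative for small activations, which is consistent with the reported advantage on low-fluorescence patterns.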
The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what's going on in the biosciences. What is bioinformatics and why is all this fuss being made about it? What is this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
Mello, Luciane V; Tregilgas, Luke; Cowley, Gwen; Gupta, Anshul; Makki, Fatima; Jhutty, Anjeet; Shanmugasundram, Achchuthan
Teaching bioinformatics is a longstanding challenge for educators who need to demonstrate to students how skills developed in the classroom may be applied to real world research. This study employed an action research methodology which utilised student-staff partnership and peer-learning. It was centred on the experiences of peer-facilitators, students who had previously taken a postgraduate bioinformatics module, and had applied knowledge and skills gained from it to their own research. It aimed to demonstrate to peer-receivers, current students, how bioinformatics could be used in their own research while developing peer-facilitators' teaching and mentoring skills. This student-centred approach was well received by the peer-receivers, who claimed to have gained improved understanding of bioinformatics and its relevance to research. Equally, peer-facilitators also developed a better understanding of the subject and appreciated that the activity was a rare and invaluable opportunity to develop their teaching and mentoring skills, enhancing their employability.
Williams, Jennifer M; Mangan, Mary E; Perreault-Micale, Cynthia; Lathe, Scott; Sirohi, Neeraj; Lathe, Warren C
The amount of biological data is increasing rapidly, and will continue to increase as new rapid technologies are developed. Professionals in every area of bioscience will have data management needs that require publicly available bioinformatics resources. Not all scientists desire a formal bioinformatics education but would benefit from more informal educational sources of learning. Effective bioinformatics education formats will address a broad range of scientific needs, will be aimed at a variety of user skill levels, and will be delivered in a number of different formats to address different learning styles. Informal sources of bioinformatics education that are effective are available, and will be explored in this review.
Barrero, Roberto A; Napier, Kathryn R; Cunnington, James; Liefting, Lia; Keenan, Sandi; Frampton, Rebekah A; Szabo, Tamas; Bulman, Simon; Hunter, Adam; Ward, Lisa; Whattam, Mark; Bellgard, Matthew I
Detecting and preventing the entry of exotic viruses and viroids at the border is critical for protecting plant industry trade worldwide. Existing post-entry quarantine screening protocols rely on time-consuming biological indicators and/or molecular assays that require knowledge of the infecting viral pathogens. Plants have developed the ability to recognise and respond to viral infections through Dicer-like enzymes that cleave viral sequences into specific small RNA products. Many studies have reported the use of a broad range of small RNAs encompassing the product sizes of several Dicer enzymes involved in distinct biological pathways. Here we optimise the assembly of viral sequences by using specific small RNA subsets. We sequenced the small RNA fractions of 21 plants held at quarantine glasshouse facilities in Australia and New Zealand. Benchmarking of several de novo assembly tools showed that SPAdes with a k-mer of 19 produced the best assembly outcomes. We also found that de novo assembly using 21-25 nt small RNAs can result in chimeric assemblies of viral and plant host sequences. Such non-specific assemblies can be resolved by using 21-22 nt or 24 nt small RNA subsets. Among the 21 selected samples, we identified contigs with sequence similarity to 18 viruses and 3 viroids in 13 samples. Most of the viruses were assembled using only 21-22 nt long virus-derived siRNAs (viRNAs), except for one Citrus endogenous pararetrovirus that was more efficiently assembled using 24 nt long viRNAs. All three viroids found in this study were fully assembled using either 21-22 nt or 24 nt viRNAs. Optimised analysis workflows were customised within the Yabi web-based analytical environment. We present a fully automated viral surveillance and diagnosis web-based bioinformatics toolkit that provides a flexible, user-friendly, robust and scalable interface for the discovery and diagnosis of viral pathogens. We have implemented an automated viral surveillance and
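The key pre-processing step described above, partitioning small-RNA reads into Dicer-size subsets before handing them to the assembler, amounts to a simple length filter. A minimal sketch (subset boundaries follow the abstract; the real workflow runs inside Yabi with quality filtering and adapter trimming first):

```python
def split_by_length(reads, subsets=((21, 22), (24, 24))):
    """Partition small-RNA reads into length subsets (e.g. 21-22 nt and
    24 nt) prior to de novo assembly, reflecting the fact that Dicer
    products of different lengths belong to distinct biological pathways.
    Reads outside every subset are dropped, which avoids the chimeric
    virus/host assemblies seen with the full 21-25 nt range."""
    out = {rng: [] for rng in subsets}
    for read in reads:
        for lo, hi in subsets:
            if lo <= len(read) <= hi:
                out[(lo, hi)].append(read)
    return out
```

Each subset would then be assembled independently (e.g. with SPAdes at k=19) and the resulting contigs screened against viral reference databases.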
Wang, Xiran; Jiang, Leiyu; Tang, Haoru
GSTF12 is known as a key factor in proanthocyanin accumulation in the plant testa. Bioinformatic analysis of the nucleotide and encoded protein sequences of GSTF12 is advantageous for the study of genes related to the anthocyanin biosynthesis and accumulation pathway. We therefore chose the GSTF12 genes of 11 species, downloaded their nucleotide and protein sequences from NCBI as the research object, identified the strawberry GSTF12 gene via bioinformatic analysis, and constructed a phylogenetic tree. At the same time, we analysed the physical and chemical properties of the strawberry GSTF12 gene and its protein structure. The phylogenetic tree showed that strawberry and petunia were the closest relatives. By protein prediction, we found that the protein has one signal peptide and no obvious transmembrane regions.
The emergence of next-generation sequencing (NGS) platforms imposes increasing demands on statistical methods and bioinformatic tools for the analysis and management of the huge amounts of data generated by these technologies. Even at the early stages of their commercial availability, a large number of software tools already exists for analyzing NGS data. These tools fall into many general categories, including alignment of sequence reads to a reference, base-calling and/or polymorphism detection, de novo assembly from paired or unpaired reads, structural variant detection, and genome browsing. This manuscript aims to guide readers in the choice of the available computational tools that can be used to face the several steps of the data analysis workflow.
Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model, by analogy with information transmission theory. Experimental studies using various real-world datasets, including cancer classification problems, reveal that both of the new methods are superior or comparable to other multi-class classification methods.
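The basic ECOC decoding scheme that both approaches build on is easy to state: each class is assigned a codeword of binary labels, and a new sample is assigned to the class whose codeword is closest to the vector of binary classifier outputs. A minimal sketch using plain Hamming decoding (the reviewed methods refine this with learned weights and a probabilistic bit-error model):

```python
def ecoc_predict(code_matrix, binary_outputs):
    """Decode a multi-class prediction from binary classifiers via
    error-correcting output codes. code_matrix: one +1/-1 codeword
    (row) per class; binary_outputs: the observed +1/-1 decisions of
    the binary classifiers. Returns the index of the class whose
    codeword has the smallest Hamming distance to the outputs."""
    def hamming(row, outputs):
        return sum(1 for r, o in zip(row, outputs) if r != o)
    distances = [hamming(row, binary_outputs) for row in code_matrix]
    return min(range(len(code_matrix)), key=distances.__getitem__)
```

Because codewords can be chosen with large pairwise Hamming distance, a few binary misclassifications ("bit inversions") can be corrected, which is exactly the information-transmission analogy the second approach formalises.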
Surangi W. Punyasena
Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
ACADEMIC TRAINING LECTURE SERIES 27-28 February and 1-3 March 2006, from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics. The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, and the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
Helm-Stevens, Roxanne; Brown, Kneeland C.; Russell, Julia K.
Knowledge management has the potential to develop strategic advantage and enhance the performance of an organization in terms of productivity and business process efficiency. For this reason, organizations are contributing significant resources to knowledge management; investing in information location and implementing knowledge management…
Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.
Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…
Bioinformatics is an interdisciplinary subject, which uses computer application, statistics, mathematics and engineering for the analysis and management of biological information. It has become an important tool for basic and applied research in veterinary sciences. Bioinformatics has brought about advancements into ...
Life sciences research and development has opened up new challenges and opportunities for bioinformatics. The contribution of bioinformatics advances made possible the mapping of the entire human genome and genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...
Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…
Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.
At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…
Bioinformatics has advanced the course of research and future veterinary vaccine development because it has provided new tools for the identification of vaccine targets from the sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...
The main bottleneck in advancing genomics in present times is the lack of expertise in using bioinformatics tools and approaches for data mining in raw DNA sequences generated by modern high throughput technologies such as next generation sequencing. Although bioinformatics has been making major progress and ...
When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.
Challis, Jonathan K; Hanson, Mark L; Friesen, Ken J; Wong, Charles S
This work presents a critical assessment of the state and quality of knowledge around the aquatic photochemistry of human- and veterinary-use pharmaceuticals from laboratory experiments and field observations. A standardized scoring rubric was used to assess relevant studies within four categories: experimental design, laboratory-based direct and indirect photolysis, and field/solar photolysis. Specific metrics for each category are defined to evaluate various aspects of experimental design (e.g., higher scores are given for more appropriate characterization of light source wavelength distribution). This weight of evidence-style approach allowed for identification of knowledge strengths and gaps covering three areas: first, the general extent of photochemical data for specific pharmaceuticals and classes; second, the overall quality of existing data (i.e., strong versus weak); and finally, trends in the photochemistry research around these specific compounds, e.g. the observation of specific and consistent oversights in experimental design. In general, those drugs that were most studied also had relatively good quality data. The four pharmaceuticals studied experimentally at least ten times in the literature had average total scores (lab and field combined) of ≥29, considered decent quality; carbamazepine (13 studies; average score of 31), diclofenac (12 studies; average score of 31), sulfamethoxazole (11 studies; average score of 34), and propranolol (11 studies; average score of 29). Major oversights and errors in data reporting and/or experimental design included: lack of measurement and reporting of incident light source intensity, lack of appropriate controls, use of organic co-solvents in irradiation solutions, and failure to consider solution pH. Consequently, a number of these experimental parameters were likely a cause of inconsistent measurements of direct photolysis rate constants and quantum yields, two photochemical properties that were highly
Parache, V.; Renaud, P. [Institut de Radioprotection et de Surete Nucleaire - IRSN (France)
and propose a reproducible approach for the evaluation of realistic indicators. Furthermore, this study shows that the data obtained on a nuclear site could apply to a nearby nuclear site with close agro-climatic conditions. The influence of the types of production and of the agricultural and breeding practices on the potential contamination of foodstuffs produced locally in an accidental situation led the IRSN to develop summary sheets on the agricultural environment of the nuclear sites. These levels of contamination can be very different according to the period of the year in which the accident occurs. For example, the date of harvest determines whether these products will be almost exempt from contamination (i.e., already harvested) or will present maximal contamination. In complement, these sheets give some local contacts that would allow contextual precision to be obtained around sensitive dates. These data will also allow sampling strategies to be elaborated to monitor foodstuff activities after an accidental release. Based on this knowledge of environmental characteristics, and considering that the metrological capacity will necessarily be limited, the monitoring strategies must ensure that all foodstuffs produced in the whole monitored area are below intervention levels. Document available in abstract form only. (authors)
Miller, Maximilian; Zhu, Chengsheng; Bromberg, Yana
With the advent of modern day high-throughput technologies, the bottleneck in biological discovery has shifted from the cost of doing experiments to that of analyzing results. clubber is our automated cluster-load balancing system developed for optimizing these “big data” analyses. Its plug-and-play framework encourages re-use of existing solutions for bioinformatics problems. clubber’s goals are to reduce computation times and to facilitate use of cluster computing. The first goal is achieved by automating the balance of parallel submissions across available high performance computing (HPC) resources. Notably, the latter can be added on demand, including cloud-based resources, and/or featuring heterogeneous environments. The second goal of making HPCs user-friendly is facilitated by an interactive web interface and a RESTful API, allowing for job monitoring and result retrieval. We used clubber to speed up our pipeline for annotating molecular functionality of metagenomes. Here, we analyzed the Deepwater Horizon oil-spill study data to quantitatively show that the beach sands have not yet entirely recovered. Further, our analysis of the CAMI-challenge data revealed that microbiome taxonomic shifts do not necessarily correlate with functional shifts. These examples (21 metagenomes processed in 172 min) clearly illustrate the importance of clubber in the everyday computational biology environment. PMID:28609295
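The load-balancing idea at the heart of such a system can be illustrated with a classic greedy heuristic: always hand the next-largest job to the least-loaded resource. This is a generic longest-processing-time sketch, not clubber's actual scheduler or API, and job costs are assumed to be known estimates:

```python
import heapq

def balance_jobs(jobs, clusters):
    """Greedy balancing of independent jobs across HPC resources:
    assign the next-largest job to the currently least-loaded cluster
    (the longest-processing-time heuristic).

    jobs: {job_id: estimated_cost}; clusters: list of cluster names.
    Returns {cluster: [job_id, ...]}."""
    heap = [(0.0, name) for name in clusters]  # (current load, cluster)
    heapq.heapify(heap)
    assignment = {name: [] for name in clusters}
    for job_id, cost in sorted(jobs.items(), key=lambda kv: -kv[1]):
        load, name = heapq.heappop(heap)
        assignment[name].append(job_id)
        heapq.heappush(heap, (load + cost, name))
    return assignment
```

In a real system such as clubber, the cluster list can grow on demand (including cloud resources), and loads are observed rather than estimated, but the balancing objective is the same: minimise the makespan of the batch.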
Kirkeby, Inge Mette
Although serious efforts are being made internationally and nationally, making our physical environment accessible is a slow process. In the actual design process, architects play a major role. But what kinds of knowledge, including research-based knowledge, do practicing architects make use of when designing accessible environments? The answer to this question is crucially important, since it affects how knowledge is distributed and how accessibility can be ensured. In order to get first-hand knowledge about the design process and the sources from which they gain knowledge, 11 qualitative interviews were conducted with architects with experience of designing for accessibility. The analysis draws on two theoretical distinctions. The first is research-based knowledge versus knowledge used by architects. The second is context-independent knowledge versus context-dependent knowledge. The practitioners...
Hernández-de-Diego, Rafael; de Villiers, Etienne P; Klingström, Tomas; Gourlé, Hadrien; Conesa, Ana; Bongcam-Rudloff, Erik
Bioinformatics skills have become essential for many research areas; however, the availability of qualified researchers is usually lower than the demand and training to increase the number of able bioinformaticians is an important task for the bioinformatics community. When conducting training or hands-on tutorials, the lack of control over the analysis tools and repositories often results in undesirable situations during training, as unavailable online tools or version conflicts may delay, complicate, or even prevent the successful completion of a training event. The eBioKit is a stand-alone educational platform that hosts numerous tools and databases for bioinformatics research and allows training to take place in a controlled environment. A key advantage of the eBioKit over other existing teaching solutions is that all the required software and databases are locally installed on the system, significantly reducing the dependence on the internet. Furthermore, the architecture of the eBioKit has demonstrated itself to be an excellent balance between portability and performance, not only making the eBioKit an exceptional educational tool but also providing small research groups with a platform to incorporate bioinformatics analysis in their research. As a result, the eBioKit has formed an integral part of training and research performed by a wide variety of universities and organizations such as the Pan African Bioinformatics Network (H3ABioNet) as part of the initiative Human Heredity and Health in Africa (H3Africa), the Southern Africa Network for Biosciences (SAnBio) initiative, the Biosciences eastern and central Africa (BecA) hub, and the International Glossina Genome Initiative.
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.
Quality of Agricultural Products and Protection of the Environment: Training, Knowledge Dissemination and Certification. Synthesis Report of a Study in Five European Countries. CEDEFOP Reference Series.
Papadaki-Klavdianou, A.; Menkisoglou-Spiroudi, O.; Tsakiridou, E.
This book examines existing European environmental education and agricultural practices friendly to the environment. The focus is on studies conducted in five countries (Germany, Greece, the Netherlands, Portugal, and Spain) that aimed to define new knowledge qualifications related to environmental issues in producing alternative agricultural products…
Lee, Gyungjoo; Yang, Soo; Jang, Mi Heui; Yeom, Mijung
This study was conducted to evaluate the effectiveness of a mother/infant-toddler health program developed to enhance parenting knowledge, behavior and confidence in low-income mothers, and the home environment. A one-group pretest-posttest quasi-experimental design was used. Sixty-nine dyads of mothers and infant-toddlers (aged 0-36 months) were provided with a weekly intervention over seven sessions. Each session consisted of three parts: first, education to increase integrated knowledge related to the development of the infant/toddler, including nutrition, first aid and the home environment; second, counseling to share parenting experience among the mothers and to increase their nurturing confidence; third, playing with the infant/toddler to facilitate attachment-based parenting behavior in the mothers. Following the program, there were significant increases in parenting knowledge of nutrition and first aid. A significant improvement was found in attachment-based parenting behavior, but not in home safety practice. Nurturing confidence was not significantly increased. The program led to a more positive home environment for infant/toddler health and development. The findings provide evidence that a mother/infant-toddler health program can improve parenting knowledge, attachment-based parenting behavior and the home environment in low-income mothers. Study of the long-term effectiveness of this program is recommended for future research.
Narelle S Cox
Trial registration: PROSPERO CRD42015015976. [Cox NS, Oliveira CC, Lahham A, Holland AE (2017) Pulmonary rehabilitation referral and participation are commonly influenced by environment, knowledge, and beliefs about consequences: a systematic review using the Theoretical Domains Framework. Journal of Physiotherapy 63: 84-93]
Nyseth, Torill; Viken, Arvid
This is the accepted manuscript version. The published version is available at http://dx.doi.org/10.1017/S003224741500039X This article engages with knowledge management in governing vulnerable polar areas and tourism. Since the 1870s, Svalbard has been a cruise tourism destination. Owing to there being less ice during the summer period, the number of tourists visiting the remote northeast corner of the archipelago has increased significantly, and with it the potential negative impact on this vulnerable natural...
de Groot Joost CW
Background: Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results: We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: (1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; (2) the Scheduler, which forms the functional core of the system, tracks what data enters the system and determines what jobs must be scheduled for execution; and (3) the Executor, which searches for scheduled jobs and executes them on a compute cluster. Conclusion: The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
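The Scheduler/Executor split described above can be sketched in miniature as a single-process Python loop; all class and method names below are illustrative stand-ins, not Cyrille2's actual API:

```python
# Minimal in-process sketch of a Cyrille2-style pipeline core: a Scheduler
# that turns incoming data into a chain of jobs, and an Executor that runs
# the scheduled jobs. All names here are illustrative, not from Cyrille2.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Job:
    name: str
    task: Callable[[str], str]
    data: str
    result: Optional[str] = None

class Scheduler:
    """Tracks what data enters the system and decides which jobs to schedule."""
    def __init__(self, steps: List[Tuple[str, Callable[[str], str]]]):
        self.steps = steps
        self.queue: List[Job] = []

    def ingest(self, data: str) -> None:
        # Chain the configured steps into a sequence of dependent jobs.
        for name, task in self.steps:
            self.queue.append(Job(name, task, data))

class Executor:
    """Searches for scheduled jobs and executes them (here: in-process)."""
    def run(self, scheduler: Scheduler) -> List[str]:
        results, current = [], None
        for job in scheduler.queue:
            # Each job consumes the previous job's output, forming a pipeline.
            job.result = job.task(current if current is not None else job.data)
            current = job.result
            results.append(job.result)
        return results

steps = [("uppercase", str.upper), ("reverse", lambda s: s[::-1])]
sched = Scheduler(steps)
sched.ingest("acgt")
print(Executor().run(sched))  # prints "['ACGT', 'TGCA']"
```

A real system would persist the queue and dispatch jobs to a compute cluster rather than running them synchronously in one process.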
Karezin, V.; Bronnikova, I.; Terentyeva, T.
Full text: Rosatom, the flagship of the Russian nuclear industry, treats succession planning as one of its crucial strategic HR objectives. It therefore builds different approaches to ensure the attraction and development of the best and most promising specialists, including recent and future graduates. The Tournament of Young Professionals (TEMP) is the corner-stone initiative for selecting the best young professionals in a crowdsourcing environment where participants raise their level of professional knowledge, learn to better understand the attitudes of work in the nuclear power industry and compete on tasks of real production value, while stakeholders build a culture of knowledge sharing. The entire scheme rests upon knowledge transfer from nuclear industry experts to the potential hiring pool, applied knowledge accumulation, deep industry involvement and modern Web 2.0 technology capabilities. (author)
Ranganathan, Shoba; Gribskov, Michael; Tan, Tin Wee
We provide a 2007 update on bioinformatics research in the Asia-Pacific from the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, set up in 1998. Since 2002, APBioNet has organized the International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2007 Conference was organized as the 6th annual conference of the Asia-Pacific Bioinformatics Network, on Aug. 27-30, 2007 in Hong Kong, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea) and New Delhi (India). Besides the scientific meeting in Hong Kong, the satellite events organized were a pre-conference training workshop in Hanoi, Vietnam and a post-conference workshop in Nansha, China. This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. We have organized the papers into thematic areas, highlighting the growing contribution of research excellence from this region to global bioinformatics endeavours.
Brazas, Michelle D; Ouellette, B F Francis
Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.
The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the time needed to identify and deliver a commercial health ingredient that reduces disease symptoms can be anywhere from 5 to 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for characteristics of certain food therapeutic ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides and thereby achieve a higher success rate.
Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R
This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates the installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies) in alternative computational environments. The software operates through a user-friendly XFCE4 graphical interface that allows software management and installation by users not fully familiar with the Linux command line, and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license. © The Author (2017). Published by Oxford University Press. All rights reserved.
Kaiser, David Brian; Köhler, Thomas; Weith, Thomas
This article aims to sketch a conceptual design for an information and knowledge management system in sustainability research projects. The suitable frameworks to implement knowledge transfer models constitute social communities, because the mutual exchange and learning processes among all stakeholders promote key sustainable developments through…
Kostousov, Sergei; Kudryavtsev, Dmitry
Problem solving is a critical competency for modern world and also an effective way of learning. Education should not only transfer domain-specific knowledge to students, but also prepare them to solve real-life problems--to apply knowledge from one or several domains within specific situation. Problem solving as teaching tool is known for a long…
Horbach, D Y [International A. Sakharov environmental univ., Minsk (Belarus); Usanov, S A [Inst. of bioorganic chemistry, National academy of sciences of Belarus, Minsk (Belarus)
One of the mechanisms of transduction of external signals (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is therefore not surprising that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR and the appearance of receptor mutants with a changed ability for protein-protein interactions or increased tyrosine kinase activity have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in the design and selection of drugs that can alter receptor structure or competitively bind to receptors and thus display antagonistic characteristics. (authors)
Basyuni, M.; Wasilah, M.; Sumardi
This study describes bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank and to predict their structure, composition, subcellular localization, similarity and phylogeny. The physical and chemical properties of the eight mangrove actin genes varied among the genes. The percentages of secondary structure elements followed the order alpha helix > random coil > extended chain structure for BgActl, KcActl, RsActl and A. corniculatum Act. In contrast, for the remaining actin genes the order was random coil > extended chain structure > alpha helix. Secondary structure prediction was therefore performed to obtain the necessary structural information. The predicted values for chloroplast transit peptides, signal peptides and mitochondrial targeting were very small, indicating that the mangrove actin genes carry no chloroplast or mitochondrial transit peptide and no signal peptide of the secretion pathway. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first group contains B. gymnorrhiza BgAct and R. stylosa RsActl; the second cluster, which consists of 5 actin genes, is the largest group; and the last branch consists of one gene, B. sexagula Act. The present study therefore supports previous results showing that plant actin genes form distinct clusters in the tree.
Gentleman, R.C.; Carey, V.J.; Bates, D.M.
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Fu, Zhiyan; Lin, Jing
The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.
Cheung David W
with these web services using a web services choreography language (BPEL4WS). Conclusion: While it is relatively straightforward to implement and publish web services, the use of web services choreography engines is still in its infancy. However, industry-wide support for and promotion of web services standards is quickly increasing the chance of success in using web services to unify heterogeneous bioinformatics applications. Due to the immaturity of currently available web services engines, it is still most practical to implement a simple, ad-hoc XML-based workflow by hard-coding the workflow as a Java application. For advanced web service users, the Collaxa BPEL engine provides a configuration and management environment that can fully handle XML-based workflow.
Vamathevan, J; Birney, E
Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era. Georg Thieme Verlag KG Stuttgart.
Izak, Dariusz; Klim, Joanna; Kaczanowski, Szymon
Malaria remains one of the highest mortality infectious diseases. Malaria is caused by parasites from the genus Plasmodium. Most deaths are caused by infections involving Plasmodium falciparum, which has a complex life cycle. Malaria parasites are extremely well adapted for interactions with their host and their host's immune system and are able to suppress the human immune system, erase immunological memory and rapidly alter exposed antigens. Owing to this rapid evolution, parasites develop drug resistance and express novel forms of antigenic proteins that are not recognized by the host immune system. There is an emerging need for novel interventions, including novel drugs and vaccines. Designing novel therapies requires knowledge about host-parasite interactions, which is still limited. However, significant progress has recently been achieved in this field through the application of bioinformatics analysis of parasite genome sequences. In this review, we describe the main achievements in 'malarial' bioinformatics and provide examples of successful applications of protein sequence analysis. These examples include the prediction of protein functions based on homology and the prediction of protein surface localization via domain and motif analysis. Additionally, we describe PlasmoDB, a database that stores accumulated experimental data. This tool allows data mining of the stored information and will play an important role in the development of malaria science. Finally, we illustrate the application of bioinformatics in the development of population genetics research on malaria parasites, an approach referred to as reverse ecology.
Joanna R. Klein
Bioinformatics, the use of computer resources to understand biological information, is an important tool in research and can be easily integrated into the curriculum of undergraduate courses. Such an example is provided in this series of four activities that introduces students to the field of bioinformatics as they design PCR-based tests for pathogenic E. coli strains. A variety of computer tools are used, including BLAST searches at NCBI, bacterial genome searches at the Integrated Microbial Genomes (IMG) database, protein analysis at Pfam and literature research at PubMed. In the process, students also learn about virulence factors, enzyme function and horizontal gene transfer. Some or all of the four activities can be incorporated into microbiology or general biology courses taken by students at a variety of levels, ranging from high school through college. The activities build on one another as they teach and reinforce knowledge and skills, promote critical thinking, and provide for student collaboration and presentation. The computer-based activities can be done either in class or outside of class, and thus are appropriate for inclusion in online or blended learning formats. Assessment data showed that students learned general microbiology concepts related to pathogenesis and enzyme function, gained skills in using tools of bioinformatics and molecular biology, and successfully developed and tested a scientific hypothesis.
Gallagher, Jerry P
... between private security and the KCPD. To empower this resource as a terrorism prevention force multiplier the development of a web based virtual knowledge sharing initiative was explored in this study as a solution to provide "one stop...
Revote, Jerico; Watson-Haigh, Nathan S; Quenette, Steve; Bethwaite, Blair; McGrath, Annette; Shang, Catherine A
The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. © The Author 2016. Published by Oxford University Press.
Whyte, Barry James
The National Science Foundation has awarded the Virginia Bioinformatics Institute at Virginia Tech $918,000 to expand its education and outreach program in Cyberinfrastructure - Training, Education, Advancement and Mentoring, commonly known as the CI-TEAM.
Bonny, Talal; Salama, Khaled N.; Zidan, Mohammed A.
Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we
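As a reminder of what such an algorithm computes, here is a minimal Smith-Waterman local alignment scorer in Python; the scoring scheme (match +2, mismatch -1, gap -1) is an illustrative choice, not one prescribed by this work:

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    """Return the best local alignment score between sequences a and b
    using the Smith-Waterman dynamic-programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: cell scores never drop below zero.
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACACACTA", "AGCACACA"))  # prints "12"
```

The nested loops make the cost quadratic in the sequence lengths, which is precisely why hardware-accelerated implementations of this kind of algorithm matter for large datasets.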
Michael R Clay
Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to incorporate, institutional support, educational resources, and adequate faculty expertise.
Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. Based on the relative conservation of homologous genes, a bioinformatics strategy was applied to clone Fusarium ...
Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik; Brazas, Michelle D; Brooksbank, Cath; Budd, Aidan; De Las Rivas, Javier; Dreyer, Jacqueline; Fernandes, Pedro L; van Gelder, Celia; Jacob, Joachim; Jimenez, Rafael C; Loveland, Jane; Moran, Federico; Mulder, Nicola; Nyrönen, Tommi; Rother, Kristian; Schneider, Maria Victoria; Attwood, Teresa K
concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource
Diaz Acosta, B.
The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.
Bioinformatics tools for development of fast and cost effective simple sequence repeat ... comparative mapping and exploration of functional genetic diversity in the ... Already, a number of computer programs have been implemented that aim at ...
Guilherme Francisco Waterloo Radomsky
This article approaches biodiversity and traditional knowledge, with the notion of risk as its background. The data presented come from an ethnographic study carried out among a network of ecological farmers, Ecovida, in Santa Catarina, southern Brazil. Ecovida is an agro-ecological network of farm producers, consumers and intermediaries. The paper aims to show that in the global context of the advent of the intellectual property regime, especially the provisions on cultivars (plant variety and seed breeding), biodiversity and traditional farming knowledge, as well as associated modes of plant breeding, suffer a double "erosion": a decrease in the availability of crop varieties, and a uniformization and depletion of local knowledge. The potential standardization of seeds and knowledge introduces new risks to both rural production and social sustainability. Our argument is that all these social actors who compose the so-called ecological network, in their activities seeking to carry on the multiplication and variability of seeds and to promote the diversity of knowledge, are also creating collective strategies of social resistance vis-à-vis modes of control over nature and knowledge. As a political outcome of these collective efforts, the network of participatory certification works to reveal the risks of homogenization and corporate control over crop production.
The Skate Genome Project, a pilot project of the North East Cyberinfrastructure Consortium (NECC), aims to produce a draft genome sequence of Leucoraja erinacea, the Little Skate. The pilot project was also designed to develop expertise in large-scale collaborations across the NECC region. An overview of the bioinformatics and infrastructure challenges faced during the first year of the project will be presented. Results to date and lessons learned from the perspective of a bioinformatics core will be highlighted.
Vand, Kasra; Wahlestedt, Thor; Khomtchouk, Kelly; Sayed, Mohammed; Wahlestedt, Claes; Khomtchouk, Bohdan
We propose a search engine and file retrieval system for all bioinformatics databases worldwide. PubData searches biomedical data in a user-friendly fashion similar to how PubMed searches biomedical literature. PubData is built on novel network programming, natural language processing, and artificial intelligence algorithms that can patch into the file transfer protocol servers of any user-specified bioinformatics database, query its contents, retrieve files for download, and adapt to the use...
Background: Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results: We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion: This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
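The metamorphic-relation idea can be illustrated with a toy example unrelated to GNLab or SeqMap: for a GC-content function, reverse-complementing the input must leave the output unchanged, so pairs of runs can be checked without knowing the correct answer for either input:

```python
# Illustrative metamorphic test: we may not know the "correct" GC content of
# an arbitrary sequence, but a metamorphic relation (MR) lets us check a pair
# of outputs: reverse-complementing a DNA sequence must preserve GC content.
import random

def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (the program under test)."""
    return sum(base in "GC" for base in seq) / len(seq)

def reverse_complement(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def mr_holds(seq: str) -> bool:
    # MR: gc_content(seq) == gc_content(reverse_complement(seq))
    return abs(gc_content(seq) - gc_content(reverse_complement(seq))) < 1e-9

# Apply the relation across many generated inputs instead of one oracle check.
random.seed(0)
cases = ["".join(random.choice("ACGT") for _ in range(30)) for _ in range(100)]
print(all(mr_holds(s) for s in cases))  # prints "True"
```

A fault injected into either function (e.g. miscounting G bases) would typically break the relation on some generated input, which is how MT detects bugs without per-case expected outputs.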
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
Bioinformaticians face a range of difficulties in getting locally installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option for achieving this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that displays the registered applications and clients. Applications registered with Bioinformatics open web services can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run applications with high processing demands directly from their machines.
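The front-end/back-end division of labor can be sketched as a single-process Python toy; the class and method names are hypothetical stand-ins, and BOWS itself exposes these operations as web services rather than local method calls:

```python
# Minimal in-process sketch of a BOWS-style job flow: a front-end where
# clients submit jobs and read results, and a back-end worker that polls for
# new jobs, runs the registered tool, and posts results back.
# All names are illustrative, not BOWS's actual interface.
import queue

class Bows:
    def __init__(self):
        self.jobs = queue.Queue()  # front-end: submitted jobs
        self.results = {}          # front-end: results keyed by job id
        self.tools = {}            # registered applications
        self._next_id = 0

    def register(self, name, fn):
        self.tools[name] = fn

    # Front-end operations: called by client programs.
    def submit(self, tool, params):
        self._next_id += 1
        self.jobs.put((self._next_id, tool, params))
        return self._next_id

    def read_result(self, job_id):
        return self.results.get(job_id)  # None until the back-end finishes

    # Back-end operation: called by the HPC-side worker loop.
    def poll_and_run(self):
        job_id, tool, params = self.jobs.get_nowait()
        self.results[job_id] = self.tools[tool](params)

bows = Bows()
bows.register("revcomp",
              lambda s: s.translate(str.maketrans("ACGT", "TGCA"))[::-1])
jid = bows.submit("revcomp", "AACGT")
bows.poll_and_run()  # in BOWS, the worker on the cluster does this remotely
print(bows.read_result(jid))  # prints "ACGTT"
```

The key design point survives the simplification: clients never talk to the cluster directly, only to the front-end, while workers only poll the back-end, so the two sides stay decoupled.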
Ardan, Andam S.
The purposes of this study were (1) to describe the biology learning such as lesson plans, teaching materials, media and worksheets for the tenth grade of High School on the topic of Biodiversity and Basic Classification, Ecosystems and Environment Issues based on local wisdom of Timorese; (2) to analyze the improvement of the environmental…
Knowledge for a sustainable economy. Knowledge questions around the Dutch Memorandum on Environment and Economy ('Nota Milieu en Economie'); Kennis voor een duurzame economie. Kennisvragen rond de Nota Milieu en Economie
Dieleman, J.P.C.; Hafkamp, W.A. [Erasmus Studiecentrum voor Milieukunde, Erasmus Universiteit Rotterdam, Rotterdam (Netherlands)
On June 18, 1997, the Dutch government presented the Memorandum Environment and Economy, with the aim of contributing to the integration of environment and economy and of stimulating the realization of a sustainable economy. Next to a vast overview of actions, ideas, perspectives, starting points, challenges and dilemmas to take into account when forming a sustainable economy, the Memorandum indicates that there is a need for research and knowledge to compile relevant data and insights to support decision-making processes. The aim of this report is to develop a framework in which knowledge questions can be generated. The questions that fall outside the framework of the Memorandum concern needs, values and images, and are formulated in four groups: (1) what is the role of materialism and stress in processes of conventional economic growth?; (2) what is the importance of reduction of consumption ('consuminderen') and slowing down ('onthaasting', or de-hasting) in realizing a process of sustainable economic development?; (3) which images form the basis of the present process of economic development, where do they come from, and how do they change over time?; and (4) which images of progress give direction to a sustainable economic development, and how do they come about? The questions that follow the Memorandum concern decoupling (of environment and economy), sustainable consumption, the knowledge economy, institutions, and a process of change. Central to the framework of knowledge questions are questions related to perspectives and actions, as formulated in the Memorandum for different sectors of Dutch society: industry and services; agriculture and rural areas; and traffic, transport and infrastructure.
Widyasari; Jony Oktavian Haryanto
This research studies the factors that can affect product perception and consumer intention to buy organic products. The study is necessary in that it explores at least some of these factors. The results indicated a positive influence of health consciousness on environmental attitude, and of consumers' organic product knowledge on organic product perception, environm...
Hanold, Gregg T.; Hanold, David T.
This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament 2004 (UT2004) game engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a synthetic agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to more accurately mimic the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
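As a baseline for comparison, the A-star search mentioned above can be sketched on a toy occupancy grid. The grid, unit move costs and Manhattan heuristic below are illustrative and unrelated to the UT2004 terrain model.

```python
# Generic A* on a 4-connected grid (the baseline algorithm the paper
# compares its Route Generation Algorithm against). grid[r][c] == 1 means
# the cell is blocked. Returns the cell path, or None if no route exists.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic (admissible for unit moves)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start, None)]  # (f, g, node, parent)
    came, seen = {}, set()
    while open_heap:
        _, g, node, parent = heapq.heappop(open_heap)
        if node in seen:
            continue
        seen.add(node)
        came[node] = parent
        if node == goal:                      # reconstruct path via parents
            path = []
            while node:
                path.append(node)
                node = came[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not grid[nr][nc] and (nr, nc) not in seen:
                heapq.heappush(open_heap,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), node))

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # the single route around the wall
```

The paper's contribution is precisely what this baseline lacks: continuous replanning from partial terrain knowledge rather than search over a fully known grid.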
Liang, Li; Sharp, Alice
E-waste is the fastest growing waste in the solid waste stream in the urban environment. It has become a widely recognised social and environmental problem; therefore, proper management is vital to protecting the fragile environment from its improper disposal. Questionnaire surveys were conducted to determine the knowledge of environmental impacts of e-waste disposal, as it relates to mobile phones, among different gender and age groups in China, Laos, and Thailand. The results revealed that gender was positively correlated with knowledge of the status of environmental conditions (P104; r = 0.077, n = 1994) and negatively correlated with another measure of environmental conditions (P105; r = -0.067, n = 2037), while age was positively correlated with respondents' concern over environmental conditions (P103; r = 0.052, n = 2077); female respondents expressed greater concern over environmental conditions than male respondents in the three countries. Knowledge gaps were detected among respondents aged ⩽17 in the three countries, and from ages 18-22 to 36-45 or older in Thailand and China, regarding their knowledge of the existing e-waste-related laws. Thus, an effort to bridge these gaps by initiating proper educational programmes in these two countries is necessary. © The Author(s) 2016.
Johannes, Christine; Fendler, Jan; Seidel, Tina
Despite the complexity of teaching, learning to teach in universities is often "learning by doing". To provide novice university teachers with pedagogic teaching knowledge and to help them develop specific teaching objectives, we created a structured, video-based, one-year training program. In focusing on the core features of…
Conclusion: Although knowledge management is a new area in our country, it is important to apply new and beneficial patterns in various aspects of activities such as HSE management. Based on the obtained results, it can be concluded that the introduced questionnaire is a valid and reliable tool for use in workplace processes.
Hendricson, William D; Rugh, John D; Hatch, John P; Stark, Debra L; Deahl, Thomas; Wallmann, Elizabeth R
This article reports the validation of an assessment instrument designed to measure the outcomes of training in evidence-based practice (EBP) in the context of dentistry. Four EBP dimensions are measured by this instrument: 1) understanding of EBP concepts, 2) attitudes about EBP, 3) evidence-accessing methods, and 4) confidence in critical appraisal. The instrument-the Knowledge, Attitudes, Access, and Confidence Evaluation (KACE)-has four scales, with a total of thirty-five items: EBP knowledge (ten items), EBP attitudes (ten), accessing evidence (nine), and confidence (six). Four elements of validity were assessed: consistency of items within the KACE scales (extent to which items within a scale measure the same dimension), discrimination (capacity to detect differences between individuals with different training or experience), responsiveness (capacity to detect the effects of education on trainees), and test-retest reliability. Internal consistency of scales was assessed by analyzing responses of second-year dental students, dental residents, and dental faculty members using Cronbach coefficient alpha, a statistical measure of reliability. Discriminative validity was assessed by comparing KACE scores for the three groups. Responsiveness was assessed by comparing pre- and post-training responses for dental students and residents. To measure test-retest reliability, the full KACE was completed twice by a class of freshman dental students seventeen days apart, and the knowledge scale was completed twice by sixteen faculty members fourteen days apart. Item-to-scale consistency ranged from 0.21 to 0.78 for knowledge, 0.57 to 0.83 for attitude, 0.70 to 0.84 for accessing evidence, and 0.87 to 0.94 for confidence. For discrimination, ANOVA and post hoc testing by the Tukey-Kramer method revealed significant score differences among students, residents, and faculty members consistent with education and experience levels. For responsiveness to training, dental students
Husebø, Anne Marie Lunde; Storm, Marianne; Våga, Bodil Bø; Rosenberg, Adriana; Akerjordet, Kristin
To give an overview of empirical studies investigating nursing homes as a learning environment during nursing students' clinical practice. A supportive clinical learning environment is crucial to students' learning and for their development into reflective and capable practitioners. Nursing students' experience with clinical practice can be decisive in future workplace choices. A competent workforce is needed for the future care of older people. Opportunities for maximum learning among nursing students during clinical practice studies in nursing homes should therefore be explored. Mixed-method systematic review using PRISMA guidelines, on learning environments in nursing homes, published in English between 2005-2015. Search of CINAHL with Full Text, Academic Search Premier, MEDLINE and SocINDEX with Full Text, in combination with journal hand searches. Three hundred and thirty-six titles were identified. Twenty studies met the review inclusion criteria. Assessment of methodological quality was based on the Mixed Methods Appraisal Tool. Data were extracted and synthesised using a data analysis method for integrative reviews. Twenty articles were included. The majority of the studies showed moderately high methodological quality. Four main themes emerged from data synthesis: "Student characteristic and earlier experience"; "Nursing home ward environment"; "Quality of mentoring relationship and learning methods"; and "Students' achieved nursing competencies." Nursing home learning environments may be optimised by a well-prepared academic-clinical partnership, supervision by encouraging mentors and high-quality nursing care of older people. Positive learning experiences may increase students' professional development through achievement of basic nursing skills and competencies and motivate them to choose the nursing home as their future workplace. An optimal learning environment can be ensured by thorough preplacement preparations in academia and in nursing home wards
Assessments of knowledge and perceptions about influenza were developed for high school students, and used to determine how knowledge, perceptions, and demographic variables relate to students taking precautions and their odds of getting sick. Assessments were piloted with 205 students and validated using the Rasch model. Data were then collected on 410 students from six high schools. Scores were calculated using the 2-parameter logistic model and clustered using the k-means algorithm. Kendall-tau correlations were evaluated at the alpha = 0.05 level, multinomial logistic regression was used to identify the best predictors and to test for interactions, and neural networks were used to test how well precautions and illness can be predicted using the significant correlates. Precautions and illness had more than one statistically significant correlate with small to moderate effect sizes. Knowledge was positively correlated to compliance with vaccination, hand washing frequency, and respiratory etiquette, and negatively correlated with hand sanitizer use. Perceived risk was positively correlated to compliance with flu vaccination; perceived complications to personal distancing and staying home when sick. Perceived risk and complications increased with reported illness severity. Perceived barriers decreased compliance with vaccination, hand washing, and respiratory etiquette. Factors such as gender, ethnicity, and school, had effects on more than one precaution. Hand washing quality and frequency could be predicted moderately well. Other predictions had small-to-negligible associations with actual values. Implications for future uses of the instruments and development of interventions regarding influenza in high schools are discussed.
Politis, John; Politis, Denis
Online learning is becoming more attractive to prospective students because it offers them greater accessibility, convenience and flexibility to study at a reduced cost. While these benefits may attract prospective learners to embark on an online learning environment, there remains little empirical evidence relating the skills and traits of…
Adams, Nan B.; DeVaney, Thomas A.; Sawyer, Susan G.
The design of virtual learning environments for post-secondary instruction is rapidly increasing among public and private universities. While the quantity of online courses over the past 10 years has exponentially increased, the quality of these courses has not. As universities increase their online teaching activities, real concern about the best…
Mello, Luciane V.; Tregilgas, Luke; Cowley, Gwen; Gupta, Anshul; Makki, Fatima; Jhutty, Anjeet; Shanmugasundram, Achchuthan
Teaching bioinformatics is a longstanding challenge for educators who need to demonstrate to students how skills developed in the classroom may be applied to real world research. This study employed an action research methodology which utilised student–staff partnership and peer-learning. It was centred on the experiences of peer-facilitators, students who had previously taken a postgraduate bioinformatics module, and had applied knowledge and skills gained from it to their own research. It aimed to demonstrate to peer-receivers, current students, how bioinformatics could be used in their own research while developing peer-facilitators’ teaching and mentoring skills. This student-centred approach was well received by the peer-receivers, who claimed to have gained improved understanding of bioinformatics and its relevance to research. Equally, peer-facilitators also developed a better understanding of the subject and appreciated that the activity was a rare and invaluable opportunity to develop their teaching and mentoring skills, enhancing their employability.
Stajich, Jason E; Lapp, Hilmar
This review summarizes important work in open-source bioinformatics software that has occurred over the past couple of years. The survey is intended to illustrate how programs and toolkits whose source code has been developed or released under an Open Source license have changed informatics-heavy areas of life science research. Rather than creating a comprehensive list of all tools developed over the last 2-3 years, we use a few selected projects encompassing toolkit libraries, analysis tools, data analysis environments and interoperability standards to show how freely available and modifiable open-source software can serve as the foundation for building important applications, analysis workflows and resources.
Khandelwal, Siddhartha; Wickstrom, Nicholas
Detecting gait events is the key to many gait analysis applications that would benefit from continuous monitoring or long-term analysis. Most gait event detection algorithms using wearable sensors that offer a potential for use in daily living have been developed from data collected in controlled indoor experiments. However, for real-world applications, it is essential that the analysis is carried out in humans' natural environment, which involves different gait speeds, changing walking terrains, varying surface inclinations and regular turns, among other factors. Existing domain knowledge, in the form of principles or underlying fundamental gait relationships, can be utilized to drive and support the data analysis in order to develop robust algorithms that can tackle real-world challenges in gait analysis. This paper presents a novel approach that exhibits how domain knowledge about human gait can be incorporated into time-frequency analysis to detect gait events from long-term accelerometer signals. The accuracy and robustness of the proposed algorithm are validated by experiments done in indoor and outdoor environments with approximately 93 600 gait events in total. The proposed algorithm exhibits consistently high performance scores across all datasets in both indoor and outdoor environments.
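As a rough illustration of gating signal analysis with gait domain knowledge, the sketch below finds candidate peaks in a synthetic acceleration trace and keeps only those consistent with a plausible human step interval. The signal, threshold and interval band are invented for illustration; the paper's actual time-frequency method is more sophisticated.

```python
# Domain-knowledge gating sketch: detect step-like peaks in a synthetic
# "vertical acceleration" signal (a 2 Hz gait component plus high-frequency
# noise), then keep only peaks whose spacing matches a plausible human step
# interval. All constants here are illustrative assumptions.
import math

fs = 100                                  # sampling rate, Hz
t = [i / fs for i in range(fs * 5)]       # 5 s of signal
sig = [math.sin(2 * math.pi * 2.0 * x)    # 2 Hz gait component
       + 0.3 * math.sin(2 * math.pi * 9.0 * x)  # movement artefact
       for x in t]

# candidate events: local maxima above an amplitude threshold
peaks = [i for i in range(1, len(sig) - 1)
         if sig[i] > sig[i - 1] and sig[i] > sig[i + 1] and sig[i] > 0.5]

# domain-knowledge gate: successive steps must be 0.4-0.7 s apart
steps = [peaks[0]]
for p in peaks[1:]:
    if 0.4 * fs <= p - steps[-1] <= 0.7 * fs:
        steps.append(p)

print(len(steps))  # roughly ten steps expected for 5 s at ~2 steps/s
```

The gate discards artefact-induced double peaks that naive thresholding would count as extra events, which is the general point the paper makes about embedding gait principles in the analysis.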
This study sought to compare a data-rich learning (DRL) environment that utilized online data as a tool for teaching about renewable energy technologies (RET) to a lecture-based learning environment to determine the impact of the learning environment on students' knowledge of Science, Technology, Engineering, and Math (STEM) concepts related…
J. Köster (Johannes); S. Rahmann (Sven)
Snakemake is a workflow engine that provides a readable Python-based workflow definition language and a powerful execution environment that scales from single-core workstations to compute clusters without modifying the workflow. It is the first system to support the use of automatically
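A minimal, hypothetical Snakefile illustrates the readable Python-based rule syntax described above; the file names and the rule are invented for illustration. The same definition runs unchanged on a workstation or a cluster (for example via `snakemake --cores 4`).

```
# Hypothetical two-rule workflow: count lines in each input sample.
SAMPLES = ["a", "b"]

rule all:
    input: expand("results/{sample}.count", sample=SAMPLES)

rule count_lines:
    input: "data/{sample}.txt"
    output: "results/{sample}.count"
    shell: "wc -l < {input} > {output}"
```

Snakemake infers the dependency graph from the wildcard patterns and runs independent jobs in parallel, which is what lets the same file scale across execution environments.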
Kwo, Elizabeth; Christiani, David
The interplay between genetic susceptibilities and environmental exposures in the pathogenesis of a variety of diseases is an area of increased scientific, epidemiologic, and social interest. Given the variation in methodologies used in the field, this review aims to create a framework to help understand occupational exposures as they currently exist and provide a foundation for future inquiries into the biological mechanisms of gene-environment interactions. Understanding of this complex interplay will be important in the context of occupational health, given the public health concerns surrounding a variety of occupational exposures. Studies found evidence suggesting that genetics influences the progression of disease after beryllium exposure through genetically encoded major histocompatibility complex, class II, DP alpha 2 (HLA-DP2)-peptide complexes as they relate to T-helper cells. This was characterized at the molecular level by the accumulation of Be-responsive CD4 T cells in the lung, which resulted in posttranslational change in the HLA-DPB1 complex. These studies provide important evidence of gene-environment association, and many provide insights into specific pathogenic mechanisms. The following includes a review of the literature regarding gene-environment associations with a focus on pulmonary diseases as they relate to the workplace.
van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly. © The Author 2017. Published by Oxford University Press.
Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.
Yip, Y L
To summarize current excellent research in the field of bioinformatics. Synopsis of the articles selected for the IMIA Yearbook 2009. The selection process for this yearbook's section on bioinformatics resulted in six excellent articles highlighting several important trends. First, it can be noted that Semantic Web technology continues to play an important role in heterogeneous data integration. Novel applications also put more emphasis on its ability to make logical inferences, leading to new insights and discoveries. Second, translational research, due to its complex nature, increasingly relies on collective intelligence made available through the adoption of community-defined protocols or software architectures for secure data annotation, sharing and analysis. Advances in systems biology, bio-ontologies and text-mining can also be noted. Current biomedical research is gradually evolving towards an environment characterized by intensive collaboration and more sophisticated knowledge-processing activities. Enabling technologies, whether Semantic Web or other solutions, are expected to play an increasingly important role in generating new knowledge in the foreseeable future.
Albertsen, Karen; Rugulies, Reiner; Garde, Anne Helene; Burr, Hermann
Interpersonal relations at work as well as individual factors seem to play prominent roles in the modern labour market, and arguably also for the change in stress symptoms. The aim was to examine whether exposures in the psychosocial work environment predicted symptoms of cognitive stress in a sample of Danish knowledge workers (i.e. employees working with sign, communication or exchange of knowledge) and whether performance-based self-esteem had a main effect, over and above the work environmental factors. 349 knowledge workers, selected from a national, representative cohort study, were followed up with two data collections, 12 months apart. We used data on psychosocial work environment factors and cognitive stress symptoms measured with the Copenhagen Psychosocial Questionnaire (COPSOQ), and a measurement of performance-based self-esteem. Effects on cognitive stress symptoms were analyzed with a GLM procedure with and without adjustment for baseline level. Measures at baseline of quantitative demands, role conflicts, lack of role clarity, recognition, predictability, influence and social support from management were positively associated with cognitive stress symptoms 12 months later. After adjustment for baseline level of cognitive stress symptoms, follow-up level was only predicted by lack of predictability. Performance-based self-esteem was prospectively associated with cognitive stress symptoms and had an independent effect above the psychosocial work environment factors on the level of and changes in cognitive stress symptoms. The results suggest that both work environmental and individual characteristics should be taken into account in order to capture sources of stress in modern working life.
Campbell, Chad E.; Nehm, Ross H.
The growing importance of genomics and bioinformatics methods and paradigms in biology has been accompanied by an explosion of new curricula and pedagogies. An important question to ask about these educational innovations is whether they are having a meaningful impact on students’ knowledge, attitudes, or skills. Although assessments are necessary tools for answering this question, their outputs are dependent on their quality. Our study 1) reviews the central importance of reliability and construct validity evidence in the development and evaluation of science assessments and 2) examines the extent to which published assessments in genomics and bioinformatics education (GBE) have been developed using such evidence. We identified 95 GBE articles (out of 226) that contained claims of knowledge increases, affective changes, or skill acquisition. We found that 1) the purpose of most of these studies was to assess summative learning gains associated with curricular change at the undergraduate level, and 2) a minority (<10%) of studies provided any reliability or validity evidence, and only one study out of the 95 sampled mentioned both validity and reliability. Our findings raise concerns about the quality of evidence derived from these instruments. We end with recommendations for improving assessment quality in GBE.
Cox, Narelle S; Oliveira, Cristino C; Lahham, Aroub; Holland, Anne E
What are the barriers and enablers of referral, uptake, attendance and completion of pulmonary rehabilitation for people with chronic obstructive pulmonary disease (COPD)? Systematic review of qualitative or quantitative studies reporting data relating to referral, uptake, attendance and/or completion in pulmonary rehabilitation. People aged >18years with a diagnosis of COPD and/or their healthcare professionals. Data were extracted regarding the nature of barriers and enablers of pulmonary rehabilitation referral and participation. Extracted data items were mapped to the Theoretical Domains Framework (TDF). A total of 6969 references were screened, with 48 studies included and 369 relevant items mapped to the TDF. The most frequently represented domain was 'Environment' (33/48 included studies, 37% of mapped items), which included items such as waiting time, burden of illness, travel, transport and health system resources. Other frequently represented domains were 'Knowledge' (18/48 studies, including items such as clinician knowledge of referral processes, patient understanding of rehabilitation content) and 'Beliefs about consequences' (15/48 studies, including items such as beliefs regarding role and safety of exercise, expectations of rehabilitation outcomes). Barriers to referral, uptake, attendance or completion represented 71% (n=183) of items mapped to the TDF. All domains of the TDF were represented; however, items were least frequently coded to the domains of 'Optimism' and 'Memory'. The methodological quality of included studies was fair (mean quality score 9/12, SD 2). Many factors - particularly those related to environment, knowledge, attitudes and behaviours - interact to influence referral, uptake, attendance and completion of pulmonary rehabilitation. Overcoming the challenges associated with the personal and/or healthcare system environment will be imperative to improving access and uptake of pulmonary rehabilitation. PROSPERO CRD42015015976
Multiuser virtual environments (MUVEs) generate a large amount of data, but most of it is not accessible even to the users who triggered it. What's more, most datasets are not even stored for further use; they have only a temporary character and a very short "half-time of decay", limited, for example, to a one-second-long screen display. Such a huge loss of data makes evaluation of knowledge transfer in MUVEs almost impossible. There is a need both to improve the monitoring capabilities of MUVEs, so that completion assessment becomes possible, and to use MUVEs that enable simulation (re-experience) using complete datasets gathered from the environment itself. Future research in the field of simulation methodology is suggested.
Jung, Sook; Main, Dorrie
Recent technological advances in biology promise unprecedented opportunities for rapid and sustainable advancement of crop quality. Following this trend, the Rosaceae research community continues to generate large amounts of genomic, genetic and breeding data. These include annotated whole genome sequences, transcriptome and expression data, proteomic and metabolomic data, genotypic and phenotypic data, and genetic and physical maps. Analysis, storage, integration and dissemination of these data using bioinformatics tools and databases are essential to provide utility of the data for basic, translational and applied research. This review discusses the currently available genomics and bioinformatics resources for the Rosaceae family.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems; ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques, to solve a variety of biological problems. One of the most common biologically inspired techniques are genetic algorithms (GAs), which take the Darwinian concept of natural selection as the driving force behind systems for solving real world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics based problems.
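A toy genetic algorithm makes the Darwinian loop of selection, crossover and mutation concrete. The OneMax problem below (maximize the number of 1-bits in a string) is a standard teaching example, not one drawn from the survey.

```python
# Toy genetic algorithm on OneMax: evolve a bit-string toward all ones,
# using tournament selection, one-point crossover and bit-flip mutation.
# All parameter values are illustrative.
import random

random.seed(42)  # deterministic for the example

def fitness(bits):
    return sum(bits)  # number of 1-bits; maximum is len(bits)

def evolve(n_bits=20, pop_size=30, generations=60, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection of size 3
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, n_bits)      # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut)  # bit-flip mutation
                     for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # near the optimum of 20 after 60 generations
```

In bioinformatics applications the bit-string is replaced by a domain encoding (an alignment, a parameter set, a feature subset) and the fitness function by a biological score, but the loop is the same.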
Via, Allegra; Blicher, Thomas; Bongcam-Rudloff, Erik
…their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes… to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse…
Although many efforts have been made to control Listeria monocytogenes in the food industry, its growing prevalence among the population over the last decades has made this bacterium one of the most hazardous foodborne pathogens. Its outstanding biocide tolerance and its ability to associate promiscuously with other bacterial species, forming multispecies communities, have allowed this microorganism to survive and persist within the industrial environment. This review is designed to give the reader an overall picture of the current state of the art in L. monocytogenes sessile communities in terms of food safety and legislation, ecological aspects and biocontrol strategies.
Gweon, Gey-Hong; Lee, Hee-Sun; Dorsey, Chad; Tinker, Robert; Finzer, William; Damelin, Daniel
In tracking student learning in online learning systems, the Bayesian knowledge tracing (BKT) model is popular. However, the model has well-known problems, such as the identifiability problem and the empirical degeneracy problem. Understanding of these problems remains incomplete, and solutions to them remain subjective. Here, we analyze the log data from an online physics learning program with our new model, a Monte Carlo BKT model. With this new approach, we are able to perform a completely unbiased analysis, which can then be used for classifying student learning patterns and performances. Furthermore, a theoretical analysis of the BKT model and our computational work shed new light on the nature of the aforementioned problems. This material is based upon work supported by the National Science Foundation under Grants REC-1147621 and REC-1435470.
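For readers unfamiliar with BKT, the standard per-opportunity update (whose parameters give rise to the identifiability problem mentioned above) looks like this. The guess, slip and learning-rate values are illustrative, and the paper's Monte Carlo treatment is not shown.

```python
# Standard Bayesian knowledge tracing update: Bayes' rule on one observed
# response, then a learning transition. Parameter values are illustrative.
def bkt_update(p_known, correct, guess=0.2, slip=0.1, transit=0.3):
    """Return P(skill known) after observing one correct/incorrect answer."""
    if correct:
        num = p_known * (1 - slip)
        den = num + (1 - p_known) * guess
    else:
        num = p_known * slip
        den = num + (1 - p_known) * (1 - guess)
    posterior = num / den
    # chance of learning the skill between opportunities
    return posterior + (1 - posterior) * transit

p = 0.4  # prior probability the skill is already known
for obs in [True, True, False, True]:
    p = bkt_update(p, obs)
print(round(p, 3))
```

The identifiability problem arises because different (guess, slip, transit) settings can produce the same predicted response sequence, which is exactly what the Monte Carlo analysis in the abstract is aimed at.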
Cardoso, Olivier; Porcher, Jean-Marc; Sanchez, Wilfried
Human and veterinary active pharmaceutical ingredients (APIs) contribute to contamination of surface water, ground water, effluents, sediments and biota. Effluents of wastewater treatment plants and hospitals are considered major sources of such contamination. However, recent evidence reveals high concentrations of a large number of APIs in effluents from pharmaceutical factories and in receiving aquatic ecosystems. Moreover, laboratory exposures to these effluents and field experiments reveal various physiological disturbances in exposed aquatic organisms. It therefore seems relevant to increase knowledge of this route of contamination, and also to develop specific approaches for further environmental monitoring campaigns. The present study summarizes available data on the impact of pharmaceutical factory discharges on aquatic ecosystem contamination and presents the associated challenges for scientists and environmental managers. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.
Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics
Stephan, Christian; Hamacher, Michael; Blüggel, Martin; Körting, Gerhard; Chamrad, Daniel; Scheer, Christian; Marcus, Katrin; Reidegeld, Kai A; Lohaus, Christiane; Schäfer, Heike; Martens, Lennart; Jones, Philip; Müller, Michael; Auyeung, Kevin; Taylor, Chris; Binz, Pierre-Alain; Thiele, Herbert; Parkinson, David; Meyer, Helmut E; Apweiler, Rolf
The Bioinformatics Committee of the HUPO Brain Proteome Project (HUPO BPP) meets regularly to execute the post-lab analyses of the data produced in the HUPO BPP pilot studies. On July 7, 2005 the members came together for the 5th time at the European Bioinformatics Institute (EBI) in Hinxton, UK, hosted by Rolf Apweiler. As a main result, the parameter set of the semi-automated data re-analysis of MS/MS spectra has been elaborated and the subsequent work steps have been defined.
Kandi, Kamala M.
This study examines the effect of a technology-based instructional tool, 'Geniverse', on content knowledge gains, Science Self-Efficacy, Technology Self-Efficacy, and Career Goal Aspirations among 283 high school learners. The study was conducted in four urban high schools, two of which had achieved Adequate Yearly Progress (AYP) and two of which had not. Students in both types of schools were taught genetics either through Geniverse, a virtual learning environment, or through Dragon Genetics, a paper-and-pencil activity embedded in a traditional instructional method. Results indicated that students in all schools increased their knowledge of genetics using either instructional approach. Students who were taught using Geniverse demonstrated an advantage in genetics knowledge, although the effect was small. These increases were more pronounced in the schools that had been meeting the AYP goal. The other significant effect for Geniverse was that students in the technology-enhanced classrooms increased in Science Self-Efficacy while students in the non-technology-enhanced classrooms decreased. In addition, students from non-AYP schools showed an improvement in Science and Technology Self-Efficacy; however, the effects were small. The implications of these results for the future use of technology-enriched classrooms are discussed. Keywords: technology-based instruction, Self-Efficacy, career goals, Adequate Yearly Progress (AYP).
Lima, Andre O. S.; Garces, Sergio P. S.
Bioinformatics is one of the fastest growing scientific areas of the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of its importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…
A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier, David J. Dix and Stephen A. Krawetz. Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...
van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures
Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
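As a toy illustration of the primer-design considerations mentioned above, the Wallace rule gives a rough melting-temperature estimate for short oligos. It is a generic textbook approximation, not the method of any specific tool cited here:

```python
# Rough primer quality checks. The Wallace rule is a standard rule of thumb
# for short primers (~14-20 nt); real primer-design software uses more
# accurate nearest-neighbor thermodynamic models.

def wallace_tm(primer):
    """Tm estimate: 2 degrees C per A/T plus 4 degrees C per G/C."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def gc_content(primer):
    """Fraction of G/C bases; ~40-60% is a common design target."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)
```

A primer pair is typically chosen so both primers have similar Tm estimates and moderate GC content, which is one of the specificity/efficiency trade-offs the paragraph above alludes to.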
Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter
We have developed "GLYCANthrope" (CROSSWORKS for glycans): a bioinformatics tool that assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...
Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul
Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…
Chapman, Barbara S.; Christmann, James L.; Thatcher, Eileen F.
We describe an innovative bioinformatics course developed under grants from the National Science Foundation and the California State University Program in Research and Education in Biotechnology for undergraduate biology students. The project has been part of a continuing effort to offer students classroom experiences focused on principles and…
Weisstein, Anton E.
The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes—the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software—the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a ‘two-culture’ problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses. PMID:23821621
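One piece of the "mathematics of tree enumeration" mentioned in the abstract is the classic count of unrooted binary tree topologies on n labelled taxa, the double factorial (2n-5)!! = 1 * 3 * 5 * ... * (2n-5). A direct sketch of that count (not the BioQUEST software itself):

```python
# Count distinct unrooted binary (fully bifurcating) tree topologies for
# n labelled taxa: (2n-5)!!. This explosive growth is why exhaustive tree
# search is infeasible for large data sets and heuristics are needed.

def unrooted_binary_trees(n):
    """Number of unrooted binary tree topologies for n >= 3 taxa."""
    count = 1
    for k in range(3, 2 * n - 4, 2):  # odd factors 3, 5, ..., 2n-5
        count *= k
    return count
```

For 3 taxa there is a single topology, for 5 taxa there are 15, and the count exceeds two million by 10 taxa, which motivates the critical evaluation of heuristic tree-construction results that the article advocates.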
Cazals, Frédéric; Dreyfus, Tom
Software in structural bioinformatics has mainly been application driven. To serve practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks for novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is a modular design offering a rich and versatile framework for the development of novel applications requiring well-specified complex operations, without compromising robustness and performance. The SBL involves four software components (1-4 hereafter). For end-users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macro-molecular assemblies; these applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with a modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to the development of novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a Bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Contact: Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved.
Rapid cloning and bioinformatic analysis of spinach Y chromosome-specific EST sequences. Chuan-Liang Deng, Wei-Li Zhang, Ying Cao, Shao-Jing Wang, ... [flattened table fragment: EST matches including Arabidopsis thaliana mRNA for mitochondrial half-ABC transporter (STA1 gene), E-value 2.31E-13, 98.96% identity, clone SP3-12; and Betula pendula histidine kinase 3 (HK3) mRNA, ...]
The newly established RNA Biology Laboratory (RBL) at the Center for Cancer Research (CCR), National Cancer Institute (NCI), National Institutes of Health (NIH) in Frederick, Maryland is recruiting a Staff Scientist with strong expertise in RNA bioinformatics to join the Intramural Research Program’s mission of high impact, high reward science. The RBL is the equivalent of an
Brejnrod, Asker Daniel
Sequencing-based tools have revolutionized microbiology in recent years. High-throughput DNA sequencing has allowed high-resolution studies of microbial life in many different environments and at unprecedentedly low cost, and these culture-independent methods have helped the discovery of novel bacteria... The introduction gives a broad overview of the field of microbiome research, with a focus on the technologies that enable these discoveries, on how some of the broader issues relate to this thesis, and on the use of various molecular methods to build hypotheses about the impact of a copper-contaminated soil. The first study, "Large-scale benchmarking reveals false discoveries and count transformation sensitivity in 16S rRNA gene amplicon data analysis methods used in microbiome studies", benchmarked the performance of a variety of popular statistical methods for discovering differentially abundant bacteria between...
The rise of social media and Web 2.0 technologies over the last few years has affected many communication functions. One influence is that of organizational bloggers as knowledge mediators on government agency practices. The ways in which these organizational bloggers, in their roles as experts, are able to change, facilitate, and enable communication about a broad range of specialized knowledge areas, in a more open interactional institutional communication environment than traditional media typically offer, give rise to a set of new implications regarding the mediation of expert knowledge to the target...
Background: There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source; hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages than on providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results: Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D editing, 3D visualization, file format conversion, calculation of chemical properties, and much more, all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems, as it can easily be extended with functionality in any desired direction. Conclusion: Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source and commercial plugins. Bioclipse is freely available at http://www.bioclipse.net.
Lee, Thomas J; Pouliot, Yannick; Wagner, Valerie; Gupta, Priyanka; Stringer-Calvert, David W J; Tenenbaum, Jessica D; Karp, Peter D
This article addresses the problem of interoperation of heterogeneous bioinformatics databases. We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and JAVA languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. BioWarehouse embodies significant progress on the database integration problem for bioinformatics.
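The enzyme-coverage analysis described above (finding EC-numbered activities with no sequence in any integrated database) can be mimicked at toy scale with a LEFT JOIN. The two-table schema below is a hypothetical simplification for illustration, not the actual BioWarehouse schema:

```python
import sqlite3

# Toy version of a BioWarehouse-style multi-database query: one table of
# EC-numbered enzyme activities, one of sequences annotated with an EC
# number. Activities with no matching sequence are the "orphan" activities.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE enzyme_activity (ec TEXT PRIMARY KEY);
    CREATE TABLE protein_sequence (id INTEGER PRIMARY KEY, ec TEXT);
    INSERT INTO enzyme_activity VALUES ('1.1.1.1'), ('2.7.1.1'), ('4.2.1.1');
    INSERT INTO protein_sequence (ec) VALUES ('1.1.1.1'), ('1.1.1.1');
""")

rows = conn.execute("""
    SELECT a.ec
    FROM enzyme_activity a
    LEFT JOIN protein_sequence s ON s.ec = a.ec
    WHERE s.id IS NULL
""").fetchall()
orphans = [r[0] for r in rows]  # EC numbers lacking any sequence
```

On this toy data the query returns '2.7.1.1' and '4.2.1.1' as orphans; the paper reports that, at full scale, such orphan activities account for 36% of assigned EC numbers.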
Nehm, Ross H.; Budd, Ann F.
NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …
Fontaine, Guillaume; Cossette, Sylvie; Maheu-Cadotte, Marc-André; Mailhot, Tanya; Deschênes, Marie-France; Mathieu-Dupuis, Gabrielle
Adaptive e-learning environments (AEEs) can provide tailored instruction by adapting content, navigation, presentation, multimedia, and tools to each user's navigation behavior, individual objectives, knowledge, and preferences. AEEs vary in complexity, ranging from systems using simple adaptive functionality to systems using artificial intelligence. While AEEs are promising, their effectiveness for the education of health professionals and health professions students remains unclear. The purpose of this systematic review is to assess the effectiveness of AEEs in improving knowledge, competence, and behavior in health professionals and students. We will follow the Cochrane Collaboration and the Effective Practice and Organisation of Care (EPOC) Group guidelines on systematic review methodology. A systematic search of the literature will be conducted in 6 bibliographic databases (CINAHL, EMBASE, ERIC, PsycINFO, PubMed, and Web of Science) using the concepts "adaptive e-learning environments," "health professionals/students," and "effects on knowledge/skills/behavior." We will include randomized and nonrandomized controlled trials, in addition to controlled before-after, interrupted time series, and repeated measures studies published between 2005 and 2017. Two review authors will independently screen the title and abstract of each study, followed by a full-text assessment of potentially eligible studies. Using the EPOC extraction form, 1 review author will conduct data extraction and a second author will validate it. The methodological quality of included studies will be independently assessed by 2 review authors using the EPOC risk of bias criteria. Included studies will be synthesized by a descriptive analysis. Where appropriate, data will be pooled in a meta-analysis using RevMan software version 5.1, considering the heterogeneity of studies. The review is in progress. We plan to submit the results in the
Satpathy, R; Konkimalla, V B; Ratha, J
Microbial dehalogenation is a biochemical process in which halogenated substances are enzymatically converted into their non-halogenated form. Microorganisms have a wide range of organohalogen degradation abilities, both specific and non-specific in nature. Most of these halogenated organic compounds, being pollutants, need to be remediated; therefore, current approaches explore the potential of microbes at a molecular level for effective biodegradation of these substances. Several microorganisms with dehalogenation activity have been identified and characterized. In this respect, bioinformatics plays a key role in gaining deeper knowledge of dehalogenation. To facilitate data mining, many tools have been developed to annotate these data from databases. Thus, given a newly discovered microorganism, one can predict genes and proteins, and perform sequence analysis, structural modelling, metabolic pathway analysis, biodegradation studies and so on. This review highlights bioinformatics approaches, describing the application of various databases and specific tools in the microbial dehalogenation field, with special focus on dehalogenase enzymes. Attempts have also been made to describe recent applications of in silico modelling methods, comprising gene finding, protein modelling, Quantitative Structure-Biodegradability Relationship (QSBR) studies and reconstruction of the metabolic pathways employed in dehalogenation research.
This paper reflects on the analytic challenges emerging from the study of bioinformatic tools recently created to store and disseminate biological data, such as databases, repositories, and bio-ontologies. I focus my discussion on the Gene Ontology, a term that defines three entities at once: a classification system facilitating the distribution and use of genomic data as evidence towards new insights; an expert community specialised in the curation of those data; and a scientific institution promoting the use of this tool among experimental biologists. These three dimensions of the Gene Ontology can be clearly distinguished analytically, but are tightly intertwined in practice. I suggest that this is true of all bioinformatic tools: they need to be understood simultaneously as epistemic, social, and institutional entities, since they shape the knowledge extracted from data and at the same time regulate the organisation, development, and communication of research. This viewpoint has one important implication for the methodologies used to study these tools; that is, the need to integrate historical, philosophical, and sociological approaches. I illustrate this claim through examples of misunderstandings that may result from a narrowly disciplinary study of the Gene Ontology, as I experienced them in my own research.
Cavallo, Eugenio; Biddoccu, Marcella; Bagagiolo, Giorgia; De Marziis, Massimo; Gaia Forni, Emanuela; Alemanno, Laura; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Turconi, Laura; Arattano, Massimo; Coviello, Velio
Environmental sensor monitoring is continuously developing, both in terms of quantity (i.e. measurement sites) and quality (i.e. technological innovation). Environmental monitoring is carried out by either public or private entities for their own specific purposes, such as scientific research, civil protection, support to industrial and agricultural activities, services for citizens, security, education, and information. However, the acquired datasets can have cross-cutting appeal, being of interest for purposes that diverge from their main intended use. The CIRCE project (Cooperative Internet-of-Data Rural-alpine Community Environment) aimed to gather, manage, use and distribute data obtained from sensors and from people, in a multipurpose approach. The CIRCE project was selected within a call for tender launched by Piedmont Region (in collaboration with CSI Piemonte) in order to improve the digital ecosystem represented by YUCCA, an open source platform oriented to the acquisition, sharing and reuse of data resulting from both real-time and on-demand applications. The partnership of the CIRCE project was made up of scientific research bodies (IMAMOTER-CNR, IRPI-CNR, DIST) together with SMEs involved in the environmental monitoring and ICT sectors (namely 3a srl, EnviCons srl, Impresa Verde Cuneo srl, and NetValue srl). Within the project, a shared network of agro-meteo-hydrological sensors was created, and a platform and its interface for the collection, management and distribution of data was developed. The CIRCE network currently consists of 171 remotely connected sensors originally belonging to different networks. They are set up to monitor and investigate agro-meteo-hydrological processes in different rural and mountain areas of the Piedmont Region (NW Italy), including some very sensitive locations that are difficult to access. Each sensor network differs from the others in terms of the purpose of monitoring, monitored
Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc
EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…
Shachak, Aviv; Ophir, Ron; Rubin, Eitan
The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…
Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany
We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…
Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.
Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…
Rumyana Y. Papancheva
The paper presents a review of the contemporary school, the digital generation and the need for teachers equipped with new knowledge and skills, in particular basic programming skills. The latest change to the educational system in Bulgaria, following the adoption of the new pre-school and general school education act, is analysed. New primary school curricula and new standards for teachers' qualifications were implemented, and the new school subject "Computer modelling" is presented. Some of the authors' experience from project-based work in mathematics with teachers and students is described. The aim is the formation of programming skills by working within Scratch, a visual environment for block-based coding. Some conclusions and ideas for future work are formulated.
Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori
/ H. Kamimura -- Massive collection of full-length complementary DNA clones and microarray analyses: keys to rice transcriptome analysis / S. Kikuchi -- Changes of influenza A(H5) viruses by means of entropic chaos degree / K. Sato and M. Ohya -- Basics of genome sequence analysis in bioinformatics - its fundamental ideas and problems / T. Suzuki and S. Miyazaki -- A basic introduction to gene expression studies using microarray expression data analysis / D. Wanke and J. Kilian -- Integrating biological perspectives: a quantum leap for microarray expression analysis / D. Wanke ... [et al.].
Jing, Xia; Cimino, James J; Del Fiol, Guilherme
The Librarian Infobutton Tailoring Environment (LITE) is a Web-based knowledge capture, management, and configuration tool with which users can build profiles used by OpenInfobutton, an open source infobutton manager, to provide electronic health record users with context-relevant links to online knowledge resources. We conducted a multipart evaluation study to explore users' attitudes and acceptance of LITE and to guide future development. The evaluation consisted of an initial online survey to all LITE users, followed by an observational study of a subset of users in which evaluators' sessions were recorded while they conducted assigned tasks. The observational study was followed by administration of a modified System Usability Scale (SUS) survey. Fourteen users responded to the survey and indicated good acceptance of LITE with feedback that was mostly positive. Six users participated in the observational study, demonstrating average task completion time of less than 6 minutes and an average SUS score of 72, which is considered good compared with other SUS scores. LITE can be used to fulfill its designated tasks quickly and successfully. Evaluators proposed suggestions for improvements in LITE functionality and user interface.
Le Heron, Richard
The challenges of managing marine ecosystems for multiple users, while well recognised, have not led to clear strategies, principles or practice. The paper uses novel workshop-based thought-experiments to address these concerns. These took the form of trans-disciplinary Non-Sectarian Scenario Experiments (NSSE), involving participants who agreed to put aside their disciplinary interests and commercial and institutional obligations. The NSSE form of co-production of knowledge is a distinctive addition to the participatory and scenario literatures in marine resource management (MRM). Set in the context of resource use conflicts in New Zealand, the workshops assembled diverse participants in the marine economy to co-develop and co-explore the making of socio-ecological knowledge and identify the capability required for a new generation of multi-use-oriented resource management. The thought-experiments assumed that non-sectarian navigation of scenarios will resource a step-change in marine management by facilitating new connections, relationships, and understandings of potential marine futures. Two questions guided workshop interactions: what science needs spring from pursuing imaginable possibilities and directions in a field of scenarios, and what kinds of institutions would aid the generation of science knowledge and its application to policy and management solutions. The effectiveness of the thought-experiments helped identify ways of dealing with core problems in multi-use marine management, such as the urgent need to cope with ecological and socio-economic surprise, and to define and address cumulative impacts. Discussion focuses on how the workshops offered fresh perspectives and insights into a number of challenges. These challenges include building relations of trust and collective organisation, showing the importance of values-means-ends pathways, developing facilitative legislation to enable initiatives, and the utility of the NSSEs in informing new governance and
A new bioinformatic methodology was developed, founded on Unsupervised Pattern Cognition Analysis of GRID-based BioGPS descriptors (Global Positioning System in Biological Space). The procedure relies entirely on three-dimensional structure analysis of enzymes and does not stem from sequence or structure alignment. The BioGPS descriptors account for chemical, geometrical and physical-chemical features of enzymes and comprehensively describe the active site of enzymes as a "pre-organized environment" able to stabilize the transition state of a given reaction. The efficiency of this new bioinformatic strategy was demonstrated by the consistent clustering of four different Ser-hydrolase classes, which are characterized by the same active-site organization but catalyze different reactions. The method was validated by considering, as a case study, the engineering of amidase activity into the scaffold of a lipase. The BioGPS tool correctly predicted the properties of lipase variants, as demonstrated by the projection of mutants inside the BioGPS "roadmap".
Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599
Hou, Jie; Acharya, Lipi; Zhu, Dongxiao; Cheng, Jianlin
The advent of high-throughput genomics techniques, along with the completion of genome sequencing projects, identification of protein-protein interactions and reconstruction of genome-scale pathways, has accelerated the development of systems biology research in the yeast organism Saccharomyces cerevisiae. In particular, discovery of biological pathways in yeast has become an important forefront in systems biology, which aims to understand the interactions among molecules within a cell leading to certain cellular processes in response to a specific environment. While the existing theoretical and experimental approaches enable the investigation of well-known pathways involved in metabolism, gene regulation and signal transduction, bioinformatics methods offer new insights into computational modeling of biological pathways. A wide range of computational approaches has been proposed in the past for reconstructing biological pathways from high-throughput datasets. Here we review selected bioinformatics approaches for modeling biological pathways in S. cerevisiae, including metabolic pathways, gene-regulatory pathways and signaling pathways. We start by reviewing the research on biological pathways, followed by a discussion of key biological databases. In addition, several representative computational approaches for modeling biological pathways in yeast are discussed. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: firstname.lastname@example.org.
At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences.Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.
Christos A Ouzounis
The field of bioinformatics and computational biology has gone through a number of transformations during the past 15 years, establishing itself as a key component of new biology. This spectacular growth has been challenged by a number of disruptive changes in science and technology. Despite the apparent fatigue of the linguistic use of the term itself, bioinformatics has grown perhaps to a point beyond recognition. We explore both historical aspects and future trends and argue that as the field expands, key questions remain unanswered and acquire new meaning while at the same time the range of applications is widening to cover an ever increasing number of biological disciplines. These trends appear to be pointing to a redefinition of certain objectives, milestones, and possibly the field itself.
With the rapid advance in genomics, proteomics, metabolomics, and other types of omics technologies during the past decades, a tremendous amount of data related to molecular biology has been produced. It is becoming a big challenge for bioinformaticians to analyze and interpret these data with conventional intelligent techniques, for example, support vector machines. Recently, hybrid intelligent methods, which integrate several standard intelligent approaches, have become more and more popular due to their robustness and efficiency. Specifically, hybrid intelligent approaches based on evolutionary algorithms (EAs) are widely used in various fields due to the efficiency and robustness of EAs. In this review, we give an introduction to the applications of hybrid intelligent methods, in particular those based on evolutionary algorithms, in bioinformatics. We focus on their applications to three common problems that arise in bioinformatics: feature selection, parameter estimation, and reconstruction of biological networks.
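As a toy illustration of the EA-based feature selection surveyed in the review, the sketch below evolves binary feature masks with a genetic algorithm. All names, parameters, and the fitness function here are hypothetical illustrations, not taken from any of the reviewed methods; in practice the fitness would be, e.g., cross-validated classifier accuracy on the selected features.

```python
import random

def ga_feature_selection(score, n_features, pop_size=20, generations=40, seed=0):
    """Toy genetic algorithm: evolve binary masks selecting feature subsets.
    `score(mask)` is any caller-supplied fitness function (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        survivors = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_features)           # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=score)

# Hypothetical fitness: reward masks matching a known "informative" feature set.
target = [1, 0, 1, 1, 0, 0, 1, 0]
fitness = lambda mask: sum(a == b for a, b in zip(mask, target))
best = ga_feature_selection(fitness, len(target))
```

Because the top half of the population is carried over unchanged each generation, the best mask found so far is never lost, which is the usual elitism design choice in EA-based feature selection.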
Calabrese, Barbara; Cannataro, Mario
High-throughput platforms such as microarray, mass spectrometry, and next-generation sequencing are producing an increasing volume of omics data that needs large data storage and computing power. Cloud computing offers massively scalable computing and storage, data sharing, and on-demand anytime and anywhere access to resources and applications, and thus it may represent the key technology for facing those issues. In fact, in recent years it has been adopted for the deployment of different bioinformatics solutions and services, both in academia and in industry. Despite this, cloud computing presents several issues regarding the security and privacy of data, which are particularly important when analyzing patients' data, such as in personalized medicine. This chapter reviews the main academic and industrial cloud-based bioinformatics solutions, with a special focus on microarray data analysis solutions, and underlines the main issues and problems related to the use of such platforms for the storage and analysis of patients' data.
Varma, B Sharat Chandra; Balakrishnan, M
This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.
The application of whole-genome shotgun sequencing to microbial communities represents a major development in metagenomics, the study of uncultured microbes via the tools of modern genomic analysis. In the past year, whole-genome shotgun sequencing projects of prokaryotic communities from an acid mine biofilm, the Sargasso Sea, Minnesota farm soil, three deep-sea whale falls, and deep-sea sediments have been reported, adding to previously published work on viral communities from marine and fecal samples. The interpretation of this new kind of data poses a wide variety of exciting and difficult bioinformatics problems. The aim of this review is to introduce the bioinformatics community to this emerging field by surveying existing techniques and promising new approaches for several of the most interesting of these computational problems.
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
Dacinia Crina Petrescu
The present research is based on the premise that people perceive radiation risks in different ways, depending on their cultural background, information exposure, economic level, and educational status, which are specific to each country. The main objective was to assess and report, for the first time, Romanians' attitudes (perceptions, knowledge, and behaviors) related to residential radon, in order to contribute to the creation of a healthier living environment. A convenience sample of 229 people from different parts of Romania, including radon-prone areas, was used. Results profiled a population vulnerable to radon threats from the perspective of their awareness and perceptions. Thus, study results showed that most participants did not perceive the risk generated by radon exposure as significant to their health; only 13.1% of interviewed people considered the danger to their health as "high" or "very high". Additionally, it was found that awareness of radon itself was low: 62.4% of the sample did not know what radon was. From a practical perspective, the study shows that in Romania, increasing awareness through the provision of valid information should be a major objective of strategies that aim to reduce radon exposure. The present study takes a bottom-up perspective by assessing Romanian citizens' attitudes toward radon. Therefore, it compensates for a gap in the behavioral studies literature by providing practical support for radon risk mitigation and creating the premises for a healthier living environment.
Adriansen, Hanne Kirstine; Valentin, Karen; Nielsen, Gritt B.
Internationalisation of higher education is premised by a seeming paradox: on the one hand, academic knowledge strives to be universal in the sense that it claims to produce generalizable, valid and reliable knowledge that can be used, critiqued, and redeveloped by academics from all over the world; on the other hand, the rationale for strengthening mobility through internationalisation is based on an imagination of the potentials of particular locations (academic institutions). Intrigued by this tension between universality and particularity in academic knowledge production, this paper presents preliminary findings from a project that studies internationalisation of higher education as an agent in the interrelated processes of place-making and knowledge-making. The project is based on three case-studies. In this paper, focus is on PhD students' change of research environment. This is used as a case...
Palma, Jonathan P.; Benitz, William E.; Tarczy-Hornoch, Peter; Butte, Atul J.; Longhurst, Christopher A.
The future of neonatal informatics will be driven by the availability of increasingly vast amounts of clinical and genetic data. The field of translational bioinformatics is concerned with linking and learning from these data and applying new findings to clinical care to transform the data into proactive, predictive, preventive, and participatory health. As a result of advances in translational informatics, the care of neonates will become more data driven, evidence based, and personalized. PMID:22924023
Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at the EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.
Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas
“Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371
The cluster-orchestration tool Kubernetes enables easy deployment and reproducibility of life science research by utilizing the advantages of container technology. Container technology allows for easy tool creation and sharing, and a container runs on any Linux system once it has been built. The applicability of Kubernetes as an approach to running bioinformatic workflows was evaluated, resulting in some examples of how Kubernetes and containers could be used within the field of life science and how th...
Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo
Nearly 10 years have passed since the first mobile apps appeared. Given that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would migrate from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. This client is able to browse, select, automatically compose interface parameters, invoke services and monitor the execution of Web Services, using the services' metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web app. It is also available in the App Store by Apple and the Play Store by Google. The software will be available for at least 2 years. email@example.com. Source code, final web app, training material and documentation are available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.
Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076
Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat
In this study, we performed bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to identify the gene encoding gelatinase. L. sphaericus was isolated from soil and produces gelatinase specific to porcine and bovine gelatin. This bacterium therefore offers the possibility of producing enzymes specific to each of these species of meat. The main focus of this research is to identify the gelatinase-encoding gene within L. sphaericus using bioinformatics analysis of the partially sequenced genome. From this study, three candidate genes were identified: gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1, R1, F2, R2, F3 and R3, were designed to target short cDNA sequences by PCR. The PCR reliably yielded amplicons of 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. Therefore, the bioinformatics analysis of L. sphaericus identified genes encoding gelatinase.
Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru
The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
Fourment, Mathieu; Gillings, Michael R
The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
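For readers unfamiliar with the benchmarked methods: the Sellers algorithm is a dynamic-programming string-distance computation closely related to classic edit distance. The following minimal Python sketch (illustrative only, not the paper's benchmark code) shows the kind of inner loop whose speed and memory footprint such language comparisons measure:

```python
def edit_distance(a, b):
    """Dynamic-programming edit distance (Levenshtein), the core idea
    behind Sellers-style string-distance algorithms. Keeps only two
    rows of the DP matrix, so memory is O(len(b))."""
    prev = list(range(len(b) + 1))          # distances from empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance to empty prefix of b
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution/match
        prev = cur
    return prev[-1]
```

A pure-interpreter implementation like this is exactly where compiled languages such as C and C++ gain their advantage in the paper's measurements, since the doubly nested loop dominates the run time.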
The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.
Microarray technology is widely used in various biomedical research areas; the corresponding microarray data analysis is an essential step toward making the best use of array technologies. Here we review two components of microarray data analysis: low-level analysis, which emphasizes the design, quality control, and preprocessing of microarray experiments, and high-level analysis, which focuses on domain-specific microarray applications such as tumor classification, biomarker prediction, analysis of array CGH experiments, and reverse engineering of gene expression networks. Additionally, we review recent developments in building predictive models in genome expression and regulation studies. This review may help biologists grasp a basic knowledge of microarray bioinformatics as well as its potential impact on the future evolution of biomedical research fields.
Rocha, Miguel; Fdez-Riverola, Florentino; Santana, Juan
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the surge of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...
Mohamad, Mohd; Rocha, Miguel; Paz, Juan; Pinto, Tiago
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and constantly evolving, distinct types of omics data technologies, have created an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information and requires tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists with a strong background in the biological and computational sciences. In this context, the interaction of r...
Rocha, Miguel; Fdez-Riverola, Florentino; Mayo, Francisco; Paz, Juan
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and ever-evolving types of omics data technologies, have posed an increasingly complex set of challenges to the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is increasingly a science of information, requiring tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists with a strong background in the biological and computational sciences. In this context, the interaction of researche...
Since the decoding of the human genome, techniques from bioinformatics, statistics, and machine learning have been instrumental in uncovering patterns in increasing amounts and types of data produced by profiling technologies applied to clinical samples, animal models, and cellular systems. Yet progress on unravelling the biological mechanisms causally driving diseases has been limited, in part due to the inherent complexity of biological systems. Whereas we have witnessed progress in the areas of cancer and cardiovascular and metabolic diseases, the area of neurodegenerative diseases has proved very challenging. This is in part because the aetiology of neurodegenerative diseases such as Alzheimer's disease or Parkinson's disease is unknown, making it very difficult to discern early causal events. Here we describe a panel of bioinformatics and modeling approaches that have recently been developed to identify candidate mechanisms of neurodegenerative diseases based on publicly available data and knowledge. We identify two complementary strategies: data mining techniques that use genetic data as a starting point to be further enriched with other data types, or, alternatively, encoding prior knowledge about disease mechanisms in a model-based framework supporting reasoning and enrichment analysis. Our review illustrates the challenges entailed in integrating heterogeneous, multiscale and multimodal information in neurology in general and neurodegeneration in particular. We conclude that progress would be accelerated by increased efforts to systematically collect multiple data types over time from each individual suffering from neurodegenerative disease. The work presented here has been driven by the project AETIONOMY, funded in the course of the Innovative Medicines Initiative (IMI), which is a public-private partnership of the European Federation of Pharmaceutical Industry Associations
Elledge, Ross; McAleer, Sean; Thakar, Meera; Begum, Fathema; Singhota, Sanjeet; Grew, Nicholas
Many graduates will take up junior roles in accident and emergency (A&E) departments to which a large proportion of patients present with facial injuries caused by interpersonal violence. However, it is widely recognised that undergraduates and postgraduates have few opportunities for training in oral and maxillofacial surgery. We aimed to assess the impact of a specifically designed maxillofacial emergencies virtual learning environment (VLE) on the knowledge and confidence of junior doctors in two A&E departments. They were given free access to the VLE for one month, and were asked to complete multiple choice questions and to rate their confidence to deal with 10 common situations on visual analogue scales (VAS) at baseline and one month after training. A total of 29 doctors agreed to pilot the website, 21 (72%) completed both sets of questions, and 18 (62%) completed both VAS assessments. The mean (SD) multiple choice score improved from 10 (2.52) to 13 (3.56) out of a maximum of 20 (p=0.004) and the mean (SD) VAS improved from 29.2 (19.2) mm to 45.7 (16.6) mm out of a maximum of 100 mm (p=0.007). This was a small pilot study with limited numbers, but it showed improvements in the knowledge of maxillofacial emergencies and in confidence, although the latter remained low. Further work is needed to examine how these brief educational interventions affect the attitudes of frontline staff to maxillofacial emergencies. Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Margaria, Tiziana; Kubczak, Christian; Steffen, Bernhard
With Bio-jETI, we introduce a service platform for interdisciplinary work on biological application domains and illustrate its use in a concrete application concerning statistical data processing in R and xcms for an LC/MS analysis of FAAH gene knockout. Bio-jETI uses the jABC environment for service-oriented modeling and design as a graphical process modeling tool and the jETI service integration technology for remote tool execution. As a service definition and provisioning platform, Bio-jETI has the potential to become a core technology in interdisciplinary service orchestration and technology transfer. Domain experts, like biologists not trained in computer science, directly define complex service orchestrations as process models and use efficient and complex bioinformatics tools in a simple and intuitive way.
Ju, Feng; Zhang, Tong
Recent advances in DNA sequencing technologies have prompted the widespread application of metagenomics for the investigation of novel bioresources (e.g., industrial enzymes and bioactive molecules) and unknown biohazards (e.g., pathogens and antibiotic resistance genes) in natural and engineered microbial systems across multiple disciplines. This review discusses rigorous experimental design and sample preparation in the context of applying metagenomics in environmental sciences and biotechnology. Moreover, it summarizes the principles, methodologies, and state-of-the-art bioinformatics procedures, tools and database resources for metagenomics applications, and discusses two popular strategies (analysis of unassembled reads versus assembled contigs/draft genomes) for gaining quantitative or qualitative insights into microbial community structure and function. Overall, this review aims to facilitate more extensive application of metagenomics in the investigation of uncultured microorganisms, novel enzymes, microbe-environment interactions, and biohazards in biotechnological applications where microbial communities are engineered for bioenergy production, wastewater treatment, and bioremediation.
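As a sketch of the read-based strategy mentioned above (analysis of unassembled reads), a k-mer frequency profile can be computed directly from raw reads without any assembly step; the reads and function name below are invented for illustration:

```python
from collections import Counter

def kmer_profile(reads, k=4):
    """Build a normalised k-mer frequency profile straight from unassembled
    reads; such profiles support community comparisons without assembly."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

# two toy reads (real datasets would hold millions)
reads = ["ACGTACGTAC", "TTACGTACGG"]
profile = kmer_profile(reads, k=4)
print(profile["ACGT"])  # relative frequency of the 4-mer ACGT across all reads
```

The assembly-based strategy would instead build contigs first and annotate genes on them, trading sensitivity to rare community members for longer, more interpretable sequences.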
The background to this review article is governmental interest in finding reasons why a majority of the employees in Sweden who are on sick leave are women. To find answers to this question, three issues are discussed from a meso-level perspective: (i) recent changes in the Swedish health care sector's work organization and their effects on gender, (ii) what research says about work health and gender in the health care sector, and (iii) the meaning of gender at work. The aim is first to discuss these three issues to give a picture of what gender research says concerning work organization and work health, and second to examine the theories behind the issue. This article focuses on the female-dominated health care sector, which strives for efficiency in relation to invisible job tasks and emotional work performed by women. In contemporary work organizations, gender segregation tends to take on new and subtler forms, one reason being today's de-hierarchized and flexible organizations. A burning question connected to this is whether new constructions of masculinities and femininities are really ways of relating to the prevailing norm in a profession, or ways of deconstructing the gender order. To gain a deeper understanding of working life, we need multidisciplinary research projects in which gender-critical knowledge is interwoven into research not only on organizations but also on the physical work environment, in order to develop good and sustainable work environments, in this case in the health care sector
Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R
Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script-tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".
Losko, Sascha; Heumann, Klaus
The vast quantities of information generated by academic and industrial research groups are reflected in a rapidly growing body of scientific literature and exponentially expanding resources of formalized data, including experimental data, originating from a multitude of "-omics" platforms, phenotype information, and clinical data. For bioinformatics, the challenge remains to structure this information so that scientists can identify relevant information, to integrate this information as specific "knowledge bases," and to formalize this knowledge across multiple scientific domains to facilitate hypothesis generation and validation. Here we report on progress made in building a generic knowledge management environment capable of representing and mining both explicit and implicit knowledge and, thus, generating new knowledge. Risk management in drug discovery and clinical research is used as a typical example to illustrate this approach. In this chapter we introduce techniques and concepts (such as ontologies, semantic objects, typed relationships, contexts, graphs, and information layers) that are used to represent complex biomedical networks. The BioXM™ Knowledge Management Environment is used as an example to demonstrate how a domain such as oncology is represented and how this representation is utilized for research.
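The typed-relationship representation described in this abstract (entities carrying semantic types, edges carrying named relationships, queries that follow specific relation types) can be sketched minimally in Python; the class, entity and relation names below are illustrative and are not the BioXM API:

```python
class KnowledgeGraph:
    """A toy typed-relationship store: entities have semantic types,
    and edges are (source, relation, target) triples."""

    def __init__(self):
        self.types = {}   # entity name -> semantic type
        self.edges = []   # list of (source, relation, target) triples

    def add_entity(self, name, entity_type):
        self.types[name] = entity_type

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

    def neighbours(self, entity, relation):
        """Follow only edges of the given relation type from an entity."""
        return [t for s, r, t in self.edges if s == entity and r == relation]

kg = KnowledgeGraph()
kg.add_entity("EGFR", "gene")
kg.add_entity("lung carcinoma", "disease")
kg.add_entity("erlotinib", "compound")
kg.relate("EGFR", "associated_with", "lung carcinoma")
kg.relate("erlotinib", "inhibits", "EGFR")
print(kg.neighbours("erlotinib", "inhibits"))  # ['EGFR']
```

Production systems layer ontologies, contexts and provenance on top of such triples, but the queryable typed graph is the common core.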
Rein, Diane C
Purdue University is a major agricultural, engineering, biomedical, and applied life science research institution with an increasing focus on bioinformatics research that spans multiple disciplines and campus academic units. The Purdue University Libraries (PUL) hired a molecular biosciences specialist to discover, engage, and support bioinformatics needs across the campus. After an extended period of information needs assessment and environmental scanning, the specialist developed a week of focused bioinformatics instruction (Bioinformatics Week) to launch system-wide, library-based bioinformatics services. The specialist employed a two-tiered approach to assess user information requirements and expectations. The first phase involved careful observation and collection of information needs in-context throughout the campus, attending laboratory meetings, interviewing department chairs and individual researchers, and engaging in strategic planning efforts. Based on the information gathered during the integration phase, several survey instruments were developed to facilitate more critical user assessment and the recovery of quantifiable data prior to planning. Given information gathered while working with clients and through formal needs assessments, as well as the success of instructional approaches used in Bioinformatics Week, the specialist is developing bioinformatics support services for the Purdue community. The specialist is also engaged in training PUL faculty librarians in bioinformatics to provide a sustaining culture of library-based bioinformatics support and understanding of Purdue's bioinformatics-related decision and policy making.
Geary, Janis; Jardine, Cynthia G; Guebert, Jenilee; Bubela, Tania
Research in northern Canada focused on Aboriginal peoples has historically benefited academia with little consideration for the people being researched or their traditional knowledge (TK). Although this attitude is changing, the complexity of TK makes it difficult to develop mechanisms to preserve and protect it. Protecting TK becomes even more important when outside groups become interested in using TK or materials with associated TK. In the latter category are genetic resources, which may have commercial value and are the focus of this article. This article addresses access to and use of genetic resources and associated TK in the context of the historical power imbalances in research relationships in the Canadian north. The study design is a review. Research involving genetic resources and TK is becoming increasingly relevant in northern Canada. The legal framework related to genetic resources and the cultural shift of universities towards commercial goals in research influence the environment for negotiating research agreements. Current guidelines for research agreements do not offer appropriate guidance for achieving mutual benefit, do not reflect unequal bargaining power, and do not take the relationship between parties into account. Relational contract theory may be a useful framework to address the social, cultural and legal hurdles inherent in creating research agreements.
Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue
Because video data are complex and composed of many images, mining information from video material is difficult without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the growth rate of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To evaluate colony growth accurately, three recipes were created. The first segmented the image into colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the growth rate of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating that the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
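The segment-then-count idea behind the recipes above (threshold each frame into colony vs background, then track colony pixels over time) can be sketched on synthetic frames; the threshold value and toy "video" below are invented for illustration, not CL-Quant output:

```python
import numpy as np

def colony_area(frame, threshold):
    """Segment a grayscale frame by intensity thresholding (recipe 1)
    and return the colony area in pixels (recipe 3)."""
    return int((frame > threshold).sum())

# toy "video": three 8x8 frames in which a bright circular colony grows
frames = []
for radius in (1, 2, 3):
    y, x = np.ogrid[:8, :8]
    colony_mask = (y - 4) ** 2 + (x - 4) ** 2 <= radius ** 2
    frames.append(np.where(colony_mask, 200, 20))  # bright colony, dim background

areas = [colony_area(f, threshold=100) for f in frames]
print(areas)  # pixel counts increase frame by frame as the colony grows
```

A growth rate then falls out as the slope of area versus time; real footage would also need the contrast-enhancement step (recipe 2) before thresholding is reliable.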
Pharmacogenetics refers to the study of individual pharmacological response based on genotype. Its objective is to optimize treatment on an individual basis, thereby creating more efficient and safer personalized therapy. In this second part of the review, the molecular study methods used in pharmacogenetics, including microarray technology (DNA chips), are discussed. Among them, we highlight the microarrays used to determine gene expression, which detect specific RNA sequences, and the microarrays used to determine genotype, which detect specific DNA sequences, including polymorphisms, particularly single nucleotide polymorphisms (SNPs). The relationship between pharmacogenetics, bioinformatics and ethical concerns is also reviewed.
Wiwanitkit, Somsri; Wiwanitkit, Viroj
The role of microRNA in the pathogenesis of pulmonary tuberculosis is currently an interesting topic in chest medicine. Recently, it was proposed that microRNA can be a useful biomarker for monitoring pulmonary tuberculosis and might play an important part in the pathogenesis of the disease. Here, the authors perform a bioinformatics study to assess the microRNA within known tuberculosis RNA. The microRNA part can be detected, and this can be key information for further study of the p...
Romano, Paolo; Bartocci, Ezio; Bertolini, Guglielmo; De Paoli, Flavio; Marra, Domenico; Mauri, Giancarlo; Merelli, Emanuela; Milanesi, Luciano
The huge amount of biological information, its distribution over the Internet, and the heterogeneity of available software tools make the adoption of new data integration and analysis network tools a necessity in bioinformatics. ICT standards and tools, like Web Services and Workflow Management Systems (WMS), can support the creation and deployment of such systems. Many Web Services are already available and some WMS have been proposed. They assume that researchers know which bioinformatics resources can be reached through a programmatic interface and that they are skilled in programming and building workflows. Therefore, they are not viable for the majority of researchers without these skills. A portal enabling them to benefit from new technologies is still missing. We designed biowep, a web-based client application that allows for the selection and execution of a set of predefined workflows. The system is available on-line. The biowep architecture includes a Workflow Manager, a User Interface and a Workflow Executor. The task of the Workflow Manager is the creation and annotation of workflows. These can be created using either the Taverna Workbench or BioWMS. Enactment of workflows is carried out by FreeFluo for Taverna workflows and by BioAgent/Hermes, a mobile agent-based middleware, for BioWMS ones. The main processing steps of workflows are annotated on the basis of their input and output, elaboration type and application domain, using a classification of bioinformatics data and tasks. The interface supports user authentication and profiling. Workflows can be selected on the basis of users' profiles and can be searched through their annotations. Results can be saved. We developed a web system that supports the selection and execution of predefined workflows, thus simplifying access for all researchers. The implementation of Web Services allowing specialized software to interact with an exhaustive set of biomedical databases and analysis software and the creation of
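The annotation-based workflow search described above (workflows annotated by input, output and application domain, then retrieved by matching annotations) can be sketched as a small catalog lookup; the workflow names and annotation fields below are illustrative, not biowep's actual schema:

```python
# toy workflow catalog: each entry is annotated by input, output and domain
workflows = [
    {"name": "blast-annotate", "input": "protein sequence",
     "output": "annotation report", "domain": "sequence analysis"},
    {"name": "microarray-deg", "input": "expression matrix",
     "output": "DEG list", "domain": "transcriptomics"},
]

def find_workflows(catalog, **criteria):
    """Return names of workflows whose annotations match every criterion."""
    return [wf["name"] for wf in catalog
            if all(wf.get(field) == value for field, value in criteria.items())]

print(find_workflows(workflows, domain="transcriptomics"))  # ['microarray-deg']
```

A real portal would add user profiles as an extra filter and hand the selected workflow to an enactment engine such as FreeFluo, but the search itself reduces to this kind of annotation matching.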
Rezig, Slim; Sakhri, Saber
Salmonellae are the main agents responsible for frequent food-borne gastrointestinal diseases. Their detection using classical methods is laborious, and the results take a long time to obtain. In this context, we set up a technique for detecting the invA virulence gene, which is found in the majority of Salmonella species. After PCR amplification using specific primers designed and verified with bioinformatics programs, two primer pairs were established, and they appeared to be very specific and sensitive for the detection of the invA gene. (Author)
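The in-silico primer verification step mentioned above can be sketched as a simplified virtual PCR check: the forward primer must match the top strand and the reverse primer's reverse complement must match downstream of it. The template and primer sequences below are invented for illustration and are not the real invA primers:

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence (ACGT alphabet only)."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def primers_amplify(template, forward, reverse, max_product=2000):
    """Simplified in-silico PCR: require an exact forward-primer site,
    then an exact reverse-primer site downstream on the opposite strand."""
    f = template.find(forward)
    if f == -1:
        return False
    r = template.find(reverse_complement(reverse), f + len(forward))
    if r == -1:
        return False
    product_length = (r + len(reverse)) - f
    return product_length <= max_product

# toy template with hypothetical primer binding sites
template = "GGGG" + "ATGCCGTT" + "A" * 50 + "CCTTAGGA" + "GGGG"
fwd = "ATGCCGTT"
rev = reverse_complement("CCTTAGGA")  # reverse primer binds the bottom strand
print(primers_amplify(template, fwd, rev))
```

Real primer design tools additionally score melting temperature, mismatches and off-target hits across whole genomes; this sketch captures only the positional logic.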
Supreet Kaur Gill
Clinical research makes tireless efforts to promote the health and wellbeing of people. There is a rapid increase in the number and severity of diseases like cancer, hepatitis and HIV, resulting in high morbidity and mortality. Clinical research involves drug discovery and development, whereas clinical trials are performed to establish the safety and efficacy of drugs. Drug discovery is a long process, starting with target identification, validation and lead optimization, followed by preclinical trials, intensive clinical trials, and eventually post-marketing vigilance for drug safety. Software and bioinformatics tools play a great role not only in drug discovery but also in drug development. This involves the use of informatics to develop new knowledge pertaining to health and disease, to manage data during clinical trials, and to use clinical data for secondary research. In addition, new technologies like molecular docking, molecular dynamics simulation, proteomics and quantitative structure-activity relationships make the drug discovery process in clinical research faster and easier. During preclinical trials, software is used for randomization, to remove bias, and to plan the study design. In clinical trials, software such as electronic data capture, remote data capture and electronic case report forms (eCRF) is used to store the data. eClinical and Oracle Clinical are software used for clinical data management and for statistical analysis of the data. After a drug is marketed, its safety can be monitored with drug safety software such as Oracle Argus or ARISg. Therefore, software is used from the very early stages of drug design through drug development, clinical trials and pharmacovigilance. This review describes different aspects of the application of computers and bioinformatics in drug design, discovery and development, formulation design, and clinical research.
Sex steroids play a key role in triggering sex differentiation in fish, and exogenous hormone treatment can lead to partial or complete sex reversal. This phenomenon has attracted attention since the discovery that even low environmental doses of exogenous steroids can adversely affect gonad morphology (ovotestis development) and induce reproductive failure. Modern genomic-based technologies have enhanced opportunities to uncover mechanisms of action (MOA) and identify biomarkers related to the toxic action of a compound. However, high-throughput data interpretation relies on statistical analysis, species genomic resources, and bioinformatics tools. The goals of this study are to improve knowledge of feminisation in fish through the analysis of molecular responses in the gonads of rainbow trout fry after chronic exposure to several doses (0.01, 0.1, 1 and 10 μg/L) of ethynylestradiol (EE2), and to propose target genes as potential biomarkers of ovotestis development. We successfully adapted a bioinformatics microarray analysis workflow elaborated on human data to a toxicogenomic study using rainbow trout, a fish species lacking accurate functional annotation and genomic resources. The workflow produced lists of genes expected to be enriched in true positive differentially expressed genes (DEGs), which were subjected to over-representation analysis (ORA) methods. Several pathways and ontologies, mostly related to cell division and metabolism, sexual reproduction and steroid production, were found significantly enriched in our analyses. Moreover, two sets of potential ovotestis biomarkers were selected using several criteria. The first group comprised specific potential biomarkers belonging to pathways/ontologies highlighted in the experiment; among them, the early ovarian differentiation gene foxl2a was overexpressed. The second group, which was highly sensitive but not specific, included the DEGs presenting the highest fold change and
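The over-representation analysis (ORA) step used on the DEG lists above can be sketched as a hypergeometric upper-tail test: given a background of genes, how surprising is the observed overlap between the DEG list and a pathway? The gene counts below are invented for illustration:

```python
from math import comb

def ora_pvalue(total_genes, pathway_size, deg_count, overlap):
    """Hypergeometric upper-tail probability of observing at least
    `overlap` pathway genes among `deg_count` DEGs drawn from a
    background of `total_genes` (a standard ORA test)."""
    p = 0.0
    for k in range(overlap, min(pathway_size, deg_count) + 1):
        p += (comb(pathway_size, k)
              * comb(total_genes - pathway_size, deg_count - k)
              / comb(total_genes, deg_count))
    return p

# toy numbers: 20 of 50 DEGs fall in a 100-gene steroid-production pathway,
# against a 10,000-gene background (expected overlap by chance: ~0.5 genes)
p = ora_pvalue(total_genes=10_000, pathway_size=100, deg_count=50, overlap=20)
print(p < 0.001)  # far more overlap than expected by chance
```

Tools built for ORA additionally correct such p-values for the many pathways tested (e.g. Benjamini-Hochberg), which matters when hundreds of ontology terms are screened at once.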