Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald
Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers, called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real-time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper discusses the EDRN knowledge system architecture, its objectives and its accomplishments.
Baldi, Pierre; Brunak, Søren
, and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...
Montoni, Mariella A.; Galotta, Catia; Rocha, Ana Regina; Rabelo, Álvaro; Rabelo, Lisia
Knowledge management supports decision-making by capturing and analyzing key performance indicators, providing visibility into the effectiveness of the business model, and by concentrating collaborative work and employee knowledge reviews on critical business problems. CardioKnowledge is a knowledge management environment based on the business and process requirements of a health care organization in Cardiology. CardioKnowledge supports organizational processes in order to facilitate the comm...
Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer
within convenient, integrated “workbench” environments. Resource descriptions are the core element of registry and workbench systems, which are used to both help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them......, a software component that will ease the integration of bioinformatics resources in a workbench environment, using their description provided by the existing ELIXIR Tools and Data Services Registry....
Chimusa, Emile R; Mbiyavanga, Mamana; Masilela, Velaphi; Kumuthini, Judit
A shortage of practical skills and relevant expertise is possibly the primary obstacle to social upliftment and sustainable development in Africa. The "omics" fields, especially genomics, are increasingly dependent on the effective interpretation of large and complex sets of data. Despite abundant natural resources and population sizes comparable with many first-world countries from which talent could be drawn, countries in Africa still lag far behind the rest of the world in terms of specialized skills development. Moreover, there are serious concerns about disparities between countries within the continent. The multidisciplinary nature of the bioinformatics field, coupled with rare and depleting expertise, is a critical problem for the advancement of bioinformatics in Africa. We propose a formalized matchmaking system, which is aimed at reversing this trend, by introducing the Knowledge Transfer Programme (KTP). Instead of individual researchers travelling to other labs to learn, researchers with desirable skills are invited to join African research groups for six weeks to six months. Visiting researchers or trainers will pass on their expertise to multiple people simultaneously in their local environments, thus increasing the efficiency of knowledge transference. In return, visiting researchers have the opportunity to develop professional contacts, gain industry work experience, work with novel datasets, and strengthen and support their ongoing research. The KTP develops a network with a centralized hub through which groups and individuals are put into contact with one another and exchanges are facilitated by connecting both parties with potential funding sources. This is part of the PLOS Computational Biology Education collection.
Linn, Marcia C.
Explains Knowledge Integration Environment (KIE) activities which are designed to promote lifelong science learning. Describes the partnership process that guided the design as well as the Scaffolded Knowledge Integration (SKI) framework that gave the partnership a head start on creating effective materials. (Contains 52 references.) (Author/YDS)
Kintsakis, Athanassios M.; Psomopoulos, Fotis E.; Symeonidis, Andreas L.; Mitkas, Pericles A.
Hermes introduces a new "describe once, run anywhere" paradigm for the execution of bioinformatics workflows in hybrid cloud environments. It combines the traditional features of parallelization-enabled workflow management systems and of distributed computing platforms in a container-based approach. It offers seamless deployment, overcoming the burden of setting up and configuring the software and network requirements. Most importantly, Hermes fosters the reproducibility of scientific workflows by supporting standardization of the software execution environment, thus leading to consistent scientific workflow results and accelerating scientific output.
Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A
Today researchers can choose from many bioinformatics protocols for all types of life sciences research, computational environments and coding languages. Although the majority of these are open source, few of them possess all the virtues needed to maximize reuse and promote reproducible science. Wikipedia has proven a great tool to disseminate information and enhance collaboration between users with varying expertise and backgrounds, who author qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki where each article is both the implementation and the documentation of a bioinformatics computational protocol in the Python language. Hyperlinks within the wiki can be used to compose complex workflows and encourage reuse. A RESTful API enables code execution outside the wiki. The initial content of PyPedia contains articles for population statistics, bioinformatics format conversions and genotype imputation. Use of the easy-to-learn wiki syntax effectively lowers the barriers to bringing expert programmers and less computer-savvy researchers onto the same page. PyPedia demonstrates how a wiki can provide a collaborative development, sharing and even execution environment for biologists and bioinformaticians that complements existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.
Zhao, Shanrong; Lu, Jin
A central challenge in antibody engineering is the large number of candidate positions and possible variations, set against the risk of introducing unfavorable biochemical properties. While large libraries are quite successful in identifying antibodies with improved binding or activity, only a fraction of the possibilities can be explored, and doing so requires considerable effort. The vast array of natural antibody sequences provides a potential wealth of information for (1) selecting hotspots for variation, and (2) designing mutants that mimic natural variations seen in hotspots. The human immune system can generate an enormous diversity of immunoglobulins against an almost unlimited range of antigens by gene rearrangement of a limited number of germline variable, diversity and joining genes followed by somatic hypermutation and antigen selection. All the antibody sequences in the NCBI database can be assigned to different germline genes. As a result, a position-specific scoring matrix for each germline gene can be constructed by aligning all its member sequences and calculating the amino acid frequencies for each position. The position-specific scoring matrix for each germline gene characterizes "hotspots" and the nature of variations, and thus reduces the sequence space of exploration in antibody engineering. We have developed a bioinformatics pipeline to conduct analysis of human antibody sequences, and generated a comprehensive knowledge database for in silico antibody engineering. The pipeline is fully automatic and the knowledge database can be refreshed at any time by re-running the pipeline. The refresh process is fast, typically taking about 1 minute on a Lenovo ThinkPad T60 laptop with 3 GB of memory. Our knowledge database consists of (1) the individual germline gene usage in the generation of natural antibodies; (2) the CDR length distributions; and (3) the position-specific scoring matrix for each germline gene. The knowledge database provides comprehensive support for antibody engineering.
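The PSSM construction this abstract describes (align a germline gene's member sequences, then compute per-position amino acid frequencies) can be sketched in a few lines. This is an illustrative sketch, not the authors' pipeline; the function name and toy alignment are invented for the example.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def position_specific_scoring_matrix(aligned_seqs):
    """Per-position amino acid frequencies over equal-length aligned sequences."""
    pssm = []
    for pos in range(len(aligned_seqs[0])):
        counts = Counter(seq[pos] for seq in aligned_seqs)
        total = sum(counts[aa] for aa in AMINO_ACIDS)  # count only standard residues
        pssm.append({aa: (counts[aa] / total if total else 0.0) for aa in AMINO_ACIDS})
    return pssm

# A position where one residue dominates is conserved; a spread of
# frequencies marks a "hotspot" tolerant of variation.
pssm = position_specific_scoring_matrix(["ACD", "ACE", "AGD"])
```

Here `pssm[0]["A"]` is 1.0 (a conserved position), while position 1 splits between C and G, flagging it as a candidate hotspot in the sense the abstract uses.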
Thiele, Herbert; Glandorf, Jörg; Hufnagel, Peter
With the large variety of Proteomics workflows, as well as the large variety of instruments and data-analysis software available, researchers today face major challenges validating and comparing their Proteomics data. Here we present a new generation of the ProteinScape bioinformatics platform, now enabling researchers to manage Proteomics data from generation and data warehousing to a central data repository, with a strong focus on the improved accuracy, reproducibility and comparability demanded by many researchers in the field. It addresses scientists' current needs in proteomics identification, quantification and validation. But producing large protein lists is not the end point in Proteomics, where one ultimately aims to answer specific questions about the biological condition or disease model of the analyzed sample. In this context, a new tool has been developed at the Spanish Centro Nacional de Biotecnologia Proteomics Facility, termed PIKE (Protein Information and Knowledge Extractor), that allows researchers to control, filter and access specific information from genomic and proteomic databases, to understand the role and relationships of the proteins identified in the experiments. Additionally, an EU-funded project, ProDac, has coordinated systematic data collection in public standards-compliant repositories like PRIDE. This will cover all aspects from generating MS data in the laboratory, assembling the whole annotation information and storing it together with identifications in a standardised format.
Lin, Chun-Hung Richard; Wen, Chun-Hao; Lin, Ying-Chih; Tung, Kuang-Yuan; Lin, Rung-Wei; Lin, Chun-Yuan
Bioinformatics has advanced from in-house computing infrastructure to cloud computing to tackle the vast quantity of biological data. This advance enables large numbers of collaborating researchers to share their work around the world. Consequently, retrieving biological data over the internet becomes more and more difficult because of its explosive growth and frequent changes. Various efforts have been made to address the problems of data discovery and delivery in the cloud framework, but most of them are hindered by a MapReduce master server that must track all available data. In this paper, we propose an alternative approach, called PRKad, which exploits a Peer-to-Peer (P2P) model to achieve efficient data discovery and delivery. PRKad is a Kademlia-based implementation with Round-Trip Time (RTT) as the associated key, and it locates data according to a Distributed Hash Table (DHT) and the XOR metric. The simulation results show that PRKad retrieves data with low link latency. As an interdisciplinary application of P2P computing for bioinformatics, PRKad also provides good scalability for serving a greater number of users in dynamic cloud environments.
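The XOR metric at the heart of Kademlia-style lookups such as PRKad's can be shown in a few lines. This is a generic sketch of the metric only, not PRKad's implementation (which additionally uses round-trip time as the associated key); the function names are illustrative.

```python
def xor_distance(id_a: int, id_b: int) -> int:
    """Kademlia distance between two node/key IDs: their bitwise XOR."""
    return id_a ^ id_b

def closest_nodes(target: int, known_ids, k: int = 2):
    """The k known IDs nearest to the target under the XOR metric."""
    return sorted(known_ids, key=lambda n: xor_distance(n, target))[:k]

# 0b1010 and 0b0011 differ in bits 3 and 0, so their distance is 0b1001 = 9.
assert xor_distance(0b1010, 0b0011) == 9
```

Because XOR is symmetric and exactly one ID sits at any given distance from a target, each routing step can halve the remaining search space, which is what gives DHT lookups their logarithmic cost.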
The book brings together facts collated and processed on the subject of "Man-Environment-Knowledge" within the scope of a tutorial of the same name organised and held by the ETH (the Swiss Federal Institute of Technology in Zurich). Starting from important fundamental facts that capture the current state of our planet in the form of data, it goes on to illustrate those interconnections and processes that are important for understanding the world in which we live. figs., tabs., 800 refs.
Suciu, Radu M; Aydin, Emir; Chen, Brian E
With the exponential increase and widespread availability of genomic, transcriptomic, and proteomic data, accessing these '-omics' data is becoming increasingly difficult. The current resources for accessing and analyzing these data have been created to perform highly specific functions intended for specialists, and thus typically emphasize functionality over user experience. We have developed a web-based application, GeneDig.org, that allows any general user access to genomic information with ease and efficiency. GeneDig allows for searching and browsing genes and genomes, while a dynamic navigator displays genomic, RNA, and protein information simultaneously for co-navigation. We demonstrate that our application provides access to genomic information more than five times faster than currently available methods. We have developed GeneDig as a platform for bioinformatics integration with usability as its central design principle. This platform will introduce genomic navigation to broader audiences while aiding the bioinformatics analyses performed in everyday biology research.
Knowledge, especially tacit knowledge, has gained more and more attention in recent years. The author claims that, with the development of information technology, more knowledge sharing takes place online rather than face-to-face. The purpose of this study is to explore how tacit knowledge is externalized in online environments. To answer this…
Zhang, Jiang; Rossello Busquet, Ana; Soler, José
This paper makes three contributions to assist households to control their home devices in an easy way and to simplify the software installation and configuration processes across multi-vendor environments. First, a Home Environment Service Knowledge Management System is proposed, which is based...... on the knowledge implemented by ontology and uses the inference function of reasoner to find out available software services according to household requests. Second, this paper provides a concrete methodology to exploit and acquire conflict-free information from ontology knowledge by using a reasoner. At last...
Background: We introduce a Knowledge-based Decision Support System (KDSS) to address the Protein Complex Extraction problem. Using a Knowledge Base (KB) encoding expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSSs and Workflow Management Systems. Results: We briefly present the KDSS's architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of a Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and, where appropriate, runs suitable algorithms. Our system's final results are then composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions: The proposed approach, using the KDSS's knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numerical results have been compared with other approaches to PPI network analysis found in the literature, offering similar results.
Crichton, Daniel; Mahabal, Ashish; Anton, Kristen; Cinquini, Luca; Colbert, Maureen; Djorgovski, S. George; Kincaid, Heather; Kelly, Sean; Liu, David
We describe here the Early Detection Research Network (EDRN) for Cancer's knowledge environment. It is an open source platform built by NASA's Jet Propulsion Laboratory with contributions from the California Institute of Technology and the Geisel School of Medicine at Dartmouth. It uses tools like Apache OODT, Plone, and Solr, and borrows heavily from the JPL Planetary Data System's ontological infrastructure. It has accumulated data on hundreds of thousands of biospecimens and serves over 1300 registered users across the National Cancer Institute (NCI). The scalable computing infrastructure is built so that we are able to reach out to other agencies, provide homogeneous access, and provide seamless analytics support and bioinformatics tools through community engagement.
Gelbart, Hadas; Yarden, Anat
Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. © The Author 2016. Published by Oxford University Press. All rights reserved.
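As a concrete illustration of the omics/deep-learning pairing this review surveys, genomic sequence is commonly one-hot encoded before being fed to a convolutional network. The sketch below shows only that standard input representation; the function name is illustrative and not taken from the review.

```python
def one_hot_dna(seq: str):
    """Encode a DNA string as one-hot vectors over (A, C, G, T) --
    the typical input layer for CNNs applied to genomic sequence."""
    table = {
        "A": [1, 0, 0, 0],
        "C": [0, 1, 0, 0],
        "G": [0, 0, 1, 0],
        "T": [0, 0, 0, 1],
    }
    return [table[base] for base in seq.upper()]

# A length-L sequence becomes an L x 4 matrix, which a 1D convolution
# scans for motifs much as an image convolution scans pixel rows.
encoded = one_hot_dna("ACGT")
```

The same idea extends to the other domains the review categorizes: each modality is first mapped to a dense numeric tensor suited to the chosen architecture.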
Andrew B. Kinghorn
Aptamers are short nucleic acid sequences capable of specific, high-affinity molecular binding. They are isolated via SELEX (Systematic Evolution of Ligands by Exponential Enrichment), an evolutionary process that involves iterative rounds of selection and amplification before sequencing and aptamer characterization. As aptamers are genetic in nature, bioinformatic approaches have been used to improve both aptamers and their selection. This review will discuss the advancements made in several enclaves of aptamer bioinformatics, including simulation of aptamer selection, fragment-based aptamer design, patterning of libraries, identification of lead aptamers from high-throughput sequencing (HTS) data and in silico aptamer optimization.
In today's competitive global economy, characterized by knowledge acquisition, the concept of knowledge management has become increasingly prevalent in academic and business practices. Knowledge creation is an important factor in, and remains a source of competitive advantage for, knowledge management. Information technology facilitates knowledge management practices by disseminating knowledge and making codified knowledge retrievable. Thus, this paper proposes a framework for knowledge creation in online learning environments. In addition, the features and issues of knowledge creation in these environments are discussed.
.... Subjects were trained to recognize key features of a venue using one of: an immersive virtual environment, a nonimmersive virtual environment, an exocentric virtual model of the venue, and a walkthrough of the actual venue...
In this preliminary report we present ongoing research on intelligent knowledge management (KM) environments supporting communication in a virtual environment. An agent community handles the interaction between knowledge sources of different degrees of formality and knowledge users and creators, based on real user needs and virtual collaboration. Intelligent agents handle knowledge sources and tasks, and a personal assistant provides a personalised, dynamic interface to users. The concept of ...
The concept of organisational knowledge as a valuable strategic asset has become quite popular recently. Increased competition, globalisation and the emergence of new organisational models built on process-based organisational structures require organisations to create, capture, share and apply
Briggs, Hugh C.
The Aerospace and Defense industry is experiencing an increasing loss of knowledge through workforce reductions associated with business consolidation and retirement of senior personnel. Significant effort is being placed on process definition as part of ISO certification and, more recently, CMMI certification. The process knowledge in these efforts represents the simplest of engineering knowledge and many organizations are trying to get senior engineers to write more significant guidelines, best practices and design manuals. A new generation of design software, known as Product Lifecycle Management systems, has many mechanisms for capturing and deploying a wider variety of engineering knowledge than simple process definitions. These hold the promise of significant improvements through reuse of prior designs, codification of practices in workflows, and placement of detailed how-tos at the point of application.
Hellström, Tomas; Husted, Kenneth
This paper argues that knowledge mapping may provide a fruitful avenue for intellectual capital management in academic environments such as university departments. However, while some research has been conducted on knowledge mapping and intellectual capital management in the public sector...... reflect on the uses of knowledge mapping at their departments and institutes. Finally, a number of suggestions are made as to the rationale and conduct of knowledge mapping in academe. Keywords: knowledge mapping, academic, intellectual capital management, focus group, research management...
The paper seeks to take a technical view of access to knowledge resources and highlights the role of libraries in facilitating access to knowledge in the academic environment. It analyzes the processes involved in organization and retrieval of knowledge resources, which encompasses collection building, organization of ...
The importance of teamwork in a multidisciplinary environment and the required work integration and information flow are also discussed, together with the concept of pulling the correct information, at the correct abstraction level, at the right time...
Bakar, Muhamad Shahbani Abu; Jalil, Dzulkafli
The growth of the knowledge economy has made human capital the vital asset of the 21st-century business organization. Arguably, due to its white-collar nature, knowledge-based industry is more favorable than traditional manufacturing business. However, over-dependency on human capital can also be a major challenge, as workers will inevitably leave the company or retire. This situation can create a knowledge gap that may affect the business continuity of the enterprise. Knowledge retention in the corporate environment has been the subject of much research interest. A Learning Management System (LMS) refers to a system that provides the delivery, assessment and management tools for an organization to handle its knowledge repository. Drawing on a proven LMS implemented in an academic environment, this paper proposes an LMS model that can be used to enable peer-to-peer knowledge capture and sharing in a knowledge-based organization. Cloud Enterprise Resource Planning (ERP), referring to an ERP solution in the internet cloud environment, was chosen as the knowledge domain. The complexity of the Cloud ERP business and its knowledge makes it very vulnerable to the knowledge retention problem. This paper discusses how the company's essential knowledge can be retained using an LMS derived from the academic environment in the corporate model.
This report first presents the nuclear and physical-chemical properties of tritium and addresses the notions of bioaccumulation, bio-magnification and remanence. It describes and comments the natural and anthropic origins of tritium (natural production, quantities released in the environment in France by nuclear tests, nuclear plants, nuclear fuel processing plants, research centres). It describes how tritium is measured as a free element (sampling, liquid scintillation, proportional counting, enrichment method) or linked to organic matter (combustion, oxidation, helium-3-based measurement). It discusses tritium concentrations noticed in different parts of the environment (soils, continental waters, sea). It describes how tritium is transferred to ecosystems (transfer of atmospheric tritium to ground ecosystems, and to soft water ecosystems). It discusses existing models which describe the behaviour of tritium in ecosystems. It finally describes and comments toxic effects of tritium on living ground and aquatic organisms
Johnson, Kathy A.
For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.
Boeck, H.; Villa, M.
In this work the authors present the maintenance of nuclear knowledge in an antinuclear environment in Austria. The participation of the TRIGA Mark II research reactor at the Atominstitut in different courses, research projects and education is presented.
Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M
In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.
Baloian, Nelson; Zurita, Gustavo
Knowledge management is a critical activity for any organization. It has been said to be a differentiating factor and an important source of competitiveness if this knowledge is constructed and shared among its members, thus creating a learning organization. Knowledge construction is critical for any collaborative organizational learning environment. Nowadays workers must perform knowledge-creation tasks while in motion, not just in static physical locations; therefore it is also required that knowledge construction activities be performed in ubiquitous scenarios, supported by mobile and pervasive computational systems. These knowledge creation systems should help people in or outside organizations convert their tacit knowledge into explicit knowledge, thus supporting the knowledge construction process. We therefore consider it highly relevant that undergraduate university students learn about the knowledge construction process supported by mobile and ubiquitous computing, a little-explored issue in this field. This paper presents the design, implementation, and evaluation of a system called MCKC (Mobile Collaborative Knowledge Construction), which supports collaborative face-to-face tacit knowledge construction and sharing in ubiquitous scenarios. The MCKC system can be used by undergraduate students to learn how to construct knowledge, allowing them anytime and anywhere to create, make explicit and share their knowledge with their co-learners, using visual metaphors, gestures and sketches to implement the human-computer interface of mobile devices (PDAs).
Good, Benjamin M; Su, Andrew I
Bioinformatics is faced with a variety of problems that require human involvement. Tasks like genome annotation, image analysis, knowledge-base population and protein structure determination all benefit from human input. In some cases, people are needed in vast quantities, whereas in others, we need just a few with rare abilities. Crowdsourcing encompasses an emerging collection of approaches for harnessing such distributed human intelligence. Recently, the bioinformatics community has begun to apply crowdsourcing in a variety of contexts, yet few resources are available that describe how these human-powered systems work and how to use them effectively in scientific domains. Here, we provide a framework for understanding and applying several different types of crowdsourcing. The framework considers two broad classes: systems for solving large-volume 'microtasks' and systems for solving high-difficulty 'megatasks'. Within these classes, we discuss system types, including volunteer labor, games with a purpose, microtask markets and open innovation contests. We illustrate each system type with successful examples in bioinformatics and conclude with a guide for matching problems to crowdsourcing solutions that highlights the positives and negatives of different approaches.
Burton, Brian G.; Martin, Barbara N.
The purpose of this case study was to determine if learning occurred within a 3D virtual learning environment by determining if elements of collaboration and Nonaka and Takeuchi's (1995) knowledge spiral were present. A key portion of this research was the creation of a Virtual Learning Environment. This 3D VLE utilized the Torque Game Engine…
Kim, Mucheol; Rho, Seungmin
Due to the spread of smart devices and the development of network technology, a large number of people can now easily use the web to acquire information and various services. Further, collective intelligence has emerged as a core player in the evolution of technology in the Web 2.0 generation. This means that people who are interested in a specific domain of knowledge can not only make use of the information, but can also participate in the knowledge production process. Since a large volume of knowledge is produced by multiple contributors, it is important to integrate and manage knowledge efficiently. In this paper, we propose a social tagging-based dynamic knowledge management system in crowdsourcing environments. The approach is to categorize and package knowledge from multiple sources in such a way that it easily links to target knowledge.
Attwell, Graham; Cook, John; Ravenscroft, Andrew
The development of Technology Enhanced Learning has been dominated by the education paradigm. However social software and new forms of knowledge development and collaborative meaning making are challenging such domination. Technology is increasingly being used to mediate the development of work process knowledge and these processes are leading to the evolution of rhizomatic forms of community based knowledge development. Technologies can support different forms of contextual knowledge development through Personal Learning Environments. The appropriation or shaping of technologies to develop Personal Learning Environments may be seen as an outcome of learning in itself. Mobile devices have the potential to support situated and context based learning, as exemplified in projects undertaken at London Metropolitan University. This work provides the basis for the development of a Work Orientated MoBile Learning Environment (WOMBLE).
Frisvad, Jeppe Revall; Falster, Peter; Møller, Gert Lykke
To obtain unpredictable social interaction between autonomous agents in real-time environments, we present a simple method for logic-based knowledge exchange: a method able to form new knowledge rather than simply exchange particular rules found in predetermined rule sets. The applicability of our concept is demonstrated through a simple visualization of a real-time 3D environment, where agents seek to persuade opponents to join their team. This is done through cooperation with friends and education of neutral agents.
Hellström, Tomas; Husted, Kenneth
This paper argues that knowledge mapping may provide a fruitful avenue for intellectual capital management in academic environments such as university departments. However, while some research has been conducted on knowledge mapping and intellectual capital management in the public sector, the university has so far not been directly considered for this type of management. The paper initially reviews the functions and techniques of knowledge mapping and assesses these in the light of academic demands. Secondly, the result of a focus group study is presented, where academic leaders were asked to...
Wang, Dong; Deem, Michael W
Migration is a key mechanism for expansion of communities. In spatially heterogeneous environments, rapidly gaining knowledge about the local environment is key to the evolutionary success of a migrating population. For historical human migration, environmental heterogeneity was naturally asymmetric in the north-south (NS) and east-west (EW) directions. We here consider the human migration process in the Americas, modelled as random, asymmetric, modularly correlated environments. Knowledge about the environments determines the fitness of each individual. We present a phase diagram for asymmetry of migration as a function of carrying capacity and fitness threshold. We find that the speed of migration is proportional to the inverse complement of the spatial environmental gradient, and in particular, we find that NS migration rates are lower than EW migration rates when the environmental gradient is higher in the NS direction. Communication of knowledge between individuals can help to spread beneficial knowledge within the population. The speed of migration increases when communication transmits pieces of knowledge that contribute in a modular way to the fitness of individuals. The results for the dependence of migration rate on asymmetry and modularity are consistent with existing archaeological observations. The results for asymmetry of genetic divergence are consistent with patterns of human gene flow. © 2016 The Author(s).
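The paper's claim that migration speed falls as the environmental gradient steepens can be illustrated with a deliberately minimal one-dimensional toy model (the cell count, learning rate, and noise term below are invented for illustration, not taken from the study):

```python
import random

def migration_time(gradient, n_cells=50, learn_rate=1.0, seed=0):
    """Toy model: an individual crosses a 1-D landscape of n_cells cells.
    Entering each new cell requires learning its environment, and the
    learning time grows with the environmental difference (gradient)
    between adjacent cells."""
    rng = random.Random(seed)
    t = 0.0
    for _ in range(n_cells):
        # time to learn the next cell's environment; steeper gradients
        # mean larger differences between cells and hence slower progress
        t += learn_rate * gradient * (1.0 + 0.1 * rng.random())
        t += 1.0  # baseline travel time per cell
    return t

# A shallow "east-west" gradient is crossed faster than a steep
# "north-south" gradient of the same length.
ew = migration_time(gradient=0.2)
ns = migration_time(gradient=1.0)
```

The comparison mirrors, in miniature, the qualitative NS/EW asymmetry the authors report: crossing time scales up with the per-cell environmental difference that must be learned.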
Stiglic, Gregor; Kocbek, Simon; Pernek, Igor; Kokol, Peter
Classification is an important and widely used machine learning technique in bioinformatics. Researchers and other end-users of machine learning software often prefer to work with comprehensible models where knowledge extraction and explanation of the reasoning behind the classification model are possible. This paper presents an extension to an existing machine learning environment and a study on visual tuning of decision tree classifiers. The motivation for this research comes from the need to build effective and easily interpretable decision tree models by a so-called one-button data mining approach where no parameter tuning is needed. To avoid bias in classification, no classification performance measure is used during the tuning of the model, which is constrained exclusively by the dimensions of the produced decision tree. The proposed visual tuning of decision trees was evaluated on 40 datasets containing classical machine learning problems and 31 datasets from the field of bioinformatics. Although we did not expect significant differences in classification performance, the results demonstrate a significant increase in accuracy for less complex, visually tuned decision trees. In contrast to classical machine learning benchmarking datasets, we observe higher accuracy gains in bioinformatics datasets. Additionally, a user study was carried out to confirm the assumption that tree tuning times are significantly lower for the proposed method in comparison to manual tuning of the decision tree. The empirical results demonstrate that by building simple models constrained by predefined visual boundaries, one not only achieves good comprehensibility, but also very good classification performance that does not differ from the usually more complex models built using default settings of the classical decision tree algorithm. In addition, our study demonstrates the suitability of visually tuned decision trees for datasets with binary class attributes and a high number of possibly...
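The abstract's central idea, tuning a decision tree only through structural bounds (its dimensions) rather than through a performance measure, can be sketched with a minimal CART-style learner whose sole tuning knob is `max_depth`. This is an illustrative reimplementation, not the authors' environment:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def build_tree(X, y, max_depth, depth=0):
    """Grow a binary decision tree on single-feature threshold splits,
    constrained only by max_depth -- a purely structural bound, in the
    spirit of tuning tree dimensions rather than accuracy."""
    if depth >= max_depth or len(set(y)) == 1:
        return Counter(y).most_common(1)[0][0]      # leaf: majority class
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [lab for row, lab in zip(X, y) if row[f] <= t]
            right = [lab for row, lab in zip(X, y) if row[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    if best is None:
        return Counter(y).most_common(1)[0][0]
    _, f, t = best
    L = [(row, lab) for row, lab in zip(X, y) if row[f] <= t]
    R = [(row, lab) for row, lab in zip(X, y) if row[f] > t]
    return (f, t,
            build_tree([r for r, _ in L], [l for _, l in L], max_depth, depth + 1),
            build_tree([r for r, _ in R], [l for _, l in R], max_depth, depth + 1))

def predict(node, row):
    while isinstance(node, tuple):      # internal nodes are 4-tuples
        f, t, left, right = node
        node = left if row[f] <= t else right
    return node

# Toy data: two well-separated clusters; a depth-2 bound already suffices.
X = [[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]]
y = [0, 0, 0, 1, 1, 1]
tree = build_tree(X, y, max_depth=2)
```

Shrinking `max_depth` trades accuracy for comprehensibility along exactly the axis the study explores: the bound is visual/structural, and no held-out performance measure enters the tuning loop.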
Nunes, Eunice P dos Santos; Roque, Licínio G; Nunes, Fatima de Lourdes dos Santos
Virtual environments can contribute to the effective learning of various subjects for people of all ages. Consequently, they assist in reducing the cost of maintaining physical structures of teaching, such as laboratories and classrooms. However, the measurement of how learners acquire knowledge in such environments is still incipient in the literature. This article presents a method to evaluate the knowledge acquisition in 3D virtual learning environments (3D VLEs) by using the learner's interactions in the VLE. Three experiments were conducted that demonstrate the viability of using this method and its computational implementation. The results suggest that it is possible to automatically assess learning in predetermined contexts and that some types of user interactions in 3D VLEs are correlated with the user's learning differential.
Vesth, Tammi Camilla; Rasmussen, Jane Lind Nybo; Theobald, Sebastian
with the Joint Genome Institute. The Aspergillus Mine is not intended as a genomic data-sharing service but instead focuses on creating an environment where the results of bioinformatic analysis are made available for inspection. The data and code are public upon request, and figures can be obtained directly from...
Alexander O. Karpov
A cognitively active, research-type learning environment is the fundamental component of the education system for the knowledge society. The purpose of the research is the development of conceptual bases and a constructional model of a cognitively active learning environment that stimulates the creation of new knowledge and its socio-economic application. Research methods include epistemic-didactic analysis of empirical material collected during the study of research environments at schools and universities, together with conceptualization and theoretical modeling of the cognitively active surroundings that provide an infrastructure for the research-type cognitive process. The empirical material summarized in this work was collected in the research-cognitive space of the “Step into the Future” program, one of the most powerful systems of research education in present-day Russia. The article presents key points of the author's concept of generative learning environments and a model of the learning and scientific-innovation environment implemented at Russian schools and universities.
Jones, Anna Marie
The nutrition environment in schools can influence the risk for childhood overweight and obesity, which in turn can have life-long implications for risk of chronic disease. This dissertation aimed to examine the nutrition environment in primary public schools in California with regard to the amount of nutrition education provided in the classroom, the nutrition knowledge of teachers, and the training needs of school nutrition personnel. In order to determine the nutrition knowledge of teachers, a valid and reliable questionnaire was developed. The systematic development process involved cognitive interviews, a mail-based pretest that utilized a random sample of addresses in California, and validity and reliability testing in a sample of university students. Results indicated that the questionnaire had adequate construct validity, internal consistency reliability, and test-retest reliability. Following validation, the questionnaire was used in a study of public school teachers in California to determine the relationship between demographic and classroom characteristics and nutrition knowledge, in addition to barriers to nutrition education and resources used to plan nutrition lessons. Nutrition knowledge was not found to be associated with teaching nutrition in the classroom; however, it was associated with gender, identifying as Hispanic or Latino, and grade-level grouping taught. The most common barriers to nutrition education were time and unrelated subject matter. The most commonly used resources to plan nutrition lessons were Dairy Council of California educational materials. The school nutrition program was the second area of the school nutrition environment to be examined, and the primary focus was to determine the perceived training needs of California school nutrition personnel. Respondents indicated a need for training in topics related to program management; the Healthy, Hunger-Free Kids Act of 2010; nutrition, health and...
Chakraborty, Chiranjib; George Priya Doss, C; Zhu, Hailong; Agoramoorthy, Govindasamy
Hong Kong's bioinformatics sector is attaining new heights in combination with its economic boom and the predominance of the working-age group in its population. Factors such as a knowledge-based and free-market economy have contributed towards a prominent position on the world map of bioinformatics. In this review, we consider the educational measures, landmark research activities, the achievements of bioinformatics companies, and the role of the Hong Kong government in the establishment of bioinformatics as a strength. However, several hurdles remain. New government policies will assist computational biologists to overcome these hurdles and further raise the profile of the field. There is a high expectation that bioinformatics in Hong Kong will be a promising area for the next generation.
Schweighofer, Karl; Pohorille, Andrew
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Aim/Purpose: The purpose of the study is to provide foundational research to exemplify how knowledge construction takes place in microblogging-based learning environments, to understand learner interaction representing the knowledge construction process, and to analyze learner perception, thereby suggesting a model of delivery for microblogging. Background: Up-and-coming digital-native learners crave the real-time, multimedia, global interconnectedness of microblogging, yet there has been limited research that specifically proposes a working model of Twitter's classroom integration for designers and practitioners without bundling it in with other social media tools. Methodology: This semester-long study utilized a case-study research design via a multi-dimensional approach in a hybrid classroom with both face-to-face and online environments. Tweets were collected from four types of activities and coded based on content within their contextual setting. Twenty-four college students participated in the study. Contribution: The findings shed light on the process of knowledge construction in microblogging and reveal key types of knowledge manifested during learning activities. The study also proposes a model for delivering microblogging to formal learning environments applicable to various contexts for designers and practitioners. Findings: There are distinct learner interaction patterns representing the process of knowledge construction in microblogging activities, ranging from low-order to high-order cognitive tasks. Students generally were in favor of the Twitter integration in this study. Recommendations for Practitioners: The three central activities (exploring hashtags, discussing topics, and participating in live chats), along with the backchannel activity, formulate a working model that represents the sequential process of Twitter integration into classrooms. Impact on Society: Microblogging allows learners omnichannel access while hashtags...
Muhammad Kamarul Kabilan
This study investigates the effectiveness of using Facebook in enhancing vocabulary knowledge among Community College students. Thirty-three (33) Community College students are exposed to the use of Facebook as an environment for learning and enhancing their English vocabulary. They are given a pre-test and a post-test, and the findings indicate that students perform significantly better in the post-test compared to the pre-test. It appears that Facebook could be considered as a supplementary learning environment, learning platform, or learning tool, with meaningful and engaging activities that require students to collaborate, network, and function as a community of practice, particularly for introverted students who have low proficiency levels and low self-esteem.
Hassanien, Aboul Ella; Al-Shammari, Eiman Tamah; Ghali, Neveen I
Computational intelligence (CI) is a well-established paradigm: current systems have many of the characteristics of biological computers and are capable of performing a variety of tasks that are difficult to do using conventional techniques. It is a methodology involving adaptive mechanisms and/or an ability to learn that facilitates intelligent behavior in complex and changing environments, such that the system is perceived to possess one or more attributes of reason, such as generalization, discovery, association and abstraction. The objective of this article is to present to the CI and bioinformatics research communities some of the state of the art in CI applications to bioinformatics and to motivate research in new trend-setting directions. In this article, we present an overview of CI techniques in bioinformatics. We show how CI techniques, including neural networks, restricted Boltzmann machines, deep belief networks, fuzzy logic, rough sets, evolutionary algorithms (EA), genetic algorithms (GA), swarm intelligence, artificial immune systems and support vector machines, can be successfully employed to tackle various problems such as gene expression clustering and classification, protein sequence classification, gene selection, DNA fragment assembly, multiple sequence alignment, and protein function and structure prediction. We discuss some representative methods to provide inspiring examples of how CI can be utilized to address these problems and how bioinformatics data can be characterized by CI. Challenges to be addressed and future directions of research are also presented, and an extensive bibliography is included. Copyright © 2013 Elsevier Ltd. All rights reserved.
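As one concrete instance of the evolutionary algorithms mentioned above, a minimal genetic algorithm for gene (feature) selection can be sketched as follows. The fitness function is a hypothetical stand-in for a classifier's cross-validated accuracy, and all parameters are illustrative:

```python
import random

def ga_feature_select(fitness, n_genes, pop_size=30, generations=60, seed=1):
    """Minimal elitist genetic algorithm over bit-strings: each individual
    encodes a candidate gene subset; evolve by truncation selection,
    one-point crossover, and point mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitism: keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_genes)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:
                i = rng.randrange(n_genes)
                child[i] ^= 1                     # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy objective: reward selecting the truly informative genes {0, 3, 7}
# and penalise noise genes (a hypothetical proxy for classifier accuracy).
informative = {0, 3, 7}
def fitness(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    return 3 * len(chosen & informative) - len(chosen - informative)

best = ga_feature_select(fitness, n_genes=10)
```

The same skeleton applies to the gene-selection problem the review discusses; in practice the fitness call wraps a classifier evaluated on the selected expression columns.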
Gelbart, Hadas; Ben-Dor, Shifra; Yarden, Anat
Despite the central place held by bioinformatics in modern life sciences and related areas, it has only recently been integrated to a limited extent into high-school teaching and learning programs. Here we describe the assessment of a learning environment entitled ‘Bioinformatics in the Service of Biotechnology’. Students’ learning outcomes and attitudes toward the bioinformatics learning environment were measured by analyzing their answers to questions embedded within the activities, questionnaires, interviews and observations. Students’ difficulties and knowledge acquisition were characterized based on four categories: the required domain-specific knowledge (declarative, procedural, strategic or situational), the scientific field that each question stems from (biology, bioinformatics or their combination), the associated cognitive-process dimension (remember, understand, apply, analyze, evaluate, create) and the type of question (open-ended or multiple choice). Analysis of students’ cognitive outcomes revealed learning gains in bioinformatics and related scientific fields, as well as appropriation of the bioinformatics approach as part of the students’ scientific ‘toolbox’. For students, questions stemming from the ‘old world’ biology field and requiring declarative or strategic knowledge were harder to deal with. This stands in contrast to their teachers’ prediction. Analysis of students’ affective outcomes revealed positive attitudes toward bioinformatics and the learning environment, as well as their perception of the teacher’s role. Insights from this analysis yielded implications and recommendations for curriculum design, classroom enactment, teacher education and research. For example, we recommend teaching bioinformatics in an integrative and comprehensive manner, through an inquiry process, and linking it to the wider science curriculum. PMID:26801769
Vicari, Rosa M; Flores, Cecilia D; Silvestre, André M; Seixas, Louise J; Ladeira, Marcelo; Coelho, Helder
AMPLIA is a multi-agent intelligent learning environment designed to support training of diagnostic reasoning and modelling of domains with complex and uncertain knowledge. AMPLIA focuses on the medical area. It is a system that deals with uncertainty under the Bayesian network approach, where learner-modelling tasks consist of creating a Bayesian network for a problem the system presents. The construction of a network involves qualitative and quantitative aspects. The qualitative part concerns the network topology, that is, the causal relations among the domain variables. After it is ready, the quantitative part is specified: the distribution of conditional probabilities of the variables represented. A negotiation process, managed by an intelligent MediatorAgent, treats the differences in topology and probability distribution between the model the learner built and the one built into the system. This negotiation process occurs between the agent that represents the expert knowledge domain (DomainAgent) and the agent that represents the learner's knowledge (LearnerAgent).
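The qualitative/quantitative split described above can be sketched in plain Python: the topology is a parent map, and the quantitative layer is a conditional probability table per node. The two-node network and all numbers below are invented for illustration, not AMPLIA's domain model:

```python
# Qualitative part: network topology as a child -> parents map.
parents = {"Disease": [], "Test": ["Disease"]}

# Quantitative part: conditional probability tables, keyed by
# (node_value, tuple_of_parent_values). Numbers are illustrative only.
cpt = {
    "Disease": {(True, ()): 0.01, (False, ()): 0.99},
    "Test": {
        (True, (True,)): 0.95, (False, (True,)): 0.05,
        (True, (False,)): 0.10, (False, (False,)): 0.90,
    },
}

def joint(assignment):
    """P(assignment) = product over nodes of P(node | its parents)."""
    p = 1.0
    for node, ps in parents.items():
        parent_values = tuple(assignment[q] for q in ps)
        p *= cpt[node][(assignment[node], parent_values)]
    return p

# P(Test=True) by marginalising over Disease, then Bayes' rule for
# P(Disease=True | Test=True) -- the kind of query against which a
# learner-built network could be checked.
p_pos = sum(joint({"Disease": d, "Test": True}) for d in (True, False))
p_disease_given_pos = joint({"Disease": True, "Test": True}) / p_pos
```

Comparing a learner's `parents` map against the expert's captures the topological (qualitative) differences; comparing the `cpt` entries captures the probability-distribution (quantitative) differences that the negotiation process mediates.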
Fuller, Jonathan C; Khoueiry, Pierre; Dinkel, Holger; Forslund, Kristoffer; Stamatakis, Alexandros; Barry, Joseph; Budd, Aidan; Soldatos, Theodoros G; Linssen, Katja; Rajput, Abdul Mateen
The third Heidelberg Unseminars in Bioinformatics (HUB) was held on 18th October 2012, at Heidelberg University, Germany. HUB brought together around 40 bioinformaticians from academia and industry to discuss the 'Biggest Challenges in Bioinformatics' in a 'World Café' style event.
Tilton Susan C
Background: MicroRNAs (miRNAs) are noncoding RNAs that direct post-transcriptional regulation of protein coding genes. Recent studies have shown miRNAs are important for controlling many biological processes, including nervous system development, and are highly conserved across species. Given their importance, computational tools are necessary for analysis, interpretation and integration of high-throughput (HTP) miRNA data in an increasing number of model species. The Bioinformatics Resource Manager (BRM) v2.3 is a software environment for data management, mining, integration and functional annotation of HTP biological data. In this study, we report recent updates to BRM for miRNA data analysis and cross-species comparisons across datasets. Results: BRM v2.3 has the capability to query predicted miRNA targets from multiple databases, retrieve potential regulatory miRNAs for known genes, integrate experimentally derived miRNA and mRNA datasets, perform ortholog mapping across species, and retrieve annotation and cross-reference identifiers for an expanded number of species. Here we use BRM to show that developmental exposure of zebrafish to 30 uM nicotine from 6–48 hours post fertilization (hpf) results in behavioral hyperactivity in larval zebrafish and alteration of putative miRNA gene targets in whole embryos at developmental stages that encompass early neurogenesis. We show typical workflows for using BRM to integrate experimental zebrafish miRNA and mRNA microarray datasets with example retrievals for zebrafish, including pathway annotation and mapping to human orthologs. Functional analysis of differentially regulated (p<0.05) gene targets in BRM indicates that nicotine exposure disrupts genes involved in neurogenesis, possibly through misregulation of nicotine-sensitive miRNAs. Conclusions: BRM provides the ability to mine complex data for identification of candidate miRNAs or pathways that drive phenotypic outcome and, therefore, is a useful hypothesis generation tool for systems biology. The miRNA workflow in BRM allows for efficient processing of multiple miRNA and mRNA datasets in a single...
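A BRM-style integration workflow — joining differentially expressed miRNAs to their predicted targets, cross-checking against the mRNA dataset, and mapping hits to human orthologs — reduces, at its core, to a few dictionary joins. All identifiers below are invented placeholders, not real BRM retrievals:

```python
# Hypothetical inputs standing in for BRM retrievals (identifiers invented).
mirna_hits = ["dre-miR-a", "dre-miR-b"]        # differentially expressed miRNAs
predicted_targets = {                           # miRNA -> predicted zebrafish targets
    "dre-miR-a": ["geneX", "geneY"],
    "dre-miR-b": ["geneY", "geneZ"],
}
mrna_down = {"geneY", "geneZ"}                  # down-regulated mRNAs, same study
ortholog = {"geneY": "HUMY", "geneZ": "HUMZ"}   # zebrafish -> human ortholog map

# Integration step: keep predicted targets supported by the mRNA data
# (a target repressed by an up-regulated miRNA should appear down-regulated).
supported = {g for m in mirna_hits
               for g in predicted_targets[m]
               if g in mrna_down}

# Ortholog-mapping step: carry the supported targets into human identifiers
# for downstream pathway annotation.
human_candidates = sorted(ortholog[g] for g in supported)
```

The real system adds database-backed target prediction and annotation retrieval around these joins, but the hypothesis-generation logic — intersecting evidence layers, then mapping across species — has this shape.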
Fatumo, Segun A.; Adoga, Moses P.; Ojo, Opeolu O.; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi
Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries. PMID:24763310
Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.
Huang, Hsiu-Mei; Liaw, Shu-Sheng
In today's competitive global economy characterized by knowledge acquisition, the concept of knowledge management has become increasingly prevalent in academic and business practices. Knowledge creation is an important factor and remains a source of competitive advantage over knowledge management. Information technology facilitates knowledge…
Tee, Meng Yew; Karney, Dennis
Research on knowledge cultivation often focuses on explicit forms of knowledge. However, knowledge can also take a tacit form--a form that is often difficult or impossible to tease out, even when it is considered critical in an educational context. A review of the literature revealed that few studies have examined tacit knowledge issues in online…
Full Text Available VGE geographic knowledge refers to the abstract and repeatable geo-information related to geo-science problems, geographical phenomena, and geographical laws supported by VGE. It includes expert experience, evolution rules, simulation processes, and prediction results in VGE. This paper proposes a conceptual framework for VGE knowledge engineering in order to effectively manage and use geographic knowledge in VGE. Our approach relies on previously well-established theories of knowledge engineering and VGE. The main contributions of this report are the following: (1) clearly defined concepts of VGE knowledge and VGE knowledge engineering; (2) features that distinguish VGE knowledge from common knowledge; (3) a geographic knowledge evolution process that helps users rapidly acquire knowledge in VGE; and (4) a conceptual framework for VGE knowledge engineering providing the supporting methodology for building an intelligent VGE. This conceptual framework systematically describes the related VGE knowledge theories and key technologies, which will promote the rapid transformation from geodata to geographic knowledge and further reduce the gap between the data explosion and knowledge absence.
Khalid Abdul Wahid
Full Text Available The purpose of this research is to investigate the impact of organizational knowledge factors and market knowledge factors on knowledge creation among Thai innovative companies. 464 questionnaires were distributed to Thai innovative companies registered under the National Innovation Agency (NIA), and 217 were returned. Structural Equation Modelling (SEM) is used to determine the effect of two sets of knowledge creation sources: organizational knowledge (social interaction, organizational routines, and information systems) and market knowledge (customer orientation, competitor orientation, and supplier orientation) on knowledge creation (product and service outcome, process outcome, and market outcome). The results indicated that the integration of organizational knowledge and market knowledge is the main driver of knowledge creation. Furthermore, the findings suggest that social interaction and customer orientation are the most significant predictors of knowledge creation. This study provides an empirical analysis of the importance of different sources of knowledge in the knowledge creation process in SMEs and its impact on companies’ innovative knowledge outcomes.
Morales, Hernán F; Giovambattista, Guillermo
We have developed BioSmalltalk, a new environment system for pure object-oriented bioinformatics programming. Adaptive end-user programming systems tend to become more important for discovering biological knowledge, as is demonstrated by the emergence of open-source programming toolkits for bioinformatics in the past years. Our software is intended to bridge the gap between bioscientists and rapid software prototyping while preserving the possibility of scaling to whole-system biology applications. BioSmalltalk performs better in terms of execution time and memory usage than Biopython and BioPerl for some classical situations. BioSmalltalk is cross-platform and freely available (MIT license) through the Google Project Hosting at http://code.google.com/p/biosmalltalk firstname.lastname@example.org Supplementary data are available at Bioinformatics online.
Chen, Xiaoling; Chang, Jeffrey T
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system composed of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. email@example.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: firstname.lastname@example.org
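The backwards-chaining idea in this abstract can be illustrated with a toy engine: rules state which input data type each tool consumes and which output type it produces, and the planner chains backwards from a requested result to the available raw data. The data types and tool names below are invented for illustration; they are not BETSY's actual rule base.

```python
RULES = [
    # (tool, consumes, produces)
    ("align_reads",    "fastq",        "bam"),
    ("count_features", "bam",          "count_matrix"),
    ("diff_expr",      "count_matrix", "deg_table"),
]

def plan(goal, available, rules=RULES):
    """Return an ordered list of tools that turns an available data
    type into the goal type, or None if no chain exists."""
    if goal in available:
        return []
    for tool, consumes, produces in rules:
        if produces == goal:
            sub = plan(consumes, available, rules)
            if sub is not None:
                return sub + [tool]
    return None

print(plan("deg_table", {"fastq"}))
# -> ['align_reads', 'count_features', 'diff_expr']
```

A real system like BETSY additionally reasons over attributes of the data (organism, normalization state, file format, and so on), but the core inference pattern, recursing from goal to prerequisites, is the one sketched here.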
Kampf, Constance; Islas Sedano, Carolina
with technology in knowledge management systems. So, is knowledge communication a process that can be technologically enabled? In this presentation, we explore the possibilities of socio-technical interaction for knowledge communication through the use of a mobile phone game as a knowledge communication tool. Our research focuses on the use of this mobile phone game as a case study for a Project Management course given simultaneously at the Aarhus School of Business and the Helsinki School of Economics. The students used knowledge communication and knowledge management theories as part of their project conception and project planning processes for situating the mobile game in a social knowledge communication context such as a museum exhibit. We will discuss the HSE students' use of theories and the reception of their project ideas by clients, as well as the ASB students' response to the case of implementing…
A. E. Zhukov
Full Text Available Knowledge, along with skills and abilities, forms the concept of professional qualification. Knowledge management, as a function and type of enterprise management activity, includes making knowledge more practically valuable and creating an active learning environment. Acquisition and assimilation of new knowledge include the steps discussed in the article: definition, acquisition, selection, storage, distribution, use, development, and implementation.
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
Cushman, Mike; Cornford, Tony; Venters, Will
This paper proposes a sociology of knowledge approach as a basis for understanding the potential of knowledge management for the work of a complex interorganisational domain – the UK construction industry – with the specific aim of increasing the sustainability of the processes and products of this industry. To this end, soft systems methodology is introduced as a method of conceptualising the industry’s knowledge environment and thus moving towards technological interventions which aim to i...
Lecocq, R; Gauvin, M
...) practices, namely on Knowledge Creation, Learning and Collaboration, the present work performs a detailed comparison of the states of these practices between the different military environments...
Mihalas, George I; Tudor, Anca; Paralescu, Sorin; Andor, Minodora; Stoicu-Tivadar, Lacramioara
The paper refers to our methodology and experience in establishing the content of the course in bioinformatics introduced to the school of "Information Systems in Healthcare" (SIIS), master level. The syllabi of both lectures and laboratory works are presented and discussed.
...) in training dismounted soldiers. This experiment investigated the effects of different VE parameters on spatial knowledge acquisition by comparing learning in advanced VE, restricted VE, and standard map training...
Latoschik, Marc Erich; Biermann, Peter; Wachsmuth, Ipke; Butz, Andreas; Fisher, Brian; Krüger, Antonio; Olivier, Patrick
This article describes the integration of knowledge based techniques into simulative Virtual Reality (VR) applications. The approach is motivated using multimodal Virtual Construction as an example domain. An abstract Knowledge Representation Layer (KRL) is proposed which is expressive enough to define all necessary data for diverse simulation tasks and which additionally provides a base formalism for the integration of Artificial Intelligence (AI) representations. The KRL supports two differ...
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. Copyright © 2015 Elsevier Inc. All rights reserved.
Jones, Anna Marie
The nutrition environment in schools can influence the risk for childhood overweight and obesity, which in turn can have life-long implications for risk of chronic disease. This dissertation aimed to examine the nutrition environment in primary public schools in California with regards to the amount of nutrition education provided in the…
J. Dul (Jan); C. Ceylan (Canan); F.P.H. Jaspers (Ferdinand)
The present study examines the effect of the physical work environment on the creativity of knowledge workers, compared with the effects of creative personality and the social-organizational work environment. Based on data from 274 knowledge workers in 27 SMEs, we conclude that…
Johnson, James E.; McGillicuddy-Delisi, Ann
Investigated relationships among socioeconomic status, family constellation, parental practices, and preschool-age children's awareness of and rationales for rules and conventions. Children's knowledge of rules and conventions was related to social class variables. Parental behaviors were found to be better predictors of the level of children's…
Wei, Dongqing; Zhao, Tangzhen; Dai, Hao
This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structure bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform
Evens, Marie; Larmuseau, Charlotte; Dewaele, Katrien; Van Craesbeek, Leen; Elen, Jan; Depaepe, Fien
This study examines the effects of an online learning environment on preservice teachers' pedagogical content knowledge (PCK), content knowledge (CK) (related to French in primary teacher education), and pedagogical knowledge (PK) in a quasi-experimental design. More specifically, the following research question is addressed: Is a systematically…
Adams, Nan B.
Society's relationship to knowledge and what is considered to be factual is changing. Effective teaching models focused on leveraging strategic control of the knowledge from teachers to learners in virtual learning environments are critical to insuring a positive path is charted. The Knowledge Development Model serves as the guide for determining…
Burr, Tom L [Los Alamos National Laboratory]
Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
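The combinatorial explosion the abstract refers to can be made concrete: a standard result in phylogenetics is that the number of distinct unrooted binary tree topologies on n labelled OTUs is the double factorial (2n-5)!! for n >= 3.

```python
def num_unrooted_trees(n):
    """Count unrooted binary tree topologies on n labelled OTUs:
    (2n-5)!! = 3 * 5 * ... * (2n-5) for n >= 3, and 1 otherwise."""
    if n < 3:
        return 1
    count = 1
    for k in range(3, 2 * n - 4, 2):  # odd factors 3, 5, ..., 2n-5
        count *= k
    return count

for n in (5, 10, 20):
    print(n, num_unrooted_trees(n))
# n=10 already gives 2,027,025 topologies; n=20 gives about 2.2e20,
# which is why exhaustive tree search is infeasible beyond small samples.
```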
Full Text Available This article discusses the understanding and behavior of incubated entrepreneurs of software companies (development of information systems; provision of information technology services, hardware and software; and advice on the implementation of administrative management systems) in relation to knowledge: how they obtain knowledge, facilitate its use, and make media available. Initially, a fundamental concept for the study was sought to ground and contextualize the issue, making the connection with knowledge from the academic and business points of view. An analysis was then conducted, based on applied research with executives, of their understanding of the concept of knowledge and of the way in which it operates (professionally and personally), concluding with the importance of knowledge for business growth starting from the person.
Full Text Available The proposed goal-oriented knowledge acquisition and assessment are based on a flexible educational model and allow adaptive control of the enhanced learning process according to the requirements of the student's knowledge level, state of cognition, and subject learning history. The enhanced learner knowledge model specifies how the cognition state of the user will be achieved step by step. The use-case actions definition is a starting point of the specification, which depends on different levels of learning scenarios and user cognition subgoals. The use-case actions specification is used as a basis to set the requirements for the service software specification and the attributes of learning objects, respectively. The paper presents the enhanced architecture of the student self-evaluation and on-line assessment system TestTool. The system is explored as an assessment engine capable of supporting and improving the individualized, intelligent, goal-oriented, self-instructional and simulation-based mode of learning, grounded in the GRID distributed service architecture.
The author investigated the interaction effect of immersive virtual reality (VR) in the classroom. The objective of the project was to develop and provide a low-cost, scalable, and portable VR system containing purposely designed and developed immersive virtual learning environments for the US Army. The purpose of the mixed design experiment was…
Cox, C.W.J.; Duursma, C.M.; Pernot, C.E.E.
In the last decade complaints from office-workers have increased. The phenomenon "Sick Building Syndrome" (SBS) is often mentioned. It is estimated that 20 to 30% of the existing building stock in Europe and North America are problem buildings. Work related complaints on the indoor environment of
The topic of this paper is play-like learning as it occurs when technology-based learning environments are invited into the classroom. Observations of 5th grade classes playing with Lego Robolab are used to illustrate that different ways of learning become visible when digital technology is employed… and more research needs to be done before the cocktail of digital technology, play, and schooling can be shaken more widely.
Garnier-Laplace, J.; Adam-Guillermin, C.; Antonelli, C.; Beaugelin-Seiller, K.; Boyer, P.; Bailly du Bois, P.; Fievet, B.; Masson, M.; Gariel, J.C.; Pierrard, O.; Renaud, P.; Roussel-Debet, S.; Gurrarian, R.; Le Dizes-Maurel, S.; Maro, D.
The authors first outline that tritium is, along with carbon-14, the main radionuclide in France in terms of activity released by nuclear facilities, whether in gaseous or liquid releases. They describe its behaviour, its various forms in the atmosphere and in ecosystems, and its transfer to plants (results of surveys are evoked which seem to demonstrate that there is no significant bio-accumulation). They comment on the current knowledge and results of surveys about the presence of tritium in land and sea animals, and about the toxicity of tritium for non-human organisms
Full Text Available Flow cytometry bioinformatics is the application of bioinformatics to flow cytometry data, which involves storing, retrieving, organizing, and analyzing flow cytometry data using extensive computational resources and tools. Flow cytometry bioinformatics requires extensive use of and contributes to the development of techniques from computational statistics and machine learning. Flow cytometry and related methods allow the quantification of multiple independent biomarkers on large numbers of single cells. The rapid growth in the multidimensionality and throughput of flow cytometry data, particularly in the 2000s, has led to the creation of a variety of computational analysis methods, data standards, and public databases for the sharing of results. Computational methods exist to assist in the preprocessing of flow cytometry data, identifying cell populations within it, matching those cell populations across samples, and performing diagnosis and discovery using the results of previous steps. For preprocessing, this includes compensating for spectral overlap, transforming data onto scales conducive to visualization and analysis, assessing data for quality, and normalizing data across samples and experiments. For population identification, tools are available to aid traditional manual identification of populations in two-dimensional scatter plots (gating, to use dimensionality reduction to aid gating, and to find populations automatically in higher dimensional space in a variety of ways. It is also possible to characterize data in more comprehensive ways, such as the density-guided binary space partitioning technique known as probability binning, or by combinatorial gating. Finally, diagnosis using flow cytometry data can be aided by supervised learning techniques, and discovery of new cell types of biological importance by high-throughput statistical methods, as part of pipelines incorporating all of the aforementioned methods. Open standards, data
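The first preprocessing step named in this abstract, compensating for spectral overlap, can be sketched for two channels. The standard model is observed = true × S, where S is the spillover matrix, so the true signal is recovered by multiplying the observed values by the inverse of S. The spillover fractions below are made-up illustrative numbers, not values from any instrument.

```python
def invert_2x2(m):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def compensate(event, spill):
    """Recover the true two-channel signal for one event:
    event (1x2 row vector) multiplied by inv(spill)."""
    inv = invert_2x2(spill)
    return [event[0] * inv[0][j] + event[1] * inv[1][j] for j in range(2)]

# Hypothetical spillover: 10% of channel 1 bleeds into channel 2,
# and 5% of channel 2 bleeds into channel 1.
S = [[1.0, 0.10],
     [0.05, 1.0]]

observed = [101.0, 30.0]  # one event, two detectors
print([round(v, 6) for v in compensate(observed, S)])
# -> [100.0, 20.0]
```

Production tools apply the same linear algebra to a full n-channel spillover matrix estimated from single-stain controls; the 2x2 case just makes the arithmetic easy to check by hand.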
Nielsen, Jørgen Lerche; Meyer, Kirsten
and creation processes. The aim is to obtain a deeper comprehension of which factors determine whether the use of information technology becomes a success or a failure in relation to knowledge sharing and creation. The paper is based on three previous studies investigating the use of information technology...
Watanuki, Keiichi; Kojima, Kazuyuki
The environment in which Japanese industry has achieved great respect is changing tremendously due to the globalization of world economies, while Asian countries are undergoing economic and technical development as well as benefiting from the advances in information technology. For example, in the design of custom-made casting products, a designer who lacks knowledge of casting may not be able to produce a good design. In order to obtain a good design and manufacturing result, it is necessary to equip the designer and manufacturer with a support system related to casting design, or a so-called knowledge transfer and creation system. This paper proposes a new virtual reality based knowledge acquisition and job training system for casting design, which is composed of the explicit and tacit knowledge transfer systems using synchronized multimedia and the knowledge internalization system using portable virtual environment. In our proposed system, the education content is displayed in the immersive virtual environment, whereby a trainee may experience work in the virtual site operation. Provided that the trainee has gained explicit and tacit knowledge of casting through the multimedia-based knowledge transfer system, the immersive virtual environment catalyzes the internalization of knowledge and also enables the trainee to gain tacit knowledge before undergoing on-the-job training at a real-time operation site.
Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu
Recent advances in high-throughput technologies dramatically increase biological data generation. However, many research groups lack computing facilities and specialists. This is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most of the existing live Linux distributions for bioinformatics limiting their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide well-supported computing environment tailored for bioinformatics research, reducing duplication of efforts in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.
BASIC QUALIFICATIONS To be considered for this position, you must minimally meet the knowledge, skills, and abilities listed below: Bachelor’s degree in a life science/bioinformatics/math/physics/computer-related field from an accredited college or university according to the Council for Higher Education Accreditation (CHEA). (Additional qualifying experience may be substituted for the required education.) Foreign degrees must be evaluated for U.S. equivalency. In addition to the educational requirements, a minimum of five (5) years of progressively responsible relevant experience. Must be able to obtain and maintain a security clearance. PREFERRED QUALIFICATIONS Candidates with these desired skills will be given preferential consideration: A Master’s or PhD degree in any quantitative science is preferred. Commitment to solving biological problems and communicating these solutions. Ability to multi-task across projects. Experience in submitting data sets to public repositories. Management of large genomic data sets including integration with data available from public sources. Prior customer-facing role. Record of scientific achievements including journal publications and conference presentations. Expected Competencies: Deep understanding of and experience in processing high-throughput biomedical data: data cleaning, normalization, analysis, interpretation, and visualization. Ability to understand and analyze data from complex experimental designs. Proficiency in at least two of the following programming languages: Perl, Python, R, Java, and C/C++. Experience in at least two of the following areas: metagenomics, ChIP-Seq, RNA-Seq, Exome-Seq, DHS-Seq, microarray analysis. Familiarity with public databases: NCBI, Ensembl, TCGA, cBioPortal, Broad FireHose. Knowledge of working in a cluster environment.
Martine R. Haas
Knowledge gathering can create problems as well as benefits for project teams in work environments characterized by overload, ambiguity, and politics. This paper proposes that the value of knowledge gathering in such environments is greater under conditions that enhance team processing, sensemaking, and buffering capabilities. The hypotheses were tested using independent quality ratings of 96 projects and survey data from 485 project-team members collected during a multimethod field study. Th...
Nomi L Harris
Full Text Available The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Harris, Nomi L; Cock, Peter J A; Lapp, Hilmar; Chapman, Brad; Davey, Rob; Fields, Christopher; Hokamp, Karsten; Munoz-Torres, Monica
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Tilchin, Oleg; Kittany, Mohamed
In this paper we propose an adaptive approach to managing the development of students' knowledge in a comprehensive project-based learning (PBL) environment. Subject study is realized by two-stage PBL. It shapes an adaptive knowledge management (KM) process and promotes the correct balance between personalized and collaborative learning. The…
Ojo, T.; Bonner, J.; Hodges, B.; Maidment, D.; Montagna, P.; Minsker, B.
conserved through a strong vortex spawning from the ~20 m deep ship channel that runs east-west along the northernmost portion of the bay. HF radar observations, however, do not indicate this vortical structure, suggesting that water conservation is maintained through vertical eddies captured by 3D current measurements using Acoustic Doppler profilers. This is an example of advanced sensors indicating the need for more advanced modeling, leading us toward the development of a 3D hydrodynamic model for the bay. The geomorphology of the bay (shallow with respect to the deep ship channel) poses a challenge in this model development. Knowledge of stratification in this system of bays has been increased through this study. Measurements taken using the instrument suite deployed by our research facility were coupled with (observed and predicted) hydrodynamic and meteorological data, providing new insight into stratification in Corpus Christi Bay. The bay was observed to cycle through quiescent and well-mixed periods under strong wind influence, with the onset of hypoxia during the summer months (June through August). Quiescent periods, when combined with tidal cycling and inland horizontal gradient propagation (from adjoining water bodies as described), lead to conditions favorable to stratification.
Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...
Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B
The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention. © 2011 Blackwell Publishing Ltd.
T.-C. Liu (Tzu-Chien); Y.-C. Lin (Yi-Chun); G.W.C. Paas (Fred)
Two experiments examined the effects of prior knowledge on learning from different compositions of multiple representations in a mobile learning environment on plant leaf morphology for primary school students. Experiment 1 compared the learning effects of a mobile learning environment
Tolvanen, Martti; Vihinen, Mauno
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
Kortsarts, Yana; Morris, Robert W.; Utell, Janine M.
Bioinformatics is a relatively new interdisciplinary field that integrates computer science, mathematics, biology, and information technology to manage, analyze, and understand biological, biochemical and biophysical information. We present our experience in teaching an interdisciplinary course, Introduction to Bioinformatics, which was developed…
Background: Law firms in Botswana offer a particularly interesting context to explore the effects of transition in the knowledge economy. Acquiring and leveraging knowledge effectively in law firms through knowledge management can result in competitive advantage; yet the adoption of this approach remains in its infancy. Objectives: This article investigates the factors that will motivate the adoption of knowledge management in law firms in Botswana, and creates an awareness of the potential benefits of knowledge management in these firms. Method: The article uses both quantitative and qualitative research methods and the survey research design. A survey was performed on all 115 registered law firms and 217 lawyers in Botswana. Interviews were conducted with selected lawyers for more insight. Results: Several changes in the legal environment have motivated law firms to adopt knowledge management. Furthermore, lawyers appreciate the potential benefits of knowledge management. Conclusion: With the rise of the knowledge-based economy, coupled with the pressures faced by the legal industry in recent years, law firms in Botswana can no longer afford to rely on the traditional methods of managing knowledge. Knowledge management will, therefore, enhance the cost effectiveness of these firms. Strategic knowledge management certainly helps to prepare law firms in Botswana to be alive to the fact that the systematic harnessing of legal knowledge is no longer a luxury, but an absolute necessity in the knowledge economy. It will also provide an enabling business environment for private sector development and growth and, therefore, facilitate Botswana’s drive towards the knowledge-based economy.
Mishima, Hiroyuki; Sasaki, Kensaku; Tanaka, Masahiro; Tatebe, Osamu; Yoshiura, Koh-Ichiro
In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error.Here, we show the application of a scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. We implemented the Pwrake workflows to process next generation sequencing data using the Genomic Analysis Toolkit (GATK) and Dindel. GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate modularity of the GATK and Dindel workflows. Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows. Furthermore, readability and maintainability of rakefiles
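The two development phases described in the abstract can be sketched with plain Rake, the Ruby build tool that Pwrake extends with parallel execution. This is a minimal illustration only: the task names, the `log` bookkeeping, and the command placeholders are hypothetical, not taken from the paper's GATK or Dindel workflows.

```ruby
require 'rake'
include Rake::DSL

log = []

# Workflow definition phase: declare tasks and their dependencies.
# In a real rakefile each body would shell out to an aligner or caller.
task :align do
  log << 'align reads'
end

task :call_variants => :align do
  log << 'call variants'
end

# Running the terminal task executes its prerequisites first.
Rake::Task[:call_variants].invoke
```

The parameter adjustment phase then amounts to editing task bodies (or the values they reference) while the dependency graph above stays fixed, which is the separation of concerns the abstract describes.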
Saslis-Lagoudakis, C Haris; Hawkins, Julie A; Greenhill, Simon J; Pendry, Colin A; Watson, Mark F; Tuladhar-Douglas, Will; Baral, Sushim R; Savolainen, Vincent
Traditional knowledge is influenced by ancestry, inter-cultural diffusion and interaction with the natural environment. It is problematic to assess the contributions of these influences independently because closely related ethnic groups may also be geographically close, exposed to similar environments and able to exchange knowledge readily. Medicinal plant use is one of the most important components of traditional knowledge, since plants provide healthcare for up to 80% of the world's population. Here, we assess the significance of ancestry, geographical proximity of cultures and the environment in determining medicinal plant use for 12 ethnic groups in Nepal. Incorporating phylogenetic information to account for plant evolutionary relatedness, we calculate pairwise distances that describe differences in the ethnic groups' medicinal floras and floristic environments. We also determine linguistic relatedness and geographical separation for all pairs of ethnic groups. We show that medicinal uses are most similar when cultures are found in similar floristic environments. The correlation between medicinal flora and floristic environment was positive and strongly significant, in contrast to the effects of shared ancestry and geographical proximity. These findings demonstrate the importance of adaptation to local environments, even at small spatial scale, in shaping traditional knowledge during human cultural evolution.
The human microbiome has received much attention because many studies have reported that the human gut microbiome is associated with several diseases. The very large datasets that are produced by these kinds of studies mean that bioinformatics approaches are crucial for their analysis. Here, we systematically reviewed bioinformatics tools that are commonly used in microbiome research, including a typical pipeline and software for sequence alignment, abundance profiling, enterotype determination, taxonomic diversity, identifying differentially abundant species/genes, gene cataloging, and functional analyses. We also summarized the algorithms and methods used to define metagenomic species and co-abundance gene groups to expand our understanding of unclassified and poorly understood gut microbes that are undocumented in the current genome databases. Additionally, we examined the methods used to identify metagenomic biomarkers based on the gut microbiome, which might help to expand the knowledge and approaches for disease detection and monitoring.
In a complex environment with hybrid terrain, different regions may have different terrain. Path planning for robots in such an environment is an open NP-complete problem that lacks effective methods. The paper develops a novel global path planning method based on common sense and evolution knowledge by adopting a dual evolution structure in culture algorithms. Common sense describes terrain information and the feasibility of the environment, and is used to evaluate and select paths. Evolution knowledge describes the angle relationship between the path and the obstacles, or the common segments of paths, and is used to judge and repair infeasible individuals. Taking two types of environments with different obstacles and terrain as examples, simulation results indicate that the algorithm can effectively solve the path planning problem in complex environments and decrease the computational complexity of judging and repairing infeasible individuals. It can also improve convergence speed and computational stability.
Pallen, Mark J
Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! © 2016 The Author. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.
Kim, Dong Hoon; Song, Jun Yeob; Lee, Jong Hyun; Cha, Suk Keun
In the near future, the foreseen improvement in machine tools will be in the form of a knowledge evolution-based intelligent device. The goal of this study is to develop intelligent machine tools having knowledge-evolution capability in a Machine-to-Machine (M2M) wired and wireless environment. The knowledge evolution-based intelligent machine tools are expected to be capable of gathering knowledge autonomously, producing knowledge, understanding knowledge, applying reasoning to knowledge, making new decisions, dialoguing with other machines, etc. The concept of the knowledge-evolution intelligent machine originated from the process of machine control operation by the sense, dialogue and decision of a human expert. The structure of knowledge evolution in M2M and the scheme for a dialogue agent among agent-based modules such as a sensory agent, a dialogue agent and an expert system (decision support agent) are presented in this paper, and work-offset compensation from thermal change and recommendation of cutting conditions are performed online for knowledge-evolution verification.
Lawlor, Brendan; Walsh, Paul
There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.
Antonsson, Ann-Beth; Hasle, Peter
towards knowledge-based behaviour. Additionally the time required increases when moving from skill- to knowledge-based behaviour. On the other hand, skill-based behaviour lacks the ability to solve problems and adapt to new situations. In the working environment risk assessment as well as the development… for risk assessment or risk management. However, there is a lot of criticism towards this kind of good practice, ranging from that it can easily be used for behavioural control to the problem with odd working environments and the need for tailoring the solutions to each workplace. The pros and cons…
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
Cohen, K Bretonnel; Hunter, Lawrence E
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
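Of the two basic approaches the abstract names, a rule-based (knowledge-based) component can be as simple as a hand-written pattern. The sketch below is a hypothetical gene-mention matcher, not a system from the literature; the abstract's point about ambiguity applies here, since such patterns also match non-gene strings, which is why practical systems combine rules with statistical models.

```ruby
# A toy rule for gene-symbol-like mentions: an uppercase letter,
# some uppercase letters or digits, ending in a digit (e.g. "TP53").
# This pattern is illustrative only and will both miss real genes
# and match non-genes -- ambiguity is intrinsic to the task.
GENE_PATTERN = /\b[A-Z][A-Z0-9]{1,5}\d\b/

def find_gene_mentions(text)
  text.scan(GENE_PATTERN)
end
```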
Ruiz, F.; Gonzalez, J.; Delgado, J.L.
Technology, the social nature of learning and the generational learning style are shaping new models of training that are changing the roles of the instructors, the channels of communication and the proper learning content of the knowledge to be transferred. New training methodologies are being used in primary and secondary education, and “Vintage” classroom learning does not meet the educational requirements of these methodologies; therefore, it is necessary to incorporate them in the Knowledge Management processes used in the nuclear industry. This paper describes a practical approach to an enriched learning environment with the purpose of creating and transferring nuclear knowledge.
Kamali, Amir Hossein; Giannoulatou, Eleni; Chen, Tsong Yueh; Charleston, Michael A; McEwan, Alistair L; Ho, Joshua W K
Bioinformatics is the application of computational, mathematical and statistical techniques to solve problems in biology and medicine. Bioinformatics programs developed for computational simulation and large-scale data analysis are widely used in almost all areas of biophysics. The appropriate choice of algorithms and correct implementation of these algorithms are critical for obtaining reliable computational results. Nonetheless, it is often very difficult to systematically test these programs as it is often hard to verify the correctness of the output, and to effectively generate failure-revealing test cases. Software testing is an important process of verification and validation of scientific software, but very few studies have directly dealt with the issues of bioinformatics software testing. In this work, we review important concepts and state-of-the-art methods in the field of software testing. We also discuss recent reports on adapting and implementing software testing methodologies in the bioinformatics field, with specific examples drawn from systems biology and genomic medicine.
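One concrete technique for the problem the abstract raises, namely that correct outputs are often hard to verify, is a metamorphic test, which checks a relation between outputs rather than any single output. The sketch below is a toy illustration and not drawn from the review itself; the reverse-complement function stands in for a real bioinformatics program.

```ruby
# Toy program under test: DNA reverse-complement.
def revcomp(seq)
  seq.reverse.tr('ACGTacgt', 'TGCAtgca')
end

# Metamorphic relation: applying reverse-complement twice must return
# the original sequence. We can check this even for inputs whose
# correct single output we could not verify by hand.
input = 'ATGCGTTA'
raise 'metamorphic relation violated' unless revcomp(revcomp(input)) == input
```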
Bruhn, Russel Elton; Burton, Philip John
Data interchange between bioinformatics databases will, in the future, most likely take place using extensible markup language (XML). The document structure will be described by an XML Schema rather than a document type definition (DTD). To ensure flexibility, the XML Schema must incorporate aspects of Object-Oriented Modeling. This impinges on the choice of the data model, which, in turn, is based on the organization of bioinformatics data by biologists. Thus, there is a need for the general bioinformatics community to be aware of the design issues relating to XML Schema. This paper, which is aimed at a general bioinformatics audience, uses examples to describe the differences between a DTD and an XML Schema and indicates how Unified Modeling Language diagrams may be used to incorporate Object-Oriented Modeling in the design of schema.
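One concrete difference between the two formalisms is that a DTD has its own non-XML syntax, whereas an XML Schema is itself an XML document and can therefore be processed with ordinary XML tools. A minimal sketch, in which the `<gene>` record type is hypothetical rather than taken from the paper:

```ruby
require 'rexml/document'

# A DTD uses its own declaration syntax (not XML):
dtd = '<!ELEMENT gene (name, organism)>'

# The equivalent XML Schema is itself well-formed XML:
schema = <<~XSD
  <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="gene">
      <xs:complexType>
        <xs:sequence>
          <xs:element name="name" type="xs:string"/>
          <xs:element name="organism" type="xs:string"/>
        </xs:sequence>
      </xs:complexType>
    </xs:element>
  </xs:schema>
XSD

# Because the schema is XML, it can be queried like any other document:
doc = REXML::Document.new(schema)
declared = REXML::XPath
             .match(doc, '//xs:element', 'xs' => 'http://www.w3.org/2001/XMLSchema')
             .map { |e| e.attributes['name'] }
```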
Murphy, Glen; Salomone, Sonia
While highly cohesive groups are potentially advantageous, they are also often correlated with the emergence of knowledge and information silos based around those same functional or occupational clusters. Consequently, an essential challenge for engineering organisations wishing to overcome informational silos is to implement mechanisms that facilitate, encourage and sustain interactions between otherwise disconnected groups. This paper acts as a primer for those seeking to gain an understanding of the design, functionality and utility of a suite of software tools generically termed social media technologies in the context of optimising the management of tacit engineering knowledge. Underpinned by knowledge management theory and using detailed case examples, this paper explores how social media technologies achieve such goals, allowing for the transfer of knowledge by tapping into the tacit and explicit knowledge of disparate groups in complex engineering environments.
de Jong, Anne; van Heel, Auke J.; Kuipers, Oscar P.
Bioinformatic tools can greatly improve the efficiency of bacteriocin screening efforts by limiting the number of strains to be screened. Different classes of bacteriocins can be detected in genomes by looking at different features. Finding small bacteriocins can be especially challenging due to low homology and because small open reading frames (ORFs) are often omitted from annotations. In this chapter, several bioinformatic tools/strategies to identify bacteriocins in genomes are discussed.
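To illustrate why small ORFs are easy to miss, the sketch below scans a single strand, frame by frame, for short ATG-to-stop spans. It is a toy, not one of the tools discussed in the chapter, and the length cutoff is an arbitrary assumption.

```ruby
STOPS = %w[TAA TAG TGA]

# Collect ATG...stop spans of at most max_codons codons on one strand.
# Annotation pipelines often discard such short spans, which is how
# small bacteriocin genes get lost.
def small_orfs(seq, max_codons: 60)
  orfs = []
  (0..seq.length - 3).each do |i|
    next unless seq[i, 3] == 'ATG'
    (i + 3).step(seq.length - 3, 3) do |j|
      if STOPS.include?(seq[j, 3])
        codons = (j - i) / 3 + 1
        orfs << seq[i..j + 2] if codons <= max_codons
        break
      end
    end
  end
  orfs
end
```

A real screen would also scan the reverse strand and weigh context features, but even this crude scan surfaces candidates a length-filtered annotation would drop.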
Burte, Heather; Montello, Daniel R
People's impression of their own "sense-of-direction" (SOD) is related to their ability to effectively find their way through environments, such as neighborhoods and cities, but is also related to the speed and accuracy with which they learn new environments. In the current literature, it is unclear whether the cognitive skills underlying SOD require intentional cognitive effort to produce accurate knowledge of a new environment. The cognitive skills underlying SOD could exert their influence automatically-without conscious intention-or they might need to be intentionally and effortfully applied. Determining the intentionality of acquiring environmental spatial knowledge would shed light on whether individuals with a poor SOD can be trained to use the skill set of an individual with good SOD, thereby improving their wayfinding and spatial learning. Therefore, this research investigates the accuracy of spatial knowledge acquisition during a walk through a previously unfamiliar neighborhood by individuals with differing levels of self-assessed SOD, as a function of whether their spatial learning was intentional or incidental. After walking a route through the neighborhood, participants completed landmark, route, and survey knowledge tasks. SOD was related to the accuracy of acquired spatial knowledge, as has been found previously. However, learning intentionality did not affect spatial knowledge acquisition, neither as a main effect nor in interaction with SOD. This research reveals that while the accuracy of spatial knowledge acquired via direct travel through an environment is validly measured by self-reported SOD, the spatial skills behind a good SOD appear to operate with or without intentional application.
Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P
We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk through to demonstrate explicitly how Azure can be used to perform these analyses in Appendix S1 and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers who will usually have a Linux and scripting language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development.
Phillips, Beth M.; Morse, Erika E.
This paper presents findings from a stratified-random survey of family child care providers' backgrounds, caregiving environments, practices, attitudes, and knowledge related to language, literacy, and mathematics development for preschool children. Descriptive results are consistent with prior studies suggesting that home-based providers are…
Moller, Leslie; Prestera, Gustavo E.; Harvey, Douglas; Downs-Keller, Margaret; McCausland, Jo-Ann
Discusses organic architecture and suggests that learning environments should be designed and constructed using an organic approach, so that learning is not viewed as a distinct human activity but incorporated into everyday performance. Highlights include an organic knowledge-building model; information objects; scaffolding; discourse action…
Akman, Ozkan; Alagoz, Bulent
The purpose of education should be to raise people who are researchers and developers, who investigate what they find, use their knowledge in their behavior, and who can interpret it and build new things upon it. When children are being educated, the experience should come before the story. First, good and bad environments should be shown,…
Anjewierden, Anjo Allert; Shostak, I.; Tsjernikova, Irina; de Hoog, Robert; Gómez Perez, A.; Benjamins, V.R.
This paper presents a new approach to modelling process-oriented knowledge management (KM) and describes a simulation environment (called KMSIM) that embodies the approach. Since the beginning of modelling, researchers have been looking for better and novel ways to model systems and to use
Sawyer, Brook E.; Justice, Laura M.; Guo, Ying; Logan, Jessica A. R.; Petrill, Stephen A.; Glenn-Applegate, Katherine; Kaderavek, Joan N.; Pentimonti, Jill M.
To contribute to the modest body of work examining the home literacy environment (HLE) and emergent literacy outcomes for children with disabilities, this study addressed two aims: (a) to determine the unique contributions of the HLE on print knowledge of preschool children with language impairment and (b) to identify whether specific child…
de Goede, Maartje; Postma, Albert
Males tend to outperform females in their knowledge of relative and absolute distances in spatial layouts and environments. It is as yet unclear to what extent these differences are innate or develop through life. The aim of the present study was to investigate whether gender differences in configurational
Ali, Taqdir; Hussain, Maqbool; Ali Khan, Wajahat; Afzal, Muhammad; Hussain, Jamil; Ali, Rahman; Hassan, Waseem; Jamshed, Arif; Kang, Byeong Ho; Lee, Sungyoung
Technologically integrated healthcare environments can be realized if physicians are encouraged to use smart systems for the creation and sharing of knowledge used in clinical decision support systems (CDSS). While CDSSs are heading toward smart environments, they lack support for abstracting technology-oriented knowledge away from physicians. Therefore, abstraction in the form of a user-friendly and flexible authoring environment is required in order for physicians to create shareable and interoperable knowledge for CDSS workflows. Our proposed system provides a user-friendly authoring environment to create Arden Syntax MLMs (Medical Logic Modules) as shareable knowledge rules for intelligent decision-making by a CDSS. Existing systems are not physician-friendly and lack interoperability and shareability of knowledge. In this paper, we propose the Intelligent-Knowledge Authoring Tool (I-KAT), a knowledge authoring environment that overcomes the above-mentioned limitations. Shareability is achieved by creating a knowledge base from MLMs using Arden Syntax. Interoperability is enhanced using standard data models and terminologies. However, the creation of shareable and interoperable knowledge using Arden Syntax without abstraction increases complexity, which ultimately makes it difficult for physicians to use the authoring environment. Therefore, physician-friendliness is provided by abstraction at the application layer to reduce complexity. This abstraction is regulated by mappings created between legacy system concepts, which are modeled as a domain clinical model (DCM), and decision support standards such as the virtual medical record (vMR) and the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We represent these mappings with a semantic reconciliation model (SRM). The objective of the study is the creation of shareable and interoperable knowledge using a user-friendly and flexible I-KAT. We therefore evaluated our system using completeness and user satisfaction…
As a focal point of biotechnology, bioinformatics integrates knowledge from biology, mathematics, physics, chemistry, computer science and information science. It generally deals with genome informatics, protein structure and drug design. However, the data or information thus acquired from the main areas of bioinformatics may not be effective. Some researchers have combined bioinformatics with wireless sensor networks (WSN) into biosensors and other tools, and applied them to areas such as fermentation, environmental monitoring, food engineering, clinical medicine and the military. In this combination, the WSN is used to collect data and information. The reliability of the WSN in bioinformatics is a prerequisite for the effective utilization of information. It is greatly influenced by factors such as quality, benefits, service, timeliness and stability, some of which are qualitative and some quantitative. Hence, it is necessary to develop a method that can handle both qualitative and quantitative assessment of information. A viable option is the fuzzy linguistic method, especially the 2-tuple linguistic model, which has been extensively used to cope with such issues. As a result, this paper introduces the 2-tuple linguistic representation to assist experts in giving their opinions on different WSNs in bioinformatics involving multiple factors. Moreover, the author proposes a novel way to determine attribute weights and uses the method to weigh the relative importance of the different influencing factors, which can be considered as attributes in the assessment of a WSN in bioinformatics. Finally, an illustrative example is given to provide a reasonable solution for the assessment.
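The 2-tuple linguistic model mentioned in this record has a simple computational core: the Herrera-Martinez translation between a crisp aggregate score and a (label, offset) pair. The label set and expert ratings below are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the 2-tuple linguistic representation: a crisp aggregate
# beta in [0, g] (for g+1 labels s_0..s_g) maps to (s_i, alpha) with
# i = round(beta) and symbolic offset alpha = beta - i.
LABELS = ["none", "very low", "low", "medium", "high", "very high", "perfect"]  # g = 6

def to_two_tuple(beta):
    g = len(LABELS) - 1
    assert 0 <= beta <= g
    i = int(round(beta))
    return LABELS[i], beta - i

def from_two_tuple(label, alpha):
    # Inverse translation back to a crisp value.
    return LABELS.index(label) + alpha

# Illustrative: aggregate four experts' label indices for a WSN factor
# (e.g. timeliness) by arithmetic mean, then express the mean symbolically.
ratings = [4, 5, 3, 4]
beta = sum(ratings) / len(ratings)
label, alpha = to_two_tuple(beta)  # → ("high", 0.0)
```

The offset alpha keeps the aggregation lossless, which is the model's advantage over rounding straight to the nearest label.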
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology,...
Rajkumar, E; Julious, S; Salome, A; Jennifer, G; John, A S; Kannan, L; Richard, J
The objective of this cross-sectional comparative study was to find the effects of environment and education on the knowledge and attitude of nursing students towards leprosy. Data were collected, using a pretested questionnaire, from the first-year and third-year students of a School of Nursing attached to a leprosy specialty hospital and also from a comparable School of Nursing attached to a general hospital. The results showed that trainees acquired more knowledge on leprosy during training in both schools of nursing. However, those trained in the leprosy hospital environment had higher knowledge and attitude scores than those trained in the general hospital environment. The attitude of the trainees attached to the leprosy hospital was favourable even before they had formal training in leprosy. Those trained in the general hospital showed a more favourable attitude after training compared to before training. The School of Nursing attached to the leprosy hospital provided an atmosphere conducive to learning and understanding more about leprosy. The trainees retained what was learnt because of regular association with patients affected by leprosy. For employment in hospital- or community-based services or research related to leprosy, nurses trained in a leprosy hospital would bring the added value of their knowledge and attitude.
van Kampen, Antoine H C; Moerland, Perry D
Systems medicine promotes a range of approaches and strategies to study human health and disease at a systems level with the aim of improving the overall well-being of (healthy) individuals, and preventing, diagnosing, or curing disease. In this chapter we discuss how bioinformatics critically contributes to systems medicine. First, we explain the role of bioinformatics in the management and analysis of data. In particular we show the importance of publicly available biological and clinical repositories to support systems medicine studies. Second, we discuss how the integration and analysis of multiple types of omics data through integrative bioinformatics may facilitate the determination of more predictive and robust disease signatures, lead to a better understanding of (patho)physiological molecular mechanisms, and facilitate personalized medicine. Third, we focus on network analysis and discuss how gene networks can be constructed from omics data and how these networks can be decomposed into smaller modules. We discuss how the resulting modules can be used to generate experimentally testable hypotheses, provide insight into disease mechanisms, and lead to predictive models. Throughout, we provide several examples demonstrating how bioinformatics contributes to systems medicine and discuss future challenges in bioinformatics that need to be addressed to enable the advancement of systems medicine.
Hamada, Michiaki; Kiryu, Hisanori; Iwasaki, Wataru; Asai, Kiyoshi
In a number of estimation problems in bioinformatics, accuracy measures of the target problem are usually given, and it is important to design estimators that are suitable to those accuracy measures. However, there is often a discrepancy between an employed estimator and a given accuracy measure of the problem. In this study, we introduce a general class of efficient estimators for estimation problems on high-dimensional binary spaces, which represent many fundamental problems in bioinformatics. Theoretical analysis reveals that the proposed estimators generally fit commonly used accuracy measures (e.g. sensitivity, PPV, MCC and F-score), can be computed efficiently in many cases, and cover a wide range of problems in bioinformatics from the viewpoint of the principle of maximum expected accuracy (MEA). It is also shown that some important algorithms in bioinformatics can be interpreted in a unified manner. The concept presented in this paper not only gives a useful framework to design MEA-based estimators but is also highly extendable and sheds new light on many problems in bioinformatics. PMID:21365017
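A minimal sketch of the MEA flavour of estimator this record discusses, in the gamma-centroid style: assuming the posterior factorizes into per-variable marginals p_i = P(y_i = 1), the estimator maximizing the expected gain gamma * TP + TN reduces to thresholding each marginal at 1/(gamma + 1). The function name, gain choice, and marginals are illustrative assumptions, not the authors' exact formulation.

```python
# MEA-style estimator on a binary space (gamma-centroid idea, illustrative):
# predict y_i = 1 whenever the posterior marginal exceeds 1 / (gamma + 1).
def gamma_centroid(marginals, gamma=1.0):
    threshold = 1.0 / (gamma + 1.0)
    return [1 if p > threshold else 0 for p in marginals]

# gamma = 1 recovers the usual 0.5 threshold; larger gamma lowers the
# threshold, trading precision for sensitivity.
prediction = gamma_centroid([0.9, 0.4, 0.2, 0.6], gamma=1.0)  # → [1, 0, 0, 1]
```

Sweeping gamma and scoring each prediction against a chosen accuracy measure is how such estimators are matched to measures like MCC or F-score.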
Brooksbank, Cath; Morgan, Sarah L.; Rosenwald, Anne; Warnow, Tandy; Welch, Lonnie
Bioinformatics is recognized as part of the essential knowledge base of numerous career paths in biomedical research and healthcare. However, there is little agreement in the field over what that knowledge entails or how best to provide it. These disagreements are compounded by the wide range of populations in need of bioinformatics training, with divergent prior backgrounds and intended application areas. The Curriculum Task Force of the International Society of Computational Biology (ISCB) Education Committee has sought to provide a framework for training needs and curricula in terms of a set of bioinformatics core competencies that cut across many user personas and training programs. The initial competencies developed based on surveys of employers and training programs have since been refined through a multiyear process of community engagement. This report describes the current status of the competencies and presents a series of use cases illustrating how they are being applied in diverse training contexts. These use cases are intended to demonstrate how others can make use of the competencies and engage in the process of their continuing refinement and application. The report concludes with a consideration of remaining challenges and future plans. PMID:29390004
Jagul Huma Lashari
It is well understood that the creation and application of new knowledge is the primary factor driving economic growth. The aim of this research is to examine the KTCs (Knowledge Transfer Channels) in the universities of Sindh, Pakistan, leading towards the scientific and technological development of the environment sector. This research identified 29 KTCs from the literature; these were examined through exploratory interviews with PhD faculty members of universities offering degrees in the field of environment. The 29 KTCs are grouped into 7 groups based on their characteristics: KTC-1: Publications (2 variables); KTC-2: Networking (4 variables); KTC-3: Mobility of Researchers (6 variables); KTC-4: Joint Research (5 variables); KTC-5: Intellectual Property (2 variables); KTC-6: Co-operations (6 variables); KTC-7: Institutional Infrastructure (3 variables). Findings show the relevance of KTCs in terms of their role in utilizing knowledge capital for development: professional publications from KTC-1; participation of industry staff in conferences and workshops from KTC-2; students working as trainees in industry and the outflow of graduates at M.Phil. level from KTC-3; consultancy of university staff members in industry from KTC-4; research work in co-operation with research institutes and with consultants from KTC-6; and sharing of physical infrastructure from KTC-7. None of the variables from KTC-5, related to intellectual property rights, showed an impact on the utilization of knowledge capital. This research contributes empirical results on KTCs in universities, with policy implications for future knowledge transfer that can contribute to the development of society.
Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G
Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).
Likic, Vladimir A.
This article describes the experience of teaching structural bioinformatics to third year undergraduate students in a subject titled "Biomolecular Structure and Bioinformatics." Students were introduced to computer programming and used this knowledge in a practical application as an alternative to the well established Internet bioinformatics…
Although the era of big data has produced many bioinformatics tools and databases, using them effectively often requires specialized knowledge. Many groups lack bioinformatics expertise, and frequently find that software documentation is inadequate and local colleagues may be overburdened or unfamil...
Weber, Tilmann; Kim, Hyun Uk
… In this context, this review gives a summary of tools and databases that are currently available to mine, identify and characterize natural product biosynthesis pathways and their producers based on 'omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP, at http://www.secondarymetabolites.org) is introduced to provide a one-stop catalog and links to these bioinformatics resources. In addition, an outlook is presented on how the existing tools and those to be developed will influence synthetic biology approaches in the natural products field.
Florin Gheorghe FILIP
Health care practitioners continually confront a wide range of challenges, seeking to make difficult diagnoses, avoid errors, ensure the highest quality, maximize efficacy and reduce costs. Information technology has the potential to reduce clinical errors and to improve decision making in the clinical milieu. This paper presents a pilot development of a clinical decision support system (CDSS) entitled MEDIS that was designed to incorporate knowledge from heterogeneous environments with the purpose of increasing the efficiency and the quality of the decision-making process and reducing costs, based on advances in information technologies, especially under the impact of the transition towards the mobile space. The system aims to capture and reuse knowledge in order to provide real-time access to clinical knowledge for a variety of users, including medical personnel, patients, teachers and students.
Gillette, Brandon A.
During the last several decades, the nature of childhood has changed. There is not much nature in it anymore. Numerous studies in environmental education, environmental psychology, and conservation psychology show that the time children spend outdoors encourages healthy physical development, enriches creativity and imagination, and enhances classroom performance. Additional research shows that people's outdoor experiences as children and adults can lead to more positive attitudes and behavior towards the environment, along with more environmental knowledge with which to guide public policy decisions. The overall purpose of this study was to examine the effect of middle childhood (age 6-11) outdoor experiences on an individual's current knowledge of the environment. This correlational study evaluated the following potential relationships: 1) The effect of "outdoorsiness" (defined as a fondness or enjoyment of the outdoors and related activities) on an individual's environmental knowledge; 2) The effect of gender on an individual's level of outdoorsiness; 3) The effect of setting (urban, suburban, rural, farm) on an individual's level of outdoorsiness and environmental knowledge; 4) The effect of formal [science] education on an individual's level of outdoorsiness and environmental knowledge; and 5) The effect of informal, free-choice learning on an individual's level of outdoorsiness and environmental knowledge. Outdoorsiness was measured using the Natural Experience Scale (NES), which was developed through a series of pilot surveys and field-tested in this research study. Participants included 382 undergraduate students at the University of Kansas with no preference or bias given to declared or undeclared majors. The information from this survey was used to analyze the question of whether outdoor experiences as children are related in some way to an adult's environmental knowledge after accounting for other factors of knowledge acquisition such as formal education
Pettey, Gary R.
Examines the interplay between an individual's social environment and the individual's own motivations for political knowledge, such as political interest and attention to public affairs media. Finds that the perception of one's social environment made a significant contribution to the respondent's level of political knowledge. (MS)
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...
Vaez Barzani, Ahmad
In this thesis we present an overview of bioinformatics-based approaches for genomic association mapping, with emphasis on human quantitative traits and their contribution to complex diseases. We aim to provide a comprehensive walk-through of the classic steps of genomic association mapping
Xu, Tao; Chen, Qijun
Walking is a basic skill of a legged robot, and one promising way to improve walking performance and its adaptation to environment changes is to let the robot learn its walking by itself. Currently, most walking-learning methods are based on a robot vision system or on external sensing equipment to estimate the walking performance of certain walking parameters, and are therefore usually only applicable under laboratory conditions, where the environment can be pre-defined. Inspired by the rhythmic swing movement during walking of legged animals and their behavior of adjusting their gait on different walking surfaces, a concept of walking rhythmic pattern (WRP) is proposed to evaluate the walking characteristics of a legged robot, based solely on the robot's walking dynamics. Based on onboard acceleration sensor data, a method to calculate the WRP using the power spectrum in the frequency domain and diverse smoothing filters is also presented. Since the evaluation of the WRP is based only on the walking-dynamics data of the robot's body, the proposed method does not require prior knowledge of the environment and can thus be applied in unknown environments. A gait-learning approach for legged robots based on the WRP and an evolutionary algorithm (EA) is introduced. Using the proposed approach, a quadruped robot can learn its locomotion through its onboard sensing in an unknown environment, about which the robot has no prior knowledge. The experimental results show that a proportional relationship exists between the WRP match score and the walking performance of the legged robot, which can be used to evaluate walking performance during walking optimization in an unknown environment.
Khan, Abdul Azeez; Khader, Sheik Abdul
E-learning or electronic learning platforms facilitate delivery of the knowledge spectrum to the learning community through information and communication technologies. The transfer of knowledge takes place from experts to learners, and externalization of the knowledge transfer is significant. In the e-learning environment, the learners seek…
"The overall aim of "EURASIP Journal on Bioinformatics and Systems Biology" is to publish research results related to signal processing and bioinformatics theories and techniques relevant to a wide...
Brown, James A. L.
A pedagogic intervention, in the form of an inquiry-based peer-assisted learning project (as a practical student-led bioinformatics module), was assessed for its ability to increase students' engagement, practical bioinformatic skills and process-specific knowledge. Elements assessed were process-specific knowledge following module completion,…
Feenstra, K. Anton; Abeln, Sanne
While many good textbooks are available on Protein Structure, Molecular Simulations, Thermodynamics and Bioinformatics methods in general, there is no good introductory level book for the field of Structural Bioinformatics. This book aims to give an introduction into Structural Bioinformatics, which
Hong, Huang-Yao; Chiu, Chieh-Hsin
This study explored how students viewed the role of ideas for knowledge work and how such a view was related to their inquiry activities. Data mainly came from students' online interaction logs, group discussion and inquiry, and a survey concerning the role of ideas for knowledge work. The findings suggest that knowledge building was conducive to…
Spengler, Sylvia J.
There is a well-known story about the blind man examining the elephant: the part of the elephant examined determines his perception of the whole beast. Perhaps bioinformatics--the shotgun marriage between biology and mathematics, computer science, and engineering--is like an elephant that occupies a large chair in the scientific living room. Given the demand for and shortage of researchers with the computer skills to handle large volumes of biological data, where exactly does the bioinformatics elephant sit? There are probably many biologists who feel that a major product of this bioinformatics elephant is large piles of waste material. If you have tried to plow through Web sites and software packages in search of a specific tool for analyzing and collating large amounts of research data, you may well feel the same way. But there has been progress with major initiatives to develop more computing power, educate biologists about computers, increase funding, and set standards. For our purposes, bioinformatics is not simply a biologically inclined rehash of information theory (1) nor is it a hodgepodge of computer science techniques for building, updating, and accessing biological data. Rather bioinformatics incorporates both of these capabilities into a broad interdisciplinary science that involves both conceptual and practical tools for the understanding, generation, processing, and propagation of biological information. As such, bioinformatics is the sine qua non of 21st-century biology. Analyzing gene expression using cDNA microarrays immobilized on slides or other solid supports (gene chips) is set to revolutionize biology and medicine and, in so doing, generate vast quantities of data that have to be accurately interpreted (Fig. 1). As discussed at a meeting a few months ago (Microarray Algorithms and Statistical Analysis: Methods and Standards; Tahoe City, California; 9-12 November 1999), experiments with cDNA arrays must be subjected to quality control
Gront, Dominik; Kolinski, Andrzej
In this Note we present a new software library for structural bioinformatics. The library contains programs computing sequence- and profile-based alignments and a variety of structural calculations, with user-friendly handling of various data formats. The software organization is very flexible. Algorithms are written in the Java language and may be used by Java programs. Moreover, the modules can be accessed from Jython (the Python scripting language implemented in Java) scripts. Finally, the new version of BioShell delivers several utility programs that can do typical bioinformatics tasks from the command line. Availability: The software is available for download free of charge from its website: http://bioshell.chem.uw.edu.pl. This website also provides numerous examples, code snippets and API documentation.
Han, Chia Yung; Wan, Liqun; Wee, William G.
A knowledge-based interactive problem-solving environment called KIPSE1 is presented. KIPSE1 is built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided to the user at each node, so that the user can interactively select methods and data sets to test and subsequently examine the results. In addition, users are allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution found.
Riva, A; Bellazzi, R; Lanzola, G; Stefanelli, M
The World-Wide Web (WWW) is increasingly being used as a platform to develop distributed applications, particularly in contexts, such as medical ones, where high usability and availability are required. In this paper we propose a methodology for the development of knowledge-based medical applications on the web, based on the use of an explicit domain ontology to automatically generate parts of the system. We describe a development environment, centred on the LISPWEB Common Lisp HTTP server, that supports this methodology, and we show how it facilitates the creation of complex web-based applications, by overcoming the limitations that normally affect the adequacy of the web for this purpose. Finally, we present an outline of a system for the management of diabetic patients built using the LISPWEB environment.
Baker, Lisa M.
bias in earlier studies using science-like tasks, in which characteristics of the alternate hypothesis space may have made it unfeasible for participants to generate and test alternate hypotheses. In general, scientists and science undergraduates were found to engage in a systematic experimental design process that responded to salient features of the problem environment, including the constant potential for experimental error, availability of alternate hypotheses, and access to both theoretical knowledge and knowledge of experimental techniques.
The changing priorities of production factors increasingly affect the evolution of the global economy, requiring a reorientation of development policies, both at company level and at the level of national economies, to adapt to the phenomenon called "the new economy" or the "knowledge economy". Whereas in the classical economy the ability to compete (competitiveness) depended largely on the quantity of production factors, at present the efficiency of their use has gained importance. The idea for this article arose both from the need for systematic research into the Romanian competitive environment and from the desire to engage with this important conference theme. We can certainly say that the current business environment is characterized by particular dynamism due to the changes occurring within it, especially under the impact of the technical and scientific revolution, which has brought knowledge to the fore as an essential element in achieving high competitiveness. The theme proposed in this article is intended to have economic relevance for understanding the current economy in Romania.
Moskaliuk, Johannes; Bertram, Johanna; Cress, Ulrike
Virtual training environments are appropriate to train complex tasks that require collaboration and interaction among the members of a team, especially if training in reality is not possible, too expensive or too dangerous. The field study reported in this paper compared three training conditions (virtual condition, standard condition, and control condition). The participants were police officers who were being trained in the communication between ground forces and a helicopter crew during an operation. This task (like many other tasks of the police, fire brigade and emergency services) is of high complexity and has no single "correct" solution, is based on specialization of tasks within a team, requires intensive communication among team members, and consists of situations in which human beings are in danger. Learning outcomes and knowledge transfer were measured as dependent variables. The results validate that virtual training was as efficient as standard training with regard to knowledge acquisition, and it was even more efficient with regard to knowledge transfer. With regard to the perceived value of the training, the participants judged standard training to be better than virtual training (except for training satisfaction, where no difference was found between standard and virtual training). These results indicate that virtual training is an effective tool for training in complex tasks that require collaboration and cannot fully be trained for in reality.
Precision medicine (PM) requires the delivery of individually adapted medical care based on the genetic characteristics of each patient and his/her tumor. The last decade witnessed the development of high-throughput technologies such as microarrays and next-generation sequencing, which paved the way to PM in the field of oncology. While the cost of these technologies decreases, we are facing an exponential increase in the amount of data produced. Our ability to use this information in daily practice relies strongly on the availability of an efficient bioinformatics system that assists in the translation of knowledge from the bench towards molecular targeting and diagnosis. Clinical trials and routine diagnoses constitute different approaches, both requiring a strong bioinformatics environment capable of (i) warranting the integration and the traceability of data, (ii) ensuring the correct processing and analyses of genomic data and (iii) applying well-defined and reproducible procedures for workflow management and decision-making. To address these issues, a seamless information system was developed at Institut Curie which facilitates the data integration and tracks in real time the processing of individual samples. Moreover, computational pipelines were developed to identify reliably genomic alterations and mutations from the molecular profiles of each patient. After a rigorous quality control, a meaningful report is delivered to the clinicians and biologists for the therapeutic decision. The complete bioinformatics environment and the key points of its implementation are presented in the context of the SHIVA clinical trial, a multicentric randomized phase II trial comparing targeted therapy based on tumor molecular profiling versus conventional therapy in patients with refractory cancer. The numerous challenges faced in practice during the setting up and the conduct of this trial are discussed as an illustration of PM application.
This theme issue on knowledge includes annotated listings of Web sites, CD-ROMs and computer software, videos, books, and additional resources that deal with knowledge and differences between how animals and humans learn. Sidebars discuss animal intelligence, learning proper behavior, and getting news from the Internet. (LRW)
Zrinka Ristić Dedić
The study examines two components of metacognitive knowledge in the context of inquiry learning: metatask and metastrategic. Existing work on the topic has shown that adolescents often lacked metacognitive understanding necessary for optimal inquiry learning (Keselman & Kuhn, 2002; Kuhn, 2002a; Kuhn, Black, Keselman, & Kaplan, 2000), but demonstrated that engagement with inquiry tasks may improve it (Keselman, 2003; Kuhn & Pearsall, 1998). The aim of the study is to investigate the gains in metacognitive knowledge that occur as a result of repeated engagement with an inquiry learning task, and to examine the relationship between metacognitive knowledge and performance on the task. The participants were 34 eighth grade pupils, who participated in a self-directed experimentation task using the FILE programme (Hulshof, Wilhelm, Beishuizen, & van Rijn, 2005). The task required pupils to design and conduct experiments and to make inferences regarding the causal structure of a multivariable system. Pupils participated in four learning sessions over the course of one month. Metacognitive knowledge was assessed by the questionnaire before and after working in FILE. The results indicate that pupils improved in metacognitive knowledge following engagement with the task. However, many pupils showed insufficient metacognitive knowledge in the post-test and failed to apply newly achieved knowledge to the transfer task. Pupils who attained a higher level of metacognitive knowledge were more successful on the task than pupils who did not improve on metacognitive knowledge. A particular level of metacognitive understanding is a necessary, but not sufficient condition for successful performance on the task.
Wood, Louisa; Gebhardt, Philipp
Since 2010, the European Molecular Biology Laboratory's (EMBL) Heidelberg laboratory and the European Bioinformatics Institute (EMBL-EBI) have jointly run bioinformatics training courses developed specifically for secondary school science teachers within Europe and EMBL member states. These courses focus on introducing bioinformatics, databases, and data-intensive biology, allowing participants to explore resources and providing classroom-ready materials to support them in sharing this new knowledge with their students. In this article, we chart our progress made in creating and running three bioinformatics training courses, including how the course resources are received by participants and how these, and bioinformatics in general, are subsequently used in the classroom. We assess the strengths and challenges of our approach, and share what we have learned through our interactions with European science teachers.
Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan
These proceedings present recent practical applications of Computational Biology and Bioinformatics, collecting the papers of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research is increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and ever-evolving omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis o...
As library and information science (LIS) becomes an increasingly technology-driven profession, particularly in the academic library environment, questions arise as to the extent of information technology (IT) knowledge and skills that LIS professionals require. The purpose of this paper is to ascertain what IT knowledge and skills are needed by…
Songhao, He; Saito, Kenji; Maeda, Takashi; Kubo, Takara
For people living in a rapidly changing knowledge society, learning in the widest sense becomes indispensable in all phases of working, living and playing. The construction of an environment to meet the demands of people who need to acquire new knowledge and skills as the need arises, and to enlighten each other regularly, is…
Schneider, Maria V; Walter, Peter; Blatter, Marie-Claude; Watson, James; Brazas, Michelle D; Rother, Kristian; Budd, Aidan; Via, Allegra; van Gelder, Celia W G; Jacob, Joachim; Fernandes, Pedro; Nyrönen, Tommi H; De Las Rivas, Javier; Blicher, Thomas; Jimenez, Rafael C; Loveland, Jane; McDowall, Jennifer; Jones, Phil; Vaughan, Brendan W; Lopez, Rodrigo; Attwood, Teresa K; Brooksbank, Catherine
Funding bodies are increasingly recognizing the need to provide graduates and researchers with access to short intensive courses in a variety of disciplines, in order both to improve the general skills base and to provide solid foundations on which researchers may build their careers. In response to the development of 'high-throughput biology', the need for training in the field of bioinformatics, in particular, is seeing a resurgence: it has been defined as a key priority by many Institutions and research programmes and is now an important component of many grant proposals. Nevertheless, when it comes to planning and preparing to meet such training needs, tension arises between the reward structures that predominate in the scientific community which compel individuals to publish or perish, and the time that must be devoted to the design, delivery and maintenance of high-quality training materials. Conversely, there is much relevant teaching material and training expertise available worldwide that, were it properly organized, could be exploited by anyone who needs to provide training or needs to set up a new course. To do this, however, the materials would have to be centralized in a database and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs, and compare it with similar initiatives and collections.
Wightman, Bruce; Hark, Amy T
The development of fields such as bioinformatics and genomics has created new challenges and opportunities for undergraduate biology curricula. Students preparing for careers in science, technology, and medicine need more intensive study of bioinformatics and more sophisticated training in the mathematics on which this field is based. In this study, we deliberately integrated bioinformatics instruction at multiple course levels into an existing biology curriculum. Students in an introductory biology course, intermediate lab courses, and advanced project-oriented courses all participated in new course components designed to sequentially introduce bioinformatics skills and knowledge, as well as computational approaches that are common to many bioinformatics applications. In each course, bioinformatics learning was embedded in an existing disciplinary instructional sequence, as opposed to having a single course where all bioinformatics learning occurs. We designed direct and indirect assessment tools to follow student progress through the course sequence. Our data show significant gains in both student confidence and ability in bioinformatics during individual courses and as course level increases. Despite evidence of substantial student learning in both bioinformatics and mathematics, students were skeptical about the link between learning bioinformatics and learning mathematics. While our approach resulted in substantial learning gains, student "buy-in" and engagement might be better in longer project-based activities that demand application of skills to research problems. Nevertheless, in situations where a concentrated focus on project-oriented bioinformatics is not possible or desirable, our approach of integrating multiple smaller components into an existing curriculum provides an alternative. Copyright © 2012 Wiley Periodicals, Inc.
Pettenati, M C; Pettenati, Corrado
In this paper we highlight some important issues that will influence the redefinition of the roles and duties of libraries and librarians in a network-based educational environment. Although librarians will keep their traditional roles of faculty support, reference service and research assistance, we identify participation in the instructional design process, support in the evaluation, development and use of a proper authoring system, and the customization of information access as the domains in which libraries and librarians should mainly involve themselves in the near future, profiting from their expertise in information and knowledge organization in order to properly and effectively support their institutions in the use of Information Technology in education.
Ionela Corina CHIRILEASA (DEDIȚĂ)
In the context of the knowledge-based information society, universities are increasingly recognized as having a key role to play in the regional development process (Charles, 2006, p. 117, citing Goddard et al., 1994; Keane & Allison, 1999; Chatterton & Goddard, 2000) and have been considered active participants in the building of regional competitive advantage (Chatterton & Goddard, 2000, p. 479). Although most authors recognize that the primary missions of universities remain teaching and research, in recent years the emphasis has been on adapting these two roles to the particular needs of regions. The „third role” of universities is increasingly brought up: through it, higher education institutions contribute to the development of human capital and to regional innovation, put their imprint on the local community, and participate in regional leadership, contributing to the development of their insertion environment.
Thomas K Karikari
Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.
The use of X-rays in medical fields has increased significantly in recent years, since various therapeutic procedures can be performed without the need for surgery, which presents the greatest risk to the patient. An example of this increase is the practice of cardiac catheterization; in this procedure fluoroscopy is used for the placement of central venous catheters and temporary pacemakers, and long-term use increases the risk of X-ray exposure for the patient, the physician and his assistants. This has been observed with concern by many researchers, since many facilities did not meet radiation protection standards. This failure can lead to exposure of professionals, patients and caregivers. It is therefore of fundamental importance to use personal protective equipment, such as lead aprons and thyroid shields, to reduce the dose produced by primary and secondary radiation. This study evaluated the knowledge of radiology professionals in Goiânia on the use of the lead apron in shared environments and the use of shields on the radiation-sensitive parts of patients, through an information-gathering technique based on a questionnaire with closed questions focused on the professionals' knowledge. The results showed a serious deficiency regarding the protection of the most radiosensitive organs of patients when they are exposed to X-ray beams. (author)
ATM Emdadul Haque
Background: A clear majority of the teaching staff at UniKL-RCMP are expatriates with different cultural backgrounds, and the university is currently accepting international students with different cultural backgrounds in addition to the culturally diverse local students. Aims: The purpose was to determine the knowledge and awareness of the lecturers of the Faculty of Medicine regarding multiculturalism and its importance in the medical profession. Methods: This was a cross-sectional study. A questionnaire was developed based on the relevant demographic information and on knowledge and awareness of cultural issues, and its validity was discussed with a survey expert. Results: A total of 43 teachers took part in the survey. The respondents were mostly male and expatriate, and had little experience in teaching students of different cultural backgrounds. The most important factor affecting teachers' competence was their experience in teaching students of a different culture, and teachers with experience of teaching in a multicultural environment felt more competent than those without such experience. Gender and teaching experience did not have a significant impact on their feeling of competence. However, the teachers believed that training in a special education programme might have helped them more than their educational background to develop the cultural competence of students from different cultural backgrounds. Conclusion: This study showed that teachers need more training in, and experience of, multicultural education programmes to facilitate the development of cultural competence in culturally diverse students; this should be taken into consideration in faculty development activities.
Brannagan, Kim B; Dellinger, Amy; Thomas, Jan; Mitchell, Denise; Lewis-Trabeaux, Shirleen; Dupre, Susan
Peer teaching has been shown to enhance student learning and levels of self-efficacy. The purpose of the current study was to examine the impact of peer teaching-learning experiences on nursing students in the roles of tutee and tutor in a clinical lab environment. This study was conducted over a three-semester period at a South Central University that provides baccalaureate nursing education. Over three semesters, 179 first year nursing students and 51 third year nursing students participated in the study. This mixed methods study, through concurrent use of a quantitative intervention design and qualitative survey data, examined differences during three semesters in perceptions of a clinical lab experience, self-efficacy beliefs, and clinical knowledge for two groups: those who received peer teaching-learning in addition to faculty instruction (intervention group) and those who received faculty instruction only (control group). Additionally, peer teachers' perceptions of the peer teaching-learning experience were examined. Results indicated a positive response from the peer tutors, with no statistically significant differences in knowledge acquisition and self-efficacy beliefs between the tutee intervention and control groups. In contrast to previous research, students receiving peer tutoring in conjunction with faculty instruction were statistically more anxious about performing lab skills with their peer tutor than with their instructors. Additionally, some students found instructors' feedback moderately more helpful than their peers' and reported greater gains in knowledge and responsibility for preparation and practice with instructors than with peer tutors. The findings in this study differ from previous research in that the use of peer tutors did not decrease anxiety in first year students, and no differences were found between the intervention and control groups related to self-efficacy or cognitive improvement. These findings may indicate the need to better prepare peer
Suplatov, Dmitry; Voevodin, Vladimir; Švedas, Vytas
The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatic-driven strategies that are used to predict structural changes that can be applied to wild type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots - key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies and to link them to function, as well as to provide knowledge-based predictions for experimental evaluation. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Oakley, Mark T; Barthel, Daniel; Bykov, Yuri; Garibaldi, Jonathan M; Burke, Edmund K; Krasnogor, Natalio; Hirst, Jonathan D
Optimisation problems pervade structural bioinformatics. In this review, we describe recent work addressing a selection of bioinformatics challenges. We begin with a discussion of research into protein structure comparison, and highlight the utility of Kolmogorov complexity as a measure of structural similarity. We then turn to research into de novo protein structure prediction, in which structures are generated from first principles. In this endeavour, there is a compromise between the detail of the model and the extent to which the conformational space of the protein can be sampled. We discuss some developments in this area, including off-lattice structure prediction using the great deluge algorithm. One strategy to reduce the size of the search space is to restrict the protein chain to sites on a regular lattice. In this context, we highlight the use of memetic algorithms, which combine genetic algorithms with local optimisation, to the study of simple protein models on the two-dimensional square lattice and the face-centred cubic lattice.
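The abstract above describes memetic algorithms — a genetic algorithm combined with local optimisation — applied to simple protein models on the 2D square lattice. A minimal sketch on the standard HP lattice model is shown below; the sequence, move encoding, fitness function, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import random

# Toy memetic algorithm for the HP model on the 2D square lattice.
# A conformation is a string of absolute moves; fitness counts H-H
# contacts between non-adjacent residues, penalising self-intersections.

MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def fold(directions):
    """Turn a sequence of absolute moves into lattice coordinates."""
    pos, path = (0, 0), [(0, 0)]
    for d in directions:
        dx, dy = MOVES[d]
        pos = (pos[0] + dx, pos[1] + dy)
        path.append(pos)
    return path

def fitness(seq, directions):
    """H-H contacts between non-adjacent residues; invalid folds get -100."""
    path = fold(directions)
    if len(set(path)) != len(path):          # self-avoidance violated
        return -100
    cells = {p: i for i, p in enumerate(path)}
    contacts = 0
    for i, p in enumerate(path):
        if seq[i] != "H":
            continue
        for dx, dy in MOVES.values():
            j = cells.get((p[0] + dx, p[1] + dy))
            if j is not None and j > i + 1 and seq[j] == "H":
                contacts += 1                # each pair counted once (j > i+1)
    return contacts

def local_search(seq, ind):
    """Greedy hill-climbing: accept any single-move change that improves fitness."""
    best, best_f = list(ind), fitness(seq, ind)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            for m in MOVES:
                if m == best[i]:
                    continue
                trial = best[:i] + [m] + best[i + 1:]
                f = fitness(seq, trial)
                if f > best_f:
                    best, best_f, improved = trial, f, True
    return best

def memetic(seq, pop_size=20, gens=30, seed=1):
    """GA (crossover + mutation) with local search applied to every offspring."""
    rng = random.Random(seed)
    n = len(seq) - 1
    pop = [[rng.choice("UDLR") for _ in range(n)] for _ in range(pop_size)]
    pop[0] = ["U"] * n                       # straight chain: always self-avoiding
    pop = [local_search(seq, ind) for ind in pop]
    for _ in range(gens):
        p1, p2 = rng.sample(pop, 2)
        cut = rng.randrange(1, n)
        child = p1[:cut] + p2[cut:]          # one-point crossover
        child[rng.randrange(n)] = rng.choice("UDLR")   # point mutation
        child = local_search(seq, child)     # the "memetic" refinement step
        worst = min(range(pop_size), key=lambda k: fitness(seq, pop[k]))
        if fitness(seq, child) > fitness(seq, pop[worst]):
            pop[worst] = child
    return max(pop, key=lambda ind: fitness(seq, ind))
```

For example, `fitness("HHHH", "URD")` scores the U-shaped fold at 1 (one non-bonded H-H contact), and `memetic("HPHPPHHPHH")` returns the best conformation found for a short hydrophobic-polar sequence. Real studies use far larger populations and more sophisticated move sets.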
Ranganathan, Shoba; Hsu, Wen-Lian; Yang, Ueng-Cheng; Tan, Tin Wee
The 2008 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation set up in 1998, was organized as the 7th International Conference on Bioinformatics (InCoB), jointly with the Bioinformatics and Systems Biology in Taiwan (BIT 2008) Conference, Oct. 20-23, 2008 at Taipei, Taiwan. Besides bringing together scientists from the field of bioinformatics in this region, InCoB is actively involving researchers from the area of systems biology, to facilitate greater synergy between these two groups. Marking the 10th Anniversary of APBioNet, this InCoB 2008 meeting followed on from a series of successful annual events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand), Busan (South Korea), New Delhi (India) and Hong Kong. Additionally, tutorials and the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) immediately prior to the 20th Federation of Asian and Oceanian Biochemists and Molecular Biologists (FAOBMB) Taipei Conference provided ample opportunity for inducting mainstream biochemists and molecular biologists from the region into a greater level of awareness of the importance of bioinformatics in their craft. In this editorial, we provide a brief overview of the peer-reviewed manuscripts accepted for publication herein, grouped into thematic areas. As the regional research expertise in bioinformatics matures, the papers fall into thematic areas, illustrating the specific contributions made by APBioNet to global bioinformatics efforts.
Vetrivel, Umashankar; Pilla, Kalabharath
Historically, live Linux distributions for bioinformatics have paved the way for a platform-independent, portable bioinformatics workbench. However, most existing live Linux distributions limit their scope to sequence analysis and basic molecular visualization programs and lack data persistence. Hence Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks such as molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery builds on an advanced, customizable configuration of Fedora, with data persistence accessible via a USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in.
Gressgård, Leif Jarle; Hansen, Kåre
Learning from failures is vital for improvement of safety performance, reliability, and resilience in organizations. In order for such learning to take place in distributed environments, knowledge has to be shared among organizational members at different locations and units. This paper reports on a study conducted in the context of drilling and well operations on the Norwegian Continental Shelf, which represents a high-risk distributed organizational environment. The study investigates the relationships between organizations' abilities to learn from failures, knowledge exchange within and between organizational units, quality of contractor relationship management, and work characteristics. The results show that knowledge exchange between units is the most important predictor of perceived ability to learn from failures. Contractor relationship management, leadership involvement, role clarity, and empowerment are also important factors for failure-based learning, both directly and through increased knowledge exchange. The results of the study enhance our understanding of how abilities to learn from failures can be improved in distributed environments where similar work processes take place at different locations and involve employees from several companies. Theoretical contributions and practical implications are discussed. - Highlights: • We investigate factors affecting failure-based learning in distributed environments. • Knowledge exchange between units is the most important predictor. • Contractor relationship management is positively related to knowledge exchange. • Leadership involvement, role clarity, and empowerment are significant variables. • Respondents from an operator firm and eight contractors are included in the study
Landgrebe, T. C.; Müller, R. D.; EarthByte Group
Geographic information systems form a core part of Earth Science education and teaching, allowing the ever-growing repositories of digital geo-data to be integrated and visualised in a unified fashion. These systems cope with the wide variety of spatial data types, each with their own properties and metadata, allowing for a better understanding of how Earth processes operate. A unique requirement for the Earth Sciences is to take into account plate motion and crustal deformation processes acting through time, thus altering the various spatial relationships. The open-source GPlates software (www.gplates.org) infrastructure has become a standard tool for this type of analysis, providing the ability to reconstruct various datasets through time interactively by attaching arbitrary data to tectonic plates. Combining vast datasets in this manner is increasing the analysis complexity, with traditional visualisation-based approaches becoming ineffective in extracting necessary information and discovering new insights. In addressing this, GPlates has been extended with two key technologies, manifesting itself as a powerful interactive knowledge-discovery platform. The first technology is a "data coregistration" tool, in which desired relationships between various datasets are recursively defined, thus providing the key link between a qualitative visualisation environment and a quantitative multivariate statistical analysis framework. The second technology is a data-mining environment (Orange, http://orange.biolab.si), better suited to coping with complexities due to large datasets, high dimensionality, spatial and temporal dynamics, different data types etc. The data-mining tool has a diverse library of components allowing for interactive filtering, combining, transforming and pattern analysis of incoming data. Attached to the data-mining tool is a visual-programming environment in which underlying software complexities are abstracted from the user, allowing for the rapid
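The "data coregistration" idea described in the abstract above — defining relationships between datasets so that a qualitative visualisation environment can feed a quantitative statistical framework — can be sketched as a spatial join: for each seed point, find the nearest point in a second dataset within a search radius and merge their attributes into one feature vector. The datasets, attribute names, and radius below are hypothetical; GPlates' actual coregistration tool operates on reconstructed geometries through time.

```python
from math import hypot

def coregister(seeds, targets, radius):
    """Join each seed to its nearest target within `radius`.

    seeds, targets: lists of (x, y, attrs) tuples, attrs being dicts.
    Returns one merged attribute dict per seed (None where no target
    lies within the search radius).
    """
    rows = []
    for sx, sy, sattrs in seeds:
        best, best_d = None, radius
        for tx, ty, tattrs in targets:
            d = hypot(sx - tx, sy - ty)
            if d <= best_d:                  # keep the closest target in range
                best, best_d = tattrs, d
        if best is None:
            rows.append(None)                # no target within the radius
        else:
            rows.append({**sattrs, **best, "distance": round(best_d, 3)})
    return rows

# Hypothetical example: join sample sites to the nearest ridge segment,
# producing feature vectors ready for a data-mining tool such as Orange.
deposits = [(0.0, 0.0, {"age": 90.0}), (10.0, 10.0, {"age": 30.0})]
ridges = [(1.0, 0.0, {"spreading_rate": 55.0})]
print(coregister(deposits, ridges, radius=2.0))
```

The first deposit is matched (distance 1.0) and inherits the ridge's attributes; the second lies outside the radius and yields None, flagging a gap rather than silently dropping the record.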
Jang, Ki Bok; Moon, Hyun Ju; Jeong, Hyun Keun; Kim, Tae Yol [Korea Environment Institute, Seoul (Korea)
The importance of knowledge is being stressed now more than at any other time. How efficiently and effectively knowledge is created, spread, and applied is decisive for securing the competitiveness of an individual economic unit as well as for growing the nation's economy. For that reason, the Government has been promoting various policies to accelerate the shift to a knowledge-based economy, establishing 'a Strategy for Knowledge-Based Economic Development' at the pan-government level. Companies have also been actively adopting 'Knowledge-Based Management' as a new management strategy. Accordingly, not only are knowledge-based industries, including high-technology manufacturing and the information/communication industry, growing sharply, but knowledge-based activities within individual economic activities, such as R and D, have been expanding their share. Such a shift to a knowledge-based economy is expected to have wide-ranging effects on society, culture, and politics, as well as on the economy. With due consideration to these various effects, the strategy for knowledge-based economic development and the policies in the related fields have to be promoted in a balanced way. The environmental field is no exception. However, there has not yet been a concrete examination of the environmental significance of a shift to a knowledge-based economy. The purpose of this study is to examine the effects on the environment of a shift to a knowledge-based economy and to find countermeasures based on an awareness of these problems. I hope that the results and the countermeasures from this study can contribute to achieving a shift to an environment-centered, knowledge-based economy. 82 refs., 30 figs., 10 tabs.
James F. Aiton
The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.
Musmeci, Loredana; Bianchi, Fabrizio; Carere, Mario; Cori, Liliana
The study area includes the Municipalities of Gela, Niscemi and Butera, located in the south of Sicily, Italy. In 1990 it was declared an Area at High Risk of Environmental Crisis. In 2000 part of it was designated the Gela Reclamation Site of National Interest, RSNI. The site includes a private industrial area and public and marine areas, for a total of 51 km². The population of Gela in 2008 was 77,145 (54,774 in 1961). Elevation above sea level: 46 m. Total area: 276 km². Grid reference: 37°4'0" N, 14°15'0" E. Niscemi and Butera border Gela, with populations of 26,541 and 5,063, and elevations of 332 m and 402 m, respectively. Close to the city of Gela, the industrial area, operating since 1962, includes chemical production plants, a power station and an oil refinery, one of the largest in Europe, refining 5 million tons of crude per year. The workforce has decreased from an initial 7,000 to the current 3,000. Over the years these industrial activities have been a major source of environmental pollution, and extremely high levels of toxic, persistent and bio-accumulating chemical pollutants have been documented. Many relevant environmental and health data are available, but prior to the studies described in the present publication their use for identifying environmental pressures on health was limited. Nevertheless, over several years different epidemiological studies have provided evidence of health outcomes significantly more frequent than in neighbouring areas and than regional data. In 2007 a Multidisciplinary Working Group was established to analyze the existing data on pollution, exposure and effects and to complete current knowledge on the cycle of pollutants, from migration in the environment to health impact. The present publication is a collection of contributions from this group of experts, supported by the following projects: Evaluation of environmental health impact and estimation of economic costs at of
The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
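The geometrical view recalled above treats a fuzzy set over n elements as a point in the unit hypercube [0,1]^n. As a minimal illustrative sketch (not code from the paper; the membership profiles below are invented toy data), elementwise min and max give fuzzy intersection and union, and a Jaccard-style ratio of their sizes compares two such points:

```python
# A fuzzy set over n elements is a point in the unit hypercube [0,1]^n.
# Elementwise min/max implement fuzzy intersection/union; a Jaccard-style
# ratio of their "sizes" (sums of membership degrees) compares two sets.
# The membership profiles A and B below are invented toy data.

def fuzzy_intersection(a, b):
    return [min(x, y) for x, y in zip(a, b)]

def fuzzy_union(a, b):
    return [max(x, y) for x, y in zip(a, b)]

def similarity(a, b):
    """|A ∩ B| / |A ∪ B| with set size taken as the sum of memberships."""
    inter = sum(fuzzy_intersection(a, b))
    union = sum(fuzzy_union(a, b))
    return inter / union if union else 1.0

A = [0.2, 0.8, 0.5]  # one point in [0,1]^3
B = [0.4, 0.6, 0.5]  # another point
print(round(similarity(A, B), 3))  # → 0.765
```

A crisp (0/1) set is a corner of the hypercube; the comparison-of-genomes illustration in the paper can be read as comparing such points.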
Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…
Heyer, Laurie J.
This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
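As a hedged illustration of the dynamic-programming formulation described in that abstract (not the article's own code; the scoring values match=+1, mismatch=-1, gap=-1 are an arbitrary common choice), a minimal global-alignment scorer might look like:

```python
# Minimal Needleman-Wunsch global alignment score via dynamic programming.
# Scoring scheme (match=+1, mismatch=-1, gap=-1) is an illustrative choice.

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of strings a and b."""
    n, m = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] against b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap          # a[:i] aligned entirely to gaps
    for j in range(1, m + 1):
        dp[0][j] = j * gap          # b[:j] aligned entirely to gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,  # match/mismatch
                           dp[i - 1][j] + gap,    # gap in b
                           dp[i][j - 1] + gap)    # gap in a
    return dp[n][m]

print(needleman_wunsch("ACGT", "ACGT"))  # → 4 (four matches)
```

Tracing back through the dp table (not shown here) recovers the alignment itself, which is the usual classroom exercise.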
Dai, Lin; Gao, Xin; Guo, Yan; Xiao, Jingfa; Zhang, Zhang
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.
Harris, Nomi L; Cock, Peter J A; Chapman, Brad; Fields, Christopher J; Hokamp, Karsten; Lapp, Hilmar; Muñoz-Torres, Monica; Wiencko, Heather
Message from the ISCB: The Bioinformatics Open Source Conference (BOSC) is a yearly meeting organized by the Open Bioinformatics Foundation (OBF), a non-profit group dedicated to promoting the practice and philosophy of Open Source software development and Open Science within the biological research community. BOSC has been run since 2000 as a two-day Special Interest Group (SIG) before the annual ISMB conference. The 17th annual BOSC ( http://www.open-bio.org/wiki/BOSC_2016) took place in Orlando, Florida in July 2016. As in previous years, the conference was preceded by a two-day collaborative coding event open to the bioinformatics community. The conference brought together nearly 100 bioinformatics researchers, developers and users of open source software to interact and share ideas about standards, bioinformatics software development, and open and reproducible science.
Foo, Patrick; Duchon, Andrew; Warren, William H; Tarr, Michael J
Using a metric shortcut paradigm, we have found that like honeybees (Dyer in Animal Behaviour 41:239-246, 1991), humans do not seem to build a metric "cognitive map" from path integration. Instead, observers take novel shortcuts based on visual landmarks whenever they are available and reliable (Foo, Warren, Duchon, & Tarr in Journal of Experimental Psychology-Learning Memory and Cognition 31(2):195-215, 2005). In the present experiment we examine whether humans, like ants (Wolf & Wehner in Journal of Experimental Biology 203:857-868, 2000), first use survey-type path knowledge, built up from path integration, and then subsequently shift to reliance on landmarks. In our study participants walked in an immersive virtual environment while head position and orientation were recorded. During training, participants learned two legs of a triangle with feedback: paths from Home to Red and Home to Blue. A configuration of colored posts surrounded the Red location. To test reliance on landmarks, these posts were covertly translated, rotated, or left unchanged during six probe trials. These probe trials were interspersed during the training procedure to measure changes over learning. Dependence on visual landmarks was immediate and sustained during training, and no significant learning effects were observed other than a decrease in hesitation time. Our results suggest that while humans have at least two distinct navigational strategies available to them, unlike ants, a computationally-simpler landmark strategy dominates during novel shortcut navigation.
MIRCEA VALERIA ARINA
In the context of the knowledge-based economy and society, marketing has acquired a vital role in all fields. Social, cultural, political and economic evolution, together with information and the design and conduct of marketing activities, contributes to increasing the efficiency of any institution. The evolution of marketing over time has challenged leading researchers to define the concept from their own viewpoints, yet each definition captures only some aspects of this vast and important field. As the approach in this article shows, the definitions differ but their essence is the same. In banking and finance, the role of marketing is to continually improve the quality of the services and products offered to customers by formulating appropriate marketing strategies capable of influencing consumer buying behavior. Customer focus, customer loyalty and, not least, an innovative marketing that starts from the client are key features today. The emphasis on innovation and ingenuity serves to create new banking services and products and new ways to attract customers, to retain existing ones, and to define marketing and communication strategies that maximize the results of innovative marketing campaigns. With regard to the banking environment, innovation is the key to a bank's success and rests on product and service innovations, process innovations, organizational innovations and, not least, marketing innovations.
Cristancho, Marco; Isaza, Gustavo; Pinzón, Andrés; Rodríguez, Juan
This volume compiles the accepted contributions for the 2nd Edition of the Colombian Computational Biology and Bioinformatics Congress CCBCOL, after a rigorous review process in which 54 papers were accepted for publication from 119 submitted contributions. Bioinformatics and Computational Biology are areas of knowledge that have emerged due to advances in the Biological Sciences and their integration with the Information Sciences. The expansion of projects involving the study of genomes has led the way in the production of vast amounts of sequence data, which need to be organized, analyzed and stored in order to understand phenomena associated with living organisms, such as their evolution and behavior in different ecosystems, and to support the development of applications that can be derived from this analysis.
Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas
Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons include dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, and nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.
Pritykin, F. N.; Nebritov, V. I.
The paper presents the configuration of a knowledge base necessary for intelligent control of android arm mechanism motion, taking into account different positions of certain forbidden regions. The structure of the knowledge base characterizes past experience of arm motion synthesis in the space of velocity vectors with due regard for known obstacles, and also specifies its intrinsic properties. Knowledge base generation is based on a study of implementations of the arm mechanism's instantaneous states. Computational experiments on virtual control of android arm motion with known forbidden regions using the developed knowledge base are presented. Using the developed knowledge base to control the arm motion virtually reduces the time needed to compute test assignments. The results of the research can be used in developing control systems for autonomous android robots operating in environments known in advance.
During project implementation, various forms of information and experience are generated within the organization. If this accumulated knowledge is not recorded and shared among other projects, it will be lost and no longer available to assist future projects. This may increase future project costs, as resources, time and money will be wasted redefining knowledge that once existed within the company. By not capturing and redeploying this knowledge, the quality...
Okunoye, Olusoji; Oladejo, Bolanle; Odumuyiwa, Victor
The shift from an industrial economy to a knowledge economy in today's world has revolutionized strategic planning in organizations as well as their problem-solving approaches. The point of focus today is knowledge and service production, with more emphasis being laid on knowledge capital. Many organizations are investing in tools that facilitate knowledge sharing among their employees, and they are likewise promoting and encouraging collaboration among their staff in order t...
Ammar Abdullah Mahmoud Ismail
The last few years have witnessed an increased interest in moving away from traditional language instruction settings towards more hybrid and virtual learning environments. Face-to-face interaction, guided practice, and uniformity of knowledge sources and skills are all replaced by settings where multiplicity of views from different learning communities, interconnectedness, self-directedness, and self-management of knowledge and learning are increasingly emphasized. This shift from walled-classroom instruction, with its limited scope and resources, to hybrid and virtual learning environments, with their limitless provisions, requires that learners be equipped with the requisite skills and strategies to manage knowledge and handle language learning in ways commensurate with the nature and limitless possibilities of these new environments. The current study aimed at enhancing knowledge management strategies of EFL teachers in virtual learning environments and examining the impact on their ideational flexibility and engagement in language learning settings. A knowledge management model was proposed and field-tested on a cohort of prospective EFL teachers in the Emirati context. Participants were prospective EFL teachers enrolled in the Methods of Teaching courses and doing their practicum in the Emirati EFL context. Participants' ideational flexibility was tapped via a bi-methodical approach including a contextualized task and a decontextualized one. Their engagement in virtual language learning settings was tapped via an engagement scale. Results of the study indicated that enhancing prospective EFL teachers' knowledge management strategies in virtual learning environments had a significant impact on their ideational flexibility and engagement in foreign language learning settings. Details of the instructional intervention, instruments for tapping students' ideational flexibility and engagement, and results of the study are discussed. Implications for
One area in which many environmental education programs are deficient is in reaching and involving the adult population. For senior adults in particular, the disconnect from environmental centers and other settings represents a missed opportunity for strengthening relationships, utilizing community resources and promoting civic engagement. In this sense, "intergenerational programming" could serve as an effective strategy for broadening the public's awareness of and participation in environmental activities. Although the concept of involving older adults and young people in joint environmental education experiences is compelling on several fronts, there is no body of evidence to draw upon; nor is there a blueprint to guide efforts to translate this general goal into practice. This research was therefore designed to: (1) assess the effectiveness of an intergenerational outdoor education program in enhancing participants' environmental knowledge and positive attitudes, (2) explore other program impacts on the participants and the environmental centers, and (3) learn about environmental educators' experiences and opinions in regard to utilizing senior adults in their programs. This study was conducted in two phases in order to address the research purposes: (1) a nonequivalent-control-group quasi-experimental study incorporated with the Outdoor School program at the Shaver's Creek Environmental Center, and (2) a statewide mail-in survey of environmental educators in Pennsylvania. According to the quantitative data, both intergenerational groups obtained higher mean scores for environmental attitudes than the monogenerational groups, although the difference in scores compared with one of the two monogenerational groups was not statistically significant. The qualitative data showed that senior adults have certain characteristics that allowed them to make a substantial contribution toward enriching children's awareness and appreciation of the natural environment. Although the
Yang, Ren-Min; Yang, Fan; Yang, Fei; Huang, Lai-Ming; Liu, Feng; Yang, Jin-Ling; Zhao, Yu-Guo; Li, De-Cheng; Zhang, Gan-Lin
Accurate estimation of soil carbon is essential for accounting for carbon cycling against the background of global environmental change. However, previous studies have contributed little on the patterns and stocks of soil inorganic carbon (SIC) at large scales. In this study, we defined the structure of a soil depth function to fit the vertical distribution of SIC based on pedogenic knowledge across various landscapes. Soil depth functions were constructed from a dataset of 99 soil profiles in the alpine area of the northeastern Tibetan Plateau. The parameters of the depth functions were mapped from environmental covariates using random forest. Finally, SIC stocks at three depth intervals in the upper 1 m were mapped across the entire study area by applying the predicted soil depth functions at each location. The results showed that the soil depth functions improved the accuracy of fitting the vertical distribution of SIC content, with a mean determination coefficient of R² = 0.93. Overall accuracy of the predicted SIC stocks was assessed on training samples. High Lin's concordance correlation coefficient values (0.84-0.86) indicate that predicted and observed values were in good agreement (RMSE: 1.52-1.67 kg m⁻²; ME: -0.33 to -0.29 kg m⁻²). Variable importance showed that geographic position predictors (longitude, latitude) were key factors in predicting the distribution of SIC, and terrain covariates were important variables influencing the three-dimensional distribution of SIC in mountain areas. By applying the proposed approach, the total SIC stock in this area is estimated at 75.41 Tg in the upper 30 cm, 113.15 Tg in the upper 50 cm and 190.30 Tg in the upper 1 m. We conclude that the methodology would be applicable for further prediction of SIC stocks in the Tibetan Plateau or other similar areas. Copyright © 2017 Elsevier B.V. All rights reserved.
Goede, M. de; Postma, A.
Males tend to outperform females in their knowledge of relative and absolute distances in spatial layouts and environments. It is as yet unclear to what extent these differences are innate or develop through life. The aim of the present study was to investigate whether gender differences in configurational
Hitt, Fernando; Kieran, Carolyn
Our research project aimed at understanding the complexity of the construction of knowledge in a CAS environment. Basing our work on the French instrumental approach, in particular the Task-Technique-Theory (T-T-T) theoretical frame as adapted from Chevallard's Anthropological Theory of Didactics, we were mindful that a careful task design process…
Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong
In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
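The MapReduce pattern this review introduces can be sketched in miniature with k-mer counting as the bioinformatics task. This is a single-process toy illustrating the map, shuffle, and reduce phases, not a real Hadoop or Spark deployment, and the function names are hypothetical:

```python
# Toy, single-process illustration of the MapReduce pattern using k-mer
# counting. In a real deployment the map, shuffle and reduce phases run
# distributed under a framework such as Hadoop or Spark.

from collections import defaultdict

def map_phase(seq, k=3):
    """Map: emit (k-mer, 1) pairs from one sequence."""
    return [(seq[i:i + k], 1) for i in range(len(seq) - k + 1)]

def shuffle(pairs):
    """Shuffle: group emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each k-mer."""
    return {kmer: sum(values) for kmer, values in groups.items()}

pairs = [p for seq in ["ATGATG", "TGATG"] for p in map_phase(seq)]
counts = reduce_phase(shuffle(pairs))
print(counts["ATG"])  # → 3
```

The appeal for bioinformatics is that each phase parallelizes trivially over sequence chunks, which is what makes the pattern a fit for cloud-scale sequence data.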
Keerthikumar, Shivakumar; Gangoda, Lahiru; Gho, Yong Song; Mathivanan, Suresh
Extracellular vesicles (EVs) are a class of membranous vesicles that are released by multiple cell types into the extracellular environment. This unique class of extracellular organelles, which play a pivotal role in intercellular communication, is conserved across prokaryotes and eukaryotes. Depending upon the cell of origin and the functional state, the molecular cargo within the EVs, including proteins, lipids, and RNA, is modulated. Owing to this, EVs are considered a subrepertoire of the host cell and are rich reservoirs of disease biomarkers. In addition, the availability of EVs in multiple bodily fluids, including blood, has created significant interest in biomarker and signaling research. With the advancement of high-throughput techniques, multiple EV studies have embarked on profiling the molecular cargo. To benefit the scientific community, existing free Web-based resources including ExoCarta, EVpedia, and Vesiclepedia catalog multiple datasets. These resources aid in elucidating the molecular mechanisms and pathophysiology underlying the different disease conditions from which EVs are isolated. Here, the existing bioinformatics tools for performing integrated analysis to identify key functional components in EV datasets are discussed.
The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream of the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera exist, Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10⁻⁴ to 2×10⁻² substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias, and cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.
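Two of the genome statistics discussed in that abstract, G + C content and codon usage, are simple to compute. The following is an illustrative sketch on invented toy sequences, not code or data from the study:

```python
# Toy sketch of two genome statistics: G+C content and a codon-usage tally.
# The sequences below are invented examples, not real coronavirus data.

from collections import Counter

def gc_content(seq):
    """Fraction of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def codon_usage(orf):
    """Count codons, reading an ORF in frame 0."""
    return Counter(orf[i:i + 3] for i in range(0, len(orf) - 2, 3))

print(round(gc_content("ATGCGC"), 2))   # → 0.67 (4 of 6 bases are G/C)
print(codon_usage("ATGAAATGA")["ATG"])  # → 1
```

Comparing such codon tallies between genomes, normalized per amino acid, is the basis of the codon-usage-bias analyses the abstract describes.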
Melero, Juan L; Andrades, Sergi; Arola, Lluís; Romeu, Antoni
Psoriasis is an immune-mediated, inflammatory and hyperproliferative disease of the skin and joints. The cause of psoriasis is still unknown. The fundamental feature of the disease is the hyperproliferation of keratinocytes and the recruitment of cells from the immune system in the region of the affected skin, which leads to deregulation of the expression of many well-known genes. Based on data mining and bioinformatic scripting, here we show a new dimension of the effect of psoriasis at the genomic level. Using our own pipeline of scripts in Perl and MySql, and based on the freely available NCBI Gene Expression Omnibus (GEO) database (DataSet Record GDS4602, Series GSE13355), we explore the extent of the effect of psoriasis on gene expression in the affected tissue. We give greater insight into the effects of psoriasis on the up-regulation of some genes of the cell cycle (CCNB1, CCNA2, CCNE2, CDK1) or the dynamin system (GBPs, MXs, MFN1), as well as the down-regulation of typical antioxidant genes (catalase, CAT; superoxide dismutases, SOD1-3; and glutathione reductase, GSR). We also provide a complete list of human genes and how they respond in a state of psoriasis. Our results show that psoriasis affects all chromosomes and many biological functions. If we further consider the stable and mitotically heritable character of the psoriasis phenotype, and the influence of environmental factors, then it seems that psoriasis has an epigenetic origin. This fits well with the strong hereditary character of the disease as well as its complex genetic background. Copyright © 2017 Japanese Society for Investigative Dermatology. Published by Elsevier B.V. All rights reserved.
Niemeier, Brandi S.; Tande, Desiree L.; Hwang, Joyce; Stastny, Sherri; Hektner, Joel M.
Because children's eating habits predict their adult eating habits, educating children about healthy foods is essential (U.S. Department of Health and Human Services, 2000). A Midwest Extension Service created and delivered an educational experience for preschool children to increase knowledge of fruits and vegetables. The knowledge assessment…
Aimée Hoeve; Dr. Ilya Zitter
This paper deals with the problematic nature of the transition between education and the workplace. A smooth transition between education and the workplace requires learners to develop an integrated knowledge base, but this is problematic as most educational programmes offer knowledge and
Ou, C.X.J.; van Hillegersberg, J.; Spiekermann, S.
In this paper, we draw on socio-technical theory to explore how Chinese professionals engage in informal knowledge focused activities facilitated by guanxi networks in the face of restrictive corporate IT policies. Following a short review of the global and Chinese literature on knowledge management
Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein
cancer immunotherapies has yet to be fulfilled. The insufficient efficacy of existing treatments can be attributed to a number of biological and technical issues. In this review, we detail the current limitations of immunotherapy target selection and design, and review computational methods to streamline... therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes...
Lingg, Myriam; Wyss, Kaspar; Durán-Arenas, Luis
In organisational theory there is an assumption that knowledge is used effectively in healthcare systems that perform well. Actors in healthcare systems focus on managing knowledge of clinical processes, for example clinical decision-making, to improve patient care. We know little about connecting that knowledge to administrative processes such as high-risk medical device procurement. We analysed knowledge-related factors that influence procurement and clinical procedures for orthopaedic medical devices in Mexico. We based our qualitative study on 48 semi-structured interviews with various stakeholders in Mexico: orthopaedic specialists, government officials, and social security system managers or administrators. We took a knowledge-management perspective (i) to analyse factors of managing knowledge of clinical procedures, (ii) to assess the role of this knowledge in relation to the procurement of orthopaedic medical devices, and (iii) to determine how to improve the situation. The results of this study are primarily relevant for Mexico but may also provide impetus to other health systems with highly standardized procurement practices. We found that knowledge of clinical procedures in orthopaedics is generated inconsistently and not always efficiently managed. Its support for procuring orthopaedic medical devices is insufficient. Identified deficiencies: leaders who lack guidance and direction and thus use knowledge poorly; failure to share knowledge; insufficiently defined formal structures and processes for collecting information and making it available to actors of the health system; lack of strategies to benefit from synergies created by information and knowledge exchange. Many factors are related directly or indirectly to technological aspects, which are insufficiently developed. The content of this manuscript is novel in that it analyses knowledge-related factors that influence procurement of orthopaedic medical devices in Mexico. Based on our results we
Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
Maartje de Goede
Males tend to outperform females in their knowledge of relative and absolute distances in spatial layouts and environments. It is as yet unclear to what extent these differences are innate or develop through life. The aim of the present study was to investigate whether gender differences in configurational knowledge of a natural environment might be modulated by experience. To examine this possibility, distance as well as directional knowledge of the city of Utrecht in the Netherlands was assessed in male and female inhabitants with different levels of familiarity with the city. Experience affected the ability to solve difficult distance knowledge problems, but only for females. While the quality of the spatial representation of metric distances improved with more experience, this effect did not differ between males and females. In contrast, directional configurational measures showed a main gender effect but no modulation by experience. In general, it seems that we acquire different configurational aspects on different experiential time scales. Moreover, the results suggest that experience may be a modulating factor in the occurrence of gender differences in configurational knowledge, though this seems to depend on the type of measurement. It is discussed to what extent proficiency in mental rotation ability and spatial working memory accounts for these differences.
Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
Roth, Robert E.; Perez, Julio
Reported is an assessment of secondary school pupils regarding their attitudes about and knowledge of environmental issues. It was found that gender was a significant variable and that poverty and deforestation were ranked as the most critical environmental problems. (CW)
Technology-enabled learning environments are beginning to come of age. Tools and frameworks are now available that have been shown to improve learning and are being deployed more widely in varied school settings. Teachers are now faced with the formidable challenge of integrating these promising new environments with the everyday context in which…
The constructivist approach to the design of modelling software for the training and teaching of the concepts of current and voltage requires drawing on several disciplinary fields in order to provide learners with training adapted to their representations. This approach thus requires researchers to have adequate knowledge or skills in computing, didactics and science content. In this regard, several studies underline that the acquisition of the basic concepts that span a given field of knowledge must take into account both the students' and the scientific representations. The present research takes this perspective and aims to present interactive computer environments that take into account students' (secondary and college) and scientific representations related to simple electric circuits. These computer environments will help students to analyze the functions of electric circuits adequately.
Leung, Anthony K L; Andersen, Jens S; Mann, Matthias
The nucleolus is a plurifunctional, nuclear organelle, which is responsible for ribosome biogenesis and many other functions in eukaryotes, including RNA processing, viral replication and tumour suppression. Our knowledge of the human nucleolar proteome has been expanded dramatically by the two r...
Kuate Defo Barthelemy
effects of HIV interventions/programmes in sub-Saharan Africa. Indeed, few respondents reported accurate knowledge of HIV transmission routes and prevention strategies. Findings showed that the role of the family environment as a source of accurate knowledge of HIV transmission routes and prevention strategies is of paramount significance; however, families have been poorly integrated into the design and implementation of the first generation of HIV interventions. There is an urgent need for policymakers to work together with families to improve the efficiency of these interventions. Peer influence is likely controversial because of the double positive effect of peer-to-peer communication on both accurate and inaccurate knowledge of HIV transmission routes.
Brazas, Michelle D.; Ouellette, B. F. Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable...
Benilde García Cabrero
A model for the analysis of interaction and construction of knowledge in educational environments based on computer-mediated communication (CMC) is proposed. This proposal considers: (1) the contextual factors that constitute the input and the scenario of interaction; (2) the interaction processes: types of interaction and their contents (Garrison, Anderson and Archer, 2000), as well as the discursive strategies (Lemke, 1997); and (3) learning results, which involve the quality of the knowledge constructed by the participants (Gunawardena, Lowe and Anderson, 1997). This model was applied to the analysis of the interaction among a group of participants in two web forums (with or without the presence of a teacher) during the teaching of a PhD in Psychology program. The results show evidence of the model's viability to describe the patterns of interaction and the levels of construction of knowledge in web forums.
Mulder, Nicola J; Adebiyi, Ezekiel; Adebiyi, Marion; Adeyemi, Seun; Ahmed, Azza; Ahmed, Rehab; Akanle, Bola; Alibi, Mohamed; Armstrong, Don L; Aron, Shaun; Ashano, Efejiro; Baichoo, Shakuntala; Benkahla, Alia; Brown, David K; Chimusa, Emile R; Fadlelmola, Faisal M; Falola, Dare; Fatumo, Segun; Ghedira, Kais; Ghouila, Amel; Hazelhurst, Scott; Isewon, Itunuoluwa; Jung, Segun; Kassim, Samar Kamal; Kayondo, Jonathan K; Mbiyavanga, Mamana; Meintjes, Ayton; Mohammed, Somia; Mosaku, Abayomi; Moussa, Ahmed; Muhammd, Mustafa; Mungloo-Dilmohamud, Zahra; Nashiru, Oyekanmi; Odia, Trust; Okafor, Adaobi; Oladipo, Olaleye; Osamor, Victor; Oyelade, Jellili; Sadki, Khalid; Salifu, Samson Pandam; Soyemi, Jumoke; Panji, Sumir; Radouani, Fouzia; Souiai, Oussama; Tastan Bishop, Özlem
Although pockets of bioinformatics excellence have developed in Africa, generally, large-scale genomic data analysis has been limited by the availability of expertise and infrastructure. H3ABioNet, a pan-African bioinformatics network, was established to build capacity specifically to enable H3Africa (Human Heredity and Health in Africa) researchers to analyze their data in Africa. Since the inception of the H3Africa initiative, H3ABioNet's role has evolved in response to changing needs from the consortium and the African bioinformatics community. H3ABioNet set out to develop core bioinformatics infrastructure and capacity for genomics research in various aspects of data collection, transfer, storage, and analysis. Various resources have been developed to address genomic data management and analysis needs of H3Africa researchers and other scientific communities on the continent. NetMap was developed and used to build an accurate picture of network performance within Africa and between Africa and the rest of the world, and Globus Online has been rolled out to facilitate data transfer. A participant recruitment database was developed to monitor participant enrollment, and data is being harmonized through the use of ontologies and controlled vocabularies. The standardized metadata will be integrated to provide a search facility for H3Africa data and biospecimens. Because H3Africa projects are generating large-scale genomic data, facilities for analysis and interpretation are critical. H3ABioNet is implementing several data analysis platforms that provide a large range of bioinformatics tools or workflows, such as Galaxy, the Job Management System, and eBiokits. A set of reproducible, portable, and cloud-scalable pipelines to support the multiple H3Africa data types are also being developed and dockerized to enable execution on multiple computing infrastructures. In addition, new tools have been developed for analysis of the uniquely divergent African data and for
Immersive Virtual Learning Environments (IVLEs) are extensively used in training, but few rigorous scientific investigations regarding the transfer of learning have been conducted. Measurement of learning transfer through evaluative methods is key...
The scraps of knowledge existing today about environmental radiation protection are based on dose limits (10 and 1 mGy·h⁻¹) that have been determined from the literature on the effects of acute exposure of individuals by external irradiation. But these standards come from a context far from environmental reality. Some research directions are emerging: to complete the radioecological knowledge of transfers and of biological accumulation in living organisms; to study the effects of this bioaccumulation in a multi-pollution context with chronic low-dose exposure; and to identify the characteristics of these effects (low doses in chronic multi-pollution) at the ecosystem level. (N.C.)
Shroyer, Josh; Stewart, Craig
The purpose of this study was to determine the knowledge and opinions on concussions of high school coaches from a geographically large yet rural state in the northern Rocky Mountains of the United States. Few medical issues in sport are more important, or have had as much publicity recently, as concussions. The exposure gleaned from tragic health…
Fraser-Abder, Pamela; Doria, John A.; Yang, Ji-Sup; De Jesus, Angela
The concept of funds of knowledge, as applied to an ethnically popular fruit, is the focus of this module. Teachers can use this concept to create contextually meaningful experiments that can contribute to a culturally relevant and more fully developed educational unit focusing on the science of nutrition and reflecting content Standards A and C.…
Shah, Abhik; Woolf, Peter
In this paper, we introduce pebl, a Python library and application for learning Bayesian network structure from data and prior knowledge. It provides features unmatched by alternative software packages: the ability to use interventional data, flexible specification of structural priors, modeling with hidden variables and exploitation of parallel processing. PMID:20161541
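The structure-learning task pebl addresses amounts to scoring candidate networks against observed data. The scoring idea can be illustrated independently of pebl's actual API; the following is a generic sketch of a decomposable log-likelihood score for a discrete Bayesian network, not pebl's implementation, and all names are illustrative:

```python
from collections import Counter
from math import log

def loglik_score(data, parents):
    """Decomposable log-likelihood of a discrete Bayesian network structure.

    data: list of dicts mapping variable name -> observed discrete value.
    parents: dict mapping each variable -> tuple of its parent names (the DAG).
    Each variable contributes sum over rows of log P(value | parent config),
    with probabilities estimated by counting.
    """
    score = 0.0
    for var, pa in parents.items():
        joint = Counter()   # counts of (parent configuration, value)
        marg = Counter()    # counts of parent configuration alone
        for row in data:
            cfg = tuple(row[p] for p in pa)
            joint[(cfg, row[var])] += 1
            marg[cfg] += 1
        for (cfg, _), n in joint.items():
            score += n * log(n / marg[cfg])  # n * log P(value | parents)
    return score

# Toy data: B always copies A, so the structure A -> B should score
# higher than the structure treating A and B as independent.
data = [{"A": a, "B": a} for a in (0, 1, 0, 1)]
dependent = loglik_score(data, {"A": (), "B": ("A",)})
independent = loglik_score(data, {"A": (), "B": ()})
```

pebl itself adds structural priors, interventional data handling and search over structures; a raw log-likelihood score like this always favors denser networks, which is why practical scores (BIC, BDe) add a complexity penalty.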
Gray, C.; Turner, R.; Sutton, C.; Petersen, C.; Stevens, S.; Swain, J.; Esmond, B.; Schofield, C.; Thackeray, D.
Knowledge of research methods is regarded as crucial for the UK economy and workforce. However, research methods teaching is viewed as a challenging area for lecturers and students. The pedagogy of research methods teaching within universities has been noted as underdeveloped, with undergraduate students regularly expressing negative dispositions…
Yildirm, Isil; Zengel, Rengin
In parallel with technological developments, the dominant use of digital tools in science and education has transformed knowledge in new ways. The reflection of this integration is seen in the design discipline, given its active role in this circle whether in practice or in education. Benefiting from the capabilities of new…
Jou, Min; Lin, Yen-Ting; Wu, Din-Wu
With the development of information technology and the popularization of web applications, students nowadays have grown used to skimming through information provided on the Internet. This reading habit has left them incapable of analyzing or integrating the information they receive. Hence, knowledge management and critical thinking (CT) have,…
The impact of the knowledge-based society, especially on knowledge-intensive business services, has proved to have a significant influence on the development of the services industry. Consultancy services are acknowledged as being at once innovation-intensive and knowledge-intensive business services. On the basis of qualitative and quantitative research, combining disparities analysis with logical ranking, critical assessments, explanatory associations, comparative analysis and empirical research, this article aims to contribute to a better appreciation and understanding of the consultancy services sector in Romania, as part of the larger family of knowledge-intensive business services. The regional differences in business and personalized consultancy services are discussed and revealed in the specifics of the national network, which has a clearly defined center. The concentration of consultancy companies and most of their employees in four regions reflects the theory according to which the supply of these services is unevenly distributed, following potential clients from the better developed areas of the country. In all four types of analyzed consultancy activities, total profit exceeds economic loss. The disparities between regions are also supported by the dynamic evolution of the consultancy sector in Romania.
Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won
In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all the members employed in the cooperation group need to share knowledge for mutual understanding. Even if ontology can be the right tool for this goal, there are several issues in building the right ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts an ontology module that is semantically self-contained to fulfill the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge.
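The signature-based module extraction described in this abstract can be sketched under a strong simplifying assumption: treat the ontology as a plain graph of concept-to-concept references rather than full description-logic axioms (real module extraction, e.g. locality-based modules for OWL, works on axioms). All concept names here are hypothetical:

```python
def extract_module(ontology, signature):
    """Extract the sub-ontology reachable from a signature.

    ontology: dict mapping concept -> set of concepts it refers to
              (e.g. via subclass or property axioms).
    signature: iterable of concept names the service actually needs.
    Returns the reference-closed subset of the ontology (the module).
    """
    module = set()
    frontier = list(signature)
    while frontier:
        concept = frontier.pop()
        if concept in module or concept not in ontology:
            continue  # already visited, or a dangling reference
        module.add(concept)
        frontier.extend(ontology[concept] - module)
    # Restrict each kept concept's references to concepts inside the module.
    return {c: ontology[c] & module for c in module}

# Toy ontology: an emergency service needing only "Ambulance" should not
# pull in the unrelated "Weather" branch.
ontology = {
    "Ambulance": {"Vehicle", "Patient"},
    "Vehicle": set(),
    "Patient": {"Person"},
    "Person": set(),
    "Weather": {"Sensor"},
    "Sensor": set(),
}
module = extract_module(ontology, ["Ambulance"])
```

The returned module is closed under the reference relation, so cooperating objects can load it in place of the full ontology, which is the size reduction the paper aims at.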
Nys, Marion; Gyselinck, Valérie; Orriols, Eric; Hickmann, Maya
This study investigates the development of landmark and route knowledge in complex wayfinding situations. It focuses on how children (aged 6, 8, and 10 years) and young adults (n = 79) indicate, recognize, and bind landmarks and directions in both verbal and visuo-spatial tasks after learning a virtual route. Performance in these tasks is also related to general verbal and visuo-spatial abilities as assessed by independent standardized tests (attention, working memory, perception of direction, production and comprehension of spatial terms, sentences and stories). The results first show that the quantity and quality of landmarks and directions produced and recognized by participants in both verbal and visuo-spatial tasks increased with age. In addition, an increase with age was observed in participants' selection of decisional landmarks (i.e., landmarks associated with a change of direction), as well as in their capacity to bind landmarks and directions. Our results support the view that children first acquire landmark knowledge, then route knowledge, as shown by their late developing ability to bind knowledge of directions and landmarks. Overall, the quality of verbal and visuo-spatial information in participants' spatial representations was found to vary mostly with their visuo-spatial abilities (attention and perception of directions) and not with their verbal abilities. Interestingly, however, when asked to recognize landmarks encountered during the route, participants show an increasing bias with age toward choosing a related landmark of the same category, regardless of its visual characteristics, i.e., they incorrectly choose the picture of another fountain. The discussion highlights the need for further studies to determine more precisely the role of verbal and visuo-spatial knowledge and the nature of how children learn to represent and memorize routes.
Aanpassen aan een veranderende omgeving : de rol van kennis in individuen, groepen en organisaties [Adapting to a changing environment: The role of knowledge in individuals, groups, and organizations]
In this chapter, the thesis is developed that adaptation to a changing environment is a matter of knowledge management. Individuals need to manage their knowledge by correctly applying general knowledge and strategies in novel situations. However, evolution has limited the flexibility with which
Quail, Michelle; Brundage, Shelley B; Spitalnick, Josh; Allen, Peter J; Beilby, Janet
Advanced communication skills are vital for allied health professionals, yet students often have limited opportunities in which to develop them. The option of increasing clinical placement hours is unsustainable in a climate of constrained budgets, limited placement availability and increasing student numbers. Consequently, many educators are considering the potential of alternative training methods, such as simulation. Simulations provide safe, repeatable and standardised learning environments in which students can practice a variety of clinical skills. This study investigated students' self-rated communication skill, knowledge, confidence and empathy across simulated and traditional learning environments. Undergraduate speech pathology students were randomly allocated to one of three communication partners with whom they engaged conversationally for up to 30 min: a patient in a nursing home (n = 21); an elderly trained patient actor (n = 22); or a virtual patient (n = 19). One week prior to, and again following the conversational interaction, participants completed measures of self-reported communication skill, knowledge and confidence (developed by the authors based on the Four Habit Coding Scheme), as well as the Jefferson Scale of Empathy - Health Professionals (student version). All three groups reported significantly higher communication knowledge, skills and confidence post-placement (Median d = .58), while the degree of change did not vary as a function of group membership (Median η²). Students reported increased communication skill, knowledge and confidence, though not empathy, following a brief placement in a virtual, standardised or traditional learning environment. The self-reported increases were consistent across the three placement types. It is proposed that the findings from this study provide support for the integration of more sustainable, standardised, virtual patient-based placement models into allied health training programs for the training of
Alako Tadontsop, F.B.
In this thesis we describe different approaches aiding in the utilization of the exponentially growing amount of information available in the life sciences. Briefly, we address two issues in molecular biology: one on sequence analysis and one on text mining. The former addresses the problem of how to determine remote sequence homology, especially when the sequence similarity is very low. For this, a visualisation tool is introduced that combines sequence alignment, domain prediction and phylogeny. ...
Abu-Jamous, Basel; Nandi, Asoke K
Clustering techniques are increasingly being put to use in the analysis of high-throughput biological datasets. Novel computational techniques to analyse high throughput data in the form of sequences, gene and protein expressions, pathways, and images are becoming vital for understanding diseases and future drug discovery. This book details the complete pathway of cluster analysis, from the basics of molecular biology to the generation of biological knowledge. The book also presents the latest clustering methods and clustering validation, thereby offering the reader a comprehensive review o
Abeln, S.; Molenaar, D.; Feenstra, K.A.; Hoefsloot, H.C.J.; Teusink, B.; Heringa, J.
Teaching students with very diverse backgrounds can be extremely challenging. This article uses the Bioinformatics and Systems Biology MSc in Amsterdam as a case study to describe how the knowledge gap for students with heterogeneous backgrounds can be bridged. We show that a mix in backgrounds can
Campbell, Chad E.; Nehm, Ross H.
The growing importance of genomics and bioinformatics methods and paradigms in biology has been accompanied by an explosion of new curricula and pedagogies. An important question to ask about these educational innovations is whether they are having a meaningful impact on students' knowledge, attitudes, or skills. Although assessments are…
Ramm, Hans Henrik
Technology and knowledge make up the knowledge capital that has been so essential to the oil and gas industry's value creation, competitiveness and internationalization. Report prepared for the Norwegian Oil Industry Association (OLF) and The Norwegian Society of Chartered Technical and Scientific Professionals (Tekna) on: the Norwegian petroleum cluster as an environment for creating knowledge capital from human capital; how fiscal and other framework conditions may influence the building of knowledge capital; the long-term perspectives for the petroleum cluster; what Norwegian society can learn from the experiences in the petroleum cluster; and the importance of gaining more knowledge about the functionality of knowledge for increased value creation. (author) (ml)
Scheuermann, Richard H; Sinkovits, Robert S; Schenkelberg, Theodore; Koff, Wayne C
Biomedical research has become a data intensive science in which high throughput experimentation is producing comprehensive data about biological systems at an ever-increasing pace. The Human Vaccines Project is a new public-private partnership, with the goal of accelerating development of improved vaccines and immunotherapies for global infectious diseases and cancers by decoding the human immune system. To achieve its mission, the Project is developing a Bioinformatics Hub as an open-source, multidisciplinary effort with the overarching goal of providing an enabling infrastructure to support the data processing, analysis and knowledge extraction procedures required to translate high throughput, high complexity human immunology research data into biomedical knowledge, to determine the core principles driving specific and durable protective immune responses.
Bansal Arvind K
expression analysis to derive regulatory pathways, the development of statistical techniques, clustering techniques and data mining techniques to derive protein-protein and protein-DNA interactions, and modeling of the 3D structure of proteins and 3D docking between proteins and biochemicals for rational drug design, difference analysis between pathogenic and non-pathogenic strains to identify candidate genes for vaccines and anti-microbial agents, and whole-genome comparison to understand microbial evolution. The development of bioinformatics techniques has enhanced the pace of biological discovery through automated analysis of large numbers of microbial genomes. We are on the verge of using all this knowledge to understand cellular mechanisms at the systemic level. The developed bioinformatics techniques have the potential to facilitate (i) the discovery of causes of diseases, (ii) vaccine and rational drug design, and (iii) improved, cost-effective agents for bioremediation by pruning out dead ends. Despite the fast-paced global effort, current analysis is limited by the lack of gene-functionality data from the wet lab, the lack of computer algorithms to explore vast amounts of data of unknown functionality, the limited availability of protein-protein and protein-DNA interactions, and the lack of knowledge of the temporal and transient behavior of genes and pathways.
McIntyre, A.D.; Turnbull, R.G.H.
The development of the hydrocarbon resources of the North Sea has resulted in both offshore and onshore environmental repercussions, involving the existing physical attributes of the sea and seabed, the coastline and adjoining land. The social and economic repercussions of the industry were equally widespread. The dramatic and speedy impact of the exploration and exploitation of the northern North Sea resources in the early 1970s on the physical resources of Scotland was quickly realised, together with the concern that any environmental and social damage to the physical and social fabric should be kept to a minimum. To this end, a wide range of research and other activities by central and local government, and other interested agencies, was undertaken to extend existing knowledge of the marine and terrestrial environments that might be affected by the oil and gas industry. The outcome of these activities is summarized in this paper. The topics covered include a survey of the marine ecosystems of the North Sea, the fishing industry, the impact of oil pollution on seabirds and fish stocks, the ecology of the Scottish coastline, and the impact of the petroleum industry on a selection of particular sites. (author)
Ever since Sir Francis Bacon coined the adage, scientists have believed that "knowledge is power," but this presupposes that people are willing to embrace knowledge. Today, a significant proportion of the American public rejects the scientific evidence of climate change, and many of these Americans are highly educated, so their views cannot be attributed to scientific illiteracy or misunderstanding. Historical evidence shows that resistance to scientific evidence of climate change--like the earlier resistance to the evidence of acid rain, the ozone hole, and the harms of tobacco use--is rooted in intellectual commitments to freedom, individualism, and the power of the free market to protect political freedom while delivering goods and services. Therefore, good public policy is not likely to be achieved by producing more science, better science, or communicating that science more effectively. Rather, it suggests that effective public policy must acknowledge these commitments and concerns, and offer solutions that are not perceived to threaten the American way of life.
The thesis focuses on two bioinformatics research topics: the development of tools for an efficient and reliable identification of single nucleotide polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapters 2, 3 and 4), and the subsequent
Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...
An integrative bioinformatics pipeline for the genomewide identification of novel porcine microRNA genes. Wei Fang, Na Zhou, Dengyun Li, Zhigang Chen, Pengfei Jiang and Deli Zhang. J. Genet. 92, 587–593. Figure 1. Primary sequence of the predicted SSc-mir-2053 precursor and locations of some terms in the secondary ...
Lelieveld, S.H.; Veltman, J.A.; Gilissen, C.F.
With the widespread adoption of next generation sequencing technologies by the genetics community and the rapid decrease in costs per base, exome sequencing has become a standard within the repertoire of genetic experiments for both research and diagnostics. Although bioinformatics now offers
Dec 6, 2013 ... The majority of miRNAs in pig (Sus scrofa), an important domestic animal, remain unknown. From this perspective, we attempted the genomewide identification of novel porcine miRNAs. Here, we propose a novel integrative bioinformatics pipeline to identify conservative and non-conservative novel ...
Thus, there is the need for appropriate strategies for introducing the basic components of this emerging scientific field to part of the African populace through the development of an online distance education learning tool. This study involved the design of a bioinformatics online distance education tool and an implementation of ...
reaction (PCR), oligo hybridization and DNA sequencing. Proper primer design is actually one of the most important factors/steps in successful DNA sequencing. Various bioinformatics programs are available for the selection of primer pairs from a template sequence. The plethora of programs for PCR primer design reflects the ...
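The most basic checks such primer-design programs perform can be sketched in a few lines. This is a minimal illustration only, not any particular tool's method: GC content and the classic Wallace "2 + 4" melting-temperature rule (valid only for short oligos; real tools such as Primer3 use nearest-neighbor thermodynamics). The example primer sequence is invented.

```python
# Two basic primer checks: GC content and the Wallace-rule melting temperature.
# Illustrative sketch only; production primer-design tools use far richer models.

def gc_content(primer: str) -> float:
    """Fraction of G/C bases in the primer (0.0-1.0)."""
    p = primer.upper()
    return (p.count("G") + p.count("C")) / len(p)

def wallace_tm(primer: str) -> int:
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C), a rough estimate for short (<14 nt) oligos."""
    p = primer.upper()
    at = p.count("A") + p.count("T")
    gc = p.count("G") + p.count("C")
    return 2 * at + 4 * gc

primer = "ATGCGTACGTTAG"                 # hypothetical 13-nt primer
print(round(gc_content(primer), 2))      # → 0.46
print(wallace_tm(primer))                # → 38
```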
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…
Nielsen, Henrik; Sperotto, Maria Maddalena
An artificial neural network (ANN)-based bioinformatics approach. The ANN was trained to recognize feature-based patterns in proteins that are considered to be associated with lipid rafts. The trained ANN was then used to predict protein raftophilicity. We found that, in the case of α-helical membrane proteins, their hydrophobic length does not affect...
Ondrej, Vladan; Dvorak, Petr
Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
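As a minimal illustration of the tree-construction step such a primer covers, the sketch below implements UPGMA, the simplest distance-based clustering method, in plain Python and emits a Newick-style tree. The species labels and pairwise distances are invented for illustration; the primer's own two examples are not reproduced here.

```python
# Hedged sketch: UPGMA (average-linkage) tree construction from a distance matrix.
# Real phylogenetic studies derive distances from aligned nucleotide sequences.

def upgma(labels, dist):
    """Build a Newick-style tree string from a symmetric distance dict {(i, j): d}."""
    clusters = {l: (l, 1) for l in labels}          # name -> (newick, size)
    d = {frozenset(k): v for k, v in dist.items()}
    while len(clusters) > 1:
        # find the closest pair of clusters
        a, b = min(
            ((x, y) for x in clusters for y in clusters if x < y),
            key=lambda p: d[frozenset(p)],
        )
        na, sa = clusters.pop(a)
        nb, sb = clusters.pop(b)
        merged = f"({na},{nb})"
        # average-linkage update of distances to the new cluster
        for c in clusters:
            dac = d[frozenset((a, c))]
            dbc = d[frozenset((b, c))]
            d[frozenset((merged, c))] = (dac * sa + dbc * sb) / (sa + sb)
        clusters[merged] = (merged, sa + sb)
    return next(iter(clusters))

tree = upgma(
    ["human", "mouse", "yeast"],
    {("human", "mouse"): 2.0, ("human", "yeast"): 8.0, ("mouse", "yeast"): 8.0},
)
print(tree)  # → ((human,mouse),yeast)
```

The closest pair (human, mouse) is merged first, so the output groups them before attaching the more distant yeast.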
Boyle, John A.
Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…
In this thesis, I detail my 4-year efforts in developing bioinformatics tools and algorithms to address the growing demands of current proteomics endeavors, covering a range of facets such as large-scale protein expression profiling, charting post-translational modifications, as well as
Malis, Martin; Svensson, Carsten; Hvam, Lars
information to reach several companies within a network. The day-to-day interaction for the design and purchase of products in a B-to-B environment will be in focus. It is illustrated how the use of web-based expert systems can improve the efficiency of the sales process significantly. The use of configurators has changed the daily interaction between a company and its suppliers. Large multinational companies (e.g. Dell and Cisco) have demonstrated how the use of web-based expert systems can revolutionize the sale of customized products and change market paradigms. The companies have reached new levels...
significantly better. I also would like to say "thank-you" to the anonymous Delphi study participants who enthusiastically shared their knowledge...
Any present-day approach to the world's most pressing environmental problems involves both scale and governance issues. After all, current local events might have long-term global consequences (the scale issue), and solving complex environmental problems requires policy makers to think and govern beyond generally used time-space scales (the governance issue). To an increasing extent, the various scientists in these fields have used concepts like social-ecological systems, hierarchies, scales and levels to understand and explain the "complex cross-scale dynamics" of issues like climate change. A large part of this work manifests a realist paradigm: the scales and levels, either in ecological processes or in governance systems, are considered "real". However, various scholars question this position and claim that scales and levels are continuously (re)constructed in the interfaces of science, society, politics and nature. Some of these critics even prefer to adopt a non-scalar approach, doing away with notions such as hierarchy, scale and level. Here we take another route, however. We try to overcome the realist-constructionist dualism by advocating a dialogue between the two positions on the basis of exchanging and reflecting on different knowledge claims in transdisciplinary arenas. We describe two important developments, one in the ecological scaling literature and the other in the governance literature, which we consider to provide a basis for such a dialogue. We will argue that scale issues, governance practices and their mutual interdependencies should be considered human constructs, although dialectically related to nature's materiality, and therefore contested processes, requiring intensive and continuous dialogue and cooperation among natural scientists, social scientists, policy makers and citizens alike. They also require critical reflection on scientists' roles and on academic practices in general. Acknowledging knowledge claims
Oluwagbemi Olugbenga OLUSEUN
New scientific research fields are evolving on a yearly basis, but some parts of the African continent are less aware of them. Thus, there arises the need for a suitable implementation strategy for introducing the basic components of an emerging scientific field to part of the African populace through the development of an online distance education learning tool. This emerging field is known as bioinformatics. This research work was instrumental in elucidating the need for a suitable implementation platform for bioinformatics education in parts of the African continent that are less aware of this innovative and interesting field. The aim of this research work was to disseminate the basic knowledge and applications of bioinformatics to these parts of the African continent.
Allergies and/or food intolerances are a growing problem of the modern world. Difficulties associated with the correct diagnosis of food allergies result in the need to classify the factors causing allergies and the allergens themselves. Therefore, internet databases and other bioinformatic tools play a special role in deepening knowledge of biologically important compounds. Internet repositories, as a source of information on different chemical compounds, including those related to allergy and intolerance, are increasingly being used by scientists. Bioinformatic methods play a significant role in the biological and medical sciences, and their importance in food science is increasing. This study aimed at presenting selected databases and tools of bioinformatic analysis useful in research on food allergies, allergens (11 databases), epitopes (7 databases), and haptens (2 databases). It also presents examples of the application of computer methods in studies related to allergies.
The term environment refers to the internal and external context in which organizations operate. For some scholars, environment is defined as an arrangement of political, economic, social and cultural factors existing in a given context that have an impact on organizational processes and structures. For others, environment is a generic term describing a large variety of stakeholders and how these interact and act upon organizations. Organizations and their environment are mutually interdependent, and organizational communications are highly affected by the environment. This entry examines the origin and development of organization-environment interdependence, the nature of the concept of environment, and its relevance for communication scholarship and activities.
Brazas, Michelle D; Ouellette, B F Francis
With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.
Hiew, Hong Liang; Bellgard, Matthew
Life Science research faces the constant challenge of how to effectively handle an ever-growing body of bioinformatics software and online resources. The users and developers of bioinformatics resources have a diverse set of competing demands on how these resources need to be developed and organised. Unfortunately, there does not exist an adequate community-wide framework to integrate such competing demands. The problems that arise from this include unstructured standards development, the emergence of tools that do not meet the specific needs of researchers, and, often, a communication gap between those who use the tools and those who supply them. This paper presents an overview of the different functions and needs of bioinformatics stakeholders to determine what may be required in a community-wide framework. A Bioinformatics Reference Model is proposed as a basis for such a framework. The reference model outlines the functional relationship between research usage and technical aspects of bioinformatics resources. It separates important functions into multiple structured layers, clarifies how they relate to each other, and highlights the gaps that need to be addressed for progress towards a diverse, manageable, and sustainable body of resources. The relevance of this reference model to the bioscience research community, and its implications for progress in organising our bioinformatics resources, are discussed.
Shaheen, Shabnam; Abbas, Safdar; Hussain, Javid; Mabood, Fazal; Umair, Muhammad; Ali, Maroof; Ahmad, Mushtaq; Zafar, Muhammad; Farooq, Umar; Khan, Ajmal
Medicinal plants are important treasures for the treatment of different types of diseases. The current study provides significant ethnopharmacological information, both qualitative and quantitative, on medicinal plants related to children's disorders from district Bannu, Khyber Pakhtunkhwa (KPK) province of Pakistan. The information gathered was quantitatively analyzed using the informant consensus factor, relative frequency of citation and use-value methods to establish baseline data for more comprehensive investigations of bioactive compounds of indigenous medicinal plants specifically related to children's disorders. To the best of our knowledge, this is the first attempt to document ethnobotanical information on these medicinal plants using quantitative approaches. A total of 130 informants were interviewed using a questionnaire conducted during 2014–2016 to identify the preparations and uses of medicinal plants for the treatment of children's diseases. A total of 55 species of flowering plants belonging to 49 genera and 32 families were used as ethno-medicines in the study area. The largest numbers of species belong to the Leguminosae and Cucurbitaceae families (4 species each), followed by Apiaceae, Moraceae, Poaceae, Rosaceae, and Solanaceae (3 species each). Leaves and fruits were the most used parts (28%), herbs were the most used life form (47%), decoction was the most used method of preparation (27%), and oral ingestion was the main route of application (68.5%). The highest use value was reported for the species Momordica charantia and Raphanus sativus (1 each), and the highest informant consensus factor was observed for the cardiovascular and rheumatic disease categories (0.5 each). Most of the species in the present study were used to cure gastrointestinal diseases (39 species). The results of the present study reveal the importance of medicinal plant species and their significant role in the health care of the inhabitants of the area. The people of Bannu own high traditional knowledge
There is currently a revitalized concern about the potential impact of ionizing radiation on the environment, which calls for the construction of a system ensuring adequate radioprotection of non-human biota and their associated biotopes. This paper first sets the context of the problem, both with respect to the general philosophy of environmental protection as a whole and with respect to the consideration of the environment achieved so far for the purpose of human radioprotection. The current accumulated knowledge on the effects of ionizing radiation on biota (fauna and flora) is then briefly reviewed, encompassing effects at the individual and community/ecosystem levels and situations of acute and chronic exposure to high and low doses, finally leading to the identification of the most critical gaps in scientific knowledge: the effects of mixed low dose rates in chronic exposure on communities and ecosystems. The most significant current international efforts towards the identification of environmental radioprotection criteria and standards are finally presented, along with some relevant national examples. (author)
The aim of this review is to discuss the importance of bioinformatics and emphasize the need to acquire bioinformatics training and skills so as to maximize its potential for improved delivery of animal health. In this review, bioinformatics is introduced, challenges to effective animal disease diagnosis, prevention and control, ...
Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.
There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…
Brinberg, Herbert R.; Pinelli, Thomas E.; Barclay, Rebecca O.
Considerable effort has been devoted over the past 30 years to developing methods and means of assessing the value of information. Two approaches - value in exchange and value in use - dominate; however, neither approach enjoys much practical application because a validation schema for decision-making is missing. The approaches fail to measure objectively the real costs of acquiring information and the real benefits that information will yield. Moreover, these approaches collectively fail to provide economic justification to build and/or continue to support an information product or service. In addition, the impact of Cyberspace adds a new dimension to the problem. A new paradigm is required to make economic sense in this revolutionary information environment. In previous work, the authors explored the various approaches to measuring the value of information and concluded that, in large measure, these methods were unworkable concepts and constructs. Instead, they proposed several axioms for valuing information. Most particularly, they concluded that the 'value of information cannot be measured in the absence of a specific task, objective, or goal.' This paper builds on those axioms and describes under which circumstances information can be measured in objective and actionable terms. This paper also proposes a methodology for undertaking such measures and validating the results.
This paper argues that the social sciences are fragmented in addressing the environmental challenge of increasing resource depletion. To address this problem, the paper puts forward a framework which encompasses several disciplinary approaches, and above all a long-term historical perspective and a realist sociology of science and technology which, in combination, provide a means of understanding the disruptive changes in the transformation of the environment. The paper then focuses on energy and gives an overview of the various social forces that can potentially counteract the future tensions arising from the foreseeable depletion of energy sources. It argues that only some of these countervailing forces—namely state intervention and technological innovation—provide viable potential solutions to these tensions. However, these solutions themselves face severe constraints. The paper concludes by arguing that a realistic assessment of constraints is the most useful, though limited, service that social science can contribute to our understanding of the relation between social and environmental transformation.
Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba
The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.
We propose the formation of an International Psycho-Social and Cultural Bioinformatics Project (IPCBP) to explore the research foundations of Integrative Medical Insights (IMI) on all levels, from the molecular-genomic to the psychological, cultural, social, and spiritual. Just as the Human Genome Project identified the molecular foundations of modern medicine with the new technology of DNA sequencing during the past decade, the IPCBP would extend and integrate this neuroscience knowledge base with the technology of gene expression via DNA/proteomic microarray research and brain imaging in development, stress, healing, rehabilitation, and the psychotherapeutic facilitation of existential wellness. We anticipate that the IPCBP will require a unique international collaboration of academic institutions, researchers, and clinical practitioners for the creation of a new neuroscience of mind-body communication, brain plasticity, memory, learning, and creative processing during optimal experiential states of art, beauty, and truth. We illustrate this emerging integration of bioinformatics with medicine with a videotape of the classical 4-stage creative process in a neuroscience approach to psychotherapy.
Brown, J.E.; Borretzen, P.; Hosseini, A.; Iosjpe, M.
A review of concentration factors (CF) for the marine environment was conducted in order to consider the relevance of existing data from the perspective of environmental protection and to identify areas of data paucity. Data have been organised in a format compatible with a reference-organism approach, for selected radionuclides, and efforts have been made to identify the factors that may be of importance in the context of dosimetric and dose-effects analyses. These reference-organism categories had previously been selected by identifying organism groups that were likely to experience the highest levels of radiation exposure, owing to high uptake levels or residence in a particular habitat, for defined scenarios. Significant data gaps in the CF database have been identified, notably for marine mammals and birds. Most empirical information pertains to a limited suite of radionuclides, particularly 137Cs, 210Po and 99Tc. A methodology has been developed to help bridge this information deficit. This has been based on simple dynamic biokinetic models that mainly use parameters derived from laboratory-based study and field observation. In some cases, allometric relationships have been employed to allow further model parameterization. Initial testing of the model, by comparing model output with empirical data sets, suggests that the models provide sensible equilibrium CFs. Furthermore, analyses of modelling results suggest that for some radionuclides, in particular those with long effective half-lives, the time to equilibrium can be far greater than the lifetime of an organism. This clearly emphasises the limitations of applying a universal equilibrium approach. The methodology therefore has the added advantage that non-equilibrium scenarios can be considered in a more rigorous manner. Further refinements to the modelling approach might be attained by exploring the importance of various model parameters, through sensitivity analyses, and by identifying those
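The kind of simple dynamic biokinetic model the review describes can be sketched as a one-compartment system: uptake from water at rate k_u and elimination at rate k_e, giving C(t) = (k_u/k_e)·C_w·(1 − e^(−k_e·t)) and an equilibrium CF of k_u/k_e. The rate values below are invented for illustration and are not taken from the review; the point they illustrate is how a slow elimination rate pushes the time to equilibrium well beyond an organism's lifetime.

```python
import math

# One-compartment biokinetic sketch: uptake at rate ku, elimination at rate ke.
# Parameter values are hypothetical, chosen only to illustrate the dynamics.

def concentration(t, ku, ke, cw):
    """Organism concentration at time t (days): C(t) = (ku/ke)*Cw*(1 - exp(-ke*t))."""
    return (ku / ke) * cw * (1.0 - math.exp(-ke * t))

def equilibrium_cf(ku, ke):
    """Equilibrium concentration factor: CF = ku / ke."""
    return ku / ke

def time_to_fraction(f, ke):
    """Days until C(t) reaches fraction f of its equilibrium value."""
    return -math.log(1.0 - f) / ke

ku, ke = 50.0, 0.01                       # hypothetical rates (per day)
print(round(equilibrium_cf(ku, ke)))      # → 5000
print(round(time_to_fraction(0.95, ke)))  # → 300
```

With these invented rates, 95% of equilibrium takes roughly 300 days, showing why a universal equilibrium assumption can break down for long effective half-lives.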
Oluwagbemi, Olugbenga O; Adewumi, Adewole; Esuruoso, Abimbola
Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, some of the challenges of computational biology and bioinformatics education are inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers, discouraging them from aspiring to undertake future study in these fields. In this paper, we developed and describe MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of key terms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of the users. This tool also has the capability of achieving visualization of results in a mobile multi-platform context. MACBenAbim is available from the authors for non-commercial purposes.
Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.
RNA bioinformatics and computational RNA biology emerged from the implementation of methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences in order to take evolutionary information into account, such as compensating (and structure-preserving) base changes. These methods have been developed further and applied in computational screens of genomic sequence. Furthermore, a number of additional directions have emerged. These include methods to search for RNA 3D structure, RNA-RNA interactions, and the design of interfering RNAs (RNAi), as well as methods for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analysis of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology.
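The single-sequence secondary-structure prediction that the field started from can be illustrated with the classic Nussinov algorithm, which maximizes the number of complementary base pairs by dynamic programming. This is a deliberately minimal sketch, not the energy-minimization approach of modern tools such as RNAfold; the example sequence is invented.

```python
# Nussinov base-pair maximization: the textbook starting point of RNA
# secondary-structure prediction. Real predictors minimize free energy instead.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_max_pairs(seq: str, min_loop: int = 3) -> int:
    """Maximum number of non-crossing base pairs, with hairpin loops of at
    least `min_loop` unpaired bases between the two halves of a pair."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]          # dp[i][j]: best count on seq[i..j]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]               # case: j left unpaired
            for k in range(i, j - min_loop):  # case: j paired with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_max_pairs("GGGAAAUCCC"))  # → 3 (a hairpin: three G-C stem pairs)
```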
Picard, Delphine; Pry, Rene
This study assessed the efficiency of a model of a familiar urban area for enhancing knowledge of the spatial environment by adults with visual impairments. It found a significant improvement in knowledge of spatial configuration after exposure to the model, suggesting that models are powerful means of developing cognitive mapping in people who…
Fang, Wai-Chi; Lue, Jaw-Chyng
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
Christopher L Williams
Full Text Available Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity while reducing the maintenance burden often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality than conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds the potential to transform current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework
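A minimal sketch of the idea, assuming a hypothetical single-purpose service that wraps one analysis step (GC-content computation) behind an HTTP endpoint, using only the Python standard library:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

class GCHandler(BaseHTTPRequestHandler):
    """Single-purpose microservice: GET /gc?seq=ACGT -> JSON {"gc": ...}."""
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/gc":
            self.send_error(404)
            return
        seq = parse_qs(url.query).get("seq", [""])[0]
        body = json.dumps({"gc": gc_content(seq)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("127.0.0.1", 8000), GCHandler).serve_forever()
```

The narrow scope is the point: each pipeline stage can be versioned, replaced, or scaled independently of the others.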
Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas
Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: some minor modifications to the Galaxy system, a collection of tools, and an integration into the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be included in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy. Also, docker images for ballaxy are available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL. © The Author 2014. Published by Oxford University Press. All rights reserved. For
Greene, Anna C.; Giffin, Kristine A.; Greene, Casey S.
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469
Schneider, Maria Victoria; Watson, James; Attwood, Teresa; Rother, Kristian; Budd, Aidan; McDowall, Jennifer; Via, Allegra; Fernandes, Pedro; Nyronen, Tommy; Blicher, Thomas; Jones, Phil; Blatter, Marie-Claude; De Las Rivas, Javier; Judge, David Phillip; van der Gool, Wouter; Brooksbank, Cath
As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first Trainer Networking Session held under the auspices of the EU-funded SLING Integrating Activity, which took place in November 2009.
Colucci, S; Donini, F M; Di Sciascio, E
Comparison of resources is a frequent task in different bio-informatics applications, including drug-target interaction, drug repositioning and mechanism of action understanding, among others. This paper proposes a general method for the logical comparison of resources modeled in Resource Description Framework and shows its distinguishing features with reference to the comparison of drugs. In particular, the method returns a description of the commonalities between resources, rather than a numerical value estimating their similarity and/or relatedness. The approach is domain-independent and may be flexibly adapted to heterogeneous use cases, according to a process for setting parameters which is completely explicit. The paper also presents an experiment using the dataset Bioportal as knowledge source; the experiment is fully reproducible, thanks to the elicitation of criteria and values for parameter customization. Copyright © 2017 Elsevier Inc. All rights reserved.
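The distinction the abstract draws, returning a description of commonalities rather than a similarity score, can be sketched on toy RDF-style triples. The vocabulary and drug facts below are purely illustrative and are not drawn from the paper's dataset:

```python
# Instead of a numeric similarity, report the (predicate, object) pairs
# that two resources share in their descriptions.

def commonalities(triples, r1, r2):
    """Shared (predicate, object) pairs of subjects r1 and r2."""
    desc = lambda r: {(p, o) for s, p, o in triples if s == r}
    return desc(r1) & desc(r2)

# Hypothetical knowledge base of drug descriptions.
triples = {
    ("aspirin",   "hasTarget",   "COX-1"),
    ("aspirin",   "hasCategory", "NSAID"),
    ("ibuprofen", "hasTarget",   "COX-1"),
    ("ibuprofen", "hasCategory", "NSAID"),
    ("ibuprofen", "hasTarget",   "COX-2"),
}
```

The output is itself a description ("both are NSAIDs targeting COX-1"), which a domain expert can inspect, unlike an opaque similarity value. The logical method in the paper generalizes this idea beyond exact matches.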
Full Text Available In order to monitor, describe and understand the marine environment, many research institutions are involved in the acquisition and distribution of ocean data, both from observations and models. Scientists from these institutions spend too much time looking for, accessing, and reformatting data: they need better tools and procedures to make their science more efficient. The U.S. Integrated Ocean Observing System (US-IOOS) is working on making large amounts of distributed data usable in an easy and efficient way. It is essentially a network of scientists, technicians and technologies designed to acquire, collect and disseminate observational and modelled data resulting from coastal and oceanic marine investigations to researchers, stakeholders and policy makers. In order to be successful, this effort requires standard data protocols, web services and standards-based tools. Starting from the US-IOOS approach, which is being adopted throughout much of the oceanographic and meteorological sectors, we describe here the CNR-ISMAR Venice experience in setting up a national Italian IOOS framework using the THREDDS (THematic Real-time Environmental Distributed Data Services) Data Server (TDS), a middleware designed to fill the gap between data providers and data users. The TDS provides services that allow data users to find the data sets pertaining to their scientific needs, and to access, visualize and use them in an easy way, without downloading files to the local workspace. In order to achieve this, it is necessary that the data providers make their data available in a standard form that the TDS understands, with sufficient metadata to allow the data to be read and searched in a standard way. The core idea is then to utilize a Common Data Model (CDM), a unified conceptual model that describes different datatypes within each dataset. More specifically, Unidata (www.unidata.ucar.edu) has developed CDM
Torre, Denis; Krawczuk, Patrycja; Jagodnik, Kathleen M.; Lachmann, Alexander; Wang, Zichen; Wang, Lily; Kuleshov, Maxim V.; Ma'Ayan, Avi
Biomedical data repositories such as the Gene Expression Omnibus (GEO) enable the search and discovery of relevant biomedical digital data objects. Similarly, resources such as OMICtools index bioinformatics tools that can extract knowledge from these digital data objects. However, systematic access to pre-generated 'canned' analyses applied by bioinformatics tools to biomedical digital data objects is currently not available. Datasets2Tools is a repository indexing 31,473 canned bioinformatics analyses applied to 6,431 datasets. The Datasets2Tools repository also contains the indexing of 4,901 published bioinformatics software tools, and all the analyzed datasets. Datasets2Tools enables users to rapidly find datasets, tools, and canned analyses through an intuitive web interface, a Google Chrome extension, and an API. Furthermore, Datasets2Tools provides a platform for contributing canned analyses, datasets, and tools, as well as evaluating these digital objects according to their compliance with the findable, accessible, interoperable, and reusable (FAIR) principles. By incorporating community engagement, Datasets2Tools promotes sharing of digital resources to stimulate the extraction of knowledge from biomedical research data. Datasets2Tools is freely available from: http://amp.pharm.mssm.edu/datasets2tools.
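FAIR-compliance evaluation of a digital object can be sketched as a simple checklist score. The four boolean fields below are our illustrative simplification, not the repository's actual scoring scheme:

```python
def fair_score(obj):
    """Fraction of FAIR criteria met, from boolean metadata fields
    (a toy stand-in for a real FAIR assessment rubric)."""
    criteria = ["findable", "accessible", "interoperable", "reusable"]
    return sum(bool(obj.get(c)) for c in criteria) / len(criteria)
```

Real FAIR assessments decompose each of the four principles into several testable sub-criteria (identifiers, licences, metadata standards), but the aggregate score follows the same pattern.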
Armstrong, Ryan; de Ribaupierre, Sandrine; Eagleson, Roy
This paper describes the design and development of a software tool for the evaluation and training of surgical residents using an interactive, immersive, virtual environment. Our objective was to develop a tool to evaluate user spatial reasoning skills and knowledge in a neuroanatomical context, as well as to augment their performance through interactivity. In the visualization, manually segmented anatomical surface images of MRI scans of the brain were rendered using a stereo display to improve depth cues. A magnetically tracked wand was used as a 3D input device for localization tasks within the brain. The movement of the wand was made to correspond to movement of a spherical cursor within the rendered scene, providing a reference for localization. Users can be tested on their ability to localize structures within the 3D scene, and their ability to place anatomical features at the appropriate locations within the rendering. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Full Text Available The realisation of the advantages offered by e-learning, accompanied by the use of various emerging information technologies, has resulted in a noticeable shift by academia towards e-learning. An analysis of the use, knowledge and adoption of emerging technologies by academics in an Open Distance Learning (ODL) environment at the University of South Africa (UNISA) was undertaken in this study. The aim of the study was to evaluate the use, knowledge and adoption of emerging e-learning technologies by the academics from the selected schools. The academics in the Schools of Arts, Computing and Science were purposively selected in order to draw on views of academics from different teaching and educational backgrounds. Questionnaires were distributed both electronically and manually. The results showed that academics in all the Schools were competent in the use of information technology tools and applications such as emailing, word-processing, the Internet, myUnisa (UNISA’s online teaching platform), and Microsoft PowerPoint and Excel. An evaluation of the awareness of different emerging technological tools showed that most academics were aware of Open Access Technologies, Social Networking Sites, Blogs, Video Games and Microblogging Platforms. While the level of awareness was high for these technologies, their use by the academics was low. At least 62.3% of the academics indicated willingness to migrate to online teaching completely and also indicated the need for further training on new technologies. A comparison of the different schools showed no statistically significant difference in the use, knowledge and willingness to adopt technology amongst the academics.
Vakser Ilya A
Full Text Available Abstract Background Computational approaches to protein-protein docking typically include scoring aimed at improving the rank of the near-native structure relative to the false-positive matches. Knowledge-based potentials improve modeling of protein complexes by taking advantage of the rapidly increasing amount of experimentally derived information on protein-protein association. An essential element of knowledge-based potentials is defining the reference state for an optimal description of the residue-residue (or atom-atom) pairs in the non-interacting state. Results The study presents a new Distance- and Environment-dependent, Coarse-grained, Knowledge-based (DECK) potential for scoring of protein-protein docking predictions. Training sets of protein-protein matches were generated based on bound and unbound forms of proteins taken from the DOCKGROUND resource. Each residue was represented by a pseudo-atom in the geometric center of the side chain. To capture the long-range and the multi-body interactions, residues in different secondary structure elements at protein-protein interfaces were considered as different residue types. Five reference states for the potentials were defined and tested. The optimal reference state was selected, and the effect of the distance cutoff on the distance-dependent potentials was investigated. The potentials were validated on the docking decoy sets, showing better performance than the existing potentials used in scoring of protein-protein docking results. Conclusions A novel residue-based statistical potential for protein-protein docking was developed and validated on docking decoy sets. The results show that the scoring function DECK can successfully identify near-native protein-protein matches and thus is useful in protein docking. In addition to the practical application of the potentials, the study provides insights into the relative utility of the reference states, the scope of the distance dependence, and the coarse-graining of
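The general shape of such a knowledge-based potential, a log-odds score of observed versus reference pair frequencies summed over the contacts of a docking pose, can be sketched as follows. The counts and bin scheme are toy values, not the published DECK parameters:

```python
import math

def potential(observed, expected, pair, dist_bin):
    """-ln(observed/expected): favourable pairs (observed more often than
    the reference state predicts) get negative, i.e. better, scores."""
    o = observed.get((pair, dist_bin), 1e-6)
    e = expected.get(dist_bin, 1e-6)
    return -math.log(o / e)

def score_match(contacts, observed, expected):
    """Sum pairwise scores over the residue-residue contacts of a pose.

    `contacts` is a list of (residue_pair, distance_bin) tuples derived
    from the pseudo-atom coordinates of the docked complex.
    """
    return sum(potential(observed, expected, p, b) for p, b in contacts)
```

The choice of `expected`, the reference state, is exactly the design decision the paper evaluates across five alternatives.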
Bachri, Syamsul; Stötter, Johann; Sartohadi, Junun
People in the Bromo area (located within the Tengger Caldera) have learned to live with the threat of volcanic hazard, since Bromo is categorized as an active volcano in Indonesia. During 2010, the eruption intensity increased, yielding heavy ash fall and glowing rock fragments. A significant risk is also presented by mass movement, which reaches areas up to 25 km from the crater. As a result of the 2010 eruption, 12 houses were destroyed, 25 houses collapsed, and there were also severe effects on agriculture and the livestock sector. This paper focuses on understanding the interaction of Bromo volcanic eruption processes and the social responses to them. The specific aims are to 1) characterize the 2010 eruption of Bromo, 2) examine the human-volcano relationship within the Bromo area in general, and 3) investigate local knowledge related to hazard and risk perception and the resulting adaptation strategies in particular. In-depth interviews were carried out with 33 informants, including local people and authorities, from the four districts nearest to the crater. The survey focused on farmers, key persons (dukun), students and teachers in order to understand how people respond to Bromo eruptions. The results show that the 2010 eruption was unusual in that it continued for nine months, the longest period in Bromo's history. The eruption was phreatomagmatic, producing material dominated by ash to fine sand. This kind of sediment is typical of Tengger mountain eruptions, which produced vast explosions in the past. Furthermore, two years after the eruption, the interviewed people explained that local knowledge and their experiences with volcanic activity do not influence their risk perception. In dealing with this eruption, people in the Bromo area applied 'lumbung desa' (a traditional saving system) and mutual-aid activities to survive the volcanic eruption. Keywords: Human-environment system, local knowledge, risk perception, adaptation strategies, Bromo Volcano Indonesia
Full Text Available Linux container technologies, as represented by Docker, provide an alternative to complex and time-consuming installation processes needed for scientific software. The ease of deployment and the process isolation they enable, as well as the reproducibility they permit across environments and versions, are among the qualities that make them interesting candidates for the construction of bioinformatic infrastructures, at any scale from single workstations to high throughput computing architectures. The Docker Hub is a public registry which can be used to distribute bioinformatic software as Docker images. However, its lack of curation and its genericity make it difficult for a bioinformatics user to find the most appropriate images needed. BioShaDock is a bioinformatics-focused Docker registry, which provides a local and fully controlled environment to build and publish bioinformatic software as portable Docker images. It provides a number of improvements over the base Docker registry on authentication and permissions management, which enable its integration in existing bioinformatic infrastructures such as computing platforms. The metadata associated with the registered images are domain-centric, including for instance concepts defined in the EDAM ontology, a shared and structured vocabulary of commonly used terms in bioinformatics. The registry also includes user-defined tags to facilitate its discovery, as well as a link to the tool description in the ELIXIR registry if it already exists. If it does not, the BioShaDock registry will synchronize with the ELIXIR registry to create a new description there, based on the BioShaDock entry metadata. This link will help users get more information on the tool, such as its EDAM operations and input and output types. This allows integration with the ELIXIR Tools and Data Services Registry, thus providing the appropriate visibility of such images to the bioinformatics community.
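A hypothetical Dockerfile of the kind such a registry curates might look as follows; the tool name, base image version, and EDAM label are placeholders:

```dockerfile
# Hypothetical sketch: packaging a command-line bioinformatics tool as a
# Docker image suitable for publication in a curated registry.
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*
COPY mytool/ /opt/mytool/
RUN pip3 install /opt/mytool/
# Domain metadata (e.g. EDAM concepts) can be attached as image labels,
# which a domain-centric registry can index for discovery.
LABEL edam.operation="Sequence alignment"
ENTRYPOINT ["mytool"]
```

The value a curated registry adds over Docker Hub is precisely the structured metadata layer (here suggested by the `LABEL` line) and controlled publication rather than anything in the image format itself.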
Goto, Naohisa; Prins, Pjotr; Nakao, Mitsuteru; Bonnal, Raoul; Aerts, Jan; Katayama, Toshiaki
The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it supports many widely used data formats and provides easy access to databases, external programs and public web services, including BLAST, KEGG, GenBank, MEDLINE and GO. BioRuby comes with a tutorial, documentation and an interactive environment, which can be used in the shell, and in the web browser. BioRuby is free and open source software, made available under the Ruby license. BioRuby runs on all platforms that support Ruby, including Linux, Mac OS X and Windows. And, with JRuby, BioRuby runs on the Java Virtual Machine. The source code is available from http://www.bioruby.org/.
biodiversity. Consequently, the major environmental challenges facing us in the 21st century include: global climate change, energy, population and food...technological prowess, and security interests. Challenges Global Climate Change – Evidence shows that our environment and the global climate ... urbanization will continue to pressure the regional environment. Although most countries have environmental protection ministries or agencies, a lack of
Bioinformatic evaluation of L-arginine catabolic pathways in 24 cyanobacteria and transcriptional analysis of genes encoding enzymes of L-arginine catabolism in the cyanobacterium Synechocystis sp. PCC 6803
Schriek, Sarah; Rückert, Christian; Staiger, Dorothee; Pistorius, Elfriede K; Michel, Klaus-Peter
Abstract Background So far very limited knowledge exists on L-arginine catabolism in cyanobacteria, although six major L-arginine-degrading pathways have been described for prokaryotes. Thus, we have performed a bioinformatic analysis of possible L-arginine-degrading pathways in cyanobacteria. Further, we chose Synechocystis sp. PCC 6803 for a more detailed bioinformatic analysis and for validation of the bioinformatic predictions on L-arginine catabolism with a transcript analysis. Results W...
Evolution has shaped life forms for billions of years. Domestication is an accelerated process that can be used as a model for evolutionary change. The aim of this thesis project has been to carry out extensive bioinformatic analyses of whole-genome sequencing data to reveal SNPs, InDels and selective sweeps in the chicken, pig and dog genomes. Pig genome sequencing revealed loci under selection for elongation of the back and an increased number of vertebrae, associated with the NR6A1, PLAG1,...
Karimzadeh, Mehran; Hoffman, Michael M
Investing in documenting your bioinformatics software well can increase its impact and save you time. To maximize the effectiveness of your documentation, we suggest following the few guidelines we propose here. We recommend providing multiple avenues for users to use your research software, including a navigable HTML interface with a quick start, useful help messages with detailed explanations, and thorough examples for each feature of your software. By following these guidelines, you can ensure that your hard work maximally benefits yourself and others. © The Author 2017. Published by Oxford University Press.
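In the spirit of these guidelines, a help message with detailed explanations and a worked example can be generated with `argparse`; the tool name and options below are hypothetical:

```python
import argparse

def build_parser():
    """CLI parser whose --help output doubles as quick-start documentation."""
    parser = argparse.ArgumentParser(
        prog="peakcall",  # hypothetical tool name
        description="Call peaks from an aligned-read coverage track.",
        epilog="Example: peakcall --input reads.bam --threshold 5.0",
    )
    parser.add_argument("--input", required=True,
                        help="Path to a coordinate-sorted BAM file.")
    parser.add_argument("--threshold", type=float, default=3.0,
                        help="Minimum fold enrichment to report a peak "
                             "(default: %(default)s).")
    return parser
```

Spelling out defaults and a copy-pasteable example in the help text is a cheap way to give every user a quick start without them ever opening the manual.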
The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what’s going on in the biosciences. What is bioinformatics, and why is all this fuss being made about it? What is this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.
Lue, Jaw-Chyng L.; Fang, Wai-Chi
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function and based on the error backpropagation (EBP) algorithm, is presented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
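The idea of a sigmoid transfer function operating on logarithmic relative intensities can be sketched as below. This is a simplified illustration of the principle; the published transfer function and its parameters may differ:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_sigmoid_response(intensity, reference, gain=1.0):
    """Sigmoid of the log relative intensity: fluorescence signals spanning
    several decades are compressed onto a usable (0, 1) output range, so
    weak signals are not flattened into the sigmoid's lower saturation."""
    return sigmoid(gain * math.log(intensity / reference))
```

A plain sigmoid applied to raw intensities would map all weak signals to nearly identical outputs; taking the logarithm first preserves their relative differences, which is the motivation the abstract gives for the new transfer function.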
Handl, Julia; Kell, Douglas B; Knowles, Joshua
This paper reviews the application of multiobjective optimization in the fields of bioinformatics and computational biology. A survey of existing work, organized by application area, forms the main body of the review, following an introduction to the key concepts in multiobjective optimization. An original contribution of the review is the identification of five distinct "contexts," giving rise to multiple objectives: These are used to explain the reasons behind the use of multiobjective optimization in each application area and also to point the way to potential future uses of the technique.
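The central concept of the review, Pareto dominance among competing objectives, can be sketched as follows (objectives are assumed to be minimized):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a bioinformatics setting the objective vectors might be, say, (model error, model complexity) for each candidate model; the Pareto front is the set of trade-offs no other candidate improves on in both respects at once.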
Karim, Md Rezaul; Michel, Audrey; Zappa, Achille; Baranov, Pavel; Sahay, Ratnesh; Rebholz-Schuhmann, Dietrich
Data workflow systems (DWFSs) enable bioinformatics researchers to combine components for data access and data analytics, and to share the final data analytics approach with their collaborators. Increasingly, such systems have to cope with large-scale data, such as full genomes (about 200 GB each), public fact repositories (about 100 TB of data) and 3D imaging data at even larger scales. As moving the data becomes cumbersome, the DWFS needs to embed its processes into a cloud infrastructure, where the data are already hosted. As the standardized public data play an increasingly important role, the DWFS needs to comply with Semantic Web technologies. This advancement to DWFS would reduce overhead costs and accelerate the progress in bioinformatics research based on large-scale data and public resources, as researchers would require less specialized IT knowledge for the implementation. Furthermore, the high data growth rates in bioinformatics research drive the demand for parallel and distributed computing, which then imposes a need for scalability and high-throughput capabilities onto the DWFS. As a result, requirements for data sharing and access to public knowledge bases suggest that compliance of the DWFS with Semantic Web standards is necessary. In this article, we will analyze the existing DWFS with regard to their capabilities toward public open data use as well as large-scale computational and human interface requirements. We untangle the parameters for selecting a preferable solution for bioinformatics research with particular consideration to using cloud services and Semantic Web technologies. Our analysis leads to research guidelines and recommendations toward the development of future DWFS for the bioinformatics research community. © The Author 2017. Published by Oxford University Press.
With the development of the Internet and the growth of online resources, bioinformatics training for wet-lab biologists became necessary as a part of their education. This article describes a one-semester course 'Applied Bioinformatics Course' (ABC, http://abc.cbi.pku.edu.cn/) that the author has been teaching to biological graduate students at the Peking University and the Chinese Academy of Agricultural Sciences for the past 13 years. ABC is a hands-on practical course to teach students to use online bioinformatics resources to solve biological problems related to their ongoing research projects in molecular biology. With a brief introduction to the background of the course, detailed information about the teaching strategies of the course are outlined in the 'How to teach' section. The contents of the course are briefly described in the 'What to teach' section with some real examples. The author wishes to share his teaching experiences and the online teaching materials with colleagues working in bioinformatics education both in local and international universities. © The Author 2013. Published by Oxford University Press.
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
Mello, Luciane V; Tregilgas, Luke; Cowley, Gwen; Gupta, Anshul; Makki, Fatima; Jhutty, Anjeet; Shanmugasundram, Achchuthan
Teaching bioinformatics is a longstanding challenge for educators who need to demonstrate to students how skills developed in the classroom may be applied to real world research. This study employed an action research methodology which utilised student-staff partnership and peer-learning. It was centred on the experiences of peer-facilitators, students who had previously taken a postgraduate bioinformatics module, and had applied knowledge and skills gained from it to their own research. It aimed to demonstrate to peer-receivers, current students, how bioinformatics could be used in their own research while developing peer-facilitators' teaching and mentoring skills. This student-centred approach was well received by the peer-receivers, who claimed to have gained improved understanding of bioinformatics and its relevance to research. Equally, peer-facilitators also developed a better understanding of the subject and appreciated that the activity was a rare and invaluable opportunity to develop their teaching and mentoring skills, enhancing their employability.
An Investigation of the Reliability of Knowledge Measures Through Relational Mapping in Joint Military Environments: Knowledge, Models and Tools to Improve the Effectiveness of Naval Distance Learning
Baker, Eva L; Lee, John J; Chung, Gregory K; Bewley, William L; Cheak, Alicia M; Ellis, Karen
...' understanding of joint mission-essential tasks. Analyses of scoring techniques yielded important information about the quality of the knowledge maps, and the assessments provided valuable information regarding student understanding of course content...
The automatic classification of GPCRs by bioinformatics methods can provide functional information for new GPCRs across the whole 'GPCR proteome', and this information is important for the development of novel drugs. Since the GPCR proteome is classified hierarchically, general approaches to GPCR function prediction are based on hierarchical classification. Various computational tools have been developed to predict GPCR functions; rather than simple sequence searches, those tools use more powerful methods, such as alignment-free methods, statistical models, and machine learning methods for protein sequence analysis, trained on learning datasets. The first stage of hierarchical function prediction involves the discrimination of GPCRs from non-GPCRs, and the second stage involves the classification of the predicted GPCR candidates into family, subfamily, and sub-subfamily levels. Further classification is then performed according to their protein-protein interaction type: binding G-protein type, oligomerized partner type, etc. These methods have achieved predictive accuracies of around 90%. Finally, I describe future research directions for bioinformatics techniques for the functional prediction of GPCRs.
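The two-stage cascade described here can be sketched with trivial stand-in classifiers; the motif rule and family label below are mock illustrations, not real predictors:

```python
# Two-stage hierarchical prediction: first decide GPCR vs non-GPCR,
# then push predicted GPCRs down to family level. Both classifiers are
# stand-ins for the statistical / machine-learning models the text describes.

def classify_hierarchical(seq, is_gpcr, family_of):
    if not is_gpcr(seq):
        return ("non-GPCR",)
    return ("GPCR", family_of(seq))

# Toy stand-in models keyed on a sequence motif (purely illustrative).
is_gpcr = lambda seq: "DRY" in seq           # class-A DRY motif as a mock rule
family_of = lambda seq: "Class A (rhodopsin-like)"
```

The cascade structure matters: errors at the first stage propagate downward, which is why the discrimination step is evaluated separately from the family-level classifiers.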
One of the main steps in a study of microbial communities is resolving their composition, diversity and function. In the past, these issues were mostly addressed by amplicon sequencing of a target gene, because of its reasonable price and the easier computational post-processing of the data. With the advancement of sequencing techniques, the main focus has shifted to whole-metagenome shotgun sequencing, which allows much more detailed analysis of metagenomic data, including the reconstruction of novel microbial genomes, and yields knowledge about the genetic potential and metabolic capacities of whole environments. On the other hand, the output of whole-metagenome shotgun sequencing is a mixture of short DNA fragments belonging to various genomes, so this approach requires more sophisticated computational algorithms for clustering related sequences, commonly referred to as sequence binning. There are currently two types of binning methods: taxonomy-dependent and taxonomy-independent. The first type classifies the DNA fragments by standard homology inference against a reference database, while the latter performs reference-free binning by applying clustering techniques to features extracted from the sequences. In this review, we describe the strategies within the second approach. Although these strategies do not require prior knowledge, they place higher demands on sequence length. Besides their basic principles, an overview of particular methods and tools is provided. Furthermore, the review covers the utilisation of the methods in the context of sequence length and discusses the need to preprocess metagenomic data by initial assembly prior to binning.
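As a toy illustration of the taxonomy-independent strategy, the sketch below bins DNA fragments by clustering tetranucleotide-frequency vectors with a minimal k-means. The feature choice and the clustering routine are illustrative only, not taken from any specific tool covered by the review:

```python
# Taxonomy-independent binning sketch: cluster sequences on tetranucleotide
# frequency (TNF) vectors, a common reference-free composition feature.
from itertools import product
import random

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
IDX = {kmer: i for i, kmer in enumerate(KMERS)}

def tnf(seq):
    """Normalised tetranucleotide-frequency vector of a DNA sequence."""
    counts = [0] * len(KMERS)
    for i in range(len(seq) - 3):
        kmer = seq[i:i + 4]
        if kmer in IDX:          # skip windows containing N or other symbols
            counts[IDX[kmer]] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means; returns one cluster label per input vector."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    labels = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            labels[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(v, centroids[c])))
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels
```

Note also the review's caveat in miniature: on very short fragments the TNF vectors become noisy, which is why these methods favour assembled contigs over raw reads.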
Ehmann, Andreas F.; Downie, J. Stephen
The objective of the International Music Information Retrieval Systems Evaluation Laboratory (IMIRSEL) project is the creation of a large, secure corpus of audio and symbolic music data accessible to the music information retrieval (MIR) community for the testing and evaluation of various MIR techniques. As part of the IMIRSEL project, a cross-platform Java-based visual programming environment called Music to Knowledge (M2K) is being developed for a variety of music information retrieval related tasks. The primary objective of M2K is to supply the MIR community with a toolset that provides the ability to rapidly prototype algorithms, as well as to foster the sharing of techniques within the MIR community through the use of a standardized set of tools. Due to the relatively large size of audio data and the computational costs associated with some digital signal processing and machine learning techniques, M2K is also designed to support distributed computing across computing clusters. In addition, facilities to allow the integration of non-Java-based (e.g., C/C++, MATLAB, etc.) algorithms and programs are provided within M2K. [Work supported by the Andrew W. Mellon Foundation and NSF Grants No. IIS-0340597 and No. IIS-0327371.]
Reproducibility in data analysis research has long been a significant concern, particularly in the areas of bioinformatics and computational biology. Towards the aim of developing reproducible and reusable processes, data analysis management tools can help give structure and coherence to complex data flows. Nonetheless, improved software quality comes at the cost of additional design and planning effort, which may become impractical in rapidly changing development environments. I propose that an adjustment of focus from processes to data in the management of bioinformatic pipelines may help improve reproducibility with minimal impact on preexisting development practices. In this paper I introduce the repo R package for bioinformatic analysis management. The tool supports a data-centred philosophy that aims to improve analysis reproducibility and reusability with minimal design overhead. The core of repo lies in its support for easy data storage, retrieval, distribution and annotation. In repo, the data analysis flow is derived a posteriori from dependency annotations. The repo package constitutes an unobtrusive data and flow management extension of the R statistical language. Its adoption, together with good development practices, can help improve data analysis management, sharing and reproducibility, especially in the fields of bioinformatics and computational biology.
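The a-posteriori flow derivation can be illustrated schematically: dependency annotations attached to data items induce an analysis order via topological sorting. The following is a language-neutral Python sketch of that idea, with hypothetical item names; it is not repo's actual R interface:

```python
# Data-centred flow sketch: each stored item is annotated with the items it
# depends on, and the build order is derived from those annotations alone.
deps = {
    "raw_counts": [],
    "normalised": ["raw_counts"],
    "clusters": ["normalised"],
    "report": ["clusters", "normalised"],
}

def build_order(deps):
    """Topologically sort annotated items so each is built after its inputs."""
    order, seen = [], set()
    def visit(item):
        if item in seen:
            return
        seen.add(item)
        for d in deps[item]:
            visit(d)
        order.append(item)
    for item in deps:
        visit(item)
    return order
```

The design point is that no pipeline script declares this order up front; it falls out of the annotations, which is what keeps the approach unobtrusive for existing projects.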
Sandler, Inga; Abu-Qarn, Mehtap; Aharoni, Amir
Molecular co-evolution is manifested by compensatory changes in proteins designed to enable adaptation to their natural environment. In recent years, bioinformatics approaches allowed for the detection of co-evolution at the level of the whole protein or of specific residues. Such efforts enabled prediction of protein-protein interactions, functional assignments of proteins and the identification of interacting residues, thereby providing information on protein structure. Still, despite such advances, relatively little is known regarding the functional implications of sequence divergence resulting from protein co-evolution. While bioinformatics approaches usually analyze thousands of proteins to obtain a broad view of protein co-evolution, experimental evaluation of protein co-evolution serves to study only individual proteins. In this review, we describe recent advances in bioinformatics and experimental efforts aimed at examining protein co-evolution. Accordingly, we discuss possible modes of crosstalk between the bioinformatics and experimental approaches to facilitate the identification of co-evolutionary signals in proteins and to understand their implications for the structure and function of proteins.
Hodor, Paul; Chawla, Amandeep; Clark, Andrew; Neal, Lauren
One of the solutions proposed for addressing the challenge of the overwhelming abundance of genomic sequence and other biological data is the use of the Hadoop computing framework. Appropriate tools are needed to set up computational environments that facilitate research on novel bioinformatics methodology using Hadoop. Here, we present cl-dash, a complete starter kit for setting up such an environment. Configuring and deploying new Hadoop clusters can be done in minutes. Use of Amazon Web Services ensures no initial investment and minimal operating costs. Two sample bioinformatics applications help the researcher understand and learn the principles of implementing an algorithm using the MapReduce programming pattern. Source code is available at https://bitbucket.org/booz-allen-sci-comp-team/cl-dash.git. © The Author 2015. Published by Oxford University Press.
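The MapReduce pattern that such sample applications teach can be sketched in plain Python, here counting k-mers across sequencing reads. The mapper/shuffle/reducer split mirrors what Hadoop distributes across a cluster; the example is an illustration of the pattern, not one of cl-dash's bundled applications:

```python
# MapReduce pattern sketch: k-mer counting. In Hadoop, map and reduce run on
# different nodes and the shuffle is handled by the framework; here all three
# phases run locally to show the contract each phase must satisfy.
from collections import defaultdict

def mapper(read, k=3):
    """Emit (kmer, 1) pairs for every k-length window of a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def shuffle(pairs):
    """Group emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Combine all values for one key into the final count."""
    return key, sum(values)

def map_reduce(reads, k=3):
    pairs = [p for read in reads for p in mapper(read, k)]
    return dict(reducer(key, values) for key, values in shuffle(pairs).items())
```

Because the reducer only sees values for a single key, the same code scales from this toy to a cluster without changing the per-record logic.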
Helm-Stevens, Roxanne; Brown, Kneeland C.; Russell, Julia K.
Knowledge management has the potential to develop strategic advantage and enhance the performance of an organization in terms of productivity and business process efficiency. For this reason, organizations are contributing significant resources to knowledge management; investing in information location and implementing knowledge management…
Parache, V.; Renaud, P. [Institut de Radioprotection et de Surete Nucleaire - IRSN (France)]
and propose a reproducible approach for the evaluation of realistic indicators. Furthermore, this study shows that the data obtained on a nuclear site could apply to a nearby nuclear site with similar agro-climatic conditions. The influence of the types of production and of agricultural and breeding practices on the potential contamination of foodstuffs produced locally in an accidental situation led the IRSN to develop summary sheets on the agricultural environment of the nuclear sites. These levels of contamination can be very different according to the period of the year in which the accident occurs. For example, the date of harvest determines whether these products will be almost exempt from contamination (i.e., already harvested) or will present maximal contamination. In addition, these sheets provide local contacts who could supply contextual details about sensitive dates. These data will also allow sampling strategies to be developed for monitoring foodstuff activities after an accidental release. Based on this knowledge of environmental characteristics, and considering that metrological capacity will necessarily be limited, the monitoring strategies must ensure that all foodstuffs produced in the monitored area remain below intervention levels. Document available in abstract form only. (authors)
Modelling physics teachers' pedagogical content knowledge through purposeful relationships between semiotic registers: KEPLER - "knowledge environment for physics learning and evaluation of relationships"
Mothersole, Peter John Michael
Constructivism considers that learning is greatly influenced by collaboration between active learners. Although learning has this social dimension, the individual learner builds a personalised version of relevant concepts. Ideas in science are not communicated solely through written and spoken language; use is made of different types of context-sensitive semiotic register (e.g. diagrams, graphs and equations). The science teacher expands the set of such artefacts by introducing other types pertinent to teaching and learning. The full set may be used by collaborating learners for the purpose of concept development, problem solving and knowledge construction. It is argued that in science pedagogy such semiotic registers are not used in isolation, but are interrelated by a tutor for pedagogical purposes. The teacher may wish to highlight more semantically rich, localised areas of a semiotic register that can be exploited for pedagogical purposes. Although the concept of purposeful relationships may be of relevance to knowledge-based systems in general, this work considers the framework of such relationships to be a component of a teacher's pedagogical content knowledge (PCK). By investigating a software representation of such a framework belonging to an experienced teacher, it is envisaged that pre-service teachers may gain insight into how subject knowledge may be structured for pedagogical purposes.
and dhurrin, which have not previously been characterized in blueberries. There are more than 44,500 spider species with distinct habitats and unique characteristics. Spiders are masters of producing silk webs to catch prey and of using venom to neutralize it. The exploration of the genetics behind these properties...... japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant...... has just started. We have assembled and annotated the first two spider genomes to facilitate our understanding of spiders at the molecular level. The need for analyzing the large and increasing amount of sequencing data has increased the demand for efficient, user friendly, and broadly applicable......
Surangi W. Punyasena
Full Text Available Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
...... biology and genetics studies. We present an improved Lotus genome assembly and annotation, a catalog of natural variation based on re-sequencing of 29 accessions, and describe the involvement of small RNAs in the plant-bacteria symbiosis. Blueberries contain anthocyanins, other pigments and various...... polyphenolic compounds, which have been linked to protection against diabetes, cardiovascular disease and age-related cognitive decline. We present the first genome-guided approach in blueberry to identify genes involved in the synthesis of health-protective compounds. Using RNA-Seq data from five stages......
ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
Yukinawa, N; Ishii, S; Takenouchi, T; Oba, S
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value that is optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit-inversion error with a probabilistic model, by analogy with information transmission theory. Experimental studies using various real-world datasets, including cancer classification problems, reveal that both of the new methods are superior or comparable to other multi-class classification methods.
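The error-correcting idea behind ECOC can be sketched directly: each class is assigned a binary codeword, each binary classifier in the ensemble predicts one bit, and decoding picks the class whose codeword lies at minimal Hamming distance, so a single flipped bit is still corrected. The classes and codewords below are hypothetical illustrations:

```python
# Minimal ECOC decoding sketch. With a minimum pairwise codeword distance of
# 3, any single binary-classifier error can be corrected.
CODE = {
    "ALL": (0, 0, 0, 1, 1),  # hypothetical cancer classes and codewords
    "AML": (0, 1, 1, 0, 0),
    "CML": (1, 0, 1, 0, 1),
}

def hamming(a, b):
    """Number of positions where two bit tuples differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Return the class whose codeword is nearest to the classifier outputs."""
    return min(CODE, key=lambda c: hamming(CODE[c], bits))
```

The two reviewed approaches refine exactly this decoding step: the first weights each bit's classifier, and the second models each bit flip probabilistically instead of counting flips uniformly.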
Wang, Xiran; Jiang, Leiyu; Tang, Haoru
GSTF12 is known as a key factor in proanthocyanin accumulation in the plant testa. Bioinformatic analysis of the nucleotide and encoded protein sequences of GSTF12 benefits the study of genes related to the anthocyanin biosynthesis and accumulation pathway. We therefore chose the GSTF12 genes of 11 species, downloaded their nucleotide and protein sequences from NCBI as the research object, identified the strawberry GSTF12 gene through bioinformatic analysis, and constructed a phylogenetic tree. At the same time, we analysed the physicochemical properties of the strawberry GSTF12 gene and the structure of its protein. The phylogenetic tree showed that strawberry and petunia were the closest relatives. Protein structure prediction indicated that the protein possesses a signal peptide but no obvious transmembrane regions.
Kirkeby, Inge Mette
Although serious efforts are made internationally and nationally, it is a slow process to make our physical environment accessible. In the actual design process, architects play a major role. But what kinds of knowledge, including research-based knowledge, do practicing architects make use of when...... designing accessible environments? The answer to the question is crucially important since it affects how knowledge is distributed and how accessibility can be ensured. In order to get first-hand knowledge about the design process and the sources from which they gain knowledge, 11 qualitative interviews...... were conducted with architects with experience of designing for accessibility. The analysis draws on two theoretical distinctions. The first is research-based knowledge versus knowledge used by architects. The second is context-independent knowledge versus context-dependent knowledge. The practitioners...
J. Köster (Johannes)
We present Rust-Bio, the first general-purpose bioinformatics library for the innovative Rust programming language. Rust-Bio leverages the unique combination of speed, memory safety and high-level syntax offered by Rust to provide a fast and safe set of bioinformatics algorithms and data structures.
The main bottleneck in advancing genomics at present is the lack of expertise in using bioinformatics tools and approaches for data mining of raw DNA sequences generated by modern high-throughput technologies such as next-generation sequencing. Although bioinformatics has been making major progress and ...
Life sciences research and development has opened up new challenges and opportunities for bioinformatics. The contribution of bioinformatics advances made possible the mapping of the entire human genome and genomes of many other organisms in just over a decade. These discoveries, along with current efforts to ...
Buttigieg, Pier Luigi
Using live presentation to communicate the interdisciplinary and abstract content of bioinformatics to its educationally diverse studentship is a sizeable challenge. This review collects a number of perspectives on multimedia presentation, visual communication and pedagogy. The aim is to encourage educators to reflect on the great potential of live presentation in facilitating bioinformatics education.
Bioinformatics has advanced the course of research and future veterinary vaccine development because it has provided new tools for the identification of vaccine targets from the sequenced biological data of organisms. In Nigeria, there is a lack of bioinformatics training in the universities, except for short training courses in which ...
Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.
At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…
Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…
Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.
Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…
When bioinformatics education is considered, several issues are addressed. At the undergraduate level, the main issue revolves around conveying information from two main and different fields: biology and computer science. At the graduate level, the main issue is bridging the gap between biology students and computer science students. However, there is an educational component that is rarely addressed within the context of bioinformatics education: the ethics component. Here, a different perspective is provided on bioinformatics education, and the current status of ethics is analyzed within the existing bioinformatics programs. Analysis of the existing undergraduate and graduate programs, in both Europe and the United States, reveals the minimal attention given to ethics within bioinformatics education. Given that bioinformaticians speedily and effectively shape the biomedical sciences and hence their implications for society, here redesigning of the bioinformatics curricula is suggested in order to integrate the necessary ethics education. Unique ethical problems awaiting bioinformaticians and bioinformatics ethics as a separate field of study are discussed. In addition, a template for an "Ethics in Bioinformatics" course is provided.
Lee, Gyungjoo; Yang, Soo; Jang, Mi Heui; Yeom, Mijung
This study was conducted to evaluate the effectiveness of a mother/infant-toddler health program developed to enhance parenting knowledge, behavior and confidence in low-income mothers, and the home environment. A one-group pretest-posttest quasi-experimental design was used. Sixty-nine dyads of mothers and infant-toddlers (aged 0-36 months) were provided with a weekly intervention for seven sessions. Each session consisted of three parts: first, education to increase integrated knowledge related to the development of the infant/toddler, including nutrition, first aid and the home environment; second, counseling to share parenting experience among the mothers and to increase their nurturing confidence; third, playing with the infant/toddler to facilitate attachment-based parenting behavior in the mothers. Following the program, there were significant increases in parenting knowledge of nutrition and first aid. A significant improvement was found in attachment-based parenting behavior, but not in home safety practice. Nurturing confidence was not significantly increased. The program led to a more positive home environment for the infant/toddler's health and development. The findings provide evidence that the mother-infant/toddler health program improves parenting knowledge, attachment-based parenting behavior and the home environment for low-income mothers. Study of the long-term effectiveness of this program is recommended for future research.
Barker, Daniel; Ferrier, David Ek; Holland, Peter Wh; Mitchell, John Bo; Plaisier, Heleen; Ritchie, Michael G; Smart, Steven D
Teaching bioinformatics at universities is complicated by typical computer classroom settings. As well as running software locally and online, students should gain experience of systems administration. For a future career in biology or bioinformatics, the installation of software is a useful skill. We propose that this may be taught by running the course on GNU/Linux running on inexpensive Raspberry Pi computer hardware, for which students may be granted full administrator access. We release 4273π, an operating system image for Raspberry Pi based on Raspbian Linux. This includes minor customisations for classroom use and includes our Open Access bioinformatics course, 4273π Bioinformatics for Biologists. This is based on the final-year undergraduate module BL4273, run on Raspberry Pi computers at the University of St Andrews, Semester 1, academic year 2012-2013. 4273π is a means to teach bioinformatics, including systems administration tasks, to undergraduates at low cost.
Hernández-de-Diego, Rafael; de Villiers, Etienne P; Klingström, Tomas; Gourlé, Hadrien; Conesa, Ana; Bongcam-Rudloff, Erik
Bioinformatics skills have become essential for many research areas; however, the availability of qualified researchers is usually lower than the demand and training to increase the number of able bioinformaticians is an important task for the bioinformatics community. When conducting training or hands-on tutorials, the lack of control over the analysis tools and repositories often results in undesirable situations during training, as unavailable online tools or version conflicts may delay, complicate, or even prevent the successful completion of a training event. The eBioKit is a stand-alone educational platform that hosts numerous tools and databases for bioinformatics research and allows training to take place in a controlled environment. A key advantage of the eBioKit over other existing teaching solutions is that all the required software and databases are locally installed on the system, significantly reducing the dependence on the internet. Furthermore, the architecture of the eBioKit has demonstrated itself to be an excellent balance between portability and performance, not only making the eBioKit an exceptional educational tool but also providing small research groups with a platform to incorporate bioinformatics analysis in their research. As a result, the eBioKit has formed an integral part of training and research performed by a wide variety of universities and organizations such as the Pan African Bioinformatics Network (H3ABioNet) as part of the initiative Human Heredity and Health in Africa (H3Africa), the Southern Africa Network for Biosciences (SAnBio) initiative, the Biosciences eastern and central Africa (BecA) hub, and the International Glossina Genome Initiative.
Manning, Timmy; Sleator, Roy D; Walsh, Paul
Artificial neural networks (ANNs) are a class of powerful machine learning models for classification and function approximation which have analogs in nature. An ANN learns to map stimuli to responses through repeated evaluation of exemplars of the mapping. This learning approach results in networks which are recognized for their noise tolerance and ability to generalize meaningful responses for novel stimuli. It is these properties of ANNs which make them appealing for applications to bioinformatics problems where interpretation of data may not always be obvious, and where the domain knowledge required for deductive techniques is incomplete or can cause a combinatorial explosion of rules. In this paper, we provide an introduction to artificial neural network theory and review some interesting recent applications to bioinformatics problems.
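The "learning by repeated evaluation of exemplars" described above can be shown in miniature with a single perceptron trained on the AND function. This is purely pedagogical and far simpler than the multi-layer networks used in bioinformatics applications, but the update rule (adjust weights in proportion to the error on each exemplar) is the same principle:

```python
# Toy perceptron: repeated presentation of (input, target) exemplars nudges
# the weights until the mapping is reproduced. AND is linearly separable,
# so the perceptron learning rule is guaranteed to converge on it.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # error drives the weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    """Threshold unit: fire iff the weighted sum exceeds zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Noise tolerance and generalisation, the properties highlighted in the review, come from multi-layer versions of this scheme trained by backpropagation rather than the simple rule above.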
Karezin, V.; Bronnikova, I.; Terentyeva, T.
Rosatom, the flagship of the Russian nuclear industry, treats succession planning as one of its crucial strategic HR objectives. It therefore builds different approaches to ensure the attraction and development of the best and most promising specialists, including recent and future graduates. The Tournament of Young Professionals (TEMP) is the cornerstone initiative for selecting the best young professionals within a crowdsourcing environment, where participants raise their level of professional knowledge, learn to better understand attitudes to work in the nuclear power industry, and compete on essential tasks of real production value, while stakeholders build a culture of knowledge sharing. The entire scheme rests upon knowledge transfer from nuclear industry experts to the potential hiring pool, applied knowledge accumulation, deep industry involvement, and modern Web 2.0 technology capabilities. (author)
Kostousov, Sergei; Kudryavtsev, Dmitry
Problem solving is a critical competency for the modern world and also an effective way of learning. Education should not only transfer domain-specific knowledge to students, but also prepare them to solve real-life problems--to apply knowledge from one or several domains within a specific situation. Problem solving as a teaching tool has been known for a long…
Kaiser, David Brian; Köhler, Thomas; Weith, Thomas
This article aims to sketch a conceptual design for an information and knowledge management system in sustainability research projects. The suitable frameworks to implement knowledge transfer models constitute social communities, because the mutual exchange and learning processes among all stakeholders promote key sustainable developments through…
Brazas, Michelle D; Ouellette, B F Francis
Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression.
Wang, Yijun; Lu, Wenjie; Deng, Dexiang
Diverse bioinformatic resources have been developed for plant transcription factor (TF) research. This review presents the bioinformatic resources and methodologies for the elucidation of plant TF-mediated biological events. Such information is helpful to dissect the transcriptional regulatory systems in the three reference plants Arabidopsis , rice, and maize and translation to other plants. Transcription factors (TFs) orchestrate diverse biological programs by the modulation of spatiotemporal patterns of gene expression via binding cis-regulatory elements. Advanced sequencing platforms accompanied by emerging bioinformatic tools revolutionize the scope and extent of TF research. The system-level integration of bioinformatic resources is beneficial to the decoding of TF-involved networks. Herein, we first briefly introduce general and specialized databases for TF research in three reference plants Arabidopsis, rice, and maize. Then, as proof of concept, we identified and characterized heat shock transcription factor (HSF) members through the TF databases. Finally, we present how the integration of bioinformatic resources at -omics layers can aid the dissection of TF-mediated pathways. We also suggest ways forward to improve the bioinformatic resources of plant TFs. Leveraging these bioinformatic resources and methodologies opens new avenues for the elucidation of transcriptional regulatory systems in the three model systems and translation to other plants.
The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the length of time to identify and deliver a commercial health ingredient that reduces disease symptoms can be anything between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has appeared as the long-awaited solution to this problem. By quickly mining food genomes for the characteristics of certain therapeutic food ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could easily be translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving a higher success rate.
Menegidio, Fabiano B; Jabes, Daniela L; Costa de Oliveira, Regina; Nunes, Luiz R
This manuscript introduces and describes Dugong, a Docker image based on Ubuntu 16.04, which automates installation of more than 3500 bioinformatics tools (along with their respective libraries and dependencies), in alternative computational environments. The software operates through a user-friendly XFCE4 graphic interface that allows software management and installation by users not fully familiarized with the Linux command line and provides the Jupyter Notebook to assist in the delivery and exchange of consistent and reproducible protocols and results across laboratories, assisting in the development of open science projects. Source code and instructions for local installation are available at https://github.com/DugongBioinformatics, under the MIT open source license.
Basyuni, M.; Wasilah, M.; Sumardi
This study describes bioinformatics methods used to analyze eight actin genes from mangrove plants deposited in DDBJ/EMBL/GenBank, and to predict their structure, composition, subcellular localization, similarity, and phylogeny. The physical and chemical properties of the eight mangrove genes varied among the genes. The percentages of secondary structure elements followed the order α-helix > random coil > extended chain structure for BgActl, KcActl, RsActl, and A. corniculatum Act; for the remaining actin genes the order was random coil > extended chain structure > α-helix. Prediction of secondary structure was therefore performed to obtain the necessary structural information. The predicted values for chloroplast transit peptides, mitochondrial target peptides, and signal peptides were all very small, indicating that the mangrove actin genes carry no chloroplast or mitochondrial transit peptide and no signal peptide of the secretion pathway. These results suggest the importance of understanding the diversity and functional properties of the different amino acids in mangrove actin genes. To clarify the relationships among the mangrove actin genes, a phylogenetic tree was constructed. Three groups of mangrove actin genes were formed: the first group contains B. gymnorrhiza BgAct and R. stylosa RsActl; the second and largest cluster consists of five actin genes; and the last branch consists of a single gene, B. sexagula Act. The present study therefore supports previous results showing that plant actin genes form distinct clusters in the tree.
Horbach, D.Y.; Usanov, S.A.
One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is no wonder that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR and the appearance of receptor mutants with a changed ability for protein-protein interactions or increased tyrosine kinase activity have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations on the design and selection of drugs that can alter the structure of, or competitively bind to, receptors and thereby display antagonistic characteristics. (authors)
Neerincx, Pieter B T; Leunissen, Jack A M
Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
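The shift described above — from HTML/web-form tools to programmatically accessible services — can be illustrated with a REST-style request to NCBI's E-utilities. The sketch below only builds the request URL; a real pipeline step would send it over HTTP and parse the response. The accession number is an arbitrary example.

```python
from urllib.parse import urlencode

# Base URL for NCBI's E-utilities, a widely used REST-style service.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def efetch_url(db, accession, rettype="fasta"):
    """Build an efetch request URL for retrieving a sequence record."""
    params = urlencode({"db": db, "id": accession,
                        "rettype": rettype, "retmode": "text"})
    return f"{EUTILS}/efetch.fcgi?{params}"

# The URL a workflow step would request to pull a protein record as FASTA.
print(efetch_url("protein", "NP_000509"))
```

Because every parameter is in the URL, such a step can be chained into an automated workflow — exactly what form-based HTML interfaces prevented.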
Fu, Zhiyan; Lin, Jing
The rapidly increasing number of characterized allergens has created huge demands for advanced information storage, retrieval, and analysis. Bioinformatics and machine learning approaches provide useful tools for the study of allergens and epitopes prediction, which greatly complement traditional laboratory techniques. The specific applications mainly include identification of B- and T-cell epitopes, and assessment of allergenicity and cross-reactivity. In order to facilitate the work of clinical and basic researchers who are not familiar with bioinformatics, we review in this chapter the most important databases, bioinformatic tools, and methods with relevance to the study of allergens.
Gallagher, Jerry P
... between private security and the KCPD. To empower this resource as a terrorism prevention force multiplier the development of a web based virtual knowledge sharing initiative was explored in this study as a solution to provide "one stop...
de Knikker, Remko; Guo, Youjun; Li, Jin-Long; Kwan, Albert K H; Yip, Kevin Y; Cheung, David W; Cheung, Kei-Hoi
services choreography language (BPEL4WS). While it is relatively straightforward to implement and publish web services, the use of web services choreography engines is still in its infancy. However, industry-wide support and push for web services standards is quickly increasing the chance of success in using web services to unify heterogeneous bioinformatics applications. Due to the immaturity of currently available web services engines, it is still most practical to implement a simple, ad-hoc XML-based workflow by hard coding the workflow as a Java application. For advanced web service users the Collaxa BPEL engine facilitates a configuration and management environment that can fully handle XML-based workflow.
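The "simple, ad-hoc XML-based workflow" approach described above can be sketched as follows (in Python rather than Java for brevity; the workflow definition and tool names are hypothetical, and local stub functions stand in for real web service invocations):

```python
import xml.etree.ElementTree as ET

# Hypothetical workflow definition; a hard-coded application would ship
# something like this instead of a full BPEL4WS document.
WORKFLOW_XML = """
<workflow name="annotate">
  <step tool="fetch_sequence"/>
  <step tool="run_blast"/>
  <step tool="parse_hits"/>
</workflow>
"""

# Registry mapping tool names to callables; in a real system each callable
# would wrap a SOAP or REST web service request.
TOOLS = {
    "fetch_sequence": lambda data: data + ["sequence"],
    "run_blast": lambda data: data + ["hits"],
    "parse_hits": lambda data: data + ["annotations"],
}

def run_workflow(xml_text):
    """Execute each <step> in document order, threading results through."""
    root = ET.fromstring(xml_text)
    data = []
    for step in root.findall("step"):
        data = TOOLS[step.get("tool")](data)
    return data

print(run_workflow(WORKFLOW_XML))  # → ['sequence', 'hits', 'annotations']
```

The trade-off the abstract describes is visible here: the control flow is fixed in code, so changing the pipeline means editing the program, whereas a choreography engine would interpret the XML description directly.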
Ardan, Andam S.
The purposes of this study were (1) to describe the biology learning such as lesson plans, teaching materials, media and worksheets for the tenth grade of High School on the topic of Biodiversity and Basic Classification, Ecosystems and Environment Issues based on local wisdom of Timorese; (2) to analyze the improvement of the environmental…
Knowledge for a sustainable economy. Knowledge questions around the Dutch Memorandum on Environment and Economy ('Nota Milieu en Economie'); Kennis voor een duurzame economie. Kennisvragen rond de Nota Milieu en Economie
Dieleman, J.P.C.; Hafkamp, W.A. [Erasmus Studiecentrum voor Milieukunde, Erasmus Universiteit Rotterdam, Rotterdam (Netherlands)
On June 18, 1997, the Dutch government presented the Memorandum on Environment and Economy, with the aim of contributing to the integration of environment and economy and stimulating the realization of a sustainable economy. In addition to a vast overview of actions, ideas, perspectives, starting points, challenges and dilemmas to take into account when shaping a sustainable economy, the Memorandum indicates that there is a need for research and knowledge to compile relevant data and insight to support decision-making processes. The aim of this report is to develop a framework in which knowledge questions can be generated. The questions that fall outside the framework of the Memorandum concern needs, values and images and are formulated in four groups: (1) what is the role of materialism and stress in processes of conventional economic growth?; (2) what is the importance of reduction of consumption ('consuminderen') and slowing down ('onthaasting', or de-hasting) in realizing a process of sustainable economic development?; (3) which images form the basis of the present process of economic development, where do they come from and how do they change over time?; and (4) which images of progress give direction to a sustainable economic development, and how do they arise? The questions that follow the Memorandum concern decoupling (of environment and economy), sustainable consumption, the knowledge economy, institutions and processes of change. Central in the framework of knowledge questions are questions related to perspectives and actions, as formulated in the Memorandum for different sectors of Dutch society: industry and services; agriculture and rural areas; and traffic, transport and infrastructure.
Izak, Dariusz; Klim, Joanna; Kaczanowski, Szymon
Malaria remains one of the highest mortality infectious diseases. Malaria is caused by parasites from the genus Plasmodium. Most deaths are caused by infections involving Plasmodium falciparum, which has a complex life cycle. Malaria parasites are extremely well adapted for interactions with their host and their host's immune system and are able to suppress the human immune system, erase immunological memory and rapidly alter exposed antigens. Owing to this rapid evolution, parasites develop drug resistance and express novel forms of antigenic proteins that are not recognized by the host immune system. There is an emerging need for novel interventions, including novel drugs and vaccines. Designing novel therapies requires knowledge about host-parasite interactions, which is still limited. However, significant progress has recently been achieved in this field through the application of bioinformatics analysis of parasite genome sequences. In this review, we describe the main achievements in 'malarial' bioinformatics and provide examples of successful applications of protein sequence analysis. These examples include the prediction of protein functions based on homology and the prediction of protein surface localization via domain and motif analysis. Additionally, we describe PlasmoDB, a database that stores accumulated experimental data. This tool allows data mining of the stored information and will play an important role in the development of malaria science. Finally, we illustrate the application of bioinformatics in the development of population genetics research on malaria parasites, an approach referred to as reverse ecology.
Vamathevan, J; Birney, E
Objectives: To highlight and provide insights into key developments in translational bioinformatics between 2014 and 2016. Methods: This review describes some of the most influential bioinformatics papers and resources that have been published between 2014 and 2016 as well as the national genome sequencing initiatives that utilize these resources to routinely embed genomic medicine into healthcare. Also discussed are some applications of the secondary use of patient data followed by a comprehensive view of the open challenges and emergent technologies. Results: Although data generation can be performed routinely, analyses and data integration methods still require active research and standardization to improve streamlining of clinical interpretation. The secondary use of patient data has resulted in the development of novel algorithms and has enabled a refined understanding of cellular and phenotypic mechanisms. New data storage and data sharing approaches are required to enable diverse biomedical communities to contribute to genomic discovery. Conclusion: The translation of genomics data into actionable knowledge for use in healthcare is transforming the clinical landscape in an unprecedented way. Exciting and innovative models that bridge the gap between clinical and academic research are set to open up the field of translational bioinformatics for rapid growth in a digital era. Georg Thieme Verlag KG Stuttgart.
The field of bioinformatics has allowed the interpretation of massive amounts of biological data, ushering in the era of 'omics' to biomedical research. Its potential impact on pharmacology research is enormous and it has shown some emerging successes. A full realization of this potential, however, requires standardized data annotation for large health record databases and molecular data resources. Improved standardization will further stimulate the development of system pharmacology models, using translational bioinformatics methods. This new translational bioinformatics paradigm is highly complementary to current pharmacological research fields, such as personalized medicine, pharmacoepidemiology and drug discovery. In this review, I illustrate the application of translational bioinformatics to research in numerous pharmacology subdisciplines. © 2015 The British Pharmacological Society.
Revote, Jerico; Watson-Haigh, Nathan S.; Quenette, Steve; Bethwaite, Blair; McGrath, Annette
The Bioinformatics Training Platform (BTP) has been developed to provide access to the computational infrastructure required to deliver sophisticated hands-on bioinformatics training courses. The BTP is a cloud-based solution that is in active use for delivering next-generation sequencing training to Australian researchers at geographically dispersed locations. The BTP was built to provide an easy, accessible, consistent and cost-effective approach to delivering workshops at host universities and organizations with a high demand for bioinformatics training but lacking the dedicated bioinformatics training suites required. To support broad uptake of the BTP, the platform has been made compatible with multiple cloud infrastructures. The BTP is an open-source and open-access resource. To date, 20 training workshops have been delivered to over 700 trainees at over 10 venues across Australia using the BTP. PMID:27084333
Phosphoenolpyruvate carboxykinase (PEPCK), a critical gluconeogenic enzyme, catalyzes the first committed step in the diversion of tricarboxylic acid cycle intermediates toward gluconeogenesis. According to the relative conservation of homologous gene, a bioinformatics strategy was applied to clone Fusarium ...
Diaz Acosta, B.
The Microsoft Biology Initiative (MBI) is an effort in Microsoft Research to bring new technology and tools to the area of bioinformatics and biology. This initiative is comprised of two primary components, the Microsoft Biology Foundation (MBF) and the Microsoft Biology Tools (MBT). MBF is a language-neutral bioinformatics toolkit built as an extension to the Microsoft .NET Framework—initially aimed at the area of Genomics research. Currently, it implements a range of parsers for common bioinformatics file formats; a range of algorithms for manipulating DNA, RNA, and protein sequences; and a set of connectors to biological web services such as NCBI BLAST. MBF is available under an open source license, and executables, source code, demo applications, documentation and training materials are freely downloadable from http://research.microsoft.com/bio. MBT is a collection of tools that enable biology and bioinformatics researchers to be more productive in making scientific discoveries.
Michael R Clay
Training anatomic and clinical pathology residents in the principles of bioinformatics is a challenging endeavor. Most residents receive little to no formal exposure to bioinformatics during medical education, and most of the pathology training is spent interpreting histopathology slides using light microscopy or focused on laboratory regulation, management, and interpretation of discrete laboratory data. At a minimum, residents should be familiar with data structure, data pipelines, data manipulation, and data regulations within clinical laboratories. Fellowship-level training should incorporate advanced principles unique to each subspecialty. Barriers to bioinformatics education include the clinical apprenticeship training model, ill-defined educational milestones, inadequate faculty expertise, and limited exposure during medical training. Online educational resources, case-based learning, and incorporation into molecular genomics education could serve as effective educational strategies. Overall, pathology bioinformatics training can be incorporated into pathology resident curricula, provided there is motivation to incorporate, institutional support, educational resources, and adequate faculty expertise.
Patwari, Puneet; Choudhury, Subhrojyoti R.; Banerjee, Amar; Swaminathan, N.; Pandey, Shreya
Model Driven Engineering (MDE) as a key driver to reduce development cost of M&C systems is beginning to find acceptance across scientific instruments such as Radio Telescopes and Nuclear Reactors. Such projects are adopting it to reduce time to integrate, test and simulate their individual controllers and increase reusability and traceability in the process. The creation and maintenance of models is still a significant challenge to realizing MDE benefits. Creating domain-specific modelling environments reduces the barriers, and we have been working along these lines, creating a domain-specific language and environment based on an M&C knowledge model. However, large projects involve several such domains, and there is still a need to interconnect the domain models, in order to ensure modelling completeness. This paper presents a knowledge-centric approach to doing that, by creating a generic system model that underlies the individual domain knowledge models. We present our vision for M&C Domain Map Maker, a set of processes and tools that enables explication of domain knowledge in terms of domain models with mutual consistency relationships to aid MDE.
Liang, Li; Sharp, Alice
E-waste is the fastest growing waste in the solid waste stream in the urban environment. It has become a widely recognised social and environmental problem; therefore, proper management is vital to protecting the fragile environment from its improper disposal. Questionnaire surveys were conducted to determine the knowledge of environmental impacts of e-waste disposal as it relates to mobile phones among different gender and age groups in China, Laos, and Thailand. The results revealed that gender was positively correlated with their knowledge of the status of environmental conditions (P104) (r = 0.077, n = 1994, p < 0.01) and negatively correlated with their knowledge of how to improve environmental conditions (P105) (r = -0.067, n = 2037, p < 0.01). In addition, an increase in age was positively correlated with respondents' concern over the environmental conditions (P103) (r = 0.052, n = 2077, p < 0.05) and P105 (r = 0.061, n = 2061, p < 0.01) mentioned above. The results indicated that female respondents were less knowledgeable about how to improve environmental conditions than male respondents in the three countries. Knowledge gaps were detected in the respondents, at age ⩽17, in the three countries, and from age 18-22 to 36-45 or older from Thailand and China, on their knowledge of the existing e-waste-related laws. Thus, an effort to bridge the gaps through initiating proper educational programmes in these two countries is necessary. © The Author(s) 2016.
Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel
Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, constitute a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS intermediates the access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on simple computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requisitions, and automatically creates a web page that exposes the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
Gentleman, R.C.; Carey, V.J.; Bates, D.M.
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Gasparoviča-Asīte, M; Aleksejeva, L; Gersons, V
This article studies the possibilities of the BEXA family of classification algorithms – BEXA, FuzzyBexa and FuzzyBexa II – in classifying data, especially bioinformatics data. Three different types of data sets were used in the study – data sets often used in the literature (like the Iris data set), real-life data sets from the UCI data repository (like the breast cancer data set), and real bioinformatics data sets with a specific character – a large number of attributes (several thousands) and a small numb...
Abstract Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
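As a minimal illustration of the metamorphic testing idea, consider a k-mer counting routine: verifying a single count on a large genome is hard, but a metamorphic relation can be checked without an oracle — counting a motif on one strand must agree with counting its reverse complement on the opposite strand. This is a toy sketch of the technique, not code from GNLab or SeqMap:

```python
def revcomp(seq):
    """Reverse complement of a DNA string."""
    comp = {"A": "T", "T": "A", "C": "G", "G": "C"}
    return "".join(comp[b] for b in reversed(seq))

def count_kmer(seq, kmer):
    """Count (possibly overlapping) occurrences of kmer in seq:
    the program under test."""
    k = len(kmer)
    return sum(seq[i:i+k] == kmer for i in range(len(seq) - k + 1))

def check_revcomp_relation(seq, kmer):
    """MR: occurrences of kmer in seq must equal occurrences of
    revcomp(kmer) in revcomp(seq). No expected count is needed."""
    return count_kmer(seq, kmer) == count_kmer(revcomp(seq), revcomp(kmer))

print(check_revcomp_relation("ACGTACGTT", "ACG"))  # → True
```

A fault that breaks, say, the handling of overlapping matches would typically violate this relation on some input, so many random sequences can serve as test cases even though no single "correct answer" is known in advance.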
The main purpose of this mixed methods research was to explore and analyze visitors' overall experience while they attended a museum exhibition, and examine how this experience was affected by previously using a virtual 3dimensional representation of the museum itself. The research measured knowledge acquisition in a virtual museum, and compared…
Bulu, Saniye Tugba; Pedersen, Susan
This study investigated the effects of domain-general and domain-specific scaffolds with different levels of support, continuous and faded, on learning of scientific content and problem-solving. Students' scores on a multiple-choice pretest, posttest, and four recommendation forms were analyzed. Students' content knowledge in all conditions…
Pafilis, Evangelos; Pletscher-Frankild, Sune; Schnetzer, Julia
are needed to facilitate large-scale analyses. Therefore, we developed ENVIRONMENTS, a fast dictionary-based tagger capable of identifying Environment Ontology (ENVO) terms in text. We evaluate the accuracy of the tagger on a new manually curated corpus of 600 Encyclopedia of Life (EOL) species pages. We use......, respectively, at http://environments.hcmr.gr CONTACT: firstname.lastname@example.org or email@example.com Supplementary information: Supplementary data are available at Bioinformatics online....
Husebø, Anne Marie Lunde; Storm, Marianne; Våga, Bodil Bø; Rosenberg, Adriana; Akerjordet, Kristin
To give an overview of empirical studies investigating nursing homes as a learning environment during nursing students' clinical practice. A supportive clinical learning environment is crucial to students' learning and for their development into reflective and capable practitioners. Nursing students' experience with clinical practice can be decisive in future workplace choices. A competent workforce is needed for the future care of older people. Opportunities for maximum learning among nursing students during clinical practice studies in nursing homes should therefore be explored. Mixed-method systematic review using PRISMA guidelines, on learning environments in nursing homes, published in English between 2005 and 2015. Search of CINAHL with Full Text, Academic Search Premier, MEDLINE, and SocINDEX with Full Text, in combination with journal hand searches. Three hundred and thirty-six titles were identified. Twenty studies met the review inclusion criteria. Assessment of methodological quality was based on the Mixed Methods Appraisal Tool. Data were extracted and synthesized using a data analysis method for integrative reviews. Twenty articles were included. The majority of the studies showed moderately high methodological quality. Four main themes emerged from data synthesis: 'Student characteristic and earlier experience'; 'Nursing home ward environment'; 'Quality of mentoring relationship and learning methods'; and 'Students' achieved nursing competencies'. Nursing home learning environments may be optimized by a well-prepared academic-clinical partnership, supervision by encouraging mentors and high-quality nursing care of older people. Positive learning experiences may increase students' professional development through achievement of basic nursing skills and competencies, and motivate them to choose the nursing home as their future work place. An optimal learning environment can be ensured by thorough pre-placement preparations in academia and in nursing
Assessments of knowledge and perceptions about influenza were developed for high school students, and used to determine how knowledge, perceptions, and demographic variables relate to students taking precautions and their odds of getting sick. Assessments were piloted with 205 students and validated using the Rasch model. Data were then collected on 410 students from six high schools. Scores were calculated using the 2-parameter logistic model and clustered using the k-means algorithm. Kendall-tau correlations were evaluated at the alpha = 0.05 level, multinomial logistic regression was used to identify the best predictors and to test for interactions, and neural networks were used to test how well precautions and illness can be predicted using the significant correlates. Precautions and illness had more than one statistically significant correlate with small to moderate effect sizes. Knowledge was positively correlated to compliance with vaccination, hand washing frequency, and respiratory etiquette, and negatively correlated with hand sanitizer use. Perceived risk was positively correlated to compliance with flu vaccination; perceived complications to personal distancing and staying home when sick. Perceived risk and complications increased with reported illness severity. Perceived barriers decreased compliance with vaccination, hand washing, and respiratory etiquette. Factors such as gender, ethnicity, and school, had effects on more than one precaution. Hand washing quality and frequency could be predicted moderately well. Other predictions had small-to-negligible associations with actual values. Implications for future uses of the instruments and development of interventions regarding influenza in high schools are discussed.
Adams, Nan B.; DeVaney, Thomas A.; Sawyer, Susan G.
The design of virtual learning environments for post-secondary instruction is rapidly increasing among public and private universities. While the quantity of online courses over the past 10 years has exponentially increased, the quality of these courses has not. As universities increase their online teaching activities, real concern about the best…
Politis, John; Politis, Denis
Online learning is becoming more attractive to prospective students because it offers them greater accessibility, convenience and flexibility to study at a reduced cost. While these benefits may attract prospective learners to embark on an online learning environment there remains little empirical evidence relating the skills and traits of…
Chan, Landon L; Jiang, Peiyong
The discovery of cell-free DNA molecules in plasma has opened up numerous opportunities in noninvasive diagnosis. Cell-free DNA molecules have become increasingly recognized as promising biomarkers for detection and management of many diseases. The advent of next generation sequencing has provided unprecedented opportunities to scrutinize the characteristics of cell-free DNA molecules in plasma in a genome-wide fashion and at single-base resolution. Consequently, clinical applications of circulating cell-free DNA analysis have not only revolutionized noninvasive prenatal diagnosis but also facilitated cancer detection and monitoring toward an era of blood-based personalized medicine. With the remarkably increasing throughput and lowering cost of next generation sequencing, bioinformatics analysis becomes increasingly demanding to understand the large amount of data generated by these sequencing platforms. In this Review, we highlight the major bioinformatics algorithms involved in the analysis of cell-free DNA sequencing data. Firstly, we briefly describe the biological properties of these molecules and provide an overview of the general bioinformatics approach for the analysis of cell-free DNA. Then, we discuss the specific upstream bioinformatics considerations concerning the analysis of sequencing data of circulating cell-free DNA, followed by further detailed elaboration on each key clinical situation in noninvasive prenatal diagnosis and cancer management where downstream bioinformatics analysis is heavily involved. We also discuss bioinformatics analysis as well as clinical applications of the newly developed massively parallel bisulfite sequencing of cell-free DNA. Finally, we offer our perspectives on the future development of bioinformatics in noninvasive diagnosis. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
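One representative downstream computation in noninvasive prenatal diagnosis is the chromosome-dosage z-score: the fraction of sequenced reads mapping to chromosome 21 in the test sample is compared against a reference cohort of unaffected pregnancies. The sketch below uses purely illustrative numbers and a commonly cited cutoff of z > 3, as an assumption rather than a description of any specific pipeline:

```python
import statistics

def chr21_zscore(sample_frac, reference_fracs):
    """Z-score of the sample's chr21 read fraction against a reference
    cohort; markedly elevated values suggest trisomy 21."""
    mu = statistics.mean(reference_fracs)
    sd = statistics.stdev(reference_fracs)
    return (sample_frac - mu) / sd

# Illustrative numbers only: fractions of aligned reads mapping to chr21
# in eight euploid reference pregnancies, then in the test sample.
reference = [0.0130, 0.0131, 0.0129, 0.0132, 0.0130, 0.0128, 0.0131, 0.0129]
z = chr21_zscore(0.0140, reference)
print(z > 3)  # an elevated dosage stands out against the cohort
```

In practice the read counts are first corrected for GC bias and mappability before such a dosage statistic is computed, which is part of the upstream bioinformatics the review discusses.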
Hashimoto-Martell, Erin A.; McNeill, Katherine L.; Hoffman, Emily M.
This study explores the impact of an urban ecology program on participating middle school students' understanding of science and pro-environmental attitudes and behaviors. We gathered pre and post survey data from four classes and found significant gains in scientific knowledge, but no significant changes in student beliefs regarding the environment. We interviewed 12 students to better understand their beliefs. Although student responses showed they had learned discrete content knowledge, they lacked any ecological understanding of the environment and had mixed perceptions of the course's relevance in their lives. Students reported doing pro-environmental behaviors, but overwhelmingly attributed such actions to influences other than the urban ecology course. Analyses indicated a disconnect between the course, the environment, and the impact on the students' lives. Consequently, this suggests the importance of recognizing the implications of context, culture, and identity development of urban youth. Providing explicit connections and skills in urban environmental programs, by engaging students in environmental scientific investigations that stem from their own issues and questions, may increase student engagement, motivation, and self-efficacy regarding environmental issues.
Khandelwal, Siddhartha; Wickstrom, Nicholas
Detecting gait events is the key to many gait analysis applications that would benefit from continuous monitoring or long-term analysis. Most gait event detection algorithms using wearable sensors that offer a potential for use in daily living have been developed from data collected in controlled indoor experiments. However, for real-world applications, it is essential that the analysis is carried out in humans' natural environment; that involves different gait speeds, changing walking terrains, varying surface inclinations and regular turns among other factors. Existing domain knowledge in the form of principles or underlying fundamental gait relationships can be utilized to drive and support the data analysis in order to develop robust algorithms that can tackle real-world challenges in gait analysis. This paper presents a novel approach that exhibits how domain knowledge about human gait can be incorporated into time-frequency analysis to detect gait events from long-term accelerometer signals. The accuracy and robustness of the proposed algorithm are validated by experiments done in indoor and outdoor environments with approximately 93 600 gait events in total. The proposed algorithm exhibits consistently high performance scores across all datasets in both indoor and outdoor environments.
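The paper's own method relies on time-frequency analysis; as a much simpler, hedged sketch of the underlying detection task only, gait events can be approximated as local maxima of an accelerometer magnitude signal that exceed a threshold. The signal values, threshold and peak criterion below are invented for illustration and are not the authors' algorithm.

```python
# Toy gait-event detection: flag local maxima above a threshold as
# candidate events (e.g. heel strikes). Synthetic signal and threshold.
def detect_events(signal, threshold):
    """Return indices of local maxima above threshold (candidate gait events)."""
    events = []
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            events.append(i)
    return events

# Synthetic accelerometer magnitude: two step-like peaks over baseline noise.
signal = [0.1, 0.2, 1.5, 0.3, 0.1, 0.2, 1.7, 0.4, 0.1]
print(detect_events(signal, threshold=1.0))  # [2, 6]
```

Real-world signals would first require filtering and adaptive thresholding, which is precisely where the domain knowledge the paper describes comes in.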
Petrie, Bruce; Barden, Ruth; Kasprzyk-Hordern, Barbara
This review identifies understudied areas of emerging contaminant (EC) research in wastewaters and the environment, and recommends direction for future monitoring. Non-regulated trace organic ECs including pharmaceuticals, illicit drugs and personal care products are focused on due to ongoing policy initiatives and the expectant broadening of environmental legislation. These ECs are ubiquitous in the aquatic environment, mainly derived from the discharge of municipal wastewater effluents. Their presence is of concern due to the possible ecological impact (e.g., endocrine disruption) to biota within the environment. To better understand their fate in wastewaters and in the environment, a standardised approach to sampling is needed. This ensures representative data is attained and facilitates a better understanding of spatial and temporal trends of EC occurrence. During wastewater treatment, there is a lack of suspended particulate matter analysis due to further preparation requirements and a lack of good analytical approaches. This results in the under-reporting of several ECs entering wastewater treatment works (WwTWs) and the aquatic environment. Also, sludge can act as a concentrating medium for some chemicals during wastewater treatment. The majority of treated sludge is applied directly to agricultural land without analysis for ECs. As a result there is a paucity of information on the fate of ECs in soils and consequently, there has been no driver to investigate the toxicity to exposed terrestrial organisms. Therefore a more holistic approach to environmental monitoring is required, such that the fate and impact of ECs in all exposed environmental compartments are studied. The traditional analytical approach of applying targeted screening with low resolution mass spectrometry (e.g., triple quadrupoles) results in numerous chemicals such as transformation products going undetected. These can exhibit similar toxicity to the parent EC, demonstrating the necessity
This study sought to compare a data-rich learning (DRL) environment that utilized online data as a tool for teaching about renewable energy technologies (RET) to a lecture-based learning environment to determine the impact of the learning environment on students' knowledge of Science, Technology, Engineering, and Math (STEM) concepts related…
Garnier-Laplace, J.; Adam-Guillermin, C.; Antonelli, C.; Beaugelin-Seiller, K.; Boyer, P. [Institut de Radioprotection et de Surete Nucleaire, Direction de l' Environnement et de l' Intervention, 13 - Saint Paul Lez Durance (France); Bailly du Bois, P.; Fievet, B.; Masson, M. [Institut de Radioprotection et de Surete Nucleaire, LRC, 50 - Cherbourg Octeville (France); Gariel, J.C.; Pierrard, O.; Renaud, P.; Roussel-Debet, S. [Institut de Radioprotection et de Surete Nucleaire, DEI, 78 - Le Vesinet (France); Gurrarian, R. [Institut de Radioprotection et de Surete Nucleaire, DEI/STME/LMRE, 91 - Orsay (France); Le Dizes-Maurel, S.; Maro, D. [Institut de Radioprotection et de Surete Nucleaire, DEI/SECRE/LME, 13 - Saint Paul Lez Durance (France)
The authors first point out that tritium is, along with carbon 14, the main radionuclide released by nuclear facilities in France in terms of activity, whether in gaseous or liquid releases. They describe its behaviour, its various forms in the atmosphere and in ecosystems, and its transfer to plants (survey results are cited which seem to demonstrate that there is no significant bio-accumulation). They comment on the current knowledge and survey results about the presence of tritium in land and sea animals, and about the toxicity of tritium for non-human organisms.
Mello, Luciane V.; Tregilgas, Luke; Cowley, Gwen; Gupta, Anshul; Makki, Fatima; Jhutty, Anjeet; Shanmugasundram, Achchuthan
Abstract Teaching bioinformatics is a longstanding challenge for educators who need to demonstrate to students how skills developed in the classroom may be applied to real world research. This study employed an action research methodology which utilised student–staff partnership and peer-learning. It was centred on the experiences of peer-facilitators, students who had previously taken a postgraduate bioinformatics module, and had applied knowledge and skills gained from it to their own research. It aimed to demonstrate to peer-receivers, current students, how bioinformatics could be used in their own research while developing peer-facilitators’ teaching and mentoring skills. This student-centred approach was well received by the peer-receivers, who claimed to have gained improved understanding of bioinformatics and its relevance to research. Equally, peer-facilitators also developed a better understanding of the subject and appreciated that the activity was a rare and invaluable opportunity to develop their teaching and mentoring skills, enhancing their employability. PMID:29098185
Albertsen, Karen; Rugulies, Reiner; Garde, Anne Helene; Burr, Hermann
Interpersonal relations at work as well as individual factors seem to play prominent roles in the modern labour market, and arguably also for the change in stress symptoms. The aim was to examine whether exposures in the psychosocial work environment predicted symptoms of cognitive stress in a sample of Danish knowledge workers (i.e. employees working with sign, communication or exchange of knowledge) and whether performance-based self-esteem had a main effect, over and above the work environmental factors. 349 knowledge workers, selected from a national, representative cohort study, were followed up with two data collections, 12 months apart. We used data on psychosocial work environment factors and cognitive stress symptoms measured with the Copenhagen Psychosocial Questionnaire (COPSOQ), and a measurement of performance-based self-esteem. Effects on cognitive stress symptoms were analyzed with a GLM procedure with and without adjustment for baseline level. Measures at baseline of quantitative demands, role conflicts, lack of role clarity, recognition, predictability, influence and social support from management were positively associated with cognitive stress symptoms 12 months later. After adjustment for baseline level of cognitive stress symptoms, follow-up level was only predicted by lack of predictability. Performance-based self-esteem was prospectively associated with cognitive stress symptoms and had an independent effect above the psychosocial work environment factors on the level of and changes in cognitive stress symptoms. The results suggest that both work environmental and individual characteristics should be taken into account in order to capture sources of stress in modern working life.
Cox, Narelle S; Oliveira, Cristino C; Lahham, Aroub; Holland, Anne E
What are the barriers and enablers of referral, uptake, attendance and completion of pulmonary rehabilitation for people with chronic obstructive pulmonary disease (COPD)? Systematic review of qualitative or quantitative studies reporting data relating to referral, uptake, attendance and/or completion in pulmonary rehabilitation. People aged >18years with a diagnosis of COPD and/or their healthcare professionals. Data were extracted regarding the nature of barriers and enablers of pulmonary rehabilitation referral and participation. Extracted data items were mapped to the Theoretical Domains Framework (TDF). A total of 6969 references were screened, with 48 studies included and 369 relevant items mapped to the TDF. The most frequently represented domain was 'Environment' (33/48 included studies, 37% of mapped items), which included items such as waiting time, burden of illness, travel, transport and health system resources. Other frequently represented domains were 'Knowledge' (18/48 studies, including items such as clinician knowledge of referral processes, patient understanding of rehabilitation content) and 'Beliefs about consequences' (15/48 studies, including items such as beliefs regarding role and safety of exercise, expectations of rehabilitation outcomes). Barriers to referral, uptake, attendance or completion represented 71% (n=183) of items mapped to the TDF. All domains of the TDF were represented; however, items were least frequently coded to the domains of 'Optimism' and 'Memory'. The methodological quality of included studies was fair (mean quality score 9/12, SD 2). Many factors - particularly those related to environment, knowledge, attitudes and behaviours - interact to influence referral, uptake, attendance and completion of pulmonary rehabilitation. Overcoming the challenges associated with the personal and/or healthcare system environment will be imperative to improving access and uptake of pulmonary rehabilitation. PROSPERO CRD42015015976
J. Köster (Johannes); S. Rahmann (Sven)
Snakemake is a workflow engine that provides a readable Python-based workflow definition language and a powerful execution environment that scales from single-core workstations to compute clusters without modifying the workflow. It is the first system to support the use of automatically
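A hypothetical minimal Snakefile can illustrate the readable, Python-based workflow definition language this abstract refers to. The file names and the samtools command below are invented for illustration; Snakemake infers the dependency graph from the input/output declarations and re-runs rules only when their outputs are stale.

```
rule all:
    input:
        "results/sample1.sorted.bam"

rule sort_bam:
    input:
        "data/sample1.bam"
    output:
        "results/sample1.sorted.bam"
    shell:
        "samtools sort -o {output} {input}"
```

Because rules declare their inputs and outputs explicitly, the same file runs unchanged on a laptop or, with a cluster profile, on a compute cluster.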
Multiuser virtual environments (MUVEs) generate a large amount of data, but most of it is not accessible even to the users who triggered it. What is more, most datasets are not even stored for further use; they have only a temporary character and a very short "half-life", limited e.g. to a one-second-long screen display. Such a huge loss of data makes evaluation of knowledge transfer in MUVEs almost impossible. There is a need both to improve the monitoring capabilities of MUVEs so that completion assessment becomes possible, and to use MUVEs that enable simulation (re-experience) using complete datasets gathered from the environment itself. Future research in the field of simulation methodology is suggested.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in the next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. © The Author 2013. Published by Oxford University Press.
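As a hedged sketch of the MapReduce programming model the article surveys (not of Hadoop itself), the toy Python program below emulates the map, shuffle and reduce phases on a k-mer counting task; the reads, the choice of k and the use of a dictionary as the shuffle stage are illustrative assumptions.

```python
# MapReduce model simulated in plain Python on a toy sequencing task:
# counting k-mers across reads. In a real Hadoop job the map and reduce
# functions run distributed across a cluster; here the shuffle step that
# groups values by key is emulated with a dictionary.
from collections import defaultdict

def map_phase(read, k=3):
    """Map: emit (k-mer, 1) pairs for every k-mer in a read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reduce_phase(grouped):
    """Reduce: sum the counts emitted for each k-mer."""
    return {kmer: sum(counts) for kmer, counts in grouped.items()}

def mapreduce_kmer_count(reads, k=3):
    grouped = defaultdict(list)
    for read in reads:                     # map + shuffle
        for kmer, count in map_phase(read, k):
            grouped[kmer].append(count)
    return reduce_phase(grouped)           # reduce

counts = mapreduce_kmer_count(["GATTACA", "TACAGAT"], k=3)
print(counts["TAC"])  # 2: "TAC" occurs once in each read
```

The appeal of the model is that only `map_phase` and `reduce_phase` are problem-specific; the framework handles partitioning, scheduling and fault tolerance.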
Karikari, Thomas K; Quansah, Emmanuel; Mohamed, Wael M Y
Research in bioinformatics has a central role in helping to advance biomedical research. However, its introduction to Africa has been met with some challenges (such as inadequate infrastructure, training opportunities, research funding, human resources, biorepositories and databases) that have contributed to the slow pace of development in this field across the continent. Fortunately, recent improvements in areas such as research funding, infrastructural support and capacity building are helping to develop bioinformatics into an important discipline in Africa. These contributions are leading to the establishment of world-class research facilities, biorepositories, training programmes, scientific networks and funding schemes to improve studies into disease and health in Africa. With increased contribution from all stakeholders, these developments could be further enhanced. Here, we discuss how the recent developments are contributing to the advancement of bioinformatics in Africa.
Attwood, Teresa K; Bongcam-Rudloff, Erik; Brazas, Michelle E; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M; Schneider, Maria Victoria; van Gelder, Celia W G
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy--paradoxically, many are actually closing "niche" bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all.
van Gelder, Celia W G; Hooft, Rob W W; van Rijswijk, Merlijn N; van den Berg, Linda; Kok, Ruben G; Reinders, Marcel; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures supporting a relatively large Dutch bioinformatics community will be reviewed. We will show that the most valuable resource that we have built over these years is the close-knit national expert community that is well engaged in basic and translational life science research programmes. The Dutch bioinformatics community is accustomed to facing the ever-changing landscape of data challenges and working towards solutions together. In addition, this community is the stable factor on the road towards sustainability, especially in times where existing funding models are challenged and change rapidly. © The Author 2017. Published by Oxford University Press.
Cardoso, Olivier; Porcher, Jean-Marc; Sanchez, Wilfried
Human and veterinary active pharmaceutical ingredients (APIs) are involved in contamination of surface water, ground water, effluents, sediments and biota. Effluents of waste water treatment plants and hospitals are considered as major sources of such contamination. However, recent evidences reveal high concentrations of a large number of APIs in effluents from pharmaceutical factories and in receiving aquatic ecosystems. Moreover, laboratory exposures to these effluents and field experiments reveal various physiological disturbances in exposed aquatic organisms. Also, it seems to be relevant to increase knowledge on this route of contamination but also to develop specific approaches for further environmental monitoring campaigns. The present study summarizes available data related to the impact of pharmaceutical factory discharges on aquatic ecosystem contaminations and presents associated challenges for scientists and environmental managers. Copyright © 2014 Elsevier Ltd. All rights reserved.
Yip, Y L
To summarize current excellent research in the field of bioinformatics. Synopsis of the articles selected for the IMIA Yearbook 2009. The selection process for this yearbook's section on Bioinformatics results in six excellent articles highlighting several important trends. First, it can be noted that Semantic Web technology continues to play an important role in heterogeneous data integration. Novel applications also put more emphasis on its ability to make logical inferences leading to new insights and discoveries. Second, translational research, due to its complex nature, increasingly relies on collective intelligence made available through the adoption of community-defined protocols or software architectures for secure data annotation, sharing and analysis. Advances in systems biology, bio-ontologies and text-mining can also be noted. Current biomedical research gradually evolves towards an environment characterized by intensive collaboration and more sophisticated knowledge processing activities. Enabling technologies, either Semantic Web or other solutions, are expected to play an increasingly important role in generating new knowledge in the foreseeable future.
Campbell, Chad E.; Nehm, Ross H.
The growing importance of genomics and bioinformatics methods and paradigms in biology has been accompanied by an explosion of new curricula and pedagogies. An important question to ask about these educational innovations is whether they are having a meaningful impact on students’ knowledge, attitudes, or skills. Although assessments are necessary tools for answering this question, their outputs are dependent on their quality. Our study 1) reviews the central importance of reliability and construct validity evidence in the development and evaluation of science assessments and 2) examines the extent to which published assessments in genomics and bioinformatics education (GBE) have been developed using such evidence. We identified 95 GBE articles (out of 226) that contained claims of knowledge increases, affective changes, or skill acquisition. We found that 1) the purpose of most of these studies was to assess summative learning gains associated with curricular change at the undergraduate level, and 2) only a minority of these studies reported reliability or construct validity evidence, raising concerns about the quality of evidence derived from these instruments. We end with recommendations for improving assessment quality in GBE. PMID:24006400
Gniadek, Agnieszka; Cepuch, Grażyna; Ochender, Katarzyna; Salamon, Dominika
Despite significant civilizational advancement, parasitic diseases still pose a serious diagnostic and therapeutic problem. Children's susceptibility to these infections stems from their immature immune system and lack of basic hygiene routines. The objective of the study was to evaluate the level of knowledge which parents of preschool children possess about parasitic diseases in their children's environment. The study was carried out in a group of 151 parents of preschool children living both in the city and in the country. The survey was carried out by means of a diagnostic poll with the application of a self-designed research questionnaire. To make the evaluation more objective, a special scale was created in which parents could score points for their answers (0 - wrong answer, 1 - correct answer). A total number of points ranging from 0 to 9 indicated an unsatisfactory level of knowledge, from 10 to 13 a satisfactory level, from 14 to 16 a good level and from 17 to 20 a very good level of parents' awareness. The results of the study reveal that the level of parents' knowledge about parasitic diseases is only satisfactory. A statistically significant relationship was observed between the variables such as education and sex. The higher the education, the higher the level of knowledge. Moreover, women were more knowledgeable in the field of parasitic diseases than men were. Financial status of the family did not influence the level of parents' awareness. Well-planned educational programmes might have a positive influence on developing proper hygiene routines in families, which, in turn, will limit the risk of spreading parasitoses in the population of children.
Oliver, Jeffrey C
Health sciences research is increasingly focusing on big data applications, such as genomic technologies and precision medicine, to address key issues in human health. These approaches rely on biological data repositories and bioinformatic analyses, both of which are growing rapidly in size and scope. Libraries play a key role in supporting researchers in navigating these and other information resources. With the goal of supporting bioinformatics research in the health sciences, the University of Arizona Health Sciences Library established a Bioinformation program. To shape the support provided by the library, I developed and administered a needs assessment survey to the University of Arizona Health Sciences campus in Tucson, Arizona. The survey was designed to identify the training topics of interest to health sciences researchers and the preferred modes of training. Survey respondents expressed an interest in a broad array of potential training topics, including "traditional" information seeking as well as interest in analytical training. Of particular interest were training in transcriptomic tools and the use of databases linking genotypes and phenotypes. Staff were most interested in bioinformatics training topics, while faculty were the least interested. Hands-on workshops were significantly preferred over any other mode of training. The University of Arizona Health Sciences Library is meeting those needs through internal programming and external partnerships. The results of the survey demonstrate a keen interest in a variety of bioinformatic resources; the challenge to the library is how to address those training needs. The mode of support depends largely on library staff expertise in the numerous subject-specific databases and tools. Librarian-led bioinformatic training sessions provide opportunities for engagement with researchers at multiple points of the research life cycle. When training needs exceed library capacity, partnering with intramural and
Tao, Ying; Liu, Yang; Friedman, Carol
Information visualization techniques, which take advantage of the bandwidth of human vision, are powerful tools for organizing and analyzing a large amount of data. In the postgenomic era, information visualization tools are indispensable for biomedical research. This paper aims to present an overview of current applications of information visualization techniques in bioinformatics for visualizing different types of biological data, such as from genomics, proteomics, expression profiling and structural studies. Finally, we discuss the challenges of information visualization in bioinformatics related to dealing with more complex biological information in the emerging fields of systems biology and systems medicine. PMID:20976032
Phillips, J. C.
Allosteric (long-range) interactions can be surprisingly strong in proteins of biomedical interest. Here we use bioinformatic scaling to connect prior results on nonsteroidal anti-inflammatory drugs to promising new drugs that inhibit cancer cell metabolism. Many parallel features are apparent, which explain how even one amino acid mutation, remote from active sites, can alter medical results. The enzyme twins involved are cyclooxygenase (aspirin) and isocitrate dehydrogenase (IDH). The IDH results are accurate to 1% and are overdetermined by adjusting a single bioinformatic scaling parameter. It appears that the final stage in optimizing protein functionality may involve leveling of the hydrophobic limits of the arms of conformational hydrophilic hinges.
Leclère, Valérie; Weber, Tilmann; Jacques, Philippe
This chapter helps in the use of bioinformatics tools relevant to the discovery of new nonribosomal peptides (NRPs) produced by microorganisms. The strategy described can be applied to draft or fully assembled genome sequences. It relies on the identification of the synthetase genes and the decip...
Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.
Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics
Kandi, Kamala M.
This study examines the effect of a technology-based instructional tool, 'Geniverse', on the content knowledge gains, Science Self-Efficacy, Technology Self-Efficacy, and Career Goal Aspirations among 283 high school learners. The study was conducted in four urban high schools, two of which had achieved Adequate Yearly Progress (AYP) and two had not. Students in both types of schools were taught genetics either through Geniverse, a virtual learning environment, or Dragon Genetics, a paper-and-pencil activity embedded in a traditional instructional method. Results indicated that students in all schools increased their knowledge of genetics using either type of instructional approach. Students who were taught using Geniverse demonstrated an advantage for genetics knowledge, although the effect was small. These increases were more pronounced in the schools that had been meeting the AYP goal. The other significant effect for Geniverse was that students in the technology-enhanced classrooms increased in Science Self-Efficacy while students in the non-technology-enhanced classrooms decreased. In addition, students from Non-AYP schools showed an improvement in Science and Technology Self-Efficacy; however, the effects were small. The implications of these results for the future use of technology-enriched classrooms are discussed. Keywords: Technology-based instruction, Self-Efficacy, career goals and Adequate Yearly Progress (AYP).
Roberto da Justa Pires Neto
Objectives: To describe clinical and epidemiological characteristics of inpatients with tuberculosis (TB) and to assess the knowledge of health personnel on fundamental concepts about TB and control measures for pulmonary tuberculosis in a hospital environment. Methods: The study was conducted in a tertiary hospital in Fortaleza-CE and involved patients admitted with TB and the health professionals responsible for their care. The first phase was a retrospective study of the medical records of patients admitted with suspected TB. In the second phase, a cross-sectional study using a structured questionnaire assessed the knowledge of health personnel on TB control measures in a hospital environment. Results: Sixty-seven patients admitted with suspected TB had their medical records assessed. Among the confirmed cases, the most frequent clinical form was pulmonary (81.3%). Of the 55 patients admitted with suspected pulmonary tuberculosis, only 29 (52.7%) were admitted to a respiratory isolation bed. Twenty-six patients with suspected pulmonary tuberculosis on admission spent a total of 148 days outside a respiratory isolation bed (average 4.1 days/patient). The knowledge of 159 health professionals about TB was assessed. Regarding the transmission of TB, 107 (67.2%) were unaware of airborne transmission and 109 (68.5%) did not know which clinical forms require respiratory isolation. Conclusions: Pulmonary tuberculosis is the most frequent clinical form among inpatients in a tertiary hospital in Fortaleza-CE. A considerable fraction of health personnel do not know key concepts related to tuberculosis that are essential for proper and safe care. Descriptors: Tuberculosis; Infectious Disease Transmission; Exposure to Biological Agents; Health Personnel.
Structural bioinformatics is concerned with the molecular structure of biomacromolecules on a genomic scale, using computational methods. Classic problems in structural bioinformatics include the prediction of protein and RNA structure from sequence, the design of artificial proteins or enzymes, and the automated analysis and comparison of biomacromolecules in atomic detail. The determination of macromolecular structure from experimental data (for example coming from nuclear magnetic resonance, X-ray crystallography or small angle X-ray scattering) has close ties with the field of structural bioinformatics. Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis and experimental determination of macromolecular structure that are based on such methods. These developments include generative models of protein structure, the estimation of the parameters of energy functions that are used in structure prediction, the superposition of macromolecules and structure determination methods that are based on inference. Although this review is not exhaustive, I believe the selected topics give a good impression of the exciting new, probabilistic road the field of structural bioinformatics is taking.
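The review's discussion of macromolecular superposition builds on the classical least-squares problem that the probabilistic, inference-based methods extend. As a point of reference, here is a minimal sketch of the standard Kabsch algorithm in Python; the function name and toy data are illustrative, not taken from the review.

```python
import numpy as np

def kabsch_superpose(P, Q):
    """Return the rotation matrix that best superposes point set P onto Q
    (both N x 3, assumed centred) in the least-squares sense."""
    # Covariance matrix between the two coordinate sets
    H = P.T @ Q
    U, S, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so the result is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Toy example: Q is P rotated 90 degrees about the z-axis
P = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
Q = P @ Rz.T
R = kabsch_superpose(P, Q)
rmsd = np.sqrt(np.mean(np.sum((P @ R.T - Q) ** 2, axis=1)))
```

The probabilistic treatments highlighted in the review replace this single point estimate with a posterior distribution over rotations, but the underlying geometric problem is the same.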
Jungck, John R; Weisstein, Anton E
The patterns of variation within a molecular sequence data set result from the interplay between population genetic, molecular evolutionary and macroevolutionary processes-the standard purview of evolutionary biologists. Elucidating these patterns, particularly for large data sets, requires an understanding of the structure, assumptions and limitations of the algorithms used by bioinformatics software-the domain of mathematicians and computer scientists. As a result, bioinformatics often suffers a 'two-culture' problem because of the lack of broad overlapping expertise between these two groups. Collaboration among specialists in different fields has greatly mitigated this problem among active bioinformaticians. However, science education researchers report that much of bioinformatics education does little to bridge the cultural divide, the curriculum too focused on solving narrow problems (e.g. interpreting pre-built phylogenetic trees) rather than on exploring broader ones (e.g. exploring alternative phylogenetic strategies for different kinds of data sets). Herein, we present an introduction to the mathematics of tree enumeration, tree construction, split decomposition and sequence alignment. We also introduce off-line downloadable software tools developed by the BioQUEST Curriculum Consortium to help students learn how to interpret and critically evaluate the results of standard bioinformatics analyses.
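The mathematics of tree enumeration mentioned above can be made concrete: the number of distinct unrooted, fully resolved binary trees on n labelled leaves is the double factorial (2n - 5)!!, which grows super-exponentially and explains why exhaustive tree search quickly becomes infeasible. A small illustrative Python function (not part of the BioQUEST tools):

```python
def num_unrooted_trees(n):
    """Number of distinct unrooted binary trees on n labelled leaves:
    the double factorial (2n - 5)!! = 1 * 3 * 5 * ... * (2n - 5), for n >= 3."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count
```

For example, 10 taxa already admit 2,027,025 unrooted topologies, and 20 taxa admit more than 10^20.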
A Refresher Course on 'Bioinformatics in Modern Biology' for graduate and postgraduate college/university teachers will be held at School of Life Sciences, Manipal University, Manipal for two weeks from 5 to 17 May 2014. The objective of this Course is to improvise on teaching methodologies incorporating online teaching ...
Cazals, Frédéric; Dreyfus, Tom
Software in structural bioinformatics has mainly been application driven. To serve both practitioners seeking off-the-shelf applications and developers seeking advanced building blocks for novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is a modular design offering a rich and versatile framework for developing novel applications requiring well-specified complex operations, without compromising robustness or performance. The SBL involves four software components (1-4 below). For end users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macromolecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with a modular design, involving core (2) algorithms, (3) biophysical models, and (4) modules, the latter being especially suited to the development of novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a Bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Contact: Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: email@example.com
Budd, Aidan; Corpas, Manuel; Brazas, Michelle D; Fuller, Jonathan C; Goecks, Jeremy; Mulder, Nicola J; Michaut, Magali; Ouellette, B F Francis; Pawlik, Aleksandra; Blomberg, Niklas
"Scientific community" refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop "The 'How To Guide' for Establishing a Successful Bioinformatics Network" at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB).
van Gelder, Celia W.G.; Hooft, Rob; van Rijswijk, Merlijn; van den Berg, Linda; Kok, Ruben; Reinders, M.J.T.; Mons, Barend; Heringa, Jaap
This review provides a historical overview of the inception and development of bioinformatics research in the Netherlands. Rooted in theoretical biology by foundational figures such as Paulien Hogeweg (at Utrecht University since the 1970s), the developments leading to organizational structures
Bioinformatics has become an essential tool not only for basic research but also for applied research in biotechnology and biomedical sciences. Optimal primer sequence and appropriate primer concentration are essential for maximal specificity and efficiency of PCR. A poorly designed primer can result in little or no ...
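The truncated abstract above concerns PCR primer design. As background, two quantities virtually every primer-design tool reports are GC content and an estimated melting temperature; the sketch below uses the classic Wallace rule-of-thumb. The abstract's actual method is not specified, and the function names here are illustrative.

```python
def gc_content(primer):
    """Fraction of G and C bases in a primer sequence."""
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer):
    """Rough melting temperature (degrees C) by the Wallace rule,
    Tm = 2*(A+T) + 4*(G+C); reasonable only for primers under ~14 nt."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc
```

Real primer-design software refines such estimates with nearest-neighbour thermodynamics, salt corrections, and checks for hairpins and primer-dimers.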
A Bioinformatic Strategy to Rapidly Characterize cDNA Libraries. G. Charles Ostermeier, David J. Dix and Stephen A. Krawetz. Departments of Obstetrics and Gynecology, Center for Molecular Medicine and Genetics, & Institute for Scientific Computing, Wayne State Univer...
Science Academies' Refresher Course on Bioinformatics in Modern Biology. Information and Announcements, Resonance – Journal of Science Education, Volume 19, Issue 2, February 2014, pp 192.
Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter
We have developed "GLYCANthrope" – CROSSWORKS for glycans: a bioinformatics tool that assists in identifying N-linked glycosylated peptides as well as their glycan moieties from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...
Lima, Andre O. S.; Garces, Sergio P. S.
Bioinformatics is one of the fastest growing scientific areas of the last decade. It focuses on the use of informatics tools for the organization and analysis of biological data. An example of its importance is the availability nowadays of dozens of software programs for genomic and proteomic studies. Thus, there is a growing field (private…
Kappa casein (CSN3) gene is a variant of the milk protein highly conserved in mammalian species. Genetic variations in CSN3 gene of six mammalian livestock species were investigated using bioinformatics approach. A total of twenty-seven CSN3 gene sequences with corresponding amino acids belonging to the six ...
Lewis, Jamie; Bartlett, Andrew; Atkinson, Paul
Bioinformatics--the so-called shotgun marriage between biology and computer science--is an interdiscipline. Despite interdisciplinarity being seen as a virtue, for having the capacity to solve complex problems and foster innovation, it has the potential to place projects and people in anomalous categories. For example, valorised…
Deepti D. Deobagkar
Bioinformatics software and visualisation tools have been a key factor in the rapid and phenomenal advances in genomics, proteomics, medicine, drug discovery, systems approaches and, in fact, in every area of new development. Indian scientists have also made a mark in a few specific areas. India has the advantage of an early start and an extensive, organised network in bioinformatics education and research, with substantial inputs from the Indian government. India is strong in computation and IT, and has a pool of bright young talent with a demographic dividend, along with experienced and excellent mentors and researchers. Although small in number and scale, the bioinformatics industry also has a presence and is making its mark in India. There are a number of high-throughput and extremely useful resources available that are critical in biological data analysis and interpretation. This has made a paradigm shift in the way research can be carried out and discoveries can be made in any area of biological, biochemical and chemical research. This article summarises the current status and contributions from India in the development of software and web servers for bioinformatics applications.
The rise of social media and web 2.0 technologies over the last few years has impacted many communication functions. One influence is organizational bloggers as knowledge mediators on government agency practices. The ways in which these organizational bloggers, in their roles as experts, are able to change, facilitate, and enable communication about a broad range of specialized knowledge areas, in a more open interactional institutional communication environment than traditional media typically offer, give rise to a set of new implications as regards the mediation of expert knowledge to the target...
This study examined the effects of two different instructional formats on Internet WebPages in an informal learning environment. The purpose of this study is to (a) identify optimal instructional formats for on-line learning; (b) identify the relationship between post-assessment scores and the student's gender, age or racial identity; (c) examine the effects of verbal aptitudes on learning in different formats; (d) identify relationships between computer attitudes and achievement; and (e) identify the potential power of self-regulated learning and self-efficacy on Internet WebPages. Two learning strategy modules were developed: a constructivist and an objectivist instruction module. The study program consisted of an on-line consent form; a computer attitude survey; a Motivated Strategies for Learning Questionnaire; a verbal aptitude test; a pre-assessment; and instructional directions followed by the instructional module and a post-assessment. The study tested 145 post-secondary science and engineering participants from the University of Florida. Participants were randomly assigned to one of two treatment groups or a control in a pretest/posttest design. An analysis of covariance with general linear models was used to account for effects of individual difference variables and aptitude-treatment interaction (ATI). This statistical procedure was used to determine the relationships among the dependent variable, achievement on each of the formats, and the independent variables: attitudes, gender, racial identity, verbal aptitudes, and self-regulated learning/self-efficacy. No significant results at alpha = .05 were found for any of these variables. However, a linear prediction of age shows that older participants scored higher on the post-assessment after completing the objectivist module. Although there were no significant differences between the learning format and the variables, there was a difference between the modules and the control. Therefore, it is possible that
Stringer-Calvert, David W. J.
Background: This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results: We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL), but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities for which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type provide examples of the value of the data warehousing approach to bioinformatics research. Conclusion: BioWarehouse embodies significant progress on the
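The enzyme-activity gap query described in the abstract is a typical warehouse-style cross-database join. The following sqlite3 sketch mimics the idea on a deliberately toy schema; the table layout, the EC number 3.1.1.999, and the data are invented for illustration, and the real BioWarehouse schema is far richer.

```python
import sqlite3

# Hypothetical, radically simplified warehouse: one table of enzyme
# activities (e.g. loaded from ENZYME) and one of sequences (e.g. UniProt).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE enzyme (ec_number TEXT PRIMARY KEY, name TEXT, source_db TEXT);
CREATE TABLE sequence (id INTEGER PRIMARY KEY, ec_number TEXT, source_db TEXT);
INSERT INTO enzyme VALUES ('', 'alcohol dehydrogenase', 'ENZYME');
INSERT INTO enzyme VALUES ('3.1.1.999', 'hypothetical hydrolase', 'ENZYME');
INSERT INTO sequence VALUES (1, '', 'UniProt');
""")
# Enzyme activities for which no sequence exists in any loaded database --
# the same kind of cross-database gap query the abstract describes.
rows = con.execute("""
SELECT e.ec_number, e.name
FROM enzyme e LEFT JOIN sequence s ON s.ec_number = e.ec_number
WHERE s.id IS NULL
""").fetchall()
```

Because all component databases share one schema and one database manager, a single LEFT JOIN answers a question that would otherwise require reconciling several incompatible data sources.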
Background: There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results: Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D editing, 3D visualization, file format conversion, calculation of chemical properties, and much more, all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems, as it can easily be extended with functionality in any desired direction. Conclusion: Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open for both open source plugins as well as commercial ones. Bioclipse is freely available at http://www.bioclipse.net.
Fontaine, Guillaume; Cossette, Sylvie; Maheu-Cadotte, Marc-André; Mailhot, Tanya; Deschênes, Marie-France; Mathieu-Dupuis, Gabrielle
Adaptive e-learning environments (AEEs) can provide tailored instruction by adapting content, navigation, presentation, multimedia, and tools to each user's navigation behavior, individual objectives, knowledge, and preferences. AEEs can have various levels of complexity, ranging from systems using a simple adaptive functionality to systems using artificial intelligence. While AEEs are promising, their effectiveness for the education of health professionals and health professions students remains unclear. The purpose of this systematic review is to assess the effectiveness of AEEs in improving knowledge, competence, and behavior in health professionals and students. We will follow the Cochrane Collaboration and the Effective Practice and Organisation of Care (EPOC) Group guidelines on systematic review methodology. A systematic search of the literature will be conducted in 6 bibliographic databases (CINAHL, EMBASE, ERIC, PsycINFO, PubMed, and Web of Science) using the concepts "adaptive e-learning environments," "health professionals/students," and "effects on knowledge/skills/behavior." We will include randomized and nonrandomized controlled trials, in addition to controlled before-after, interrupted time series, and repeated measures studies published between 2005 and 2017. The title and the abstract of each study followed by a full-text assessment of potentially eligible studies will be independently screened by 2 review authors. Using the EPOC extraction form, 1 review author will conduct data extraction and a second author will validate the data extraction. The methodological quality of included studies will be independently assessed by 2 review authors using the EPOC risk of bias criteria. Included studies will be synthesized by a descriptive analysis. Where appropriate, data will be pooled using meta-analysis by applying the RevMan software version 5.1, considering the heterogeneity of studies. The review is in progress. We plan to submit the results in the
Cavallo, Eugenio; Biddoccu, Marcella; Bagagiolo, Giorgia; De Marziis, Massimo; Gaia Forni, Emanuela; Alemanno, Laura; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Turconi, Laura; Arattano, Massimo; Coviello, Velio
Environmental sensor monitoring is continuously developing, both in terms of quantity (i.e. measurement sites) and quality (i.e. technological innovation). Environmental monitoring is carried out by either public or private entities for their own specific purposes, such as scientific research, civil protection, support to industrial and agricultural activities, services for citizens, security, education, and information. However, the acquired datasets can be of broader appeal, being of interest for purposes that diverge from their main intended use. The CIRCE project (Cooperative Internet-of-Data Rural-alpine Community Environment) aimed to gather, manage, use and distribute data obtained from sensors and from people, in a multipurpose approach. The CIRCE project was selected within a call for tender launched by Piedmont Region (in collaboration with CSI Piemonte) in order to improve the digital ecosystem represented by YUCCA, an open source platform oriented to the acquisition, sharing and reuse of data resulting from both real-time and on-demand applications. The partnership of the CIRCE project comprised scientific research bodies (IMAMOTER-CNR, IRPI-CNR, DIST) together with SMEs involved in the environmental monitoring and ICT sectors (namely 3a srl, EnviCons srl, Impresa Verde Cuneo srl, and NetValue srl). Within the project, a shared network of agro-meteo-hydrological sensors was created, and a platform and its interface for the collection, management and distribution of data was developed. The CIRCE network currently consists of a total of 171 remotely connected sensors originally belonging to different networks. They are set up to monitor and investigate agro-meteo-hydrological processes in different rural and mountain areas of the Piedmont Region (NW Italy), including some very sensitive locations that are difficult to access. Each sensor network differs from the others in terms of purpose of monitoring, monitored
Wright, Victoria Ann; Vaughan, Brendan W; Laurent, Thomas; Lopez, Rodrigo; Brooksbank, Cath; Schneider, Maria Victoria
Today's molecular life scientists are well educated in the emerging experimental tools of their trade, but when it comes to training on the myriad of resources and tools for dealing with biological data, a less ideal situation emerges. Often bioinformatics users receive no formal training on how to make the most of the bioinformatics resources and tools available in the public domain. The European Bioinformatics Institute, which is part of the European Molecular Biology Laboratory (EMBL-EBI), holds the world's most comprehensive collection of molecular data, and training the research community to exploit this information is embedded in the EBI's mission. We have evaluated eLearning, in parallel with face-to-face courses, as a means of training users of our data resources and tools. We anticipate that eLearning will become an increasingly important vehicle for delivering training to our growing user base, so we have undertaken an extensive review of Learning Content Management Systems (LCMSs). Here, we describe the process that we used, which considered the requirements of trainees, trainers and systems administrators, as well as taking into account our organizational values and needs. This review describes the literature survey, user discussions and scripted platform testing that we performed to narrow down our choice of platform from 36 to a single platform. We hope that it will serve as guidance for others who are seeking to incorporate eLearning into their bioinformatics training programmes.
Nehm, Ross H.; Budd, Ann F.
NMITA is a reef coral biodiversity database that we use to introduce students to the expansive realm of bioinformatics beyond genetics. We introduce a series of lessons that have students use this database, thereby accessing real data that can be used to test hypotheses about biodiversity and evolution while targeting the "National Science …
Rumyana Y. Papancheva
The paper presents a review of the contemporary school, the digital generation, and the need for teachers equipped with new knowledge and skills, in particular basic programming skills. The latest change to the educational system in Bulgaria, following the adoption of the new pre-school and general school education act, is analysed. New primary school curricula and new standards for teacher qualification were implemented, and the new school subject "Computer Modelling" is presented. Some of the authors' experience from project-based work in mathematics with teachers and students is described. The aim is the formation of programming skills by working within Scratch, a visual environment for block-based coding. Some conclusions and ideas for future work are formulated.
This paper reflects on the analytic challenges emerging from the study of bioinformatic tools recently created to store and disseminate biological data, such as databases, repositories, and bio-ontologies. I focus my discussion on the Gene Ontology, a term that defines three entities at once: a classification system facilitating the distribution and use of genomic data as evidence towards new insights; an expert community specialised in the curation of those data; and a scientific institution promoting the use of this tool among experimental biologists. These three dimensions of the Gene Ontology can be clearly distinguished analytically, but are tightly intertwined in practice. I suggest that this is true of all bioinformatic tools: they need to be understood simultaneously as epistemic, social, and institutional entities, since they shape the knowledge extracted from data and at the same time regulate the organisation, development, and communication of research. This viewpoint has one important implication for the methodologies used to study these tools; that is, the need to integrate historical, philosophical, and sociological approaches. I illustrate this claim through examples of misunderstandings that may result from a narrowly disciplinary study of the Gene Ontology, as I experienced them in my own research.
Le Heron, Richard
The challenges of managing marine ecosystems for multiple users, while well recognised, have not led to clear strategies, principles or practice. The paper uses novel workshop-based thought-experiments to address these concerns. These took the form of trans-disciplinary Non-Sectarian Scenario Experiments (NSSE), involving participants who agreed to put aside their disciplinary interests and commercial and institutional obligations. The NSSE form of co-production of knowledge is a distinctive addition to the participatory and scenario literatures in marine resource management (MRM). Set in the context of resource use conflicts in New Zealand, the workshops assembled diverse participants in the marine economy to co-develop and co-explore the making of socio-ecological knowledge and identify the capability required for a new generation of multi-use-oriented resource management. The thought-experiments assumed that non-sectarian navigation of scenarios will resource a step change in marine management by facilitating new connections, relationships, and understandings of potential marine futures. Two questions guided workshop interactions: what science needs spring from pursuing imaginable possibilities and directions in a field of scenarios, and what kinds of institutions would aid the generation of science knowledge and its application to policy and management solutions. The effectiveness of the thought-experiments helped identify ways of dealing with core problems in multi-use marine management, such as the urgent need to cope with ecological and socio-economic surprise, and to define and address cumulative impacts. Discussion focuses on how the workshops offered fresh perspectives and insights into a number of challenges. These challenges include building relations of trust and collective organisation, showing the importance of values-means-ends pathways, developing facilitative legislation to enable initiatives, and the utility of the NSSEs in informing new governance and
Inlow, Jennifer K.; Miller, Paige; Pittman, Bethany
We describe two bioinformatics exercises intended for use in a computer laboratory setting in an upper-level undergraduate biochemistry course. To introduce students to bioinformatics, the exercises incorporate several commonly used bioinformatics tools, including BLAST, that are freely available online. The exercises build upon the students'…
Furge, Laura Lowe; Stevens-Truss, Regina; Moore, D. Blaine; Langeland, James A.
Bioinformatics education for undergraduates has been approached primarily in two ways: introduction of new courses with largely bioinformatics focus or introduction of bioinformatics experiences into existing courses. For small colleges such as Kalamazoo, creation of new courses within an already resource-stretched setting has not been an option.…
Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc
EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…
Shachak, Aviv; Ophir, Ron; Rubin, Eitan
The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…
Dacinia Crina Petrescu
The present research is based on the premise that people perceive radiation risks in different ways, depending on their cultural background, information exposure, economic level, and educational status, which are specific to each country. The main objective was to assess and report, for the first time, Romanians’ attitudes (perceptions, knowledge, and behaviors) related to residential radon, in order to contribute to the creation of a healthier living environment. A convenience sample of 229 people from different parts of Romania, including radon-prone areas, was used. Results profiled a population vulnerable to radon threats from the perspective of their awareness and perceptions. Thus, study results showed that most participants did not perceive the risk generated by radon exposure as significant to their health; only 13.1% of interviewed people considered the danger to their health as “high” or “very high”. Additionally, it was found that awareness of radon itself was low: 62.4% of the sample did not know what radon was. From a practical perspective, the study shows that in Romania, increasing awareness through the provision of valid information should be a major objective of strategies that aim to reduce radon exposure. The present study takes a bottom-up perspective by assessing Romanian citizens’ attitudes toward radon. It therefore compensates for a gap in the behavioral studies literature by providing practical support for radon risk mitigation and creating the premises for a healthier living environment.
Accardi, L.; Freudenberg, Wolfgang; Ohya, Masanori
/ H. Kamimura -- Massive collection of full-length complementary DNA clones and microarray analyses: keys to rice transcriptome analysis / S. Kikuchi -- Changes of influenza A(H5) viruses by means of entropic chaos degree / K. Sato and M. Ohya -- Basics of genome sequence analysis in bioinformatics - its fundamental ideas and problems / T. Suzuki and S. Miyazaki -- A basic introduction to gene expression studies using microarray expression data analysis / D. Wanke and J. Kilian -- Integrating biological perspectives: a quantum leap for microarray expression analysis / D. Wanke ... [et al.].
Adriansen, Hanne Kirstine; Valentin, Karen; Nielsen, Gritt B.
Internationalisation of higher education is premised on a seeming paradox: On the one hand, academic knowledge strives to be universal in the sense that it claims to produce generalizable, valid and reliable knowledge that can be used, critiqued, and redeveloped by academics from all over the world…; on the other hand, the rationale for strengthening mobility through internationalisation is based on an imagination of the potentials of particular locations (academic institutions). Intrigued by this tension between universality and particularity in academic knowledge production, this paper presents… preliminary findings from a project that studies internationalisation of higher education as an agent in the interrelated processes of place-making and knowledge-making. The project is based on three case-studies. In this paper, focus is on PhD students’ change of research environment. This is used as a case…
A new bioinformatic methodology was developed, founded on the Unsupervised Pattern Cognition Analysis of GRID-based BioGPS descriptors (Global Positioning System in Biological Space). The procedure relies entirely on three-dimensional structure analysis of enzymes and does not stem from sequence or structure alignment. The BioGPS descriptors account for chemical, geometrical and physical-chemical features of enzymes and are able to describe comprehensively the active site of enzymes in terms of a "pre-organized environment" able to stabilize the transition state of a given reaction. The efficiency of this new bioinformatic strategy was demonstrated by the consistent clustering of four different Ser hydrolase classes, which are characterized by the same active-site organization but able to catalyze different reactions. The method was validated by considering, as a case study, the engineering of amidase activity into the scaffold of a lipase. The BioGPS tool correctly predicted the properties of lipase variants, as demonstrated by the projection of mutants inside the BioGPS "roadmap".
Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C.E.; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR—the European infrastructure for biological information—that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599
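A registry like bio.tools serves structured descriptions of resources that client code can consume. As a minimal sketch, the snippet below parses one such JSON record with the standard library; the record fields shown here (`name`, `homepage`, `topic`, `operation`) are an invented, simplified shape for illustration and may differ from the real bio.tools schema.

```python
# Sketch: consuming one registry entry of the kind bio.tools serves as JSON.
# The record below is invented for illustration; the real API and schema
# live at https://bio.tools and may differ.
import json

record = json.loads("""
{
  "name": "ExampleAligner",
  "homepage": "https://example.org/aligner",
  "topic": ["Sequence analysis"],
  "operation": ["Sequence alignment"]
}
""")

def summarise(rec):
    """One-line summary of a registry record for display in a tool list."""
    return f"{rec['name']} ({', '.join(rec['operation'])}) - {rec['homepage']}"

print(summarise(record))
```

A real client would fetch such records over HTTP from the registry API rather than embed them inline.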
This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...
López, Vivian F.; Aguilar, Ramiro; Alonso, Luis; Moreno, María N.; Corchado, Juan M.
In this paper we describe both theoretical and practical results of a novel data mining process that combines hybrid techniques of association analysis with classical sequence-analysis algorithms from genomics to generate grammatical structures of a specific language. We used an application of a compiler generator system that allows the development of a practical application within the area of grammarware, where the concepts of language analysis are applied to other disciplines, such as bioinformatics. The tool allows the complexity of the obtained grammar to be measured automatically from textual data. A technique of incremental discovery of sequential patterns is presented to obtain simplified production rules, compacted with bioinformatics criteria to make up a grammar.
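The core idea of the abstract, discovering frequent sequential patterns and compacting them into production rules, can be sketched in a few lines. The snippet below repeatedly replaces the most frequent symbol pair with a fresh nonterminal, in the spirit of Sequitur-style grammar induction; the rule naming and the stopping criterion are our own simplifications, not the authors' algorithm.

```python
# Rough sketch: discover frequent pairs in a symbol stream and compact them
# into grammar-like production rules. Simplified illustration only; the
# paper's actual incremental-pattern technique is more elaborate.
from collections import Counter

def induce_rules(seq, max_rules=2):
    rules = {}
    symbols = list(seq)
    for r in range(max_rules):
        pairs = Counter(zip(symbols, symbols[1:]))
        pair, count = pairs.most_common(1)[0]
        if count < 2:                     # no repeated pair left to compact
            break
        nt = f"R{r}"                      # fresh nonterminal name (our convention)
        rules[nt] = pair
        out, i = [], 0
        while i < len(symbols):           # rewrite the stream with the new rule
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols, rules

compacted, rules = induce_rules("ACGACGT")
print(compacted, rules)  # ['R1', 'R1', 'T'] with R0 -> A C, R1 -> R0 G
```

Expanding the rules back out reproduces the original sequence, which is what makes the compacted form a grammar for it.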
Li, Xiao; Zhang, Yizheng
It is widely recognized that exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
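The exchange pattern the abstract describes, serializing a biological record to XML so another system can parse it back, can be sketched with the standard library. The `<sequence_record>` schema below is invented for illustration; it is not a real bioinformatics interchange standard.

```python
# Sketch: round-tripping a sequence record through XML, the kind of
# structured exchange the paper discusses. The element names here are
# invented, not a standard schema.
import xml.etree.ElementTree as ET

def record_to_xml(rec_id, organism, seq):
    """Serialize a sequence record to an XML string."""
    root = ET.Element("sequence_record", id=rec_id)
    ET.SubElement(root, "organism").text = organism
    ET.SubElement(root, "sequence", length=str(len(seq))).text = seq
    return ET.tostring(root, encoding="unicode")

def xml_to_record(xml_text):
    """Parse the XML back into a plain dict."""
    root = ET.fromstring(xml_text)
    return {
        "id": root.get("id"),
        "organism": root.findtext("organism"),
        "sequence": root.findtext("sequence"),
    }

xml_doc = record_to_xml("X123", "Oryza sativa", "ATGGCGTAA")
rec = xml_to_record(xml_doc)
print(rec["organism"], len(rec["sequence"]))
```

In a Web Services setting, the same XML document would travel inside a SOAP envelope or be returned by an HTTP endpoint rather than passed between two local functions.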
Varma, B Sharat Chandra; Balakrishnan, M
This book presents an evaluation methodology to design future FPGA fabrics incorporating hard embedded blocks (HEBs) to accelerate applications. This methodology will be useful for selection of blocks to be embedded into the fabric and for evaluating the performance gain that can be achieved by such an embedding. The authors illustrate the use of their methodology by studying the impact of HEBs on two important bioinformatics applications: protein docking and genome assembly. The book also explains how the respective HEBs are designed and how hardware implementation of the application is done using these HEBs. It shows that significant speedups can be achieved over pure software implementations by using such FPGA-based accelerators. The methodology presented in this book may also be used for designing HEBs for accelerating software implementations in other domains besides bioinformatics. This book will prove useful to students, researchers, and practicing engineers alike.
Overby, Casey Lynnette; Tarczy-Hornoch, Peter
Personalized medicine can be defined broadly as a model of healthcare that is predictive, personalized, preventive and participatory. Two US President's Council of Advisors on Science and Technology reports illustrate challenges in personalized medicine (in a 2008 report) and in use of health information technology (in a 2010 report). Translational bioinformatics is a field that can help address these challenges and is defined by the American Medical Informatics Association as "the development of storage, analytic and interpretive methods to optimize the transformation of increasing voluminous biomedical data into proactive, predictive, preventative and participatory health." This article discusses barriers to implementing genomics applications and current progress toward overcoming barriers, describes lessons learned from early experiences of institutions engaged in personalized medicine and provides example areas for translational bioinformatics research inquiry.
At the end of January I travelled to the States to speak at and attend the first O'Reilly Bioinformatics Technology Conference. It was a large, well-organized and diverse meeting with an interesting history. Although the meeting was not a typical academic conference, its style will, I am sure, become more typical of meetings in both biological and computational sciences.Speakers at the event included prominent bioinformatics researchers such as Ewan Birney, Terry Gaasterland and Lincoln Stein; authors and leaders in the open source programming community like Damian Conway and Nat Torkington; and representatives from several publishing companies including the Nature Publishing Group, Current Science Group and the President of O'Reilly himself, Tim O'Reilly. There were presentations, tutorials, debates, quizzes and even a 'jam session' for musical bioinformaticists.
Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product was designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at the EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and they would have greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.
Budd, Aidan; Corpas, Manuel; Brazas, Michelle D.; Fuller, Jonathan C.; Goecks, Jeremy; Mulder, Nicola J.; Michaut, Magali; Ouellette, B. F. Francis; Pawlik, Aleksandra; Blomberg, Niklas
“Scientific community” refers to a group of people collaborating together on scientific-research-related activities who also share common goals, interests, and values. Such communities play a key role in many bioinformatics activities. Communities may be linked to a specific location or institute, or involve people working at many different institutions and locations. Education and training is typically an important component of these communities, providing a valuable context in which to develop skills and expertise, while also strengthening links and relationships within the community. Scientific communities facilitate: (i) the exchange and development of ideas and expertise; (ii) career development; (iii) coordinated funding activities; (iv) interactions and engagement with professionals from other fields; and (v) other activities beneficial to individual participants, communities, and the scientific field as a whole. It is thus beneficial at many different levels to understand the general features of successful, high-impact bioinformatics communities; how individual participants can contribute to the success of these communities; and the role of education and training within these communities. We present here a quick guide to building and maintaining a successful, high-impact bioinformatics community, along with an overview of the general benefits of participating in such communities. This article grew out of contributions made by organizers, presenters, panelists, and other participants of the ISMB/ECCB 2013 workshop “The ‘How To Guide’ for Establishing a Successful Bioinformatics Network” at the 21st Annual International Conference on Intelligent Systems for Molecular Biology (ISMB) and the 12th European Conference on Computational Biology (ECCB). PMID:25654371
Díaz-Del-Pino, Sergio; Falgueras, Juan; Perez-Wohlfeil, Esteban; Trelles, Oswaldo
Nearly 10 years have passed since the first mobile apps appeared. Given that bioinformatics is a web-based world and that mobile devices are endowed with web browsers, it seemed natural that bioinformatics would transition from personal computers to mobile devices, but nothing could be further from the truth. The transition demands new paradigms, designs and novel implementations. Through an in-depth analysis of the requirements of existing bioinformatics applications, we designed and deployed an easy-to-use, web-based, lightweight mobile client. This client is able to browse, select, automatically compose interface parameters, invoke services and monitor the execution of Web Services using the services' metadata stored in catalogs or repositories. mORCA is available at http://bitlab-es.com/morca/app as a web-app. It is also available in the App Store (Apple) and the Play Store (Google). The software will be available for at least 2 years. firstname.lastname@example.org. Source code, the final web-app, training material and documentation are available at http://bitlab-es.com/morca. © The Author(s) 2017. Published by Oxford University Press.
Bencharit, Sompop; Border, Michael B; Edelmann, Alex; Byrd, Warren C
The 3rd International Conference on Proteomics & Bioinformatics (Proteomics 2013) Philadelphia, PA, USA, 15-17 July 2013 The Third International Conference on Proteomics & Bioinformatics (Proteomics 2013) was sponsored by the OMICS group and was organized in order to strengthen the future of proteomics science by bringing together professionals, researchers and scholars from leading universities across the globe. The main topics of this conference included the integration of novel platforms in data analysis, the use of a systems biology approach, different novel mass spectrometry platforms and biomarker discovery methods. The conference was divided into proteomic methods and research interests. Among these two categories, interactions between methods in proteomics and bioinformatics, as well as other research methodologies, were discussed. Exceptional topics from the keynote forum, oral presentations and the poster session have been highlighted. The topics range from new techniques for analyzing proteomics data, to new models designed to help better understand genetic variations to the differences in the salivary proteomes of HIV-infected patients.
Bioinformatics web-based services are proliferating rapidly, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
Fourment, Mathieu; Gillings, Michael R
The performance of different programming languages has previously been benchmarked using abstract mathematical algorithms, but not using standard bioinformatics algorithms. We compared the memory usage and speed of execution for three standard bioinformatics methods, implemented in programs using one of six different programming languages. Programs for the Sellers algorithm, the Neighbor-Joining tree construction algorithm and an algorithm for parsing BLAST file outputs were implemented in C, C++, C#, Java, Perl and Python. Implementations in C and C++ were fastest and used the least memory. Programs in these languages generally contained more lines of code. Java and C# appeared to be a compromise between the flexibility of Perl and Python and the fast performance of C and C++. The relative performance of the tested languages did not change from Windows to Linux and no clear evidence of a faster operating system was found. Source code and additional information are available from http://www.bioinformatics.org/benchmark/. This benchmark provides a comparison of six commonly used programming languages under two different operating systems. The overall comparison shows that a developer should choose an appropriate language carefully, taking into account the performance expected and the library availability for each language.
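One of the three benchmarked methods, the Sellers algorithm, performs approximate string matching by dynamic programming in which a match may begin anywhere in the text (the first DP row is initialised to zero). A minimal sketch, written here in Python for readability rather than the benchmark's speed:

```python
# Minimal sketch of Sellers-style approximate matching: edit distance of a
# pattern against the best-matching substring of a text. Row 0 is all zeros
# so a match may start at any text position; the answer is the minimum of
# the final row, so a match may end at any position too.
def sellers(pattern, text):
    """Best edit distance of `pattern` against any substring of `text`."""
    m = len(pattern)
    prev = [0] * (len(text) + 1)          # row 0: free start positions
    for i in range(1, m + 1):
        curr = [i] + [0] * len(text)      # column 0: i deletions of pattern
        for j in range(1, len(text) + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            curr[j] = min(prev[j - 1] + cost,  # substitute / match
                          prev[j] + 1,         # delete from pattern
                          curr[j - 1] + 1)     # insert into pattern
        prev = curr
    return min(prev)                      # best match may end anywhere

print(sellers("ACGT", "TTACGTTT"))  # exact occurrence -> 0
print(sellers("ACGT", "TTAGGTTT"))  # one substitution -> 1
```

The O(m·n) inner loop is exactly the kind of tight, allocation-light code for which the study found C and C++ fastest.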
The mountains of data thrusting from the new landscape of modern high-throughput biology are irrevocably changing biomedical research and creating a near-insatiable demand for training in data management and manipulation and data mining and analysis. Among life scientists, from clinicians to environmental researchers, a common theme is the need not just to use, and gain familiarity with, bioinformatics tools and resources but also to understand their underlying fundamental theoretical and practical concepts. Providing bioinformatics training to empower life scientists to handle and analyse their data efficiently, and progress their research, is a challenge across the globe. Delivering good training goes beyond traditional lectures and resource-centric demos, using interactivity, problem-solving exercises and cooperative learning to substantially enhance training quality and learning outcomes. In this context, this article discusses various pragmatic criteria for identifying training needs and learning objectives, for selecting suitable trainees and trainers, for developing and maintaining training skills and evaluating training quality. Adherence to these criteria may help not only to guide course organizers and trainers on the path towards bioinformatics training excellence but, importantly, also to improve the training experience for life scientists.
Repin, Rul Aisyah Mat; Mutalib, Sahilah Abdul; Shahimi, Safiyyah; Khalid, Rozida Mohd.; Ayob, Mohd. Khan; Bakar, Mohd. Faizal Abu; Isa, Mohd Noor Mat
In this study, we performed bioinformatics analysis of the genome sequence of Lysinibacillus sphaericus (L. sphaericus) to identify the gene encoding gelatinase. L. sphaericus was isolated from soil, and its gelatinase is species-specific toward porcine and bovine gelatin; the bacterium therefore offers the possibility of producing enzymes specific to each species of meat. The main focus of this research was to identify the gelatinase-encoding gene within L. sphaericus using bioinformatics analysis of a partially sequenced genome. Three candidate genes were identified: gelatinase candidate gene 1 (P1), NODE_71_length_93919_cov_158.931839_21, 1563 base pairs (bp) in size with a 520-amino-acid sequence; gelatinase candidate gene 2 (P2), NODE_23_length_52851_cov_190.061386_17, 1776 bp in size with a 591-amino-acid sequence; and gelatinase candidate gene 3 (P3), NODE_106_length_32943_cov_169.147919_8, 1701 bp in size with a 566-amino-acid sequence. Three pairs of oligonucleotide primers, named F1, R1, F2, R2, F3 and R3, were designed to target short cDNA sequences by PCR. The amplicons reliably yielded products of 1563 bp for candidate gene P1 and 1701 bp for candidate gene P3. The bioinformatics analysis of L. sphaericus thus identified genes encoding gelatinase.
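The primer bookkeeping described above, deriving a reverse primer as the reverse complement of a gene's 3' end and checking the expected amplicon size, can be sketched as follows. The toy sequence and the 6-bp primer length are invented for illustration; the real study worked from the L. sphaericus contigs with properly designed primers.

```python
# Hypothetical sketch of PCR primer bookkeeping: forward primer from the
# 5' end of a candidate gene, reverse primer as the reverse complement of
# its 3' end. Toy sequence and primer length invented for illustration.
COMP = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    """Complement each base, then reverse the strand."""
    return seq.translate(COMP)[::-1]

def design_primers(gene, primer_len=6):
    """Return (forward, reverse) primers flanking the whole gene."""
    fwd = gene[:primer_len]
    rev = reverse_complement(gene[-primer_len:])
    return fwd, rev

gene = "ATGGCTAAACCCGGGTTTACGTGA"   # toy 24-bp 'candidate gene'
fwd, rev = design_primers(gene)
print(fwd, rev, len(gene))          # amplicon spanning the whole gene: 24 bp
```

With primers flanking the full open reading frame, the amplicon length equals the gene length, which is how the 1563-bp and 1701-bp products above confirm candidates P1 and P3.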
Atwood, Teresa K.; Bongcam-Rudloff, Erik; Brazas, Michelle E.; Corpas, Manuel; Gaudet, Pascale; Lewitter, Fran; Mulder, Nicola; Palagi, Patricia M.; Schneider, Maria Victoria; van Gelder, Celia W. G.
In recent years, high-throughput technologies have brought big data to the life sciences. The march of progress has been rapid, leaving in its wake a demand for courses in data analysis, data stewardship, computing fundamentals, etc., a need that universities have not yet been able to satisfy—paradoxically, many are actually closing “niche” bioinformatics courses at a time of critical need. The impact of this is being felt across continents, as many students and early-stage researchers are being left without appropriate skills to manage, analyse, and interpret their data with confidence. This situation has galvanised a group of scientists to address the problems on an international scale. For the first time, bioinformatics educators and trainers across the globe have come together to address common needs, rising above institutional and international boundaries to cooperate in sharing bioinformatics training expertise, experience, and resources, aiming to put ad hoc training practices on a more professional footing for the benefit of all. PMID:25856076
Rocha, Miguel; Fdez-Riverola, Florentino; Santana, Juan
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...
Rocha, Miguel; Fdez-Riverola, Florentino; Mayo, Francisco; Paz, Juan
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever evolving distinct types of omics data technologies, have posed an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information requiring tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists that have a strong background in the biological and computational sciences. In this context, the interaction of researche...
Mohamad, Mohd; Rocha, Miguel; Paz, Juan; Pinto, Tiago
Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next-generation sequencing technologies, together with novel and constantly evolving, distinct types of omics data technologies, have created an increasingly complex set of challenges for the growing fields of Bioinformatics and Computational Biology. The analysis of the datasets produced and their integration call for new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Clearly, Biology is more and more a science of information and requires tools from the computational sciences. In the last few years, we have seen the rise of a new generation of interdisciplinary scientists with a strong background in the biological and computational sciences. In this context, the interaction of r...
Pavlopoulou, Athanasia; Michalopoulos, Ioannis
Knowledge of the native structure of a protein could provide an understanding of the molecular basis of its function. However, in the postgenomics era, there is a growing gap between proteins with experimentally determined structures and proteins without known structures. To deal with the overwhelming data, a collection of automated methods, available as bioinformatics tools, that determine the structure of a protein from its amino acid sequence has emerged. The aim of this paper is to provide experimental biologists with a set of cutting-edge, carefully evaluated, user-friendly computational tools for protein structure prediction that would be helpful for the interpretation of their results and the rational design of new experiments.
Microarray technology is being used widely in various biomedical research areas; the corresponding microarray data analysis is an essential step toward making the best use of array technologies. Here we review two components of microarray data analysis: low-level analysis, which emphasizes the design, quality control, and preprocessing of microarray experiments, and high-level analysis, which focuses on domain-specific microarray applications such as tumor classification, biomarker prediction, analysis of array CGH experiments, and reverse engineering of gene expression networks. Additionally, we review recent developments in building predictive models for genome expression and regulation studies. This review may help biologists grasp a basic knowledge of microarray bioinformatics as well as its potential impact on the future evolution of biomedical research fields.
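A central step in the low-level analysis mentioned above is preprocessing: log2-transforming raw intensities and normalising arrays so their value distributions are comparable. The sketch below shows a simple quantile normalisation of two toy arrays; the intensity values are invented examples, and real pipelines handle ties and missing values more carefully.

```python
# Toy sketch of low-level microarray preprocessing: log2 transform, then
# quantile normalisation so both arrays share the same value distribution.
# Intensities are invented examples.
import math

def log2_transform(values):
    return [math.log2(v) for v in values]

def quantile_normalize(arrays):
    """Replace each value by the mean of values at the same rank across arrays."""
    n = len(arrays[0])
    sorted_cols = [sorted(a) for a in arrays]
    rank_means = [sum(col[i] for col in sorted_cols) / len(arrays)
                  for i in range(n)]
    out = []
    for a in arrays:
        order = sorted(range(n), key=lambda i: a[i])  # ranks of this array
        norm = [0.0] * n
        for rank, idx in enumerate(order):
            norm[idx] = rank_means[rank]
        out.append(norm)
    return out

a = log2_transform([4.0, 16.0, 256.0])
b = log2_transform([8.0, 32.0, 128.0])
na, nb = quantile_normalize([a, b])
print(na, nb)  # both arrays now share the same sorted values
```

After this step, downstream high-level analyses (classification, biomarker ranking) compare like with like across arrays.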
Hofmann-Apitius, Martin; Ball, Gordon; Gebel, Stephan; Bagewadi, Shweta; de Bono, Bernard; Schneider, Reinhard; Page, Matt; Kodamullil, Alpha Tom; Younesi, Erfan; Ebeling, Christian; Tegnér, Jesper; Canard, Luc
Since the decoding of the Human Genome, techniques from bioinformatics, statistics, and machine learning have been instrumental in uncovering patterns in the increasing amounts and types of data produced by profiling technologies applied to clinical samples, animal models, and cellular systems. Yet progress on unravelling the biological mechanisms that causally drive diseases has been limited, in part due to the inherent complexity of biological systems. Whereas we have witnessed progress in the areas of cancer and cardiovascular and metabolic diseases, the area of neurodegenerative diseases has proved very challenging. This is in part because the aetiology of neurodegenerative diseases such as Alzheimer's disease or Parkinson's disease is unknown, rendering it very difficult to discern early causal events. Here we describe a panel of bioinformatics and modeling approaches that have recently been developed to identify candidate mechanisms of neurodegenerative diseases based on publicly available data and knowledge. We identify two complementary strategies: data mining techniques that use genetic data as a starting point to be further enriched with other data types, and, alternatively, encoding prior knowledge about disease mechanisms in a model-based framework supporting reasoning and enrichment analysis. Our review illustrates the challenges entailed in integrating heterogeneous, multiscale and multimodal information in the area of neurology in general and neurodegeneration in particular. We conclude that progress would be accelerated by increasing efforts on systematic collection of multiple data types over time from each individual suffering from neurodegenerative disease. The work presented here has been driven by project AETIONOMY, a project funded in the course of the Innovative Medicines Initiative (IMI), which is a public-private partnership of the European Federation of Pharmaceutical Industry Associations (EFPIA) and the European
The continuous growth of online learning and its movement towards cross-border and cross-culture education has recently taken a new turn with the epic hype that currently surrounds the development of massive open online courses (MOOCs) (Beattie-Moss, 2013). This development brings into focus the experiences of international students who take online courses designed and offered within the paradigm of Western pedagogy. Employing a sociocultural theoretical framework (Vygotsky, 1978; Scollon & Scollon, 2001), this paper examines the mediating roles that peers may play in the context of multicultural online learning environments. This two-stage, mixed-methods study explored the experiences of 12 international graduate students who took fully online courses at a large research university in the northeastern region of the United States. The data included a survey, online interviews, and a case study that took a close look at the experiences of a female student from China. Findings of the study demonstrated that international students, who come from diverse native academic backgrounds and cultures, may need close relationships with the peers they meet in their US courses. Peers become invaluable mediators of knowledge for international students, who seek peer assistance to compensate for the lack of culture-specific knowledge and skills and to satisfy their interest in the host culture. The study suggests that course developers and facilitators should be proactive when assigning group projects and activities so as to enable close peer-to-peer interaction and opportunities for building personal relationships with other class members.
The background to this review article is governmental interest in finding out why a majority of the employees in Sweden who are on sick leave are women. In order to find answers, three issues are discussed from a meso-level: (i) recent changes in the Swedish health care sector's working organization and their effects on gender, (ii) what research says about work health and gender in the health care sector, and (iii) the meaning of gender at work. The aim is first to discuss these three issues to give a picture of what gender research says concerning work organization and work health, and second to examine the theories behind the issue. In this article the female-dominated health care sector is in focus. This sector strives for efficiency in relation to invisible job tasks and emotional work performed by women. In contemporary work organizations, gender segregation has a tendency to take on new and subtler forms. One reason for this is today's de-hierarchized and flexible organizations. A burning question connected to this is whether new constructions of masculinities and femininities really are ways of relating to the prevailing norm in a profession, or are ways of deconstructing the gender order. To gain a deeper understanding of working life we need multidisciplinary research projects where gender-critical knowledge is interwoven into research not only on organizations, but also into research concerning the physical work environment, in order to be able to develop good and sustainable work environments, in this case in the health care sector.
This study sought to compare a data-rich learning (DRL) environment that utilized online data as a tool for teaching about renewable energy technologies (RET) to a lecture-based learning environment to determine the impact of the learning environment on students' knowledge of Science, Technology, Engineering, and Math (STEM) concepts related to renewable energy technologies and students' problem solving skills. Two purposefully selected Advanced Placement (AP) Environmental Science teachers were included in the study. Each teacher taught one class about RET in a lecture-based environment (control) and another class in a DRL environment (treatment), for a total of four classes of students (n=128). This study utilized a quasi-experimental, pretest/posttest, control-group design. The initial hypothesis that the treatment group would have a significant gain in knowledge of STEM concepts related to RET and be better able to solve problems when compared to the control group was not supported by the data. Although students in the DRL environment had a significant gain in knowledge after instruction, posttest score comparisons of the control and treatment groups revealed no significant differences between the groups. Further, no significant differences were noted in students' problem solving abilities as measured by scores on a problem-based activity and self-reported abilities on a reflective questionnaire. This suggests that the DRL environment is at least as effective as the lecture-based learning environment in teaching AP Environmental Science students about RET and fostering the development of problem solving skills. As this was a small scale study, further research is needed to provide information about effectiveness of DRL environments in promoting students' knowledge of STEM concepts and problem-solving skills.
Geary, Janis; Jardine, Cynthia G; Guebert, Jenilee; Bubela, Tania
Research in northern Canada focused on Aboriginal peoples has historically benefited academia with little consideration for the people being researched or their traditional knowledge (TK). Although this attitude is changing, the complexity of TK makes it difficult to develop mechanisms to preserve and protect it. Protecting TK becomes even more important when outside groups become interested in using TK or materials with associated TK. In the latter category are genetic resources, which may have commercial value and are the focus of this article. This review article addresses access to and use of genetic resources and associated TK in the context of the historical power imbalances in research relationships in the Canadian north. Research involving genetic resources and TK is becoming increasingly relevant in northern Canada. The legal framework related to genetic resources, and the cultural shift of universities towards commercial goals in research, influence the environment for negotiating research agreements. Current guidelines for research agreements do not offer appropriate guidance for achieving mutual benefit, do not reflect unequal bargaining power, and do not take the relationship between parties into account. Relational contract theory may be a useful framework to address the social, cultural and legal hurdles inherent in creating research agreements.
Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R
Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back end, where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based web app, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script-tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".
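An image module of the kind the abstract above describes (segmentation, feature extraction, filtering) ultimately reduces to per-pixel arithmetic. As a hedged sketch, and not code from the ImageJS codebase, here is a simple box (mean) filter over a grayscale image stored as a list of lists, written in Python for illustration:

```python
def mean_filter(img, k=1):
    """Box/mean filter with radius k (k=1 gives a 3x3 window).
    Edge pixels are handled by clamping coordinates to the image border."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = []
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp row
                    xx = min(max(x + dx, 0), w - 1)  # clamp column
                    vals.append(img[yy][xx])
            out[y][x] = sum(vals) / len(vals)
    return out
```

A uniform image passes through unchanged, which is a quick sanity check for any smoothing filter.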
Macdonald, John M; Boutros, Paul C
To reproduce and report a bioinformatics analysis, it is important to be able to determine the environment in which a program was run. This can also be valuable when trying to debug why different executions give unexpectedly different results. Log::ProgramInfo is a Perl module that writes a log file at the termination of the enclosing program, documenting useful execution characteristics. This log file can be used to re-create the environment in order to reproduce an earlier execution. It can also be used to compare the environments of two executions to determine whether there were any differences that might affect (or explain) their operation. The source is available on CPAN (Macdonald and Boutros, Log-ProgramInfo. http://search.cpan.org/~boutroslb/Log-ProgramInfo/). Using Log::ProgramInfo in programs that create data for publishable research, and including the Log::ProgramInfo output log as part of the publication of that research, is a valuable way to help others duplicate the programming environment as a precursor to validating and/or extending that research.
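The idea of writing an environment log at program termination can be mimicked in other languages. The sketch below is a hypothetical Python analogue, not part of the Log::ProgramInfo distribution (the function names are mine): it snapshots the interpreter version, platform, command line, and loaded modules, and registers an exit hook that dumps the snapshot as JSON.

```python
import atexit
import json
import os
import platform
import sys
import time

_START = time.time()  # program start, for elapsed-time reporting

def collect_program_info():
    """Snapshot of the execution environment, in the spirit of Log::ProgramInfo."""
    return {
        "argv": sys.argv,
        "cwd": os.getcwd(),
        "python": sys.version,
        "platform": platform.platform(),
        # top-level names of every module loaded so far
        "modules": sorted({m.split(".")[0] for m in sys.modules}),
        "elapsed_sec": round(time.time() - _START, 3),
    }

def enable_run_log(path="program_info.log"):
    """Write the snapshot as JSON when the program terminates."""
    def _write():
        with open(path, "w") as fh:
            json.dump(collect_program_info(), fh, indent=2)
    atexit.register(_write)
```

Calling `enable_run_log()` once near the top of a script is enough; the log is written even on normal early exits, though (like the Perl module's log) it should be treated as descriptive metadata rather than a complete capture of installed package versions.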
Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue
Because video data are complex and comprise many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images, using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software, which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies accurately throughout the video sequence, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the accuracy of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, the results were virtually identical, indicating that the CL-Quant recipes were accurate. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion. PMID:20495527
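The recipe pipeline described above (segment the colony, then count colony pixels per frame) can be illustrated in miniature. This is an explanatory sketch, not the actual CL-Quant recipes: threshold-based segmentation and per-frame pixel counting over a toy sequence of grayscale frames.

```python
def segment(frame, threshold):
    """Binary mask: a pixel belongs to the colony if its intensity exceeds
    the threshold (a stand-in for the first CL-Quant recipe)."""
    return [[1 if v > threshold else 0 for v in row] for row in frame]

def colony_area(frame, threshold):
    """Number of colony pixels in one frame (the pixel-counting recipe)."""
    return sum(sum(row) for row in segment(frame, threshold))

def growth_curve(frames, threshold):
    """Colony area per frame; the slope of this curve is the growth rate."""
    return [colony_area(f, threshold) for f in frames]
```

On real video data the segmentation step would of course be far more elaborate (the second recipe exists precisely because simple thresholding drifts over a 48-hour sequence), but the quantitative output, pixels per frame over time, has the same shape.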
Lopez, Rodrigo; Silventoinen, Ville; Robinson, Stephen; Kibria, Asif; Gish, Warren
Since 1995, the WU-BLAST programs (http://blast.wustl.edu) have provided a fast, flexible and reliable method for similarity searching of biological sequence databases. The software is in use at many locales and web sites. The European Bioinformatics Institute's WU-Blast2 (http://www.ebi.ac.uk/blast2/) server has been providing free access to these search services since 1997 and today supports many features that both enhance the usability and expand on the scope of the software. PMID:12824421
Wiwanitkit, Somsri; Wiwanitkit, Viroj
The role of microRNA in the pathogenesis of pulmonary tuberculosis is a topic of current interest in chest medicine. Recently, it was proposed that microRNA could be a useful biomarker for monitoring pulmonary tuberculosis and might play an important part in the pathogenesis of the disease. Here, the authors perform a bioinformatics study to assess the microRNA within known tuberculosis RNA. The microRNA part can be detected, and this can be key information for further study of the p...
The signature feature of Cellular Automata is the realization that "simple rules can give rise to complex behavior", in particular how fixed "rock-bottom" simple rules can give rise to multiple levels of organization. Here we describe Multilevel Cellular Automata, in which the microscopic entities (states) and their transition rules are themselves adjusted by the mesoscale patterns that they generate. Thus we study the feedback of higher levels of organization on the lower levels. Such an approach is preeminently important for studying bioinformatic systems. We focus here on an evolutionary approach to formalizing such Multilevel Cellular Automata, and review examples of studies that use them.
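One minimal way to realize the mesoscale feedback described above is to let a macroscopic observable of the current configuration select which microscopic rule is applied next. The sketch below is my own toy formalization, not the authors' model: a 1-D binary cellular automaton (Wolfram rule numbering) that switches between two elementary rules depending on the global density of live cells.

```python
def step(cells, rule):
    """One synchronous update of a 1-D binary CA with periodic boundaries.
    `rule` is a Wolfram rule number: bit (4*left + 2*center + right) of the
    rule gives the new state of the center cell."""
    n = len(cells)
    out = []
    for i in range(n):
        neigh = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> neigh) & 1)
    return out

def multilevel_run(cells, steps, low_rule=90, high_rule=110):
    """Mesoscale feedback: the microscopic rule in force at each step is
    chosen by a macroscopic observable (here, global density) of the
    pattern the micro-level itself generated."""
    history = [cells]
    for _ in range(steps):
        density = sum(cells) / len(cells)
        rule = high_rule if density < 0.5 else low_rule
        cells = step(cells, rule)
        history.append(cells)
    return history
```

The specific choice of observable (density) and rule pair (90/110) is arbitrary; the point is the two-way coupling: micro-rules generate patterns, and a statistic of those patterns adjusts the micro-rules.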
Pharmacogenetics refers to the study of individual pharmacological response based on genotype. Its objective is to optimize treatment on an individual basis, thereby creating a more efficient and safe personalized therapy. In this second part of the review, the molecular methods of study in pharmacogenetics, including microarray technology or DNA chips, are discussed. Among them we highlight the microarrays used to determine gene expression, which detect specific RNA sequences, and the microarrays employed to determine genotype, which detect specific DNA sequences, including polymorphisms, particularly single nucleotide polymorphisms (SNPs). The relationship between pharmacogenetics, bioinformatics and ethical concerns is also reviewed.
Rezig, Slim; Sakhri, Saber
Salmonellae are the main agents responsible for frequent food-borne gastrointestinal diseases. Their detection using classical methods is laborious, and the results take a long time to obtain. In this context, we set out to establish a technique for revealing the invA virulence gene, which is found in the majority of Salmonella species. After PCR amplification using specific primers designed and verified with bioinformatics programs, two primer pairs were established; they proved to be very specific and sensitive for the detection of the invA gene. (Author)
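Two of the in-silico checks typically performed when designing such primers are a rough melting-temperature estimate and a template-binding test. The sketch below is illustrative only, since the abstract gives neither the primer sequences nor the tools used: the Wallace rule for Tm and an exact-match search against the template strand and its reverse complement.

```python
def wallace_tm(primer):
    """Wallace-rule melting temperature, 2*(A+T) + 4*(G+C) in degrees C,
    a standard rough estimate for short (<14 nt) primers."""
    p = primer.upper()
    return 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))

def primer_binds(primer, template):
    """Naive specificity check: does the primer match the template exactly,
    on either the given strand or its reverse complement?"""
    comp = str.maketrans("ACGT", "TGCA")
    rc = template.upper().translate(comp)[::-1]  # reverse complement
    p = primer.upper()
    return p in template.upper() or p in rc
```

Real primer design tools (e.g. Primer-BLAST style searches) additionally screen for mispriming against whole genomes and for primer-dimer formation; the functions above only capture the first, most basic checks.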
Schönbach, Christian; Tongsima, Sissades; Chan, Jonathan; Brusic, Vladimir; Tan, Tin Wee; Ranganathan, Shoba
Ten years ago, when the Asia-Pacific Bioinformatics Network held the first International Conference on Bioinformatics (InCoB) in Bangkok, its theme was North-South Networking. At that time InCoB aimed to provide biologists and bioinformatics researchers in the Asia-Pacific region a forum to meet, interact with, and disseminate knowledge about the burgeoning field of bioinformatics. Since then, InCoB has evolved into a major regional bioinformatics conference that attracts talented and established scientists not only from the region but increasingly also from East Asia, North America and Europe. Since 2006, InCoB has yielded 114 articles in BMC Bioinformatics supplement issues that have been cited nearly 1,000 times to date. In part, these developments reflect the success of bioinformatics education and continuous efforts to integrate and utilize bioinformatics in biotechnology and biosciences in the Asia-Pacific region. A cross-section of research leading from biological data to knowledge and on to technological applications, the InCoB2012 theme, is introduced in this editorial. Other highlights included sessions organized by the Pan-Asian Pacific Genome Initiative and a Machine Learning in Immunology competition. InCoB2013 is scheduled for September 18-21, 2013 at Suzhou, China.
Boldrini, E.; Brumana, R.; Previtali, M., Jr.; Mazzetti, P., Sr.; Cuca, B., Sr.; Barazzetti, L., Sr.; Camagni, R.; Santoro, M.
The Built Environment (BE) is understood here as the sum of natural and human activities in dynamic transformation in the past, present and future. It calls for more informed decisions to face challenging threats (climate change, natural hazards, anthropic pressures) by exploiting resilience and sustainable intervention and by tackling societal opportunities such as heritage valorization and tourism; it thus asks for awareness raising in a circular, reflective society. In the framework of the ENERGIC OD project (EU Network for Redistributing Geographic Information - Open Data), this paper describes the implementation of an application (the GeoPAN Atl@s app) aimed at improving circular, multi-temporal, knowledge-oriented generation of information, able to integrate and take into account historic and current maps, as well as products of satellite image processing, to understand ongoing and oncoming phenomena and relate them, in a diachronic approach, to those that occurred in the ancient and recent past. The app is focused on riverbed BE and on knowledge generation for the detection of riverbed changes, involving the geologist community and providing the retrieved information to other users (architects and urban planners, tourists and citizens). We describe the implementation of the app interfaced with the ENERGIC OD Virtual Hub component, based on a brokering framework for open-data discovery and access, to ensure interoperability and integration of different datasets: widespread cartographic products with large granularity (national and regional environmental risk maps, i.e. PAI), on-site local data (i.e. UAV data), and results of Copernicus Programme satellite data processing (i.e. object-based and time-series image analysis for riverbed monitoring using Sentinel2). These span different sources, scales and formats, including historical maps needing metadata generation, and SHP data used by geologists in their daily hydrogeological analyses, to be both usable as
Rossnerova, Andrea; Pokorna, Michaela; Svecova, Vlasta; Sram, Radim J; Topinka, Jan; Zölzer, Friedo; Rossner, Pavel
The human population is continually exposed to numerous harmful environmental stressors, causing negative health effects and/or deregulation of biomarker levels. However, studies reporting no, or even positive, impacts of some stressors on humans are also sometimes published. The main aim of this review is to provide a comprehensive overview of the last decade of Czech biomonitoring research concerning the effect of various levels of air pollution (benzo[a]pyrene) and radiation (uranium, X-ray examination and natural radon background) on differently exposed population groups. Because some results obtained from cytogenetic studies were the opposite of what was hypothesized, we have searched for a meaningful interpretation in genomic/epigenetic studies. A detailed analysis of our data, supported by the studies of others and by current epigenetic knowledge, leads to a hypothesis of a versatile mechanism of adaptation to environmental stressors via DNA methylation settings, which may even originate in prenatal development and help to reduce the resulting DNA damage levels. This hypothesis fully agrees with unexpected data from our studies (e.g. lower levels of DNA damage in subjects from highly polluted regions than in controls, or in subjects exposed repeatedly to a pollutant than in those without previous exposure), and is also supported by differences in DNA methylation patterns between groups from regions with various levels of pollution. In light of the adaptation hypothesis, the following points may be suggested for future research: (i) the chronic and acute exposure of study subjects should be distinguished; (ii) the exposure history should be mapped, including place of residence during life and prenatal development; (iii) changes of epigenetic markers should be monitored over time. In summary, investigation of human adaptation to the environment, one of the most important processes of survival, is a new challenge for future research in the field of human
Supreet Kaur Gill
Clinical research makes tireless efforts to promote the health and wellbeing of people. There is a rapid increase in the number and severity of diseases such as cancer, hepatitis and HIV, resulting in high morbidity and mortality. Clinical research involves drug discovery and development, whereas clinical trials are performed to establish the safety and efficacy of drugs. Drug discovery is a long process, starting with target identification, validation and lead optimization. This is followed by preclinical trials, intensive clinical trials and eventually post-marketing vigilance for drug safety. Software and bioinformatics tools play a great role not only in drug discovery but also in drug development. This involves the use of informatics in the development of new knowledge pertaining to health and disease, data management during clinical trials, and the use of clinical data for secondary research. In addition, new technologies such as molecular docking, molecular dynamics simulation, proteomics and quantitative structure-activity relationship modelling make the drug discovery process faster and easier. During preclinical trials, software is used for randomization to remove bias and to plan the study design. In clinical trials, software such as electronic data capture, remote data capture and electronic case report forms (eCRF) is used to store the data. eClinical and Oracle Clinical are software used for clinical data management and for statistical analysis of the data. After a drug is marketed, its safety can be monitored by drug safety software such as Oracle Argus or ARISg. Therefore, software is used from the very early stages of drug design, through drug development and clinical trials, and during pharmacovigilance. This review describes different aspects of the application of computers and bioinformatics in drug design, discovery and development, formulation design and clinical research.
Soualmia, L F; Lecroq, T
To summarize excellent current research in the field of Bioinformatics and Translational Informatics with application to the health domain and clinical care. We provide a synopsis of the articles selected for the IMIA Yearbook 2015, from which we attempt to derive a synthetic overview of current and future activities in the field. As last year, a first selection step was performed by querying MEDLINE with a list of MeSH descriptors completed by a list of terms adapted to the section. Each section editor separately evaluated the set of 1,594 articles, and the evaluation results were merged to retain 15 articles for peer review. The selection and evaluation process of this Yearbook's section on Bioinformatics and Translational Informatics yielded four excellent articles regarding data management and genome medicine that are mainly tool-based papers. In the first article, the authors present PPISURV, a tool for uncovering the role of specific genes in cancer survival outcome. The second article describes the classifier PredictSNP, which combines six well-performing tools for predicting disease-related mutations. In the third article, by presenting a high-coverage map of the human proteome using high-resolution mass spectrometry, the authors highlight the need to use mass spectrometry to complement genome annotation. The fourth article is also related to patient survival and decision support; the authors present data mining methods applied to large-scale datasets of past transplants, with the objective of identifying chances of survival. The current research activities still attest to the continuous convergence of Bioinformatics and Medical Informatics, with a focus this year on dedicated tools and methods to advance clinical care. Indeed, there is a need for powerful tools for managing and interpreting complex, large-scale genomic and biological datasets, but also a need for user-friendly tools developed for clinicians in their daily practice. All the recent research and
With the capability of creating a situated and engaging learning environment, video games have been considered a powerful tool to enhance students' learning outcomes and interest in learning. Yet little empirical evidence exists to support the effectiveness of video games in learning. In particular, little attention has been given to the design of specific game elements. Focusing on middle school students, the goal of this study was to investigate the effects of two types of representations of reflective scaffolds (verbal and visual) on students' learning outcomes, game performance, and level of engagement in a video game for physics learning. In addition, the role of students' level of English proficiency was examined to understand whether the effects of reflective scaffolds were influenced by language proficiency. Two studies were conducted. Study 1 playtested the game with target players and led to game modification for its use in Study 2, which focused on the effects of different types of reflective scaffolds and level of English proficiency. The results of Study 2 showed that students who received both verbal and visual reflective scaffolds completed the most levels in the given time compared to the other groups. No significant effect of the type of reflective scaffolds was found on learning outcomes, despite the fact that the pattern of learning outcomes across conditions was close to prediction. Participants' engagement in gameplay was high regardless of the type of scaffolds they received, their interest in learning physics, and their prior knowledge of physics. The results of video analysis also showed that the game used in this study was able to engage students not only in gameplay but also in learning physics. Finally, English proficiency functioned as a significant factor moderating the effects of scaffolds, learning outcomes and game performance. Students with limited English proficiency benefited more from visual reflective scaffolds than
Commercial success or failure of innovation in bioinformatics and in-silico biology requires the appropriate use of legal tools for protecting and exploiting intellectual property. These tools include patents, copyrights, trademarks, design rights, and limiting information in the form of 'trade secrets'. Potentially patentable components of bioinformatics programmes include lines of code, algorithms, data content, data structure and user interfaces. In both the US and the European Union, copyright protection is granted for software as a literary work, and most other major industrial countries have adopted similar rules. Nonetheless, the grant of software patents remains controversial and is being challenged in some countries. Current debate extends to aspects such as whether patents can claim not only the apparatus and methods but also the data signals and/or products, such as a CD-ROM, on which the programme is stored. The patentability of substances discovered using in-silico methods is a separate debate that is unlikely to be resolved in the near future.
Liu, Yao-Yuan; Harbison, SallyAnn
Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial genome analyses are three classes of markers that will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information, such as previously unseen insights into the complexity and variability of these markers, along with amounts of data too immense for manual analysis. Alongside the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analysis of forensic markers sequenced on the most widely used massively parallel sequencing (MPS) platforms. Copyright © 2017 Elsevier B.V. All rights reserved.
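As a concrete example of the kind of processing such pipelines perform, a minimal STR-typing step counts the longest uninterrupted run of a repeat motif in a sequenced read; sequence-level counting is exactly what lets MPS distinguish isoalleles that length-based capillary electrophoresis cannot. This is a deliberately simplified sketch (real MPS pipelines also handle sequencing errors, stutter artefacts, and flanking-region identification), and the function name is my own:

```python
import re

def count_str_repeats(read, motif):
    """Return the repeat count of the longest uninterrupted run of `motif`
    in `read`. Uses a non-capturing group so findall returns whole runs."""
    runs = re.findall("(?:%s)+" % re.escape(motif), read.upper())
    # each run's length divided by the motif length gives its repeat count
    return max((len(r) // len(motif) for r in runs), default=0)
```

For example, a read containing three consecutive copies of the TH01-style motif ATCT would be typed as a 3-repeat allele by this function, regardless of the surrounding flanking sequence.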