Ngu, A; Rocco, D; Critchlow, T; Buttler, D
The World Wide Web provides a vast resource to genomics researchers in the form of web-based access to distributed data sources, e.g. BLAST sequence homology search interfaces. However, the process of seeking the desired scientific information is still tedious and frustrating. While several well-known genomic data servers (e.g., GenBank, EMBL, NCBI) are shared and accessed frequently, new data sources are created each day in laboratories all over the world. The sharing of these newly discovered genomics results is hindered by the lack of a common interface or data exchange mechanism. Moreover, the number of autonomous genomics sources and their rate of change outpace the speed at which they can be manually identified, meaning that the available data is not being utilized to its full potential. An automated system that can find, classify, describe and wrap new sources without tedious, low-level coding of source-specific wrappers is needed to give scientists access to hundreds of dynamically changing bioinformatics web data sources through a single interface. A correct classification of any kind of Web data source must address both the capability of the source and the conversational/interaction semantics inherent in its design. In this paper, we propose an automatic approach to classifying Web data sources that takes into account both the capability and the conversational semantics of the source. The ability to discover the interaction pattern of a Web source increases the accuracy of the classification process. At the same time, it facilitates the extraction of process semantics, which is necessary for the automatic generation of wrappers that can interact correctly with the sources.
James F. Aiton
The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animations, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.
Rocco, D; Critchlow, T
The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.
Neerincx, Pieter B T; Leunissen, Jack A M
Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.
Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P
Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.
Cheung David W
Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interfaces are not machine-friendly, (3) they use non-standard formats for data input and output, (4) they do not exploit standards to define application interfaces and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH keywords that correlates to the input, grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, the Collaxa BPEL Server and the Taverna Workbench. The Java program functions as a web services engine and interoperates ...
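The contrast the abstract draws between HTML output and document-style XML messages can be sketched as follows. This is an illustrative round trip only: the element names (`hapiRequest`, `spotIDs`, `meshTerm`, `category`) are invented for the example and are not the actual HAPI message schema.

```python
# Sketch of a document-style XML message for a HAPI-like workflow step.
# Element names are illustrative, not the real HAPI schema.
import xml.etree.ElementTree as ET

def build_request(spot_ids):
    """Wrap a list of microarray spot IDs in an XML request document."""
    root = ET.Element("hapiRequest")
    ids = ET.SubElement(root, "spotIDs")
    for spot_id in spot_ids:
        ET.SubElement(ids, "id").text = spot_id
    return ET.tostring(root, encoding="unicode")

def parse_response(xml_text):
    """Extract MeSH keywords grouped by category from an XML response."""
    root = ET.fromstring(xml_text)
    return {cat.get("name"): [t.text for t in cat.findall("meshTerm")]
            for cat in root.findall("category")}

request = build_request(["SPOT_001", "SPOT_002"])
response = ('<hapiResponse>'
            '<category name="Diseases"><meshTerm>Neoplasms</meshTerm></category>'
            '</hapiResponse>')
print(parse_response(response))  # {'Diseases': ['Neoplasms']}
```

Unlike an HTML page, both sides of this exchange can be produced and consumed mechanically, which is what makes the workflow composable.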
Abstract The availability of bioinformatics web-based services is rapidly proliferating owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to combine the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities for local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under the GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
Abstract Background Bioinformatics is commonly featured as a well assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data processing capacity. Results To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for uniform representation of Web Service metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need to re-write the software client. Conclusions The potential utility and versatility of the software library have been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor, and others).
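The modular split described above, with metadata handling kept separate from service invocation, can be sketched as two independent components. The class and method names below are invented for illustration and are not the actual MAPI API.

```python
# Minimal sketch of the MAPI idea: metadata representation and service
# invocation live in separate, independently installable modules.
# Names are illustrative, not the real MAPI interfaces.
class MetadataModule:
    """Uniform registry of Web Service descriptors."""
    def __init__(self):
        self._registry = {}

    def register(self, name, descriptor):
        self._registry[name] = descriptor

    def lookup(self, name):
        return self._registry[name]

class InvocationModule:
    """Invokes a service using whatever protocol its descriptor declares."""
    def __init__(self, metadata):
        self.metadata = metadata

    def invoke(self, name, payload):
        desc = self.metadata.lookup(name)
        # A real client would dispatch to SOAP, BioMOBY, etc. here.
        return {"service": name, "protocol": desc["protocol"], "input": payload}

meta = MetadataModule()
meta.register("blastp", {"protocol": "SOAP", "endpoint": "http://example.org/blast"})
result = InvocationModule(meta).invoke("blastp", "MSTNQ")
print(result["protocol"])  # SOAP
```

A client needing only discovery would install only the metadata module; extending invocation to a new protocol touches one module, not the client.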
Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons are dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.
Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas
Kalas, M.; Puntervoll, P.; Joseph, A.;
Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema...
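A canonical XML exchange format of the kind BioXSD proposes might carry a sequence record like the toy instance below. The element names here are invented to illustrate the style; the real BioXSD XML Schema defines its own types for sequences, features and alignments.

```python
# A toy XML instance in the spirit of a canonical sequence exchange format.
# The markup is illustrative only, not the actual BioXSD schema.
import xml.etree.ElementTree as ET

record = """<sequenceRecord>
  <name>example_protein</name>
  <species>Homo sapiens</species>
  <sequence type="protein">MSTNPKPQRK</sequence>
</sequenceRecord>"""

root = ET.fromstring(record)
seq = root.find("sequence")
print(seq.get("type"), len(seq.text))  # protein 10
```

The point of a shared schema is that every tool parses such a record the same way, instead of each service inventing its own FASTA-plus-annotations variant.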
Trias Thireou; George Spyrou; Vassilis Atlamazoglou
The explosive growth of the bioinformatics field has led to a large amount of data and software applications being publicly available as web resources. However, the lack of persistence of web references is a barrier to comprehensive shared access. We conducted a study of the current availability and other features of primary bioinformatics web resources (such as software tools and databases). The majority (95%) of the examined bioinformatics web resources were found to be running on UNIX/Linux operating systems, and the most widely used web server was Apache (or Apache-related products). Of the 1,130 Uniform Resource Locators (URLs) examined, 91% were highly available (more than 90% of the time), while only 4% showed low accessibility (less than 50% of the time) during the survey. Furthermore, the most common URL failure modes are presented and analyzed.
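The availability binning used in the survey can be reproduced from a log of up/down probes per URL. The thresholds below (above 90% uptime = highly available, below 50% = low accessibility) come from the abstract; the probe log and URLs are made up.

```python
# Sketch of the survey's availability classification from per-URL probe logs.
# Thresholds follow the abstract; the example log is invented.
def availability(probes):
    """Fraction of probes in which the URL responded."""
    return sum(probes) / len(probes)

def classify(probes):
    frac = availability(probes)
    if frac > 0.90:
        return "high"
    if frac < 0.50:
        return "low"
    return "intermediate"

log = {
    "http://example.org/tool": [True] * 19 + [False],        # 95% uptime
    "http://example.org/defunct": [True] * 4 + [False] * 6,  # 40% uptime
}
for url, probes in log.items():
    print(url, classify(probes))
```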
Pettifer, S.; Thorne, D.; McDermott, P.; Attwood, T.; Baran, J.; Bryne, J.C.; Hupponen, T.; Mowbray, D.; Vriend, G.
SUMMARY: The EMBRACE Registry is a web portal that collects and monitors web services according to test scripts provided by their administrators. Users are able to search for, rank and annotate services, enabling them to select the most appropriate working service for inclusion in their bioinfor...
Xiao Li; Yizheng Zhang
It is widely recognized that exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
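The platform-neutral exchange the paper argues for amounts to a lossless round trip: one application serializes a record to XML, another parses it back unchanged. A minimal sketch, with an invented gene record and markup rather than any published bioinformatics schema:

```python
# Round-trip sketch: serialize a gene record to XML on one side and parse
# it back on the other. The markup and identifiers are hypothetical.
import xml.etree.ElementTree as ET

def to_xml(gene):
    root = ET.Element("gene", id=gene["id"])
    ET.SubElement(root, "symbol").text = gene["symbol"]
    ET.SubElement(root, "organism").text = gene["organism"]
    return ET.tostring(root, encoding="unicode")

def from_xml(text):
    root = ET.fromstring(text)
    return {"id": root.get("id"),
            "symbol": root.findtext("symbol"),
            "organism": root.findtext("organism")}

record = {"id": "G0001", "symbol": "TP53", "organism": "Homo sapiens"}
assert from_xml(to_xml(record)) == record  # lossless round trip
```

Because both ends agree only on the markup, the producer and consumer can run on different platforms and be written in different languages, which is exactly the integration property the paper exploits.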
Babu, B. Ramesh; O'Brien, Ann
Discussion of Web-based online public access catalogs (OPACs) focuses on a review of six Web OPAC interfaces in use in academic libraries in the United Kingdom. Presents a checklist and guidelines of important features and functions that are currently available, including search strategies, access points, display, links, and layout. (Author/LRW)
Swainston Neil; Griffiths Tony; Hedeler Cornelia; Garwood Christopher; Garwood Kevin; Oliver Stephen G; Paton Norman W
Abstract Background The proliferation of data repositories in bioinformatics has resulted in the development of numerous interfaces that allow scientists to browse, search and analyse the data that they contain. Interfaces typically support repository access by means of web pages, but other means are also used, such as desktop applications and command line tools. Interfaces often duplicate functionality amongst each other, and this implies that associated development activities are repeated i...
Rocco, D; Critchlow, T J
The World Wide Web provides an incredible resource to genomics researchers in the form of dynamic data sources--e.g. BLAST sequence homology search interfaces. The growth rate of these sources outpaces the speed at which they can be manually classified, meaning that the available data is not being utilized to its full potential. Existing research has not addressed the problems of automatically locating, classifying, and integrating classes of bioinformatics data sources. This paper presents an overview of a system for finding classes of bioinformatics data sources and integrating them behind a unified interface. We examine an approach to classifying these sources automatically that relies on an abstract description format: the service class description. This format allows a domain expert to describe the important features of an entire class of services without tying that description to any particular Web source. We present the features of this description format in the context of BLAST sources to show how the service class description relates to Web sources that are being described. We then show how a service class description can be used to classify an arbitrary Web source to determine if that source is an instance of the described service. To validate the effectiveness of this approach, we have constructed a prototype that can correctly classify approximately two-thirds of the BLAST sources we tested. We then examine these results, consider the factors that affect correct automatic classification, and discuss future work.
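The service-class idea above can be sketched as matching a candidate source's observed form fields against an abstract description of the class. The field names and the all-required-fields rule below are invented for illustration; the actual service class description format is richer and not tied to any particular source.

```python
# Sketch of classifying a Web source against a service-class description.
# Field names and the matching rule are illustrative only.
BLAST_CLASS = {"required": {"sequence", "database", "program"},
               "optional": {"expect", "matrix"}}

def classify(observed_fields, service_class, min_coverage=1.0):
    """True if the source exposes enough of the class's required fields."""
    required = service_class["required"]
    coverage = len(required & observed_fields) / len(required)
    return coverage >= min_coverage

ncbi_like = {"sequence", "database", "program", "expect"}
gene_list_form = {"gene", "organism"}
print(classify(ncbi_like, BLAST_CLASS))       # True
print(classify(gene_list_form, BLAST_CLASS))  # False
```

The appeal of the approach is that the domain expert writes `BLAST_CLASS` once, and the classifier reuses it against any number of candidate Web sources.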
Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo
The productivity of any scientist is affected by the cumbersome, tedious and time-consuming tasks required to make heterogeneous web services compatible so that they can be useful in research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE BioCatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. Service discovery has been greatly enhanced by the Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of a user's current or already-finished tasks, as well as the pipelining of successive data-processing services. The BioMoby standard has been greatly extended with the new features included in MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794
Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones adhere more closely to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web service descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
Repchevsky, Dmitry; Gelpi, Josep Ll
Katayama, Toshiaki; Nakao, Mitsuteru; Takagi, Toshihisa
Web services have become widely used in bioinformatics analysis, but incompatibilities in interfaces and data types prevent users from making full use of a combination of these services. Therefore, we have developed the TogoWS service to provide an integrated interface with advanced features. In the TogoWS REST (REpresentational State Transfer) API (application programming interface), we introduce a unified access method for major database resources through intuitive URIs that can be used to search, retrieve, parse and convert database entries. The TogoWS SOAP API resolves compatibility issues found in server- and client-side SOAP implementations. The TogoWS service is freely available at: http://togows.dbcls.jp/.
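The "intuitive URIs" described above can be composed mechanically. The path layout sketched below (`/entry/<db>/<id>.<format>`, `/search/<db>/<query>`) follows the scheme the abstract describes but is an assumption here; check the live TogoWS documentation before relying on it.

```python
# Sketch of composing TogoWS-style REST URIs. The exact path layout is an
# assumption based on the abstract, not verified against the live service.
from urllib.parse import quote

BASE = "http://togows.dbcls.jp"

def entry_uri(db, entry_id, fmt=None):
    """URI to retrieve one database entry, optionally in a given format."""
    uri = f"{BASE}/entry/{db}/{quote(entry_id)}"
    return f"{uri}.{fmt}" if fmt else uri

def search_uri(db, query):
    """URI to search a database with a free-text query."""
    return f"{BASE}/search/{db}/{quote(query)}"

print(entry_uri("genbank", "NC_000913", "fasta"))
# http://togows.dbcls.jp/entry/genbank/NC_000913.fasta
```

Because the whole request is a plain URI, retrieval and format conversion need nothing more than an HTTP GET, which is what makes the interface easy to script.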
The aim of the thesis is the integration of a CMS web interface into an existing static HTML page, which allows users to edit websites without knowledge of web languages and databases. First, we present the CMS web interface and the basic concepts and functions that enable such systems. We continue with a description of the technologies, techniques and tools used to achieve the desired solution to the given problem. The core of the thesis is the development of a web page using a CMS web interface. In the ...
Willighagen Egon L
Abstract Background The life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. Results We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) to comprise discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status; the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. Conclusion XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: (1) services are discoverable without the need of an external registry, (2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and (3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics.
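The push-versus-poll distinction at the heart of the abstract can be illustrated without any actual XMPP: the service runs the job and notifies the client's callback when done, so the client waits on an event instead of polling. Everything below (the fake job, the sleep) is a stand-in for a long-running analysis.

```python
# Stand-in for asynchronous service invocation: the service pushes the
# result to the client's callback; the client never polls. No real XMPP.
import threading
import time

def async_service(job, on_done):
    """Run `job` in the background and push the result to `on_done`."""
    def worker():
        time.sleep(0.01)       # stand-in for a long-running analysis
        on_done(job.upper())   # the service notifies the client itself
    threading.Thread(target=worker).start()

done = threading.Event()
results = []

def callback(result):
    results.append(result)
    done.set()

async_service("atgc", callback)
done.wait(timeout=1)           # client blocks on notification, not polling
print(results)  # ['ATGC']
```

With HTTP-based services the client would instead issue repeated status requests; the callback pattern is what the XMPP extension provides natively.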
In this thesis we developed a prototype robot which can be controlled by the user via a web interface and is accessible through a web browser. The web interface updates sensor data and streams video captured with the web-cam mounted on the robot in real time. A Raspberry Pi computer runs the back-end code of the thesis. The general-purpose input-output header on the Raspberry Pi communicates with the motor driver and sensors. A wireless dongle and web-cam, connected through USB, ensure wireless communication and vid...
Baldi, Pierre; Brunak, Søren
... and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...
Computer Science This thesis examines methods for accessing information stored in a relational database from a Web page. The stateless and connectionless nature of the Web's Hypertext Transfer Protocol, as well as the open nature of the Internet Protocol, pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...
DUAN Lei; SHEN Liren
Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which helps us implement channel data acquisition, modification and monitoring functions. This software interface layer is cross-platform and cross-language, and has good interoperability and reusability.
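The uniform interface layer described above boils down to three operations over process variables: get, put and monitor. A minimal sketch follows, with an in-memory dict standing in for real Channel Access calls; the class and PV names are invented.

```python
# Sketch of a uniform get/put/monitor layer of the kind a Web services
# front end could expose over EPICS Channel Access. The in-memory dict
# stands in for real CA calls; names are illustrative.
class ChannelLayer:
    def __init__(self):
        self._values = {}
        self._monitors = {}

    def put(self, pv, value):
        """Write a process variable and notify monitoring clients."""
        self._values[pv] = value
        for cb in self._monitors.get(pv, []):
            cb(value)

    def get(self, pv):
        """Read the current value of a process variable."""
        return self._values[pv]

    def monitor(self, pv, callback):
        """Register a callback invoked on every change of the variable."""
        self._monitors.setdefault(pv, []).append(callback)

layer = ChannelLayer()
seen = []
layer.monitor("BEAM:CURRENT", seen.append)
layer.put("BEAM:CURRENT", 102.5)
print(layer.get("BEAM:CURRENT"), seen)  # 102.5 [102.5]
```

Keeping the three operations uniform is what lets clients in any language drive the channels without knowing the underlying CA wire protocol.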
Emoto, M. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)], E-mail: firstname.lastname@example.org; Murakami, S. [Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Yoshida, M.; Funaba, H.; Nagayama, Y. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)
There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs written to be run on supercomputers, while many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, it is a difficult task to run analysis codes on them, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes were written for personal use, and the programmers did not expect them to be run by other users. In order to make these programs widely usable, the authors developed user-friendly interfaces using a Web interface. Since the Web browser is one of the most common applications, it is useful for both users and developers. In order to realize an interactive Web interface, the AJAX technique is widely used, and the authors also adopted it. To build such an AJAX-based Web system, Ruby on Rails plays an important role. Since this application framework, which is written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables programmers to develop Web-based applications efficiently. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.
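The AJAX pattern the system relies on is simple: the browser periodically asks a small endpoint for job status and gets JSON back, which the page updates in place. A sketch of such an endpoint, with an invented job table and response shape:

```python
# Sketch of a JSON status endpoint of the kind an AJAX page would poll.
# The job table and response fields are invented for illustration.
import json

JOBS = {"42": {"state": "running", "progress": 0.6},
        "43": {"state": "finished", "progress": 1.0}}

def status_endpoint(job_id):
    """What a GET /jobs/<id>/status handler would return to the browser."""
    job = JOBS.get(job_id)
    if job is None:
        return json.dumps({"error": "unknown job"})
    return json.dumps(job)

print(status_endpoint("42"))  # {"state": "running", "progress": 0.6}
```

In Rails this handler would be a controller action rendering JSON; the framework supplies the routing and database plumbing, which is the abstraction the authors credit for their development speed.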
Bassil, Youssef; 10.5121/ijwest.2012.3104
Recent advances in computing systems have led to a new digital era in which every area of life is nearly interrelated with information technology. However, with the trend towards large-scale IT systems, a new challenge has emerged. The complexity of IT systems is becoming an obstacle that hampers the manageability, operability, and maintainability of modern computing infrastructures. Autonomic computing popped up to provide an answer to these ever-growing pitfalls. Fundamentally, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all complex IT processes without human intervention. This paper proposes an autonomic HTML web-interface generator based on XML Schema and Style Sheet specifications for self-configuring graphical user interfaces of web applications. The goal of this autonomic generator is to automate the process of customizing GUI web-interfaces according to the ever-changing business rules, policies, and operating environment with th...
Andrei Maciuca; Dan Popescu
The current paper proposes a smart web interface designed for monitoring the status of the elderly people. There are four main user types used in the web application: the administrator (who has power access to all the application’s functionalities), the patient (who has access to his own personal data, like parameters history, personal details), relatives of the patient (who have administrable access to the person in care, access that is defined by the patient) and the medic (who can view ...
Tomatis, N.; Moreau, B.
The Autonomous Systems Lab at the Swiss Federal Institute of Technology Lausanne (EPFL) is engaged in mobile robotics research. The lab's research focuses mainly on indoor localization and map building, outdoor locomotion and navigation, and micro mobile robotics. In the framework of a research project on mobile robot localization, a graphical web interface for our indoor robots has been developed. The purpose of this interface is twofold: it serves as a tool for task supervision for the rese...
Dmitry Repchevsky; Josep Ll Gelpi
Despite the variety of available Web services registries specially aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to...
Full Text Available Data collection is a key component of an information system. The widespread penetration of ICT tools in organizations and institutions has resulted in a shift in the way data is collected. Data may be collected in printed form, by e-mail, on a compact disk, or by direct upload to the management information system. Since web services are platform-independent, they can access data stored in XML format from any platform. In this paper, we present an interface which uses web services for data collection. It requires interaction between a web service deployed for the purposes of data collection and the web address where the data is stored. Our interface requires that the web service has pre-knowledge of the address from which the data is to be collected. Also, the data to be accessed must be stored in XML format. Since our interface uses computer-supported interaction on both sides, it eases the task of regular and ongoing data collection. We apply our framework to the Education Management Information System, which collects data from schools spread across the country.
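As a minimal sketch of the parsing step such a data-collection web service performs after fetching an XML document from its known address (the element names and school data here are illustrative assumptions, not from the paper):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload as a school system might expose it for
# collection; the schema is an assumption for illustration only.
SAMPLE = """
<schools>
  <school id="S001"><name>Hill View</name><students>412</students></school>
  <school id="S002"><name>Lake Side</name><students>378</students></school>
</schools>
"""

def collect(xml_text):
    """Parse an XML data feed into plain records, as the collection
    service might after retrieving the document from the web address."""
    root = ET.fromstring(xml_text)
    records = []
    for school in root.findall("school"):
        records.append({
            "id": school.get("id"),
            "name": school.findtext("name"),
            "students": int(school.findtext("students")),
        })
    return records

if __name__ == "__main__":
    for rec in collect(SAMPLE):
        print(rec)
```

Because both ends are machine-readable, the same `collect` step can run on a schedule for ongoing collection without manual re-entry.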
and platform independent web technology. This enables access to the RODOS system by remote users from all kinds of computer platforms with an Internet browser. The layout and content structure of this web interface have been designed and developed with a unique standardized interface layout and information structure under due consideration of the needs of the RODOS users. Two types of web-based interfaces have been realized. Category B: active user with access to the RODOS system via web browser. The interaction with RODOS is limited to levels (2) and (3) mentioned above: Category B users can only define interactive runs via input forms and select results from predefined information. They have no access to databases and cannot operate RODOS in its automatic mode. Category C: passive user with access via web browser and - if desired - via X-desktop only to RODOS results produced by users of category A or B. The Category B users define their requests to the RODOS system via an interactive Web-based interface. The corresponding HTML file is sent to the RODOS Web server. It transforms the information into RODOS-compatible input data, initiates the corresponding RODOS runs, produces an HTML results file and returns it to the web browser. The web browser receives the HTML file, interprets the page content and displays the page. The layout, content and functions of the new web-based interface for Category B and Category C users will be demonstrated. Example interactive runs will show the interaction with the RODOS system. fig. 1 (author)
Full Text Available Recent advances in computing systems have led to a new digital era in which every area of life is nearly interrelated with information technology. However, with the trend towards large-scale IT systems, a new challenge has emerged. The complexity of IT systems is becoming an obstacle that hampers the manageability, operability, and maintainability of modern computing infrastructures. Autonomic computing popped up to provide an answer to these ever-growing pitfalls. Fundamentally, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all complex IT processes without human intervention. This paper proposes an autonomic HTML web-interface generator based on XML Schema and Style Sheet specifications for self-configuring graphical user interfaces of web applications. The goal of this autonomic generator is to automate the process of customizing GUI web-interfaces according to the ever-changing business rules, policies, and operating environment with the least IT labor involvement. The conducted experiments showed a successful automation of web interface customization that dynamically self-adapts to keep up with the always-changing business requirements. Future research can improve upon the proposed solution so that it supports the self-configuring of not only web applications but also desktop applications.
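A toy illustration of the schema-driven form generation idea: here a simplified dict-based schema stands in for the paper's XML Schema and style sheet inputs, so all field names and type-to-widget mappings are assumptions for illustration.

```python
# Map simplified schema types to HTML input widgets (an assumed
# mapping; a real generator would derive this from XML Schema types).
TYPE_TO_INPUT = {"string": "text", "integer": "number", "boolean": "checkbox"}

def generate_form(schema):
    """Emit HTML inputs for each field in the schema, so the GUI
    regenerates itself when the schema (business rules) changes."""
    rows = []
    for field, ftype in schema.items():
        widget = TYPE_TO_INPUT.get(ftype, "text")
        rows.append('<label>%s <input name="%s" type="%s"/></label>'
                    % (field, field, widget))
    return "<form>\n" + "\n".join(rows) + "\n</form>"

# Changing the schema dict below is enough to change the rendered form.
html = generate_form({"customer": "string", "age": "integer", "active": "boolean"})
print(html)
```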
The present paper covers a generic and dynamic framework for the web publishing of bioinformatics databases based upon a metadata design, Java Beans, Java Server Pages (JSP), Extensible Markup Language (XML), Extensible Stylesheet Language (XSL) and Extensible Stylesheet Language Transformations (XSLT). In this framework, the content is stored in a configurable and structured XML format, dynamically generated from an Oracle Relational Database Management System (RDBMS). The presentation is dynamically generated by transforming the XML document into HTML through XSLT. This clean separation between content and presentation makes web publishing more flexible; changing the presentation only requires a modification of the stylesheet (XSL).
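The content/presentation split described above can be sketched as follows. Python's standard library has no XSLT engine, so the transformation step is emulated in plain code standing in for a stylesheet; the XML fields (gene name, chromosome) are illustrative assumptions, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Content layer: structured XML, as if generated from the RDBMS.
CONTENT = ("<genes>"
           "<gene><name>BRCA1</name><chromosome>17</chromosome></gene>"
           "<gene><name>TP53</name><chromosome>17</chromosome></gene>"
           "</genes>")

def to_html(xml_text):
    """Presentation layer: turn the XML content into an HTML table.
    (A real deployment would hand the same XML to an XSLT stylesheet;
    swapping the stylesheet changes the look without touching the data.)"""
    root = ET.fromstring(xml_text)
    rows = ["<tr><td>%s</td><td>%s</td></tr>"
            % (g.findtext("name"), g.findtext("chromosome"))
            for g in root.findall("gene")]
    return "<table>" + "".join(rows) + "</table>"

print(to_html(CONTENT))
```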
Dragut, Eduard C; Yu, Clement T
There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech
Kabisch, Thomas; Dragut, Eduard; Yu, Clement; Leser, Ulf
Much data in the Web is hidden behind Web query interfaces. In most cases the only means to "surface" the content of a Web database is by formulating complex queries on such interfaces. Applications such as Deep Web crawling and Web database integration require an automatic usage of these interfaces. Therefore, an important problem to be addressed is the automatic extraction of query interfaces into an appropriate model. We hypothesize the existence of a set of domain-independent "commonsense...
Pritychenko, B.; Sonzogni, A.A.
We present the Sigma Web interface, which provides user-friendly access for online analysis and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The interface includes advanced browsing and search capabilities, interactive plots of cross sections, angular distributions and spectra, nubars, comparisons between evaluated and experimental data, computations for cross section data sets, pre-calculated integral quantities, neutron cross section uncertainty plots and visualization of covariance matrices. Sigma is publicly available at the National Nuclear Data Center website at http://www.nndc.bnl.gov/sigma.
A web-based interface dedicated to a cluster computer that is publicly accessible for free is introduced. The interface plays an important role in enabling secure public access, while providing a user-friendly computational environment for end-users and easy maintenance for administrators. The whole architecture, which integrates both hardware and software aspects, is briefly explained. It is argued that the public cluster is a globally unique approach, and could be a new kind of e-learning system, especially for parallel programming communities.
Full Text Available The current paper proposes a smart web interface designed for monitoring the status of elderly people. There are four main user types in the web application: the administrator (who has power access to all the application’s functionalities), the patient (who has access to his own personal data, like parameter history and personal details), relatives of the patient (who have administrable access to the person in care, access that is defined by the patient) and the medic (who can view the medical history of the patient and prescribe different medications or interpret the received parameter data). The main purpose of this web application is to receive and analyze data from body sensors like accelerometers, EKG or GSR sensors, or even ambient sensors like gas detectors, humidity, pressure or temperature sensors. After processing the harvested information, the web application decides if an alert has to be triggered and sends it to a specialized call center (for example, if the patient’s body temperature is over 40 degrees Celsius).
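The alerting rule in the example above (body temperature over 40 degrees Celsius) can be sketched as a simple threshold check. The sensor names, threshold table, and alert format are assumptions for illustration; the abstract only gives the body-temperature example.

```python
# Assumed per-sensor thresholds; the abstract specifies only the
# 40 °C body-temperature rule.
THRESHOLDS = {"body_temp_c": 40.0}

def check_reading(sensor, value):
    """Return an alert message if the reading crosses its threshold,
    otherwise None. A real deployment would forward the alert to the
    specialized call center instead of returning it."""
    limit = THRESHOLDS.get(sensor)
    if limit is not None and value > limit:
        return "ALERT: %s = %.1f exceeds %.1f" % (sensor, value, limit)
    return None

print(check_reading("body_temp_c", 40.6))  # triggers an alert
print(check_reading("body_temp_c", 36.8))  # normal reading, no alert
```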
Panichi, Giancarlo; Coro, Gianpaolo
In this document we describe the DataMiner Manager Web interface that allows interacting with the gCube DataMiner service. DataMiner is a cross-usage service that provides users and services with tools for performing data mining operations. Specifically, it offers a unique access to perform data mining and statistical operations on heterogeneous data, which may reside either at client side, in the form of comma-separated values files, or be remotely hosted, possibly in a database. The DataMin...
Full Text Available Bioinformatics tools have gained popularity in biology but little is known about their validity. We aimed to assess the early contribution of 415 single nucleotide polymorphisms (SNPs) associated with eight cardio-metabolic traits at the genome-wide significance level in adults in the Family Atherosclerosis Monitoring In earLY Life (FAMILY) birth cohort. We used the popular web-based tool SNAP to assess the availability of the 415 SNPs in the Illumina Cardio-Metabochip genotyped in the FAMILY study participants. We then compared the SNAP output with the Cardio-Metabochip file provided by Illumina using chromosome and chromosomal positions of SNPs from the NCBI Human Genome Browser (Genome Reference Consortium Human Build 37). With the HapMap 3 release 2 reference, 201 out of 415 SNPs were reported as missing in the Cardio-Metabochip by the SNAP output. However, the Cardio-Metabochip file revealed that 152 of these 201 SNPs were in fact present in the Cardio-Metabochip array (false-negative rate of 36.6%). With the more recent 1000 Genomes Project release, we found a false-negative rate of 17.6% by comparing the outputs of SNAP and the Illumina product file. We did not find any 'false positive' SNPs (SNPs specified as available in the Cardio-Metabochip by SNAP, but not by the Cardio-Metabochip Illumina file). The Cohen's Kappa coefficient, which calculates the percentage of agreement between both methods, indicated that the validity of SNAP was fair to moderate depending on the reference used (the HapMap 3 or 1000 Genomes). In conclusion, we demonstrate that the SNAP outputs for the Cardio-Metabochip are invalid. This study illustrates the importance of systematically assessing the validity of bioinformatics tools in an independent manner. We propose a series of guidelines to improve practices in the fast-moving field of bioinformatics software implementation.
Full Text Available Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real-world use cases in these domains using the tools that the developers represented. This resulted in (i) a workflow to annotate 100,000 sequences from an invertebrate species; (ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; (iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; (iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: (i) the absence of several useful data or analysis functions in the Web service "space"; (ii) the lack of documentation of methods; (iii) lack of
Chen, Xihui [ORNL]; Kasemir, Kay [ORNL]
Ad Gateway is a product which provides targeting of online advertisements. This thesis focuses on improving its web-based admin interface, used for defining targeting properties as well as creating the advertisements: first by integrating the system with third-party web services used for storage of advertisement campaigns, called advertisement providers, allowing advertisements to be retrieved and uploaded directly to their servers; later by extending the admin interface with new functionality, i...
Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.
A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…
Lee, MW; Chen, SY; Liu, X.
Web-based technology has already been adopted as a tool to support teaching and learning in higher education. One criterion affecting the usability of such a technology is the design of web-based interface (WBI) within web-based learning programs. How different users access the WBIs has been investigated by several studies, which mainly analyze the collected data using statistical methods. In this paper, we propose to analyze users’ learning behavior using Data Mining (DM) techniques. Finding...
Aiftimiei, C; Pra, S D; Fantinel, S [INFN-Padova, Padova (Italy); Andreozzi, S; Fattibene, E; Misurelli, G [INFN-CNAF, Bologna (Italy); Cuscela, G; Donvito, G; Dudhalkar, V; Maggi, G; Pierro, A [INFN-Bari, Bari (Italy)], E-mail: email@example.com, E-mail: firstname.lastname@example.org
A monitoring tool for a complex Grid system can gather a huge amount of information that has to be presented to users in the most comprehensive way. Moreover, different types of consumers may be interested in inspecting and analyzing different subsets of data. The main goal in designing a Web interface for the presentation of monitoring information is to organize the huge amount of data in a simple, user-friendly and usable structure. A further problem is accounting for the different approaches, skills and interests that the possible categories of users have in looking for the desired information. Starting from the Information Architecture guidelines for the Web, it is possible to design Web interfaces that come closer to the user experience and to support advanced user interaction through the implementation of many standard Web technologies. In this paper, we present a number of principles for the design of Web interfaces for monitoring tools that provide a wider, richer range of possibilities for user interaction. These principles are based on an extensive review of the current literature in Web design and on experience with the development of the GridICE monitoring tool. The described principles can drive the evolution of the Web interfaces of Grid monitoring tools.
LIU Wei; LIN Can; MENG Xiaofeng
A vision-based query interface annotation method is used to relate attributes and form elements in form-based web query interfaces; this method can reach an accuracy of 82%. A user participation method is then used to tune the result: users can answer "yes" or "no" to existing annotations, or manually annotate form elements. Mass feedback is added to the annotation algorithm to produce more accurate results. With this approach, query interface annotation can reach perfect accuracy.
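One way the yes/no feedback loop described above might be combined with the automatic annotations is sketched below; the data structures and form-element names are assumptions for illustration, not the paper's actual algorithm.

```python
def apply_feedback(annotations, feedback):
    """Tune automatic annotations with user feedback.

    annotations: {form_element: attribute} from the vision-based annotator.
    feedback: {form_element: verdict}, where a verdict of True keeps the
    annotation, False rejects it, and a string is a manual correction.
    Unreviewed elements keep their automatic annotation.
    """
    tuned = {}
    for element, attribute in annotations.items():
        verdict = feedback.get(element, True)   # unreviewed -> keep
        if verdict is True:
            tuned[element] = attribute
        elif isinstance(verdict, str):
            tuned[element] = verdict            # manual correction
        # verdict is False: annotation rejected, element left out
    return tuned

# Hypothetical example: the annotator mislabeled the price field.
auto = {"txt_title": "book title", "txt_price": "author"}
print(apply_feedback(auto, {"txt_price": "price"}))
```

Aggregating such verdicts over many users ("mass feedback") is what lets the overall annotation accuracy climb past the 82% of the automatic step alone.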
Carlisle, W. H.
This paper reports on investigations into how to extend the capabilities of the Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1996 Summer Faculty Fellowship program, and involved research into and prototype development of software components that provide documents and services for the World Wide Web (WWW). The WWW has become a de-facto standard for sharing resources over the internet, primarily because web browsers are freely available for the most common hardware platforms and their operating systems. As a consequence of the popularity of the internet, tools and techniques associated with web browsers are changing rapidly. New capabilities are offered by companies that support web browsers in order to achieve or remain a dominant participant in internet services. Because a goal of the VRC is to build an environment for NASA centers, universities, and industrial partners to share information associated with Advanced Concepts Office activities, the VRC tracks new techniques and services associated with the web in order to determine their usefulness for distributed and collaborative engineering research activities. Most recently, Java has emerged as a new tool for providing internet services. Because the major web browser providers have decided to include Java in their software, investigations into Java were conducted this summer.
Face-to-face bioinformatics courses commonly include a weekly, in-person computer lab to facilitate active learning, reinforce conceptual material, and teach practical skills. Similarly, fully-online bioinformatics courses employ hands-on exercises to achieve these outcomes, although students typically perform this work offsite. Combining a…
Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson
This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…
Kuppusamy, K S; Aghila, G
Though the World Wide Web is the single largest source of information, it is ill-equipped to serve people with vision-related problems. With the prolific increase in interest in making the web accessible to all sections of society, solving this accessibility problem becomes mandatory. This paper presents a technique for making web pages accessible to people with low-vision issues. A model for making web pages accessible, WILI (Web Interface for people with Low-vision Issues), has been proposed. The approach followed in this work is to automatically replace the existing display style of a web page with a new skin following the guidelines given in the Clear Print booklet provided by the Royal National Institute of Blind People. A "single click solution" is one of the primary advantages provided by WILI. A prototype using the WILI model is implemented and various experiments are conducted. The results of experiments conducted on WILI indicate an 82% effective conversion rate.
Full Text Available Abstract Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established to an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and the Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies.
Schmid Amy K
Full Text Available Abstract Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the
España Bonet, Cristina; Vila Rigat, Marta; Rodríguez Hontoria, Horacio; Martí, Maria Antònia
CoCo is a collaborative web interface for the compilation of linguistic resources. In this demo we are presenting one of its possible applications: paraphrase acquisition. Peer Reviewed
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.
Full Text Available With the popularity of 3C devices, visual creations are all around us, such as online games, touch pads, video and animation. Text-based web pages will therefore no longer satisfy users. With the popularity of webcams, digital cameras, stereoscopic glasses and head-mounted displays, the user interface becomes more visual and multi-dimensional. For the consideration of 3D and visual display in research on web user interface design, Augmented Reality technology, providing convenient tools and impressive effects, has become a hot topic. The Augmented Reality effect enables users to see parts of the digital objects on top of the physical surroundings. Easy operation with a webcam that greatly improves the visual representation of web pages is the interest of our research. We therefore apply Augmented Reality technology to develop a city tour web site and collect the opinions of users; website stickiness is accordingly an important measurement. The major tasks of this work include the exploration of Augmented Reality technology and the evaluation of its outputs. The feedback of users provides valuable references for improving the AR application in this work. As a result, the AR, by increasing the visual and interactive effects of web pages, encourages users to stay longer, and more than 80% of users are willing to return to the website soon. Moreover, several valuable conclusions about Augmented Reality technology in web user interface design are provided for further practical reference.
The lecture will introduce the new functions and graphic design of WebSOD, the web interface of the Personal Dosimetry Service of VF a.s., which will be updated in November 2014. The new interface will have a new graphic design and an intuitive control system, and will provide a range of new functions: - Personal doses - display of personal doses from personal, extremity and neutron dosimeters, including graphs and annual and electronic listings of doses; - Collective doses - display of group doses for selected periods of time; - Reference levels - setting and display of three reference levels; - Evidence - administration of monitored individuals - beginning or ending of monitoring, or editing the data of monitored persons and centers. (author)
Lin, Ling; Zhou, Lizhu
Web databases provide different types of query interfaces to access the data records stored in backend databases. While most existing work exploits complex query interfaces with multiple input fields to perform schema identification of Web databases, little attention has been paid to identifying the schema of web databases through a simple query interface (SQI), which has only a single query text input field. This paper proposes a new method of instance-based query probing to identify a WDB's interface and result schema for SQI. The interface schema identification problem is defined as generating the full-condition query of the SQI, and a novel query probing strategy is proposed. The result schema is also identified based on the result webpages of the SQI's full-condition query, and an extended identification of the non-query attributes is proposed to improve the attribute recall rate. Experimental results on web databases of online shopping for books, movies and mobile phones show that our method is effective and efficient.
Invenio is an open source web-based application that implements a digital library or document server; it is used at CERN as the base of the CERN Document Server Institutional Repository and the INSPIRE High Energy Physics Subject Repository. The purpose of this work was to reimplement the administrative interface of the search engine in Invenio using new and proven open source technologies, to simplify the code base and lay the foundations for the work that will be done in porting the rest of the administrative interfaces to these newer technologies. In my time as a CERN openlab summer student I was able to implement some of the features for the WebSearch Admin Interfaces, enhance some of the existing code with new features, and find solutions to technical challenges that will be common when porting the other administrative interface modules.
Georgiana-Petruţa Fîntîneanu; Florentina Anica Pintea
The present article aims to describe a project consisting in designing a framework of applications used to create graphical interfaces with an Oracle distributed database. The development of the project supposed the use of the latest technologies: database Oracle server, Tomcat web server, JDBC (Java library used for accessing a database), JSP and Tag Library (for the development of graphical interfaces).
The C++ language was used for creating web applications at the Department of Mapping and Cartography for many years. Many projects grew very large and complicated to maintain; consequently, the traditional way of adding functionality to a web server that had previously been used (CGI programs) ceased to be adequate. I was looking for alternative solutions, particularly open source ones. I tried many languages and solutions, and finally chose the Java language and started writing servlets. Using the Java language (servlets) has significantly simplified the development of web applications, and as a result the development cycle was shortened. Thanks to JNI (Java Native Interface), it is still possible to use the C++ libraries we rely on. The main goal of this article is to share my practical experience of rewriting a typical CGI web application and creating a complex geoinformatic web application.
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
User interfaces are important for easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation, and the technology in the background is an important tool to accomplish this. The present work aims at creating a web interface using a specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the current icMED web medical application layout is analyzed, and its structure is designed using specific tools, on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using the HTML table design (TD) and CSS3 method) that uses no source files, just lines of code for the layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at a time. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, using HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services. PMID:27139407
LI Lin; SHEN Liren; ZHU Qing; WAN Tianmin
An accelerator database stores various static parameters and real-time data of an accelerator. SSRF (Shanghai Synchrotron Radiation Facility) adopts a relational database to store these data. We developed a data retrieval system based on XML Web Services for accessing the archived data. It includes a bottom-layer interface and an interface applicable to accelerator physics. Client samples exemplifying how to consume the interface are given; with them, users can browse, retrieve and plot data. We also give a method to test the system's stability, and describe the test results and performance.
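As a sketch of what such a client-side sample might look like, the following Python snippet parses a hypothetical XML archive response; the element and attribute names (`archiveData`, `sample`, the PV name) are invented, since the abstract does not give the actual SSRF service schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload returned by an archive-data web service call.
RESPONSE = """<archiveData pv="SR:beamCurrent">
  <sample time="2009-01-01T00:00:00" value="199.8"/>
  <sample time="2009-01-01T00:01:00" value="199.6"/>
</archiveData>"""

def parse_samples(xml_text):
    """Extract (timestamp, value) pairs from the archive response,
    ready for browsing or plotting on the client side."""
    root = ET.fromstring(xml_text)
    return [(s.get("time"), float(s.get("value"))) for s in root.iter("sample")]

for timestamp, value in parse_samples(RESPONSE):
    print(timestamp, value)
```

A real client would obtain the XML from the web service endpoint rather than from an inline string.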
Using a participatory approach, 11 middle school children created paper prototypes for Web search engines. The prototypes were analyzed in relation to content-related spaces, specific spaces, general spaces, instruction spaces, and other spaces. Children's comments about the purposes of the interfaces were analyzed in terms of functionality and…
Background: High-throughput sequencing makes it possible to rapidly obtain thousands of 16S rDNA sequences from environmental samples. Bioinformatic tools for the analysis of large 16S rDNA sequence databases are needed to comprehensively describe and compare these datasets. Results: FastGroupII is a web-based bioinformatics platform to dereplicate large 16S rDNA libraries. FastGroupII provides users with the option of four different dereplication methods, performs rarefaction analysis, and automatically calculates the Shannon-Wiener index and Chao1. FastGroupII was tested on a set of 16S rDNA sequences from coral-associated Bacteria. The different grouping algorithms produced similar, but not identical, results. This suggests that 16S rDNA datasets need to be analyzed in multiple ways when used for community ecology studies. Conclusion: FastGroupII is an effective bioinformatics tool for the trimming and dereplication of 16S rDNA sequences. Several standard diversity indices are calculated, and the raw sequences are prepared for downstream analyses.
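A minimal Python sketch of the dereplication idea follows; the "exact" and "prefix" grouping modes and the toy reads are illustrative simplifications, not FastGroupII's actual algorithms:

```python
from collections import defaultdict
import math

def dereplicate(seqs, method="exact"):
    """Group reads: 'exact' collapses identical sequences; 'prefix'
    also folds a shorter read into a longer one that it prefixes."""
    groups = defaultdict(list)
    if method == "exact":
        for s in seqs:
            groups[s].append(s)
    else:  # prefix mode: process longest reads first
        for s in sorted(seqs, key=len, reverse=True):
            for rep in groups:
                if rep.startswith(s):
                    groups[rep].append(s)
                    break
            else:
                groups[s].append(s)
    return dict(groups)

def shannon_wiener(groups):
    """Shannon-Wiener diversity index H' over group sizes."""
    counts = [len(members) for members in groups.values()]
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts)

reads = ["ACGTACGT", "ACGTACGT", "ACGTAC", "TTGGCCAA"]
print(len(dereplicate(reads, "exact")))   # 3 unique sequences
print(len(dereplicate(reads, "prefix")))  # 2 groups (ACGTAC folds into ACGTACGT)
```

As the abstract notes for the real tool, the two modes group the same reads differently, which is why multiple methods are worth comparing.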
A great deal of work on chaotic systems has been carried out by many researchers, especially on the Lorenz chaotic system. If the order of differentiation of the variables is fractional, the systems are called fractional chaotic systems. In this work a web-based interface is designed for fractional composition of five different chaotic systems. The interface takes initial and fractional differentiation values and yields output signals and phase portraits. The paper first introduces the design tools and then provides results obtained throughout the experiments.
Kamlesh Sharma; Dr. S.V.A.V. Prasad; Prasad, Dr. T. V.
Aiming at increased system simplicity and flexibility, an audio-evoked system was developed by integrating a simplified headphone and a user-friendly software design. This paper describes a Hindi Speech Actuated Computer Interface for Web search (HSACIWS), which accepts spoken queries in the Hindi language and presents the search results on the screen. This system recognizes spoken queries by large-vocabulary continuous speech recognition (LVCSR) and retrieves relevant documents by text retrieval, ...
Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young
Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for individuals and nations alike. Although the prevention and treatment of IA are becoming more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanisms of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not yet been developed. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 as its access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to secure personal medical information and to be applied to genomic research on IA. PMID:27103887
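As an illustration of the first step of the OAuth 2.0 authorization-code grant referred to in the abstract, the following Python sketch builds an authorization request URL with a CSRF-protecting state parameter; the endpoint, client ID, and scope names are hypothetical, not taken from the IABio system:

```python
from urllib.parse import urlencode, urlparse, parse_qs
import secrets

AUTH_ENDPOINT = "https://iabio.example.org/oauth/authorize"  # hypothetical server

def build_authorization_url(client_id, redirect_uri, scope):
    """Step 1 of the OAuth 2.0 authorization-code grant: direct the user's
    browser to the authorization server, carrying a random state value that
    the callback must echo back to prevent CSRF."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,
    }
    return AUTH_ENDPOINT + "?" + urlencode(params), state

url, state = build_authorization_url(
    "iabio-portal", "https://app.example/cb", "genome.read")
query = parse_qs(urlparse(url).query)
print(query["response_type"][0])
```

The authorization server would then authenticate the user and redirect back with a short-lived code, which the client exchanges for an access token at the token endpoint.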
Thiel, William H; Giangrande, Paloma H
The development of DNA and RNA aptamers for research as well as diagnostic and therapeutic applications is a rapidly growing field. In the past decade, the process of identifying aptamers has been revolutionized with the advent of high-throughput sequencing (HTS). However, bioinformatics tools that enable the average molecular biologist to analyze these large datasets and expedite the identification of candidate aptamer sequences have been lagging behind the HTS revolution. The Galaxy Project was developed in order to efficiently analyze genome, exome, and transcriptome HTS data, and we have now applied these tools to aptamer HTS data. The Galaxy Project's public webserver is an open source collection of bioinformatics tools that are powerful, flexible, dynamic, and user friendly. The online nature of the Galaxy webserver and its graphical interface allow users to analyze HTS data without compiling code or installing multiple programs. Herein we describe how tools within the Galaxy webserver can be adapted to pre-process, compile, filter and analyze aptamer HTS data from multiple rounds of selection. PMID:26481156
Garcia Montoro, Carlos; The ATLAS collaboration
The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. These slides present two components of the ATLAS EventIndex: its data collection supervisor and its web interface partner.
Manuel Juárez Pacheco
This paper describes a pilot study carried out to compare two web interfaces used to support a collaborative learning design for science education. The study is part of a wider research project that aims at characterizing computer software for collaborative learning in science education. The results from a questionnaire administered to teachers and researchers reveal the need to design technological tools based mainly on users' needs, and to take into account the impact of these tools on the learning of curricular contents.
Pritychenko,B.; Sonzogni, A.A.
The authors present Sigma, a Web-rich application which provides user-friendly access for processing and plotting the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, and computations between different cross section sets. Interactive energy-angle and neutron cross section uncertainty plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma.
Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu
MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. The MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. The MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar-compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using human phenotype ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources. PMID:26919060
Gelbart, Hadas; Brill, Gilat; Yarden, Anat
Providing learners with opportunities to engage in activities similar to those carried out by scientists was addressed in a web-based research simulation in genetics developed for high school biology students. The research simulation enables learners to apply their genetics knowledge while giving them an opportunity to participate in an authentic…
Ernst, D. R.
We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSPLUS over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication-quality Java plotter, Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized website has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].
The Internet, and in particular the World-Wide Web, has provided tremendous opportunities for enabling access to and transfer of information. Traditionally, Internet services have relied on textual methods for the delivery of information. The World-Wide Web (WWW) in its current (and ever-changing) form is primarily a method of communication which includes both graphical and textual information. The easy-to-use graphical interface, developed as part of the WWW, is based on the Hypertext Mark-up Language (HTML). More advanced interfaces can be developed by incorporating interactive documents, which can be updated depending upon the wishes of the user. The Common Gateway Interface (CGI) can be utilised to transfer information by utilising various programming and scripting languages (e.g. C, Perl). This paper describes the development of a WWW interface for the viewing of anatomical and radiographic information in the form of two-dimensional cross-sectional images and three-dimensional reconstruction images. HTML documents were prepared using a commercial software program (HotDog, Sausage Software Co., Australia). Forms were used to control user-selection parameters such as imaging modality and cross-sectional slice number. All documents were developed and tested using Netscape 2.0. Visual and radiographic images were processed using ANALYZE Version 7.5 (Biomedical Imaging Resource, Mayo Foundation, Rochester, USA). Perl scripting was used to process all requests passed to the WWW server. ANSI C programming was used to implement image processing operations performed in response to user-selected options. The interface which has been developed is easy to use, is independent of browsing software, is accessible by multiple users, and provides an example of how easily medical imaging data can be distributed amongst interested parties. Various imaging datasets, including the Visible Human Project (National Library of Medicine, USA), have been prepared.
Firmenich, Sergio; Rossi, Gustavo; Winckler, Marco Antonio; Palanque, Philippe
Currently, many of the tasks users undertake on the Web involve dealing with multiple Web sites. Moreover, whilst Web navigation was considered a solitary activity in the past, a large proportion of users are nowadays engaged in collaborative activities over the Web. In this paper we argue that these two aspects, collaboration and tasks spanning multiple Web sites, call for a level of coordination that requires Distributed User Interfaces (DUI). In this context, DUIs would play a ma...
Bayan Abu Shawar
In this paper, we describe a way to access an Arabic Web Question Answering (QA) corpus using a chatbot, without the need for sophisticated natural language processing or logical inference. Any natural language (NL) interface to a question answering (QA) system is constrained to reply with the given answers, so there is no need for NL generation to recreate well-formed answers, or for deep analysis or logical inference to map user input questions onto a logical ontology; a simple (but large) set of pattern-template matching rules will suffice. In previous research, this approach worked properly with English and other European languages. In this paper, we examine how the same chatbot behaves with an Arabic Web QA corpus. Initial results show that 93% of answers were correct, but because of many characteristics specific to the Arabic language, changing Arabic questions into other forms may lead to no answers.
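The pattern-template idea can be sketched in a few lines of Python; the rules and answers below are invented English examples, not entries from the Arabic QA corpus:

```python
import re

# Each rule maps a question pattern to stored answers; retrieval is pure
# pattern matching with no parsing or inference, as in the chatbot approach.
RULES = [
    (re.compile(r"what is the capital of (\w+)", re.I), {
        "france": "Paris is the capital of France.",
        "egypt": "Cairo is the capital of Egypt.",
    }),
]

def answer(question):
    """Return the stored answer for the first matching pattern, if any."""
    for pattern, answers in RULES:
        match = pattern.search(question)
        if match:
            return answers.get(match.group(1).lower(), "No stored answer.")
    return "No matching pattern."

print(answer("What is the capital of Egypt?"))
```

A rephrased question that no pattern covers simply returns no answer, which mirrors the sensitivity to question form reported for Arabic.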
Garcia Montoro, Carlos; The ATLAS collaboration; Sanchez, Javier
The EventIndex project consists of the development and deployment of a complete catalogue of events for the ATLAS experiment at the LHC accelerator at CERN. In 2015 the ATLAS experiment produced 12 billion real events in 1 million files, and 5 billion simulated events in 8 million files. The ATLAS EventIndex has been running in production since mid-2015, reliably collecting information worldwide about all produced events and storing it in a central Hadoop infrastructure. A subset of this information is copied to an Oracle relational database. This paper presents two components of the ATLAS EventIndex: its data collection supervisor and its web interface partner.
Boughamoura, Radhouane; Omri, Mohamed Nazih
Deep Web databases contain more than 90% of the pertinent information on the Web. Despite their importance, users do not profit from this treasury. Many deep web services offer competitive services in terms of price, quality of service, and facilities. As the number of services grows rapidly, users have difficulty querying many web services at the same time. In this paper, we envision a system where users can formulate one query using a single query interface, and the system then translates the query to the remaining query interfaces. However, because interfaces are created by designers to be interpreted visually by users, machines cannot interpret a query from a given interface. We propose a new approach that emulates the interpretation capacity of users and extracts queries from deep web query interfaces. Our approach has shown good performance on two standard datasets.
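The envisioned query translation can be sketched as a simple field-name mapping in Python; the attribute names and interface schemas below are invented for illustration, and a real system would first have to extract such mappings from the interfaces themselves:

```python
# One user query, expressed against a mediated schema, is rewritten into
# the field names expected by each deep-web query interface.
USER_QUERY = {"title": "Dune", "max_price": 15}

INTERFACE_SCHEMAS = {
    "shopA": {"title": "book_title", "max_price": "price_to"},
    "shopB": {"title": "q", "max_price": "budget"},
}

def translate(query, schema):
    """Rewrite query keys to a target interface's field names,
    dropping attributes the interface does not support."""
    return {schema[key]: value for key, value in query.items() if key in schema}

for site, schema in INTERFACE_SCHEMAS.items():
    print(site, translate(USER_QUERY, schema))
```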
Maury, Abigail; Critchfield, Anna; Langston, Jim; Adams, Cynthia
The Java-based spacecraft web interface to telemetry and command handling, Jswitch, is a prototype, platform-independent user interface to a spacecraft command and control system that uses Java technology, readily available security software, standard World Wide Web protocols, and commercial off-the-shelf (COTS) products. The Java-based science analysis and trending tool, Jsat, is a major element in Jswitch. Jsat is an inexpensive, Web-based, information-on-demand science data trend analysis ...
Beermann, Thomas; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric
Raup, B.; Armstrong, R.; Fetterer, F.; Gartner-Roer, I.; Haeberli, W.; Hoelzle, M.; Khalsa, S. J. S.; Nussbaumer, S.; Weaver, R.; Zemp, M.
The Global Terrestrial Network for Glaciers (GTN-G) is an umbrella organization with links to the Global Climate Observing System (GCOS), Global Terrestrial Observing System (GTOS), and UNESCO (all organizations under the United Nations), for the curation of several glacier-related databases. It is composed of the World Glacier Monitoring Service (WGMS), the U.S. National Snow and Ice Data Center (NSIDC), and the Global Land Ice Measurements from Space (GLIMS) initiative. The glacier databases include the World Glacier Inventory (WGI), the GLIMS Glacier Database, the Glacier Photograph Collection at NSIDC, and the Fluctuations of Glaciers (FoG) and Mass Balance databases at WGMS. We are working toward increased interoperability between these related databases. For example, the Web interface to the GLIMS Glacier Database has also included queryable layers for the WGI and FoG databases since 2008. To improve this further, we have produced a new GTN-G web portal (http://www.gtn-g.org/), which includes a glacier metadata browsing application. This web application allows the browsing of the metadata behind the main GTN-G databases, as well as querying the metadata in order to get to the source, no matter which database holds the data in question. A new glacier inventory, called the Randolph Glacier Inventory 1.0, has recently been compiled. This compilation, which includes glacier outlines that do not have the attributes or IDs or links to other data like the GLIMS data do, was motivated by the tight deadline schedule of the sea level chapter of the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Now served from the GLIMS website (http://glims.org/), it is designed to serve that narrowly focused research goal in the near term, and in the longer term will be incorporated into the multi-temporal glacier database of GLIMS. For the required merging of large sets of glacier outlines and association of proper IDs that tie together outlines
Mesbah, A.; Van Deursen, A.; Lenselink, S.
A Deep Web query interface is the interface to a Web database, and it is essential for the integration of Deep Web databases. In this paper, the query interface is defined according to the structural characteristics of the web form. For the non-submission query method, some rules for delimiting Deep Web query interfaces are given. A submission query method is proposed that finds the pages containing query interfaces based on the features of link attributes. The web pages are classified with the C4.5 decision tree algorithm, and a Deep Web query interface system is implemented in the Java language.
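The split-selection criterion at the heart of the decision-tree classification step can be sketched in Python (C4.5 proper uses the gain ratio and handles continuous attributes; the toy pages and boolean features below are invented):

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    counts = {label: labels.count(label) for label in set(labels)}
    total = len(labels)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, feature):
    """Information gain of splitting on one boolean page feature,
    the quantity a decision-tree learner maximizes when choosing a split."""
    gain = entropy(labels)
    for value in (True, False):
        subset = [l for r, l in zip(rows, labels) if r[feature] is value]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

# Toy pages described by form features: does the page's form have a
# text input, and does it have a submit button?
pages = [{"text_input": True, "submit": True},
         {"text_input": True, "submit": False},
         {"text_input": False, "submit": False},
         {"text_input": False, "submit": False}]
labels = ["search", "search", "other", "other"]
print(information_gain(pages, labels, "text_input"))  # 1.0: perfectly separates
```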
Hong Wang; Qingsong Xu; Youyang Chen; Jinsong Lan
Determining whether a site has a search interface is a crucial first step for further research on deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experimental results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based lear...
Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.
Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian
The ATLAS Metadata Interface (AMI) can be considered a mature application, having existed for at least 10 years. Over the last year, we have been adapting the application to some recently available technologies. The web interface, which previously manipulated XML documents using XSL transformations, has been migrated to asynchronous JavaScript (AJAX). Web development has been considerably simplified by the development of a framework for AMI based on jQuery and Twitter Bootstrap. Finally, there has been a major upgrade of the Python web service client.
Fadhilah Mat Yamin; RAMAYAH, T.
The behaviour of the searcher when using a search engine, especially during query formulation, is crucial. Search engines capture users' activities in the search log, which is stored at the search engine server. Because of the difficulty of obtaining this search log, this paper proposes and develops an interface framework to interface with the Google search engine. This interface captures users' queries before redirecting them to Google. The analysis of the search log shows that users utilize different types of queries, which are then classified as breadth and depth search queries.
Zain, Jasni Mohamad; Goh, Yingsoon
The aesthetics of a web page refers to how attractive the page is and how well it catches the user's attention to read through the information; the visual appearance is important in getting the attention of users. Moreover, it was found that screens perceived as aesthetically pleasing had better usability. Usability may be a strong basis for applicability to learning, in this study pertaining to Mandarin learning. It was also found that aesthetically pleasing web page layouts would motivate students in Mandarin learning. The Mandarin learning web pages were manipulated according to the desired aesthetic measurements, using a GUI aesthetic measuring method. The Aesthetics-Measurement Application (AMA), equipped with six aesthetic measures, was developed and used. In addition, questionnaires were distributed to the users to gather information on the students' perceptions of the aesthetic and learning aspects. Respondent...
Boughamoura, Radhouane; Hlaoua, Lobna; Omri, Mohamed Nazih
Deep Web databases contain more than 90% of the pertinent information on the Web. Despite their importance, users do not profit from this treasury. Many deep web services offer competitive services in terms of price, quality of service, and facilities. As the number of services grows rapidly, users have difficulty querying many web services at the same time. In this paper, we envisage a system where users can formulate one query using one query interface and then the sys...
Wilkinson Mark D; Kuo Byron; Kawas Edward A; Good Benjamin M
Abstract Background User-scripts are programs stored in Web browsers that can manipulate the content of websites prior to display in the browser. They provide a novel mechanism by which users can conveniently gain increased control over the content and the display of the information presented to them on the Web. As the Web is the primary medium by which scientists retrieve biological information, any improvements in the mechanisms that govern the utility or accessibility of this information may have profound effects. GreaseMonkey is a Mozilla Firefox extension that facilitates the development and deployment of user-scripts for the Firefox web browser. We utilize this to enhance the content and the presentation of the iHOP (information Hyperlinked Over Proteins) website. Results The iHOPerator is a GreaseMonkey user-script that augments the gene-centred pages on iHOP by providing a compact, configurable visualization of the defining information for each gene and by enabling additional data, such as biochemical pathway diagrams, to be collected automatically from third-party resources and displayed in the same browsing context. Conclusion This open-source script provides an extension to the iHOP website, demonstrating how user-scripts can personalize and enhance the Web browsing experience in a relevant biological setting. The novel, user-driven controls over the content and the display of Web resources made possible by user-scripts, such as the iHOPerator, herald the beginning of a transition from a resource-centric to a user-centric Web experience. We believe that this transition is a necessary step in the development of Web technology that will eventually result in profound improvements in the way life scientists interact with information.
SOA offers solutions to the most intractable business problems faced by every enterprise, but getting the SOA service interface right requires the practical design knowledge this book uniquely delivers
Monrozier, F. Jocteur; Pesquet, T.
This paper presents the approach retained in a CNES R&D development to provide a configurable and generic request interface for operations, using new modeling and programming techniques (standards and tools) at the core of the resulting "Request Interface for Operations" (RIO) framework. This prototype is based on object-oriented and Internet technologies and standards such as SOAP with Attachments, UML2 state diagrams and Java. The advantage of the approach presented in this paper is a customizable tool that can be configured and deployed according to the target needs, in order to provide a cross-support "request interface for operations". Once this work has been carried out and validated, it should be submitted for approval to the CCSDS Cross Support Services Area in order to extend the current SLE Service Request work into a recommendation for a "Cross-support Request Interface for Operations". As this approach also provides a methodology for defining a complete and pragmatic service interface specification (with static and dynamic views) focused on the user's point of view, it will be proposed to the CCSDS Systems Architecture Working Group to complete the Reference Architecture methodology. Key words: UML state diagrams, dynamic service interface description, formal notation, code generation, SOAP, CCSDS SLE Service Management, cross-support.
Devadason, Francis; Intaraksa, Neelawat; Patamawongjariya, Pornprapa; Desai, Kavita
Describes an experimental system designed to organize and provide access to Web documents using a faceted pre-coordinate indexing system based on the Deep Structure Indexing System (DSIS) derived from POPSI (Postulate based Permuted Subject Indexing) of Bhattacharyya, and the facet analysis and chain indexing system of Ranganathan. (AEF)
This research extends the capability of a new technology platform, the Remote Data Inspection System (RDIS) server from Furukawa Co., Ltd., by enabling the integration of standard Hypertext Markup Language (HTML) programming and RDIS tag programming to create a user-friendly "point-and-click" web-based control mechanism. The integration allows users to send commands to a mobile robot over the Internet. Web-based control enables a human to extend his action and intelligence to remote locations. Three mechanisms for web-based control are developed: manual remote control, continuous operation events and autonomous navigational control. In manual remote control the user is fully responsible for the robot's actions and the robot does not use any sophisticated algorithms. The continuous operation event is an extension of the basic movement of the manual remote control mechanism. In autonomous navigation control, the user has more flexibility to tell the robot to carry out specific tasks. Using this method, the mobile robot can be controlled via the web from any place connected to the network, without constructing specific communication infrastructure.
熊岚; 胡洪良; 杜清运; 黄茂军; 王明军
The fusion of VISI (visual identity system for the Internet), digital maps and Web GIS is presented. Web GIS interactive interface design with VISI needs to consider more new factors. VISI can provide the design principles, elements and content for Web GIS. The design of the Wuhan Bus Search System was carried out to confirm the validity and practicability of the fusion.
Jensen, Casper Svenning; Møller, Anders; Su, Zhendong
Sharma, Parichit; Mantri, Shrikant S
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design
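WImpiBLAST's core convenience, per the abstract, is generating and submitting Torque job scripts so biologists never touch the command line. A minimal Python sketch of the script-assembly step (the PBS directives follow standard Torque syntax, but the mpiBLAST flags and file names here are illustrative, not WImpiBLAST's actual template):

```python
def make_pbs_script(job_name: str, nodes: int, ppn: int,
                    query_file: str, db: str, out_file: str) -> str:
    """Assemble a Torque/PBS submission script for an mpiBLAST run.

    A sketch under stated assumptions: real deployments would also set
    walltime, queue and module-load lines per cluster policy.
    """
    nprocs = nodes * ppn  # total MPI ranks across the allocation
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -N {job_name}",
        f"#PBS -l nodes={nodes}:ppn={ppn}",
        f"mpirun -np {nprocs} mpiblast -p blastp "
        f"-d {db} -i {query_file} -o {out_file}",
    ])

script = make_pbs_script("annot01", 4, 8, "genes.fasta", "nr", "hits.txt")
```

A web front end like WImpiBLAST would write this string to a file and hand it to `qsub`, then poll the Torque resource manager for job status.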
Nelson Rex T
Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded available solutions. We
Hong Wang; Qingsong Xu; Lifeng Zhou
To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (a searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce and hard to obtain, and labeling requires tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to deal with the search interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
Determining whether a site has a search interface is a crucial priority for further research on deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experimental results show that the proposed scheme is satisfactory in terms of classification accuracy, and that our feature-weighted instance-based learner gives better results than classical algorithms such as C4.5, random forest and KNN.
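A feature-weighted instance-based learner, as named in the abstract, is essentially a nearest-neighbour classifier whose distance metric scales each feature by a learned weight. A hedged Python sketch of the idea (the weighting scheme and features here are plain illustrations, not the paper's actual hybrid features or weight-learning method):

```python
import math
from collections import Counter

def weighted_knn(train, weights, x, k=3):
    """Classify `x` with a feature-weighted k-nearest-neighbour rule.

    `train` is a list of (feature_vector, label) pairs; `weights` scales
    each feature's contribution to the distance, so informative features
    dominate the neighbourhood. Illustrative only: the paper's learner
    derives its weights from the training data.
    """
    def dist(a, b):
        return math.sqrt(sum(w * (p - q) ** 2
                             for w, p, q in zip(weights, a, b)))
    nearest = sorted(train, key=lambda tv: dist(tv[0], x))[:k]
    # majority vote among the k nearest labeled instances
    return Counter(label for _, label in nearest).most_common(1)[0][0]

train = [([1.0, 0.9], "search"), ([0.9, 1.0], "search"), ([0.1, 0.0], "plain")]
label = weighted_knn(train, weights=[1.0, 2.0], x=[0.8, 0.9], k=3)
```

With uniform weights this reduces to ordinary KNN, which is why the comparison against KNN in the abstract isolates the contribution of the weighting.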
Bethel, Wes; Siegerist, Cristina; Shalf, John; Shetty, Praveenkumar; Jankun-Kelly, T.J.; Kreylos, Oliver; Ma, Kwan-Liu
The LBNL/NERSC Visportal effort explores ways to deliver advanced Remote/Distributed Visualization (RDV) capabilities through a Grid-enabled web-portal interface. The effort focuses on latency tolerant distributed visualization algorithms, GUI designs that are more appropriate for the capabilities of web interfaces, and refactoring parallel-distributed applications to work in a N-tiered component deployment strategy. Most importantly, our aim is to leverage commercially-supported technology as much as possible in order to create a deployable, supportable, and hence viable platform for delivering grid-based visualization services to collaboratory users.
Manaud, N.; Gonzalez, J.
We present a first prototype of a Web Map Interface that will serve as a proof of concept and design for ESA's future fully web-based Planetary Science Archive (PSA) User Interface. The PSA is ESA's planetary science archiving authority and central repository for all scientific and engineering data returned by ESA's Solar System missions. All data are compliant with NASA's Planetary Data System (PDS) Standards and are accessible through several interfaces: in addition to serving all public data via FTP and the Planetary Data Access Protocol (PDAP), a Java-based User Interface provides advanced search, preview, download, notification and delivery-basket functionality. It allows the user to query and visualise instrument observation footprints using a map-based interface (currently only available for Mars Express HRSC and OMEGA instruments). During the last decade, the planetary mapping science community has increasingly been adopting Geographic Information System (GIS) tools and standards, originally developed for and used in Earth science. There is an ongoing effort to produce and share cartographic products through Open Geospatial Consortium (OGC) Web Services, or as standalone data sets, so that they can be readily used in existing GIS applications [3,4,5]. Previous studies conducted at ESAC [6,7] have helped identify the needs of Planetary GIS users and define key areas of improvement for the future Web PSA User Interface. Its web map interface shall provide access to the full geospatial content of the PSA, including (1) observation geometry footprints of all remote sensing instruments, and (2) all georeferenced cartographic products, such as HRSC map-projected data or OMEGA global maps from Mars Express. It shall aim to provide a rich user experience for search and visualisation of this content using modern and interactive web mapping technology. A comprehensive set of built-in context maps from external sources, such as MOLA topography, TES
To achieve information integration between business subsystems such as LIS and PACS and the HIS, and to eliminate information islands, Web Services are a good choice with low development and operation costs. Web Services are also the integration technology commonly used in industry. In addition, we collect and summarize the minimal set of HIS interfaces that need to be exposed, and expand the interface on demand to fit newly connected systems. This interface not only ensures the independence of the HIS but also realizes its openness. Thus, the Military No.1 HIS becomes a system that can interoperate with third-party systems while remaining secure and controllable.
Newman, R. L.; Clemesha, A.; Lindquist, K. G.; Reyes, J.; Steidl, J. H.; Vernon, F. L.
Burger, Melanie C
Zain, Jasni Mohamad; Goh, Yingsoon
This article describes the accurateness of our application namely Self-Developed Aesthetics Measurement Application (SDA) in measuring the aesthetics aspect by comparing the results of our application and users' perceptions in measuring the aesthetics of the web page interfaces. For this research, the positions of objects, images element and texts element are defined as objects in a web page interface. Mandarin learning web pages are used in this research. These learning web pages comprised of main pages, learning pages and exercise pages, on the first author's E-portfolio web site. The objects of the web pages were manipulated in order to produce the desired aesthetic values. The six aesthetics related elements used are balance, equilibrium, symmetry, sequence, rhythm, as well as order and complexity. Results from the research showed that the ranking of the aesthetics values of the web page interfaces measured of the users were congruent with the expected perceptions of our designed Mandarin learning web pag...
Knipp, D.; Kilcommons, L. M.; Damas, M. C.
We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and individual-species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and to choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle? 2) How do ionospheric layers change with the solar cycle? 3) How does the composition of the SAIR vary between day and night at a fixed altitude?
Groenewegen, D.M.; Visser, E.
This paper is a pre-print of: Danny M. Groenewegen, Eelco Visser. Integration of Data Validation and User Interface Concerns in a DSL for Web Applications. In Mark G. J. van den Brand, Jeff Gray, editors, Software Language Engineering, Second International Conference, SLE 2009, Denver, USA, October,
Discusses multimedia Web site design that may include images, animations, audio, and video. Highlights include interfaces that stress user-centered design; using only relevant media; placing high-demand content on secondary pages and keeping the home page simpler; providing information about the media; considering users with disabilities; and user…
Joan Segura Mora
BACKGROUND: It is well established that only a portion of the residues that mediate protein-protein interactions (PPIs), the so-called hot spot, contributes the most to the total binding energy; its identification is therefore an important and relevant question with clear applications in drug discovery and protein design. The experimental identification of hot spots is, however, a lengthy and costly process, so there is interest in computational tools that can complement and guide experimental efforts. PRINCIPAL FINDINGS: Here we present the Presaging Critical Residues in Protein interfaces Web server (http://www.bioinsilico.org/PCRPi), a web server that implements a recently described and highly accurate computational tool designed to predict critical residues in protein interfaces: PCRPi. PCRPi depends on the integration of structural, energetic and evolutionary-based measures using Bayesian Networks (BNs). CONCLUSIONS: PCRPi-W has been designed to provide easy and convenient access to the broad scientific community. Predictions are readily available for download or presented in a web page that includes, among other information, links to relevant files, sequence information and a Jmol applet to visualize and analyze the predictions in the context of the protein structure.
Hendry Setyawans Sutedjo
The information in a website is expected to be conveyed to and received by information seekers easily. In education, information on the web should likewise reach its users; online communication media such as websites can help students absorb the knowledge delivered through online channels. How easily that information is taken in is indicated by how usable the website is. Usability analysis is used to determine how easy a website is to use, and many methods exist to identify usability problems, particularly on the web interface side. Heuristic evaluation is one such technique, and it is used in this study to assess how easily the Institut Teknologi Sepuluh Nopember (ITS) website conveys its information. This study also uses Quality Function Deployment (QFD) to identify users' wishes regarding the appearance of the ITS web.
Burger, Melanie C
The purpose of this thesis is to cover the area of user interface animation broadly and from different perspectives, in order to fully understand the complexity that interactive producers, such as designers and developers, have to deal with in order to create good, well-thought-out and well-designed animated experiences. The thesis consists of an introduction, four research chapters and a final conclusion chapter. The first research part is an overview of user interface animation usage. The sec...
Vali, Faisal; Hong, Robert
This paper discusses standards and guidelines for user interface features in web-based applications for different personality types. An overview of human-computer interaction and human personality types is given. LEONARD (Let's Explore our personality type based on Openness (O), Neutral (N), Analytical (A), Relational (R) and Decisive (D)) is the model used to determine the different personality types in this study. The purpose is to define user personality profiles and to establish guidelines for the graphical user interface. A personality inventory and a user interface questionnaire were administered to university students. Interview sessions were also conducted, and parts of the interviews with the university students were used to validate the results obtained from the questionnaires. The analysis of the students' personality types identified five main groups. The results suggest that users do have definable expectations concerning the features of web applications. This profile served as the basis for guidelines on web features for graphical user interface design for the particular user groups.
Lassnig, Mario; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric; Vigne, Ralph; Garonne, Vincent
The monitoring and controlling interfaces of the previous data management system, DQ2, followed the evolutionary requirements and needs of the ATLAS collaboration. The new system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2 and the increased volume of managed information. This interface encompasses both a monitoring and a controlling component, and allows easy integration of user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human and programmatic access, making it easy to access selective parts of the information both in constrained frontends like ...
This paper presents a generic web-based database interface implemented in Prolog. We discuss the advantages of the implementation platform and demonstrate the system's applicability in providing access to integrated biochemical data. Our system exploits two libraries of SWI-Prolog to create a schema-transparent interface within a relational setting. As is expected in declarative programming, the interface was written with minimal programming effort due to the high level of the language and its suitability to the task. We highlight two of Prolog's features that are well suited to the task at hand: term representation of structured documents, and the relational nature of Prolog, which facilitates transparent integration of relational databases. Although we developed the system for accessing in-house biochemical and genomic data, the interface is generic and provides a number of extensible features. We describe some of these features with references to our research databases. Finally we outline an in-house library that...
Lange Ramos, Bruno; The ATLAS collaboration; Pommes, Kathy; Pavani Neto, Varlen; Vieira Arosa, Breno
In order to manage a heterogeneous and worldwide collaboration, the ATLAS experiment develops web systems that range from supporting the process of publishing scientific papers to monitoring equipment radiation levels. These systems are largely supported by Glance, a technology put forward in 2004 to create an abstraction layer on top of varied databases that automatically recognizes their modeling and generates web search interfaces. Fence (Front ENd ENgine for glaNCE) assembles classes to build applications by making extensive use of configuration files. It produces templates of the core JSON files on top of which it is possible to create Glance-compliant search interfaces. Once the database, its schemas and tables are defined using Glance, its records can be incorporated into the templates by escaping the returned values with a reference to the column identifier wrapped in double enclosing brackets. The developer may also expand on available configuration files to create HTML forms and securely ...
Vernon, Frank; Newman, Robert; Lindquist, Kent
Cranford, Jonathan W.
The objective of this Langley Aerospace Research Summer Scholars (LARSS) research project was to write a user interface that utilizes current World Wide Web (WWW) technologies for an existing computer program written in C, entitled LaRCRisk. The project entailed researching data presentation and script execution on the WWW and then writing input/output procedures for the database management portion of LaRCRisk.
Langston, James H.; Murray, Henry L.; Hunt, Gary R.
A prototype has been developed which makes use of commercially available products in conjunction with the Java programming language to provide a secure user interface for command and control over the open Internet. This paper reports the successful demonstration of: (1) security over the Internet, including encryption and certification; (2) integration of Java applets with a COTS command and control product; (3) remote spacecraft commanding using the Internet. The Java-based Spacecraft Web Interface to Telemetry and Command Handling (Jswitch) ground system prototype provides these capabilities. This activity demonstrates the use and integration of current technologies to enable a spacecraft engineer or flight operator to monitor and control a spacecraft from a user interface communicating over the open Internet using standard World Wide Web (WWW) protocols and commercial off-the-shelf (COTS) products. The core command and control functions are provided by the COTS Epoch 2000 product. The standard WWW tools and browsers are used in conjunction with the Java programming technology. Security is provided with current encryption and certification technology. This system prototype is a step in the direction of giving scientists and flight operators Web-based access to instrument, payload, and spacecraft data.
Lassnig, M.; Beermann, T.; Vigne, R.; Barisits, M.; Garonne, V.; Serfon, C.
The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively in parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selected parts of the information both in constrained front-ends such as web browsers and in remote services. This contribution will detail the reasons for these principles and the design choices taken. Additionally, the implementation, the interactions with external systems, and an evaluation of the system in production, both from a technological and a user perspective, conclude this contribution.
Tungkasthan, Anucha; Jongsawat, Nipat; Poompuang, Pittaya; Intarasema, Sarayut; Premchaiswadi, Wichian
This paper presents a practical framework for automating the building of diagnostic BN models from data sources obtained from the WWW and demonstrates the use of a SMILE web-based interface to represent them. The framework consists of the following components: an RSS agent, a transformation/conversion tool, a core reasoning engine, and the SMILE web-based interface. The RSS agent automatically collects and reads the provided RSS feeds according to the agent's predefined URLs. A transformation/conve...
Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T
Here, we present the UCL Bioinformatics Group's new PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/. PMID:23748958
Thomas A Schlacher
Food webs near the interface of adjacent ecosystems are potentially subsidised by the flux of organic matter across system boundaries. Such subsidies, including carrion of marine provenance, are predicted to be instrumental on open-coast sandy shores where in situ productivity is low and boundaries are long and highly permeable to imports from the sea. We tested the effect of carrion supply on the structure of consumer dynamics in a beach-dune system using broad-scale, repeated additions of carcasses at the strandline of an exposed beach in eastern Australia. Carrion inputs increased the abundance of large invertebrate scavengers (ghost crabs, Ocypode spp.), a numerical response most strongly expressed by the largest size-class in the population, and likely due to aggregative behaviour in the short term. Consumption of carrion at the beach-dune interface was rapid and efficient, driven overwhelmingly by facultative avian scavengers. This guild of vertebrate scavengers comprises several species of birds of prey (sea eagles, kites), crows and gulls, which reacted strongly to concentrations of fish carrion, creating hotspots of intense scavenging activity along the shoreline. Detection of carrion effects at several trophic levels suggests that feeding links arising from carcasses shape the architecture and dynamics of food webs at the land-ocean interface.
Licurse, Mindy Y.; Cook, Tessa S.
Radiology and imaging informatics education have rapidly evolved over the past few decades. With the increasing recognition that future growth and maintenance of radiology practices will rely heavily on radiologists with fundamentally sound informatics skills, the onus falls on radiology residency programs to properly implement and execute an informatics curriculum. In addition, the American Board of Radiology may choose to include even more informatics on the new board examinations. However, the resources available for didactic teaching and guidance, especially at the introductory level, are widespread and varied. Given the breadth of informatics, a centralized web-based interface designed to serve as an adjunct to standardized informatics curricula, as well as a stand-alone resource for other interested audiences, is desirable. We present the development of a curriculum using PearlTrees, an existing web interface based on the concept of a visual interest graph that allows users to collect, organize, and share any URL they find online, as well as to upload photos and other documents. For our purpose, the group of "pearls" comprises informatics concepts linked by appropriate hierarchical relationships. The curriculum was developed using a combination of our institution's current informatics fellowship curriculum, the Practical Imaging Informatics textbook, and other useful online resources. Once the initial interface and curriculum have been developed and publicized, we anticipate that involvement by the informatics community will help promote collaborations and foster mentorships at all career levels.
Karp Peter D
Lange Ramos, Bruno; The ATLAS collaboration; Pommes, Kathy; Pavani Neto, Varlen; Vieira Arosa, Breno; Abreu Da Silva, Igor
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, whilst ranging from supporting the process of publishing scientific papers to monitoring radiation levels in the equipment in the cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. Fence assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to their description thus ensuring that vi...
Jakobovits, R. M.; Brinkley, J. F.
This paper describes the Web-Interfacing Repository Manager (WIRM), a perl toolkit for managing and deploying multimedia data, which is built entirely from free, platform-independent components. The WIRM consists of an object-relational API layered over a relational database, with built-in support for file management and CGI programming. The basic underlying data structure for all WIRM data is the repository object, a perl associative array whose values are bound to a row of a table in the re...
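A repository object of this kind, an associative array whose values are bound to one row of a database table, can be sketched in Python with the standard-library sqlite3 module. This is an illustrative miniature under invented table and column names, not WIRM's actual perl API.

```python
import sqlite3

class RepositoryObject(dict):
    """A dict whose keys/values mirror one row of a table, loosely modeled
    on WIRM's repository object (a perl associative array bound to a row)."""

    def __init__(self, conn, table, row_id):
        self.conn, self.table, self.row_id = conn, table, row_id
        cur = conn.execute(f"SELECT * FROM {table} WHERE id = ?", (row_id,))
        columns = [d[0] for d in cur.description]
        super().__init__(zip(columns, cur.fetchone()))

    def save(self):
        # Write the (possibly modified) values back to the bound row.
        # Table/column names are trusted here; a real toolkit must not
        # interpolate untrusted identifiers into SQL.
        assignments = ", ".join(f"{k} = ?" for k in self if k != "id")
        values = [v for k, v in self.items() if k != "id"]
        self.conn.execute(f"UPDATE {self.table} SET {assignments} WHERE id = ?",
                          (*values, self.row_id))
        self.conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE media (id INTEGER PRIMARY KEY, caption TEXT)")
conn.execute("INSERT INTO media VALUES (1, 'axial slice')")
obj = RepositoryObject(conn, "media", 1)
obj["caption"] = "sagittal slice"   # mutate the associative array...
obj.save()                          # ...and push it back to the database
```

Reading a row yields an ordinary dict; mutating it and calling save() updates the bound row.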
This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, which are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle. Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. Level 1 can be considered the engine of the system. The primary task of Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together or over a defined time series, and expose the combined data as a whole to the Level 2 web
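The Level 0 idea described above, read-only HTTP requests shaped like SQL queries, can be sketched as follows. The parameter names (`select`, `where`), the condition grammar, and the sample operational records are assumptions for illustration, not the actual ADAM protocol.

```python
import re
from urllib.parse import parse_qs

# Hypothetical operational records exposed by one data provider.
ROWS = [
    {"host": "pc-tdq-01", "temp": 41, "net_util": 0.62},
    {"host": "pc-tdq-02", "temp": 55, "net_util": 0.91},
    {"host": "pc-tdq-03", "temp": 38, "net_util": 0.12},
]

# Comparison operators allowed in a 'where' condition.
OPS = {">": lambda a, b: a > b,
       "<": lambda a, b: a < b,
       "=": lambda a, b: a == b}

def query(query_string, rows):
    """Answer a read-only, SQL-like query given as an HTTP query string,
    e.g. 'select=host,temp&where=temp>50'."""
    params = parse_qs(query_string)
    out = list(rows)
    for cond in params.get("where", []):
        field, op, value = re.fullmatch(r"(\w+)([><=])(.+)", cond).groups()
        out = [r for r in out if OPS[op](r[field], float(value))]
    if "select" in params:
        cols = params["select"][0].split(",")
        out = [{c: r[c] for c in cols} for r in out]
    return out
```

A request such as `?select=host&where=temp>50` then returns only the overheating node, without exposing any write access to the provider.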
Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...
Trabant, C. M.; Ahern, T. K.; Stults, M.
At the IRIS Data Management Center (DMC) we have been developing web service data access interfaces for our, primarily seismological, repositories for five years. These interfaces have become the primary access mechanisms for all data extraction from the DMC. For the last two years the DMC has been a principal participant in the GeoWS project, which aims to develop common web service interfaces for data access across hydrology, geodesy, seismology, marine geophysics, atmospheric and other geoscience disciplines. By extending our approach we have converged, along with other project members, on a web service interface and presentation design appropriate for geoscience and other data. The key principles of the approach include using a simple subset of RESTful concepts, common calling conventions whenever possible, a common tabular text data set convention, human-readable documentation and tools to help scientific end users learn how to use the interfaces. The common tabular text format, called GeoCSV, has been incorporated into the DMC's seismic station and event (earthquake) services. In addition to modifying our existing services, we have developed prototype GeoCSV web services for data sets managed by external (unfunded) collaborators. These prototype services include interfaces for data sets at NGDC/NCEI (water level tides and meteorological satellite measurements), INTERMAGNET repository and UTEP gravity and magnetic measurements. In progress are interfaces for WOVOdat (volcano observatory measurements), NEON (ecological observatory measurements) and more. An important goal of our work is to build interfaces usable by non-technologist end users. We find direct usability by researchers to be a major factor in cross-discipline data use, which itself is a key to solving complex research questions. In addition to data discovery and collection by end users, these interfaces provide a foundation upon which federated data access and brokering systems are already being
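The GeoCSV convention of comment-style keyword headers above plain tabular text can be sketched with a small parser. The metadata keys, delimiter and station record below are illustrative examples, not the output of an actual DMC service.

```python
import csv
import io

def parse_geocsv(text):
    """Parse a GeoCSV-style document: '#'-prefixed keyword lines carry
    metadata (including the field delimiter); the rest is tabular data."""
    meta, rows, header = {}, [], None
    delimiter = ","
    for line in text.splitlines():
        if line.startswith("#"):
            key, _, value = line[1:].partition(":")
            meta[key.strip()] = value.strip()
            if key.strip() == "delimiter":
                delimiter = value.strip()
        elif line.strip():
            fields = next(csv.reader(io.StringIO(line), delimiter=delimiter))
            if header is None:
                header = fields       # first non-comment line names the columns
            else:
                rows.append(dict(zip(header, fields)))
    return meta, rows

sample = """# dataset: GeoCSV 2.0
# delimiter: |
Station|Latitude|Longitude
ANMO|34.9459|-106.4572
"""
meta, rows = parse_geocsv(sample)
```

Because the format is self-describing plain text, the same document remains readable to a scientist in a text editor and to a script, which is the usability goal the abstract emphasizes.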
Chen, J.; Pullman, S.; Hubbard, S. S.; Peterson, J.
The induced-polarization (IP) method has been used increasingly in environmental investigations because IP measurements are very sensitive to the low frequency capacitive properties of rocks and soils. The Cole-Cole model has been very useful for interpreting spectral IP data in terms of parameters, such as chargeability and time constant, which are used to estimate various subsurface properties. However, conventional methods for estimating Cole-Cole parameters use an iterative Gauss-Newton-based deterministic method, whose optimal solution has been shown to depend on the choice of initial values and whose uncertainty estimates are often inaccurate or insufficient. Chen, Kemna, and Hubbard (2008) developed a Bayesian model for inverting spectral IP data for Cole-Cole parameters based on Markov chain Monte Carlo (MCMC) sampling methods. They have demonstrated that the MCMC-based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which better estimates and tighter uncertainty bounds of the parameters can be obtained. Additionally, the results obtained with the MCMC method are almost independent of the choice of initial values. We have developed a web interface to the stochastic inversion software, which makes the code easily accessible. The web interface allows users to upload their own spectral IP data, specify prior ranges of unknown parameters, and remotely run the code in real time. After running the code (a few minutes), the interface provides a data file with all the statistics of each unknown parameter, including the median, mean, standard deviation, and 95% predictive intervals, and provides a data misfit file. The interface also allows users to visualize the histogram and posterior probability density of each unknown parameter as well as data misfits. For advanced users, the interface provides an option of producing time-series plots of all
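The per-parameter statistics listed above (median, mean, standard deviation, 95% interval) are straightforward to compute from MCMC samples. The sketch below is a generic illustration of posterior summaries, not the project's inversion code, and uses a simple order-statistic interval.

```python
import statistics

def posterior_summary(samples, interval=0.95):
    """Summarize MCMC samples of one parameter (e.g. Cole-Cole chargeability):
    median, mean, standard deviation, and a central predictive interval."""
    s = sorted(samples)
    n = len(s)
    # Order-statistic bounds of the central `interval` mass.
    lo = s[int((1 - interval) / 2 * (n - 1))]
    hi = s[int((1 + interval) / 2 * (n - 1))]
    return {
        "median": statistics.median(s),
        "mean": statistics.fmean(s),
        "std": statistics.stdev(s),
        "interval": (lo, hi),
    }
```

For example, 100 evenly spaced samples 1..100 summarize to a median and mean of 50.5 with a central 95% interval of (3, 97).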
Satellite data, radiative power of hot spots as measured with remote sensing, historical records, on-site geological surveys, digital elevation model data, and simulation results together provide a massive data source for investigating the behavior of active volcanoes like Mount Etna (Sicily, Italy) over recent times. The integration of these heterogeneous data into a coherent visualization framework is important for their practical exploitation. It is crucial to fill in the gap between experimental and numerical data, and the direct human perception of their meaning. Indeed, the people in charge of safety planning for an area need to be able to quickly assess hazards and other relevant issues even during critical situations. With this in mind, we developed LAV@HAZARD, a web-based geographic information system that provides an interface for the collection of all of the products coming from the LAVA project research activities. LAV@HAZARD is based on the Google Maps application programming interface, a choice motivated by its ease of use and the user-friendly interactive environment it provides. In particular, the web structure consists of four modules: satellite applications (time-space evolution of hot spots, radiant flux and effusion rate), hazard map visualization, a database of ca. 30,000 lava-flow simulations, and real-time scenario forecasting by MAGFLOW on the Compute Unified Device Architecture.
Oakley, N.; Daudert, B.
Accessing scientific data through an online portal can be a frustrating task. The concept of making web interfaces easy to use, known as "usability", has been thoroughly researched in the field of e-commerce but has not been explicitly addressed in the atmospheric sciences. As more observation stations are installed, satellite missions flown, models run, and field campaigns performed, large amounts of data are produced. Portals on the Internet have become the favored mechanisms to share this information and are ever increasing in number. Portals are often created without being tested for usability with the target audience, though the expenses of testing are low and the returns high. To remain competitive and relevant in the provision of atmospheric data, it is imperative that developers understand the design elements of a successful portal to make their product stand out among others. This presentation informs the audience of the benefits and basic principles of usability for web pages presenting atmospheric data. We will also share some of the best practices and recommendations we have formulated from the results of usability testing performed on two data provision web sites hosted by the Western Regional Climate Center.
Scholl, I.; Girard, Y.; Bykowski, A.
This paper presents the architecture of a Java web-based graphical interface dedicated to accessing the SOHO data archive. This application allows local and remote users to search the SOHO data catalog and retrieve SOHO data files from the archive. It has been developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France), which is one of the European archives for the SOHO data. This development is part of a joint effort between ESA, NASA and IAS to implement long-term archive systems for the SOHO data. The software architecture is built as a client-server application using the Java language and SQL above a set of components such as an HTTP server, a JDBC gateway, an RDBMS server, a data server and a Web browser. Since HTML pages and CGI scripts are not powerful enough to allow user interaction during a multi-instrument catalog search, this requirement drove the choice of Java as the main language. We also discuss performance issues, security problems and portability across different Web browsers and operating systems.
Purpose: The aim of this paper is to present a prototype of a web-based programming interface for the Mitsubishi Movemaster RV-M1 robot. Design/methodology/approach: The web techniques have been selected due to the modularity of this solution and the possibility of reusing existing code fragments when elaborating new applications. Previous papers [11-14] have presented the off-line, remote programming system for the RV-M1 robot; the general idea of this system is the base for developing a web-based programming interface. Findings: A prototype of the system has been developed. Research limitations/implications: The presented system is in an early development stage and some functions are still missing. In the future a visualisation module will be elaborated and a trajectory translator intended to co-operate with CAD software will be included. Practical implications: The previous version of the system was intended for educational purposes. It is planned that the new version will be more flexible and adaptable to other devices, such as small PLCs or other robots. Originality/value: Remote supervision of machines during a manufacturing process is a topical issue. Most automation systems manufacturers produce supervising software for their PLCs and robots. The Movemaster RV-M1 is an old model and lacks high-tech software; on the other hand, programming and developing applications for this robot is very easy. The aim of the presented project is to develop a flexible, remote-programming environment.
Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, whilst ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to its description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to assure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server-side but, following progressive enhancement guidelines, verification might also be performed on the client-side by enabling specific markup data attributes which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available for developers online. FENCE, therefore, speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.
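The double-enclosing-bracket referencing described in these abstracts can be pictured as a substitution pass over a JSON template. The placeholder syntax below follows the abstracts' description, but the column identifiers, record values and escaping strategy are assumptions for illustration, not FENCE's actual code.

```python
import json
import re

# Hypothetical records keyed by a Glance-style column identifier.
RECORDS = {"paper.title": "Search interfaces with FENCE",
           "paper.ref_code": "ATL-COM-2016-001"}

def render(template_text, records):
    """Replace {{identifier}} placeholders with escaped record values."""
    def substitute(match):
        value = records.get(match.group(1).strip(), "")
        # json.dumps escapes quotes/backslashes; strip the outer quotes
        # so the value can sit inside an existing JSON string literal.
        return json.dumps(value)[1:-1]
    return re.sub(r"\{\{(.*?)\}\}", substitute, template_text)

template = '{"title": "{{paper.title}}", "code": "{{paper.ref_code}}"}'
rendered = render(template, RECORDS)
```

Escaping at substitution time keeps the deployed configuration valid JSON regardless of what the database row contains.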
Purpose: The aim of this paper is to present a prototype of a web-based programming interface for the Mitsubishi Movemaster RV-M1 robot. Design/methodology/approach: In previous papers [11-14] the off-line, remote programming system for this robot has been presented; it has been used as a base for developing a new branch: a web-based programming interface. The web techniques have been selected due to the possibility of reusing existing code fragments when elaborating new applications, and the modularity of this solution. Findings: As a result, a prototype of the system has been developed. Research limitations/implications: Because the presented system is in an early development stage, some useful functions are still missing. Future work will include elaboration of the robot's visualisation module and implementation of a trajectory translator intended to co-operate with CAD software. Practical implications: The elaborated system was originally intended for educational purposes, but it may be adapted for other devices, such as small PLCs or other robots. Originality/value: Remote supervision of machines during a manufacturing process is a topical issue. Most automation systems manufacturers produce software for their PLCs and robots. The Mitsubishi Movemaster RV-M1 is an old model and very few programs are dedicated to this machine; on the other hand, programming and developing applications for this robot is very easy. The aim of the presented project is to develop a flexible, remote-programming environment.
Weber, Tilmann; Kim, Hyun Uk
In this context, this review gives a summary of tools and databases that are currently available to mine, identify and characterize natural product biosynthesis pathways and their producers based on 'omics data. A web portal called the Secondary Metabolite Bioinformatics Portal (SMBP, at http://www.secondarymetabolites.org) is introduced to provide a one-stop catalog of, and links to, these bioinformatics resources. In addition, an outlook is presented on how the existing tools, and those yet to be developed, will influence synthetic biology approaches in the natural products field.
Web applications are among the most important applications of the Internet; their appearance has made it extremely easy to display and interact with information over long distances, and in real time. Website systems are now also an important carrier for many electronic commerce systems. Because websites play such an important role, their interface design is particularly important: good interface design is not only aesthetically pleasing but also guides users to complete functional tasks quickly. This paper presents a preliminary study of website interface design.
Matthew L. Clark; T. Mitchell Aide
Web-based applications that integrate geospatial information, or the geoweb, offer exciting opportunities for remote sensing science. One such application is a Web‑based system for automating the collection of reference data for producing and verifying the accuracy of land-use/land-cover (LULC) maps derived from satellite imagery. Here we describe the capabilities and technical components of the Virtual Interpretation of Earth Web-Interface Tool (VIEW-IT), a collaborative browser-based tool f...
This thesis reports on user-interface design guidelines for usability and accessibility, their connection to human-computer interaction, and their implementation in web design. The goal is to study the theoretical background of the design rules and apply them in designing a real-world website. The analysis covers Jakobson's communication theory as applied to web design and its implications for the design guidelines of visibility, affordance, feedback, simplicity, structure, consisten...
Weidlich, Iwona E; Pevzner, Yuri; Miller, Benjamin T; Filippov, Igor V; Woodcock, H Lee; Brooks, Bernard R
The recent availability of large publicly accessible databases of chemical compounds and their biological activities (PubChem, ChEMBL) has inspired us to develop a web-based tool for structure-activity relationship and quantitative structure-activity relationship modeling to add to the services provided by CHARMMing (www.charmming.org). This new module implements some of the most recent advances in modern machine learning algorithms: Random Forest, Support Vector Machines, Stochastic Gradient Descent, Gradient Tree Boosting, and so forth. A user can import training data from PubChem BioAssay data collections directly from our interface or upload his or her own SD files, which contain structures and activity information, to create new models (either categorical or numerical). A user can then track the model generation process and run models on new data to predict activity. PMID:25362883
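The train-then-predict workflow behind such models can be illustrated with a deliberately tiny, self-contained ensemble: bootstrap-aggregated decision stumps, a miniature cousin of Random Forest. The descriptor values and activity labels below are invented, and this sketch is in no way CHARMMing's implementation.

```python
import random
from collections import Counter

def train_stump(X, y):
    # Exhaustively pick the (feature, threshold, sign) split that
    # minimizes misclassifications on this (bootstrap) sample.
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            for sign in (1, -1):
                preds = [1 if sign * (row[f] - t) > 0 else 0 for row in X]
                err = sum(p != label for p, label in zip(preds, y))
                if best is None or err < best[0]:
                    best = (err, f, t, sign)
    return best[1:]

def predict_stump(stump, row):
    f, t, sign = stump
    return 1 if sign * (row[f] - t) > 0 else 0

def train_forest(X, y, n_trees=25, seed=0):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in X]  # bootstrap resample
        forest.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def predict_forest(forest, row):
    # Majority vote over the ensemble, as in Random Forest.
    return Counter(predict_stump(s, row) for s in forest).most_common(1)[0][0]

X = [[0.10], [0.20], [0.80], [0.90]]  # one invented descriptor per compound
y = [0, 0, 1, 1]                      # invented activity labels (0/1)
forest = train_forest(X, y)
```

Calling predict_forest on a new descriptor vector then classifies it by the vote of the bootstrap-trained stumps; a production tool would of course use richer descriptors and full tree ensembles.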
Palazov, A.; Stefanov, A.; Marinova, V.; Slabakova, V.
Fundamental elements of the success of a marine data and information management system, and of effective support for marine and maritime economic activities, are the speed and ease with which users can identify, locate, access, exchange and use oceanographic and marine data and information. Many activities and bodies have been identified as marine data and information users, such as: science, government and local authorities, port authorities, shipping, marine industry, fishery and aquaculture, the tourist industry, environmental protection, coast protection, oil-spill combat, search and rescue, national security, civil protection, and the general public. On the other hand, diverse sources of real-time and historical marine data and information exist, and generally they are fragmented, distributed in different places, and sometimes unknown to users. The marine web portal concept is to build a common web-based interface which will provide users fast and easy access to all available marine data and information sources, both historical and real-time, such as marine databases, observing systems, forecasting systems and atlases. The service is regionally oriented to meet user needs. The main advantage of the portal is that it provides a general look "at a glance" at all available marine data and information, and directs users to easily discover data and information of interest. It is planned to provide a personalization ability, which will give users an instrument to tailor visualization according to their personal needs.
Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don
A SensorWeb is a set of ground, airborne and space-based sensors interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb Toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading and execution of new algorithms for processing sensor data. The web service along with the user interface follows the Open Geospatial Consortium (OGC) standard called Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.
Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step-by-step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
Li, Liang-Yi; Chen, Gwo-Dong
Information Gathering is a knowledge construction process. Web learners make a plan for their Information Gathering task based on their prior knowledge. The plan is evolved with new information encountered and their mental model is constructed through continuously assimilating and accommodating new information gathered from different Web pages. In…
Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc
EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…
Open standards, data, and software are also key parts of flow cytometry bioinformatics. Data standards include the widely adopted Flow Cytometry Standard (FCS), defining how data from cytometers should be stored, but also several new standards under development by the International Society for Advancement of Cytometry (ISAC) to aid in storing more detailed information about experimental design and analytical steps. Open data is slowly growing with the opening of the CytoBank database in 2010 and FlowRepository in 2012, both of which allow users to freely distribute their data, and the latter of which has been recommended as the preferred repository for MIFlowCyt-compliant data by ISAC. Open software is most widely available in the form of a suite of Bioconductor packages, but is also available for web execution on the GenePattern platform.
Oostra, D.; Chambers, L. H.; Lewis, P. M.; Moore, S. W.
The Atmospheric Science Data Center (ASDC) at the NASA Langley Research Center in Virginia houses almost three petabytes of data, a collection that increases every day. To put it into perspective, it is estimated that three petabytes of data storage could store a digitized copy of all printed material in U.S. research libraries. There are more than ten other NASA data centers like the ASDC. Scientists and the public use these data for research, science education, and to understand our environment. Most importantly, these data provide the potential for all of us to make new discoveries. NASA is about making discoveries. Galileo was quoted as saying, "All discoveries are easy to understand once they are discovered. The point is to discover them." To that end, NASA stores vast amounts of publicly available data. This paper examines an approach to create web applications that serve NASA data in ways that specifically address the mobile web application technologies that are quickly emerging. Mobile data is not a new concept. What is new is that user-driven tools have recently become available that allow users to create their own mobile applications. Through the use of these cloud-based tools users can produce complete native mobile applications. Thus, mobile apps can now be created by everyone, regardless of their programming experience or expertise. This work will explore standards and methods for creating dynamic and malleable application programming interfaces (APIs) that allow users to access and use NASA science data for their own needs. The focus will be on experiences that broaden and increase the scope and usage of NASA science data sets.
Colini, L.; Doumaz, F.; Spinetti, C.; Mazzarini, F.; Favalli, M.; Isola, I.; Buongiorno, M. F.; Ananasso, C.
In the frame of the future Italian Space Agency (ASI) space mission PRISMA (Precursore IperSpettrale della Missione Applicativa), the Istituto Nazionale di Geofisica e Vulcanologia (INGV) coordinates the scientific project ASI-AGI (Analisi Sistemi Iperspettrali per le Applicazioni Geofisiche Integrate), aimed at studying hyperspectral volcanic applications and at identifying and characterizing a vicarious validation and calibration site for hyperspectral space missions. PRISMA is an Earth observation system with innovative electro-optical instrumentation which combines a hyperspectral sensor with a panchromatic medium-resolution camera. These instruments offer the scientific community and users many applications in the fields of environmental monitoring, risk management, crop classification, pollution control, and security. In this context Mt. Etna (Italy) has been chosen as the main site for testing the sensor's capability to assess volcanic risk. The volcanic calibration and validation activities comprise the management of a large amount of in situ hyperspectral data collected during the last 10 years. The usability and interoperability of these datasets represent a task of the ASI-AGI project. For this purpose a database has been created to collect all the spectral signatures of the measured volcanic surfaces. This process has begun with the creation of a metadata structure compliant with those of standard spectral libraries such as the USGS ones. Each spectral signature is described in a table containing ancillary data such as a location map of where it was collected, a description of the selected target, etc. The relational database structure has been developed to be WOVOdat compliant. Specific tables have been formatted for each type of measurement, instrument and target in order to query the database through a user-friendly web interface. The interface has an upload area to populate the database and a visualization tool that allows downloading the ASCII spectral
Li, Ping; Cunningham, Krystal
The APA Style Converter is a Web-based tool with which authors may prepare their articles in APA style according to the APA Publication Manual (5th ed.). The Converter provides a user-friendly interface that allows authors to copy and paste text and upload figures through the Web, and it automatically converts all texts, references, and figures to a structured article in APA style. The output is saved in PDF or RTF format, ready for either electronic submission or hardcopy printing. PMID:16171194
This project is about developing a web-based interface for accessing the Marine Contamination database records. The system contains information pertaining to the occurrence of contaminants and natural elements in the marine ecosystem, based on samples of sediment, seawater and marine biota taken at various locations within the shores of Malaysia. It represents a systematic approach for recording, storing and managing the vast amount of marine environmental data collected as output of the Marine Contamination and Transport Phenomena Research Project since 1990. The resultant collection of data forms the background information (or baseline data) which could later be used to monitor the level of marine environmental pollution around the country. Data collected from the various sampling and related laboratory activities were previously kept in conventional forms such as Excel worksheets and other documents, in digital and/or paper form. With the help of modern database storage and retrieval techniques, the task of storing and retrieving data has been made easier and more manageable. The system can also provide easy access to other parties who are interested in the data. (author)
Belmann, Peter; Dröge, Johannes; Bremges, Andreas; McHardy, Alice C; Sczyrba, Alexander; Barton, Michael D
Software is now both central and essential to modern biology, yet lack of availability, difficult installations, and complex user interfaces make software hard to obtain and use. Containerisation, as exemplified by the Docker platform, has the potential to solve the problems associated with sharing software. We propose bioboxes: containers with standardised interfaces to make bioinformatics software interchangeable. PMID:26473029
Abstract Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of the ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly-configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel in a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http
The integration of the Web with databases is the mainstream direction in the current development of web and database technologies. Based on an analysis of the browser/server (B/S) architecture, this paper studies and analyzes several currently popular interface technologies, including CGI (Common Gateway Interface), Web API (Application Programming Interface), JDBC (Java Database Connectivity) and ASP (Active Server Pages).
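As an illustrative sketch of the browser/server boundary these interface technologies occupy, the following minimal example uses Python's WSGI as a hypothetical stand-in for CGI, JDBC or ASP; the table, data and handler name are invented for the example:

```python
import json
import sqlite3

# In-memory database standing in for the server-side data store
# behind the B/S boundary (schema and rows are invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (title TEXT, author TEXT)")
conn.execute("INSERT INTO books VALUES ('Dune', 'Herbert')")
conn.commit()

def app(environ, start_response):
    """Minimal WSGI application: the server-side interface layer where
    CGI, Web API, JDBC or ASP code would sit in the architectures surveyed."""
    rows = conn.execute("SELECT title, author FROM books").fetchall()
    body = json.dumps([{"title": t, "author": a} for t, a in rows]).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]
```

Any WSGI-compliant web server can host `app`; the browser only ever sees the serialized query result, never the database itself, which is the separation of concerns the B/S architecture is built on.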
The World Wide Web is rapidly being adopted by libraries and database vendors as a front end for bibliographic databases, reflecting the fact that the Web browser is becoming a universal tool. When the Web is also used for bibliographic instruction about these Web-based resources, it is possible to build tutorials incorporating actual screens from a database. The result is a realistic, highly interactive simulation of database searching that can provide a very detailed level of instruction.
Mesbah, A.; Van Deursen, A.; Lenselink, S.
SHENG Qi-zheng; De Moor Bart
As a newborn interdisciplinary field, bioinformatics is receiving increasing attention from biologists, computer scientists, statisticians, mathematicians and engineers. This paper briefly introduces the birth, importance, and extensive applications of bioinformatics in the different fields of biological research. A major challenge in bioinformatics - the unraveling of gene regulation - is discussed in detail.
Ms. Veena Singh Bhadauriya
The rapid growth of web applications has attracted increasing research interest, as computer networks now span the world. A web application, an application accessed via a web browser over a network, is widely used for communication and data transfer. Web caching is a well-known strategy for improving the performance of web-based systems by keeping web objects that are likely to be used in the near future in a location closer to the user. Web caching mechanisms are implemented at three levels: the client level, the proxy level and the origin server level. Significantly, proxy servers play a key role between users and web sites in reducing the response time of user requests and saving network bandwidth. Therefore, to achieve better response times, an efficient caching approach should be built into the proxy server. This paper uses FP-growth, weighted rule mining and a Markov model for fast and frequent web prefetching, in order to improve the page hit ratio and speed up users' visits.
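The Markov-model component of such a prefetching scheme can be sketched as follows; this is a first-order model only, the FP-growth and weighted rule mining stages of the paper's approach are omitted, and class and method names are illustrative:

```python
from collections import defaultdict

class MarkovPrefetcher:
    """First-order Markov model over page-access sequences.

    The proxy records which page tends to follow which; on each
    request it can prefetch the most probable successor into the cache.
    """

    def __init__(self):
        # counts[current_page][next_page] -> observed transition count
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sessions):
        """Accumulate transition counts from per-user page sequences."""
        for session in sessions:
            for cur, nxt in zip(session, session[1:]):
                self.counts[cur][nxt] += 1

    def predict(self, page):
        """Return the most likely next page, or None if unseen."""
        successors = self.counts.get(page)
        if not successors:
            return None
        return max(successors, key=successors.get)
```

Trained on sessions such as `["index", "news", "sports"]`, the proxy would prefetch `predict("news")` while the user is still reading the news page.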
Konc, Janez; Miller, Benjamin T; Štular, Tanja; Lešnik, Samo; Woodcock, H Lee; Brooks, Bernard R; Janežič, Dušanka
Proteins often exist only as apo structures (unligated) in the Protein Data Bank, with their corresponding holo structures (with ligands) unavailable. However, apoproteins may not represent the amino-acid residue arrangement upon ligand binding well, which is especially problematic for molecular docking. We developed the ProBiS-CHARMMing web interface by connecting the ProBiS ( http://probis.cmm.ki.si ) and CHARMMing ( http://www.charmming.org ) web servers into one functional unit that enables prediction of protein-ligand complexes and allows for their geometry optimization and interaction energy calculation. The ProBiS web server predicts ligands (small compounds, proteins, nucleic acids, and single-atom ligands) that may bind to a query protein. This is achieved by comparing its surface structure against a nonredundant database of protein structures and finding those that have binding sites similar to that of the query protein. Existing ligands found in the similar binding sites are then transposed to the query according to predictions from ProBiS. The CHARMMing web server enables, among other things, minimization and potential energy calculation for a wide variety of biomolecular systems, and it is used here to optimize the geometry of the predicted protein-ligand complex structures using the CHARMM force field and to calculate their interaction energies with the corresponding query proteins. We show how ProBiS-CHARMMing can be used to predict ligands and their poses for a particular binding site, and minimize the predicted protein-ligand complexes to obtain representations of holoproteins. The ProBiS-CHARMMing web interface is freely available for academic users at http://probis.nih.gov.
The paper presents the necessity of building a government information disclosure system interface based on Web Services. The architecture and operating principles of the Web Service interface are introduced, and the design and implementation of the Web Service interface applied in the government information disclosure system are expounded. The application and prospects of the Web Service interface in sharing government information and services among government departments are also discussed.
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current res...
Kamel Boulos Maged N
Abstract Background On 21 July 2004, the Healthcare Commission http://www.healthcarecommission.org.uk/ released its annual star ratings of the performance of NHS Primary Care Trusts (PCTs) in England for the year ending March 2004. The Healthcare Commission started work on 1 April 2004, taking over all the functions of the former Commission for Health Improvement http://www.chi.nhs.uk/, which had released the corresponding PCT ratings for 2002/2003 in July 2003. Results We produced two Web-based interactive maps of PCT star ratings, one for 2003 and the other for 2004 http://healthcybermap.org/PCT/ratings/, with handy functions like map search (by PCT name or part of it). The maps feature a colour-blind friendly quadri-colour scheme to represent PCT star ratings. Clicking a PCT on any of the maps will display the detailed performance report of that PCT for the corresponding year. Conclusion Using our Web-based interactive maps, users can visually appreciate at a glance the distribution of PCT performance across England. They can visually compare the performance of different PCTs in the same year and also between 2003 and 2004 (by switching between the synchronised 'PCT Ratings 2003' and 'PCT Ratings 2004' themes). The performance of many PCTs improved in 2004, whereas some PCTs achieved lower ratings in 2004 compared to 2003. Web-based interactive geographical interfaces offer an intuitive way of indexing, accessing, mining, and understanding large healthcare information sets describing geographically differentiated phenomena. By acting as an enhanced alternative or supplement to purely textual online interfaces, interactive Web maps can further empower organisations and decision makers.
Shen, Siu-Tsen; Prior, Stephen D.; Chen, Kuen-Meau
This paper compares the perspicacity, appropriateness and preference of web browser icons from leading software providers with those of a culture-specific design. This online study was conducted in Taiwan and involved 103 participants, who were given three sets of web browser icons to review, namely Microsoft Internet Explorer, Macintosh Safari, and culturally specific icons created using the Culture-Centred Design methodology. The findings of the study show that all three sets have generally...
Thampi, Sabu M.
Bioinformatics is a new discipline that addresses the need to manage and interpret the data that in the past decade was massively generated by genomic research. This discipline represents the convergence of genomics, biotechnology and information technology, and encompasses analysis and interpretation of data, modeling of biological phenomena, and development of algorithms and statistics. This article presents an introduction to bioinformatics
Tengku Siti Meriam Tengku Wook; Siti Salwah Salim
There have been numerous studies on user interface guidelines, but only a few have considered guidelines specific to the design of children's interfaces. This paper reports research on specific guidelines for children, focusing on graphic design criteria. The objective of this research is to study user interface design guidelines and to develop specific guidelines for children's graphic design. The criteria of graphic design are the priority of th...
The paper treats the process of creating an optimal web application output for mobile devices in the form of a responsive layout, with a focus on an agrarian web portal. The utilization and testing of user experience (UX) techniques in four steps (UX, research, design and testing) proved to be of great benefit. Two groups of five people representing the task group were employed for the research and testing. The resulting responsive layout was developed with emphasis on an ergonomic layout of control elements and content, a conservative design, securing content accessibility for disabled users, and the possibility of fast and simple updating. The resulting knowledge is applicable to web information sources in the agrarian sector (agriculture, food industry, forestry, water supply and distribution, and the development of rural areas). In a wider context, this knowledge is valid in general.
Polkowski, Marcin; Grad, Marek
The passive seismic experiment "13BB Star" has been operating since mid-2013 in northern Poland and consists of 13 broadband seismic stations. One element of this experiment is a dedicated on-line data acquisition system comprising both client (station) side and server side modules, with a web-based interface that allows monitoring of network status and provides tools for preliminary data analysis. The station side is controlled by an ARM Linux board that is programmed to maintain a 3G/EDGE internet connection, receive data from the digitizer, and send data to the central server along with auxiliary parameters such as temperature, voltage and electric current measurements. The station side is managed by a set of easy-to-install PHP scripts. Data is transmitted securely over the SSH protocol to the central server, a dedicated Linux-based machine whose duty is receiving and processing all data from all stations, including the auxiliary parameters. The server side software is written in PHP and Python. Additionally, it allows remote station configuration and provides a web-based interface for user-friendly interaction. All collected data can be displayed for each day and station. The system also allows manual creation of event-oriented plots with different filtering abilities and provides numerous status and statistics reports. Our solution is very flexible and easy to modify. In this presentation we would like to share our solution and experience. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
Abstract Background Recent advances in genomic sequencing have enabled the use of genome sequencing in standard biological and biotechnological research projects. The challenge is how to integrate the large amount of data in order to gain novel biological insights. One way to leverage sequence data is to use genome-scale metabolic models. We have therefore designed and implemented a bioinformatics platform which supports the development of such metabolic models. Results MEMOSys (MEtabolic MOdel research and development System) is a versatile platform for the management, storage, and development of genome-scale metabolic models. It supports the development of new models by providing a built-in version control system which offers access to the complete developmental history. Moreover, the integrated web board, the authorization system, and the definition of user roles allow collaborations across departments and institutions. Research on existing models is facilitated by a search system, references to external databases, and a feature-rich comparison mechanism. MEMOSys provides customizable data exchange mechanisms using the SBML format to enable analysis in external tools. The web application is based on the Java EE framework and offers an intuitive user interface. It currently contains six annotated microbial metabolic models. Conclusions We have developed a web-based system designed to provide researchers a novel application facilitating the management and development of metabolic models. The system is freely available at http://www.icbi.at/MEMOSys.
Berrios, Daniel C.; Keller, Richard M.
While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.
Matthew L. Clark
Web-based applications that integrate geospatial information, or the geoweb, offer exciting opportunities for remote sensing science. One such application is a Web-based system for automating the collection of reference data for producing and verifying the accuracy of land-use/land-cover (LULC) maps derived from satellite imagery. Here we describe the capabilities and technical components of the Virtual Interpretation of Earth Web-Interface Tool (VIEW-IT), a collaborative browser-based tool for "crowdsourcing" interpretation of reference data from high resolution imagery. The principal component of VIEW-IT is the Google Earth plug-in, which allows users to visually estimate percent cover of seven basic LULC classes within a sample grid. The current system provides a 250 m square sample to match the resolution of MODIS satellite data, although other scales could be easily accommodated. Using VIEW-IT, a team of 23 student and 7 expert interpreters collected over 46,000 reference samples across Latin America and the Caribbean. Samples covered all biomes, avoided spatial autocorrelation, and spanned years 2000 to 2010. By embedding Google Earth within a Web-based application with an intuitive user interface, basic interpretation criteria, distributed Internet access, server-side storage, and automated error-checking, VIEW-IT provides a time and cost efficient means of collecting a large dataset of samples across space and time. When matched with predictor variables from satellite imagery, these data can provide robust mapping algorithm calibration and accuracy assessment. This development is particularly important for regional to global scale LULC mapping efforts, which have traditionally relied on sparse sampling of medium resolution imagery and products for reference data. Our ultimate goal is to make VIEW-IT available to all users to promote rigorous, global land-change monitoring.
Groenewegen, D.M.; Visser, E.
Data validation rules constitute the constraints that data input and processing must adhere to, in addition to the structural constraints imposed by a data model. Web modeling tools do not make all types of data validation explicit in their models, hampering full code generation and model expressivity.
Perri, M. J.; Weber, S. H.
A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.
束长波; 施化吉; 王基
Existing Deep Web information integration systems do not take into account the dynamic nature of query interfaces, so the query capabilities of the local interface and the network interface become unequal. To solve this problem, this paper proposes a Deep Web query interface maintenance method based on evolved versions. The method builds a versioned model of the local interface to capture its incremental changes and to identify the set of attributes that change most actively; it then constructs optimal probing queries to obtain change information from the network interface's data source and evolves the local interface to its next version, realizing an iterative process of maintaining the information of the local query interface's data source. Experimental results show that the method reduces the impact of changes in the Deep Web environment on Deep Web information integration, and keeps the precision and recall of the Deep Web query interface stable.
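The version-comparison step at the heart of such a scheme can be sketched as follows, under the assumption (not stated in the abstract) that an interface version is modeled as a mapping from attribute names to their allowed value sets; all function names are illustrative:

```python
from collections import Counter

def changed_attributes(old_version, new_version):
    """Compare two snapshots of a query interface's attribute schema
    and report which attributes were added, removed, or altered."""
    added = set(new_version) - set(old_version)
    removed = set(old_version) - set(new_version)
    altered = {a for a in set(old_version) & set(new_version)
               if old_version[a] != new_version[a]}
    return {"added": added, "removed": removed, "altered": altered}

def active_attributes(versions):
    """Rank attributes by how often they changed across the version
    history; the most active ones are the candidates for probing queries."""
    counts = Counter()
    for old, new in zip(versions, versions[1:]):
        diff = changed_attributes(old, new)
        counts.update(diff["added"] | diff["removed"] | diff["altered"])
    return [attr for attr, _ in counts.most_common()]
```

Each probing cycle would diff the freshly observed network interface against the latest local version, then fold the changes in as the next version of the local model.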
With the advancement of the internet and web-based applications, surveys conducted via the internet have been increasingly utilized due to their convenience and time savings. This article studied the influence of five web-design techniques (screen design, response format, logo type, progress indicator, and image display) on the interest of respondents. Two screen display designs from each design technique were prepared for selection. Focus group discussions were conducted with four groups of Generation Y participants with different characteristics. Open discussion was performed to identify additional design factors that affect interest in the questionnaire. The study found that the degree of influence of all related design factors can be ranked as screen design, response format, font type, logo type, background color, progress indicator, and image display, respectively.
Arakawa, Kazuharu; Kido, Nobuhiro; Oshita, Kazuki; Tomita, Masaru
G-language genome analysis environment (G-language GAE) contains more than 100 programs that focus on the analysis of bacterial genomes, including programs for the identification of binding sites by means of information theory, analysis of nucleotide composition bias and the distribution of particular oligonucleotides, calculation of codon bias and prediction of expression levels, and visualization of genomic information. We have provided a collection of web services for these programs by uti...
Mitchell John; Murray-Rust Peter; Rzepa Henry
Abstract Chemical information is now seen as critical for most areas of life sciences. But unlike Bioinformatics, where data is openly available and freely re-usable, most chemical information is closed and cannot be re-distributed without permission. This has led to a failure to adopt modern informatics and software techniques and therefore paucity of chemistry in bioinformatics. New technology, however, offers the hope of making chemical data (compounds and properties) free during the auth...
Abstract Background To aid in bioinformatics data processing and analysis, an increasing number of web-based applications are being deployed. Although this is a positive circumstance in general, the proliferation of tools makes it difficult to find the right tool, or more importantly, the right set of tools that can work together to solve real, complex problems. Results Magallanes (Magellan) is a versatile, platform-independent Java library of algorithms aimed at discovering bioinformatics web services and associated data types. A second important feature of Magallanes is its ability to connect available and compatible web services into workflows that can process data sequentially to reach a desired output given a particular input. Magallanes' capabilities can be exploited both as an API or directly accessed through a graphic user interface. The Magallanes API is freely available for academic use, and together with the Magallanes application has been tested on MS-Windows™ XP and Unix-like operating systems. Detailed implementation information, including user manuals and tutorials, is available at http://www.bitlab-es.com/magallanes. Conclusion Different implementations of the same client (web page, desktop applications, web services, etc.) have been deployed and are currently in use in real installations such as the National Institute of Bioinformatics (Spain) and the ACGT-EU project. This demonstrates the potential utility and versatility of the software library, including the integration of novel tools in the domain, and provides strong evidence towards facilitating the automatic discovery and composition of workflows.
Background: Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (graphical user interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. Results: We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined, it is possible to use and parameterize them as generic, re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatics experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. Availability: http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive), ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). Conclusion: From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous
Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
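The GC skew statistic mentioned in this abstract is straightforward to compute. The sketch below is an illustration of the idea, not the paper's software: it derives per-window skew values from a raw sequence string, with the window size and example sequence chosen only for demonstration.

```python
def gc_skew(seq, window=4):
    """Compute GC skew, (G - C) / (G + C), over non-overlapping windows.

    Returns 0.0 for windows that contain no G or C bases.
    """
    skews = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window].upper()
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c) if (g + c) else 0.0)
    return skews

print(gc_skew("GGGGCCCCGCGC", window=4))  # [1.0, -1.0, 0.0]
```

In real genome-scale visualizations the skew is usually cumulative and plotted along the chromosome, which makes replication origins visible as inflection points.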
Maddox, Marlo; Zheng, Yihua; Rastaetter, Lutz; Taktakishvili, A.; Mays, M. L.; Kuznetsova, M.; Lee, Hyesook; Chulaki, Anna; Hesse, Michael; Mullinix, Richard; Berrios, David
The NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov) is committed to providing forecasts, alerts, research, and educational support to address NASA's space weather needs, in addition to the needs of the general space weather community. We provide a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, custom space weather alerts and products, weekly summaries and reports, and, most recently, video casts. There are many challenges in providing accurate descriptions of past, present, and expected space weather events, and the Space Weather Center at NASA GSFC employs several innovative solutions to provide access to a comprehensive collection of both observational data and space weather model/simulation data. We'll describe the challenges we've faced with managing hundreds of data streams, running models in real time, data storage, and data dissemination. We'll also highlight several systems and tools that are utilized by the Space Weather Center in our daily operations, all of which are available to the general community as well. These systems and services include a web-based application called the Integrated Space Weather Analysis System (iSWA, http://iswa.gsfc.nasa.gov), two mobile space weather applications for both iOS and Android devices, an external API for web-service-style access to data, Google Earth-compatible data products, and a downloadable client-based visualization tool.
macroscopic conservation equations with an order parameter which can account for the solid, liquid, and the mushy zones with the help of a phase function defined on the basis of the liquid fraction, the Gibbs relation, and the phase diagram with local approximations. Using the above formalism for alloy solidification, the width of the diffuse interface (mushy zone) was computed rather accurately for iron-carbon and ammonium chloride-water binary alloys and validated against experimental data from the literature.
Enhanced decision support for policy makers using a web interface to health-economic models - Illustrated with a cost-effectiveness analysis of nation-wide infant vaccination with the 7-valent pneumococcal conjugate vaccine in the Netherlands
Hubben, G.A.A.; Bos, J.M.; Glynn, D.M.; van der Ende, A.; van Alphen, L.; Postma, M.J.
We have developed a web-based user-interface (web interface) to enhance the usefulness of health-economic evaluations to support decision making (http://pcv.healtheconomics.nl). It allows the user to interact with a health-economic model to evaluate predefined and customized scenarios and perform se
The new interface of the Web of Science (Thomson Reuters) enables users to retrieve sets larger than 100,000 documents in a single search. This makes it possible to compare publication trends for China, the USA, EU-27, and smaller countries with the data in the Scopus (Elsevier) database. China no l
Background: The Human Immunodeficiency Virus type one (HIV-1) is the major causative pathogen of the Acquired Immune Deficiency Syndrome (AIDS). A large number of HIV-1-related studies are based on three non-human model animals: chimpanzee, rhesus macaque, and mouse. However, the differences in host-HIV-1 interactions between human and these model organisms have remained unexplored. Description: Here we present CAPIH (Comparative Analysis of Protein Interactions for HIV-1), the first web-based interface to provide comparative information between human and the three model organisms in the context of host-HIV-1 protein interactions. CAPIH identifies genetic changes that occur in HIV-1-interacting host proteins. In a total of 1,370 orthologous protein sets, CAPIH identifies ~86,000 amino acid substitutions, ~21,000 insertions/deletions, and ~33,000 potential post-translational modifications that occur in only one of the four compared species. CAPIH also provides an interactive interface to display the host-HIV-1 protein interaction networks, the presence/absence of orthologous proteins of the model organisms in the networks, the genetic changes that occur in the protein nodes, and the functional domains and potential protein interaction hot sites that may be affected by the genetic changes. The CAPIH interface is freely accessible at http://bioinfo-dbb.nhri.org.tw/capih. Conclusion: CAPIH exemplifies the large divergences that exist in disease-associated proteins between human and the model animals. Since all newly developed medications must be tested in model animals before entering clinical trials, it is advisable that comparative analyses be performed to ensure proper translation of animal-based studies. In the case of AIDS, the host-HIV-1 protein interactions apparently differ to a great extent among the compared species. An integrated protein network comparison among the four species will probably shed new light on AIDS studies.
L Jegatha Deborah; R Sathiyaseelan; S Audithan; P Vijayakumar
The e-learners' excellence can be improved by recommending suitable e-contents available on e-learning servers, based on investigating their learning styles. Learning styles have to be predicted carefully, because psychological balance is variable in nature and e-learners are diversified by learning patterns, environment, time and mood. Moreover, the knowledge about the learners used for learning style prediction is uncertain in nature. This paper identifies the Felder–Silverman learning style model as a suitable model for learning style prediction, especially in web environments, and proposes the use of fuzzy rules to handle the uncertainty in learning style predictions. The evaluation applied Gaussian-membership-function-based fuzzy logic to 120 students learning the C programming language, and it was observed that the proposed model improved prediction accuracy significantly.
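As a rough illustration of how a Gaussian membership function can soften a crisp learning-style score, the sketch below labels a learner along one hypothetical Felder–Silverman dimension. The set centres, widths, and the 0-10 scale are assumptions made for this example; they are not taken from the paper.

```python
import math

def gaussian_mu(x, mean, sigma):
    """Gaussian membership degree of x in a fuzzy set centred at `mean`."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

# Hypothetical fuzzy sets for one dimension (active .. reflective),
# scored on a 0-10 scale; centres and widths are illustrative only.
SETS = {"active": (0.0, 2.5), "balanced": (5.0, 2.5), "reflective": (10.0, 2.5)}

def classify(score):
    """Return the label of the fuzzy set with the highest membership degree."""
    return max(SETS, key=lambda label: gaussian_mu(score, *SETS[label]))

print(classify(1.0))  # active
print(classify(5.2))  # balanced
```

A full fuzzy-rule system would combine several such dimensions through rule firing and defuzzification; this shows only the membership step.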
Information and communication technology plays an essential role in people's day-to-day business activities. People receive most of their knowledge by processing, recording and transferring the necessary information while surfing Internet websites. The Internet, as an essential part of information technology (IT), has grown remarkably, and there have recently been significant efforts in Iran toward developing e-commerce. This paper studies the effects of internet environment features on internet purchase intention. The study divides the internet environment into demographic and technological parts and, for each, investigates features such as internet connection speed, connectivity model, web browser, type of payment, the user's income, education and gender, frequency of online usage per week, and the user's goal in using the internet. Using logistic regression, the study finds meaningful effects of income, education, connection type, browser and goal on consumers' behavior.
By analyzing the equipment spot-checking system of power generation companies, the Web Services approach proves a good choice for achieving data exchange and system data sharing between the mobile collection terminal and the server. This article focuses on the design and implementation of a data communication interface based on Web Services.
This paper covers the use of depth sensors such as Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (personal computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2, & 3) that contains a Kinoogle installation package for Windows PCs. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.
Delattre, Hadrien; Souiai, Oussema; Fagoonee, Khema; Guerois, Raphaël; Petit, Marie-Agnès
Distant homology search tools are of great help in predicting viral protein functions. However, due to the lack of profile databases dedicated to viruses, they can lack sensitivity. We constructed HMM profiles for more than 80,000 proteins from both phages and archaeal viruses, and performed all pairwise comparisons with the HHsearch program. The whole resulting database can be explored through a user-friendly "Phagonaute" interface to help predict functions. Results are displayed together with their genetic context, to strengthen inferences based on remote homology. Beyond function prediction, this tool permits the detection of co-occurrences, often indicative of proteins completing a task together, and the observation of conserved patterns across large evolutionary distances. As a test, Herpes simplex virus I was added to Phagonaute, and 25% of its proteome matched bacterial or archaeal viral protein counterparts. Phagonaute should therefore help virologists in their quest for protein functions and evolutionary relationships. PMID:27254594
The rapidly changing field of bioinformatics is fuelling the need for suitably trained personnel with skills in relevant biological "sub-disciplines" such as proteomics, transcriptomics and metabolomics, etc. But because of the complexity--and sheer weight of data--associated with these new areas of biology, many school teachers feel…
Hidden Web databases contain far more searchable information than Surface Web databases. If the query interfaces on the Deep Web are integrated into a unified query interface, the recall and precision of web information retrieval will be greatly improved. This paper discusses a clustering-analysis approach to the query schema integration problem. Compared with integrating Deep Web data sources directly, the query interface schema integration method costs less.
Keith, J. Brandon; Fennick, Jacob R.; Junkermeier, Chad E.; Nelson, Daniel R.; Lewis, James P.
FIREBALL is an ab initio technique for fast local orbital simulations of nanotechnological, solid state, and biological systems. We have implemented a convenient interface for new users and software architects in the platform-independent Java language to access FIREBALL's unique and powerful capabilities. The graphical user interface can be run directly from a web server or from within a larger framework such as the Computational Science and Engineering Online (CSE-Online) environment or the Distributed Analysis of Neutron Scattering Experiments (DANSE) framework. We demonstrate its use for high-throughput electronic structure calculations and a multi-100-atom quantum molecular dynamics (MD) simulation. Program summary: Program title: FireballUI. Catalogue identifier: AECF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 279,784. No. of bytes in distributed program, including test data, etc.: 12,836,145. Distribution format: tar.gz. Programming language: Java. Computer: PC and workstation. Operating system: the GUI will run under Windows, Mac and Linux; executables for Mac and Linux are included in the package. RAM: 512 MB. Word size: 32 or 64 bits. Classification: 4.14. Nature of problem: setting up and running many simulations of the same type from the command line is a slow process, yet most research-quality codes, including the ab initio tight-binding code FIREBALL, are designed to run from the command line. The desire is to have a method for quickly and efficiently setting up and running a host of simulations. Solution method: we have created a graphical user interface for use with the FIREBALL code. Once the user has created the files containing the atomic coordinates for each system that they are
Alva, V.; Nam, S.; Söding, J.; Lupas, A.
The MPI Bioinformatics Toolkit (http://toolkit.tuebingen.mpg.de) is an open, interactive web service for comprehensive and collaborative protein bioinformatic analysis. It offers a wide array of interconnected, state-of-the-art bioinformatics tools to experts and non-experts alike, developed both externally (e.g. BLAST+, HMMER3, MUSCLE) and internally (e.g. HHpred, HHblits, PCOILS). While a beta version of the Toolkit was released 10 years ago, the current production-level release has been av...
Wang, May Dongmei
During 2012, next-generation sequencing (NGS) attracted great attention in the biomedical research community, especially for personalized medicine, and third-generation sequencing became available. Therefore, state-of-the-art sequencing technology and analysis are reviewed in this Bioinformatics spotlight on 2012. NGS is a high-throughput nucleic acid sequencing technology with wide dynamic range and single-base resolution. The full promise of NGS depends on the optimization of NGS platforms, sequence alignment and assembly algorithms, data analytics, novel algorithms for integrating NGS data with existing genomic, proteomic, or metabolomic data, and quantitative assessment of NGS technology in comparison to more established technologies such as microarrays. NGS has been predicted to become a cornerstone of personalized medicine. It is argued that NGS is a promising field for motivated young researchers looking for opportunities in bioinformatics. PMID:23192635
Johnson, Kathy A.
For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.
In bioinformatics, there are often a large number of input features. For example, there are millions of single nucleotide polymorphisms (SNPs), genetic variations which determine the difference between any two unrelated individuals. In microarrays, thousands of genes can be profiled in each test. It is important to find out which input features (e.g., SNPs or genes) are useful in the classification of a certain group of people or the diagnosis of a given disease. In this paper, we investigate some powerful feature selection techniques and apply them to problems in bioinformatics. We are able to identify a very small number of input features sufficient for the tasks at hand, and we demonstrate this with some real-world data.
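A minimal filter-style feature selector in the spirit of this abstract ranks each feature by how well it separates two classes. The score below (difference of class means divided by the overall standard deviation, a t-like statistic) is one common choice for illustration, not necessarily the technique the paper uses; the data are invented.

```python
import statistics

def rank_features(samples, labels):
    """Rank feature indices, best first, by a simple filter score:
    |mean(class 0) - mean(class 1)| / stdev(all samples) per feature."""
    n_features = len(samples[0])
    scores = []
    for j in range(n_features):
        a = [s[j] for s, y in zip(samples, labels) if y == 0]
        b = [s[j] for s, y in zip(samples, labels) if y == 1]
        spread = statistics.pstdev(a + b) or 1e-9  # avoid divide-by-zero
        scores.append((abs(statistics.mean(a) - statistics.mean(b)) / spread, j))
    return [j for _, j in sorted(scores, reverse=True)]

# Feature 1 separates the classes; feature 0 is noise.
X = [[0.1, 5.0], [0.2, 5.1], [0.15, 1.0], [0.12, 0.9]]
y = [0, 0, 1, 1]
print(rank_features(X, y))  # [1, 0]
```

Real SNP or microarray studies would apply such a filter to thousands of features and keep only the top-ranked handful, which is exactly the "very small number of input features" the abstract describes.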
Barth, A.; Alvera-Azcárate, A.; Troupin, C.; Ouberdous, M.; Beckers, J.-M.
Burr, Tom L [Los Alamos National Laboratory]
Genetic data are often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTUs). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; inference goals involving both tree topology and branch length; and the huge number of possible trees for even a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data is computationally demanding. Bioinformatics is too large a field to review here; we focus on the aspect of bioinformatics that studies similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics and the available methods and software, and identifies areas for additional research and development.
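One of the probabilistic substitution models referred to above is the Jukes-Cantor model, under which the corrected distance between two aligned sequences has a closed form. The sketch below computes it from toy aligned strings; it illustrates only the distance step, not tree search itself.

```python
import math

def jukes_cantor(seq_a, seq_b):
    """Jukes-Cantor corrected distance between two aligned sequences:
    d = -(3/4) * ln(1 - 4p/3), where p is the fraction of differing sites."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned (equal length)"
    p = sum(a != b for a, b in zip(seq_a, seq_b)) / len(seq_a)
    return -0.75 * math.log(1 - 4 * p / 3)

d = jukes_cantor("ACGTACGTAC", "ACGTACGTAT")  # p = 0.1
print(round(d, 4))  # 0.1073
```

Distance-based tree methods such as neighbor-joining start from a matrix of exactly these pairwise distances; the combinatorial explosion the abstract mentions arises in the subsequent search over tree topologies.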
Gelbart, Hadas; Yarden, Anat
Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…
Duarte Jose M
bioinformatics. We made the corresponding software implementation available to the community as an easy-to-use graphical web interface at http://www.eppic-web.org.
de Groot Joost CW
Background: Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results: We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: 1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; 2) the Scheduler, which forms the functional core of the system, tracks what data enter the system and determines what jobs must be scheduled for execution; and 3) the Executor, which searches for scheduled jobs and executes them on a compute cluster. Conclusion: The Cyrille2 system is an extensible, modular system implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
Bioinformatics is an emerging interdisciplinary research field in which mathematics, computer science and biology meet. In this thesis, bioinformatic methods for the analysis of functional and structural properties of proteins are presented. I have developed and applied bioinformatic methods to the enzyme superfamily of short-chain dehydrogenases/reductases (SDRs), coenzyme-binding enzymes of the Rossmann fold type, and amyloid-forming proteins and peptides. The basis...
de Ridder, Dick; de Ridder, Jeroen; Reinders, Marcel J T
Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained.
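A classifier in the sense this review describes can be illustrated with a minimal nearest-centroid rule: label an unseen instance with the class whose training-set mean is closest. The data and class names below are invented for illustration; real bioinformatics applications would use high-dimensional expression or sequence features.

```python
def nearest_centroid(train, labels, x):
    """Classify x by the label of the closest class centroid
    (squared Euclidean distance), learned from labelled examples."""
    groups = {}
    for vec, y in zip(train, labels):
        groups.setdefault(y, []).append(vec)

    def mean(vectors):
        return [sum(col) / len(vectors) for col in zip(*vectors)]

    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    return min(groups, key=lambda y: dist2(mean(groups[y]), x))

# Two toy 2-D "expression profiles" per class.
X = [[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [4.8, 5.2]]
y = ["healthy", "diseased", "diseased", "diseased"][:2] + ["diseased"] * 2
y = ["healthy", "healthy", "diseased", "diseased"]
print(nearest_centroid(X, y, [0.1, 0.3]))  # healthy
```

Clustering and dimensionality reduction, the other two tasks the review names, differ only in that no labels are given (clustering) or the goal is a compressed representation rather than a label (dimensionality reduction).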
Blaz, Jacquelyn W; Pearce, Patricia F
The world is becoming increasingly web-based. Health care institutions are utilizing the web for personal health records, surveillance, communication, and education; health care researchers are finding value in using the web for research subject recruitment, data collection, and follow-up. Programming languages such as Java require knowledge and experience usually found only in software engineers and consultants. The purpose of this paper is to demonstrate Ruby on Rails as a feasible alternative for programming questionnaires for use on the web. Ruby on Rails was specifically designed for the development, deployment, and maintenance of database-backed web applications. It is flexible, customizable, and easy to learn. With relatively little initial training, a novice programmer can create a robust web application in a small amount of time, without the need for a software consultant. The translation of the Children's Computerized Physical Activity Reporter (C-CPAR) from a local installation in Microsoft Access to a web-based format utilizing Ruby on Rails is given as an example. PMID:19592849
李决龙; 李亮; 邢建春; 杨启亮
In order to verify the quality of Web applications, this paper adopts, for the first time, a verification method for Web applications in intelligent building systems based on sociable interfaces and their tool TICC. A simple energy-management Web application system is used as an example to illustrate the whole process of modeling, component-composition verification and system-property checking. The results show that verification can be carried out successfully, so the method is an appropriate one for verifying Web applications.
李晓; 王雪飞; 邱玉辉
An adaptive user interface helps to improve the quality of human-computer interaction. At present, most people join web-based learning through a common browser and, because of its one-size-fits-all user interface, face a lack of support for personalized learning. The design and implementation of the adaptive user interface for web-based learning in this paper is grounded in our earlier work, for example the interaction model and adaptive user models, including domain models. The adaptivity is mainly expressed in the learning content and its representation, including layout as well as operation.
Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...
Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B
The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention.
With the completion of human genome sequencing, a new era of bioinformatics starts. On the one hand, due to the advance of high-throughput DNA microarray technologies, functional genomics data such as gene expression information have increased exponentially and will continue to do so for the foreseeable future. Conventional means of storing, analysing and comparing related data are already overburdened. Moreover, the rich information in genes, their functions and their wide associated biological implications requires new technologies for analysing data that employ sophisticated statistical and machine learning algorithms, powerful computers and intensive interaction among different data sources, such as sequence data, gene expression data, proteomics data and metabolic pathway information, to discover complex genomic structures and functional patterns linked to other biological processes and so gain a comprehensive understanding of cell physiology.
Tolvanen, Martti; Vihinen, Mauno
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material over the Internet. Currently, we provide two fully computer-based courses, "Introduction to Bioinformatics" and "Bioinformatics in Functional Genomics." Here we will discuss the application of distance learning in bioinformatics training and our experiences gained during the 3 years that we have run the courses, with about 400 students from a number of universities. The courses are available at bioinf.uta.fi.
Baier, Rosa R.; Cooper, Emily; Wysocki, Andrea; Gravenstein, Stefan; Clark, Melissa
Introduction: Despite the investment in public reporting for a number of healthcare settings, evidence indicates that consumers do not routinely use available data to select providers. This suggests that existing reports do not adequately incorporate recommendations for consumer-facing reports or web applications. Methods: Healthcentric Advisors and Brown University undertook a multi-phased approach to create a consumer-facing home health web application in Rhode Island. This included reviewi...
David K Brown
Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
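The multi-stage workflow idea described above can be sketched in a few lines. The example below is a hypothetical, minimal dependency-ordered pipeline runner, not the actual JMS implementation; the stage names ("trim", "align", "report") are invented for illustration.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_workflow(stages, deps):
    """Run stages in dependency order; each stage is a callable that
    receives the dict of results produced by earlier stages."""
    order = TopologicalSorter(deps).static_order()
    results = {}
    for name in order:
        results[name] = stages[name](results)
    return results

# Toy three-stage pipeline: "align" depends on "trim", "report" on "align".
stages = {
    "trim":   lambda r: "trimmed-reads",
    "align":  lambda r: f"aligned({r['trim']})",
    "report": lambda r: f"report({r['align']})",
}
deps = {"trim": set(), "align": {"trim"}, "report": {"align"}}
out = run_workflow(stages, deps)
# out["report"] == "report(aligned(trimmed-reads))"
```

A real system such as JMS would additionally submit each stage to a cluster resource manager and persist intermediate files, but the ordering logic is the same.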
Zhang, Chuanrong; Li, Weidong
This book covers key issues related to the Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data sources; interfaces for the Geospatial Semantic Web; VGI (Volunteered Geographic Information) and the Geospatial Semantic Web; challenges of the Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems such
Background The BioMoby project aims to identify and deploy standards and conventions that aid in the discovery, execution, and pipelining of distributed bioinformatics Web Services. As of August, 2006, approximately 680 bioinformatics resources were available through the BioMoby interoperability platform. There are a variety of clients that can interact with BioMoby-style services. Here we describe a Web-based browser-style client – Gbrowse Moby – that allows users to discover and "surf" from one bioinformatics service to the next using a semantically-aided browsing interface. Results Gbrowse Moby is a low-throughput, exploratory tool specifically aimed at non-informaticians. It provides a straightforward, minimal interface that enables a researcher to query the BioMoby Central web service registry for data retrieval or analytical tools of interest, and then select and execute their chosen tool with a single mouse-click. The data is preserved at each step, thus allowing the researcher to manually "click" the data from one service to the next, with the Gbrowse Moby application managing all data formatting and interface interpretation on their behalf. The path of manual exploration is preserved and can be downloaded for import into automated, high-throughput tools such as Taverna. Gbrowse Moby also includes a robust data rendering system to ensure that all new data-types that appear in the BioMoby registry can be properly displayed in the Web interface. Conclusion Gbrowse Moby is a robust, yet facile entry point for both newcomers to the BioMoby interoperability project who wish to manually explore what is known about their data of interest, as well as experienced users who wish to observe the functionality of their analytical workflows prior to running them in a high-throughput environment.
张伟; 王海立; 周杏鹏
In this paper, the current network topology of TCMS and its potential maintenance problems between heterogeneous systems are briefly described, and an advanced maintenance interface based on industrial Ethernet and Web Service technology that allows seamless integration between train devices, TCMS and a remote train management system is expounded. As an example, the implementation of such a maintenance interface for an electronic door control unit (EDCU) is also presented to demonstrate its real-world application.
Om Prakash Sharma
This review article discusses the current scenario of the national and international burden due to lymphatic filariasis (LF) and describes the active elimination programmes for LF and their achievements towards eradicating this most debilitating disease from the earth. Bioinformatics is a rapidly growing field of biological study with an increasingly significant role in various fields of biology. We have reviewed its leading involvement in filarial research using different bioinformatics approaches, and have summarized available existing drugs and their targets to re-examine them and avoid resistance conditions. Moreover, some novel drug targets have been assembled for further study to design fresh and better pharmacological therapeutics. Various bioinformatics-based web resources and databases that may enrich filarial research have been discussed.
Vancea, Andrei; Grossniklaus, Michael; Norrie, Moira C.
In most web mashup applications, the content is generated using either web feeds or an application programming interface (API) based on web services. Both approaches have limitations. Data models provided by web feeds are not powerful enough to permit complex data structures to be transmitted. APIs based on web services are usually different for each web application, and thus different implementations of the APIs are required for each web service that a web mashup application uses. We propose...
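The feed-based mashup approach mentioned above amounts to pulling structured items out of an XML feed. The sketch below parses a minimal RSS 2.0 document with the standard library; the feed content and URLs are invented for illustration.

```python
import xml.etree.ElementTree as ET

FEED = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>First post</title><link>http://example.org/1</link></item>
  <item><title>Second post</title><link>http://example.org/2</link></item>
</channel></rss>"""

def feed_items(xml_text):
    """Extract (title, link) pairs from a minimal RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

items = feed_items(FEED)
# items[0] == ("First post", "http://example.org/1")
```

This also illustrates the limitation the abstract points out: the feed's flat title/link data model cannot carry the richer structures a web-service API could expose.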
Lawlor, Brendan; Walsh, Paul
There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.
Pallen, Mark J
Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! PMID:27471065
In Web application systems, the application environment and user requirements change easily. To address this problem, we present a component-based flexible Web user interface (WUI) model that combines flexible-software ideas with Web user interface development and can dynamically reconfigure the display style and functionality of the WUI at runtime. It separates two categories of information from the traditional component: the template, which describes the display style and is stored in an XML document, and the component-role, which adapts to changes in the operational data structure and is stored in a relational database, thereby solving the problems of WUI flexibility and reusability. Finally, a flexible WUI with a table data display function is given to illustrate the model's effectiveness and availability.
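The template-driven table display described above can be sketched as follows. This is a hypothetical illustration of the idea (display style held in an XML template, separate from the data), not the paper's actual implementation; the field names are invented.

```python
import xml.etree.ElementTree as ET

# Hypothetical display template stored as an XML document, as in the model.
TEMPLATE = """<template>
  <column field="name"  header="Name"/>
  <column field="score" header="Score"/>
</template>"""

def render_table(template_xml, rows):
    """Build a plain-text table whose columns are driven by the XML
    template, so editing the template reconfigures the display at
    runtime without touching application code."""
    cols = [(c.get("field"), c.get("header"))
            for c in ET.fromstring(template_xml).iter("column")]
    lines = ["\t".join(header for _, header in cols)]
    for row in rows:
        lines.append("\t".join(str(row.get(field, "")) for field, _ in cols))
    return "\n".join(lines)

table = render_table(TEMPLATE, [{"name": "a", "score": 1}])
# first line: "Name\tScore"
```

Reordering or adding `<column>` elements in the template changes the rendered table with no code change, which is the reconfigurability the model aims for.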
Bushell Michael E
Background Constraint-based approaches facilitate the prediction of cellular metabolic capabilities, based, in turn, on predictions of the repertoire of enzymes encoded in the genome. Recently, genome annotations have been used to reconstruct genome scale metabolic reaction networks for numerous species, including Homo sapiens, which allow simulations that provide valuable insights into topics including predictions of gene essentiality of pathogens, interpretation of genetic polymorphism in metabolic disease syndromes and suggestions for novel approaches to microbial metabolic engineering. These constraint-based simulations are being integrated with functional genomics portals, an activity that requires efficient implementation of the constraint-based simulations in the web-based environment. Results Here, we present Acorn, an open source (GNU GPL) grid computing system for constraint-based simulations of genome scale metabolic reaction networks within an interactive web environment. The grid-based architecture allows efficient execution of computationally intensive, iterative protocols such as Flux Variability Analysis, which can be readily scaled up as the numbers of models (and users) increase. The web interface uses AJAX, which facilitates efficient model browsing and other search functions, and intuitive implementation of appropriate simulation conditions. Research groups can install Acorn locally and create user accounts. Users can also import models in the familiar SBML format and link reaction formulas to major functional genomics portals of choice. Selected models and simulation results can be shared between different users and made publically available. Users can construct pathway map layouts and import them into the server using a desktop editor integrated within the system. Pathway maps are then used to visualise numerical results within the web environment. To illustrate these features we have deployed Acorn and created a
This article introduces some basic concepts of bioinformatics and data mining, outlines the major research areas of bioinformatics, and explains the application of data mining in this domain. It also discusses some of the current challenges and opportunities for data mining in bioinformatics.
Muhammad Ali Masood
Dealing with data means grouping information into a set of categories, either to learn new artifacts or to understand new domains. For this purpose researchers have always looked for the hidden patterns in data that can be defined and compared with other known notions based on the similarity or dissimilarity of their attributes according to well-defined rules. Data mining, with its tools of data classification and data clustering, is one of the most powerful techniques for handling data in a way that helps researchers identify the required information. As a step towards addressing this challenge, experts have utilized clustering techniques as a means of exploring hidden structure and patterns in underlying data. With improved stability, robustness and accuracy of unsupervised data classification in many fields, including pattern recognition, machine learning, information retrieval, image analysis and bioinformatics, clustering has proven itself a reliable tool. To identify the clusters in datasets, algorithms are utilized to partition a data set into several groups based on the similarity within a group. There is no single clustering algorithm; rather, various algorithms are utilized based on the domain of data that constitutes a cluster and the level of efficiency required. Clustering techniques are categorized based upon different approaches. This paper is a survey of a few of the many clustering techniques in data mining. Five of the most common clustering techniques are discussed: K-medoids, K-means, Fuzzy C-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Self-Organizing Map (SOM) clustering.
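Of the techniques surveyed above, K-means is the simplest to state: repeatedly assign each point to its nearest centroid, then move each centroid to the mean of its assigned points. The sketch below is a minimal, illustrative implementation on toy data, not the survey's own code.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means on tuples: assign each point to the nearest
    centroid, then move each centroid to the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the centroid with smallest squared distance to p
            j = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [tuple(sum(xs) / len(cl) for xs in zip(*cl)) if cl
                     else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
cents, clus = kmeans(pts, 2)
# the two clusters separate the points near (0,0) from those near (5,5)
```

K-medoids differs only in restricting centroids to actual data points, and DBSCAN replaces the fixed k with a density criterion.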
Feier, Christina; Polleres, Axel; Dumitru, Roman; Domingue, John; Stollberg, Michael; Fensel, Dieter
The Semantic Web and Semantic Web Services form a natural application area for intelligent agents, namely querying and reasoning about structured knowledge and semantic descriptions of services and their interfaces on the Web. This paper provides an overview of the Web Service Modeling Ontology, a conceptual framework for the semantic description of Web services.
WWW (World Wide Web) systems and databases are the foundation of networked information services. This paper discusses in detail the main technologies for connecting the Web to databases (including standard CGI, ISAPI, JDBC, IDC and ASP), and describes a practical application of Web-database interconnection based on IIS.
This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster
DR. ANURADHA; BABITA AHUJA
In this era of digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web while the deep web keeps expanding behind the scene. The web databases are hidden behind the query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from the Hidden Web databases. ...
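The first step of the kind of hidden-web extraction described above is discovering the query interfaces themselves, i.e. the HTML forms behind which the databases sit. The sketch below is a hypothetical illustration of that step using only the standard library; it is not the HWE system, and the sample page is invented.

```python
from html.parser import HTMLParser

class FormFinder(HTMLParser):
    """Collect each <form> action and the names of its input fields --
    the raw material for probing a hidden-web query interface."""
    def __init__(self):
        super().__init__()
        self.forms = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self._current = {"action": a.get("action", ""), "fields": []}
        elif tag in ("input", "select", "textarea") and self._current is not None:
            if a.get("name"):
                self._current["fields"].append(a["name"])

    def handle_endtag(self, tag):
        if tag == "form" and self._current is not None:
            self.forms.append(self._current)
            self._current = None

page = ('<form action="/search">'
        '<input name="title"/><select name="year"></select></form>')
finder = FormFinder()
finder.feed(page)
# finder.forms == [{"action": "/search", "fields": ["title", "year"]}]
```

A crawler would then fill these fields with probe values and submit them to surface the database contents behind the interface.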
The new interface of the Web of Science (of Thomson Reuters) enables users to retrieve sets larger than 100,000 documents in a single search. This makes it possible to compare publication trends for China, the USA, EU-27, and a number of smaller countries. China no longer grew exponentially during the 2000s, but linearly. Contrary to previous predictions on the basis of exponential growth or Scopus data, the cross-over of the lines for China and the USA is postponed to the next decade (after 2020) according to this data. These extrapolations, however, should be used only as indicators and not as predictions. Along with the dynamics in the publication trends, one also has to take into account the dynamics of the databases used for the measurement.
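The linear-versus-exponential distinction above comes down to which model is fitted before extrapolating. A least-squares linear fit and its extrapolation can be sketched as follows; the annual counts are invented illustration data, not actual Web of Science figures.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical annual publication counts (thousands), growing linearly.
years = [2005, 2006, 2007, 2008, 2009]
counts = [62, 71, 80, 89, 98]
a, b = linear_fit(years, counts)
projected_2020 = a * 2020 + b
# slope a == 9.0, so the 2020 extrapolation is 98 + 11*9 == 197
```

Fitting an exponential instead (a line through log-counts) would project a far larger 2020 value from the same data, which is exactly why the paper cautions that such extrapolations are indicators, not predictions.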
胡聪; 高明; 牛军浩
LXI applies and extends Ethernet technology in the field of automated testing. To help popularize this new network standard, the features and key techniques of LXI are studied. A NORCO-3680ALE embedded motherboard and a PM512B function circuit were adopted as the hardware for the LXI bus interface. The configuration and application of the mini_httpd web server on the Linux operating system, CSP program design, Ajax technology, and the Makefile configuration and generation of the CGI programs are highlighted. Finally, voltage is measured by accessing the CGI programs remotely through a browser. The test results show that the system can perform voltage measurement correctly, achieves instrument control through a Web interface, and conforms to the LXI standard.
元书俊; 朱守中; 金灵芝
Information in deep web data sources can be accessed by submitting queries, so analysing the query capability of a query interface is critical. Based on the notion of atomic queries, this paper proposes a method for estimating the query capability of a deep web interface by identifying all of the atomic queries available on that interface.
David H Johnson; Tsao, Jun; Luo, Ming; Carson, Mike
The SGCEdb () database/interface serves the primary purpose of reporting progress of the Structural Genomics of Caenorhabditis elegans project at the University of Alabama at Birmingham. It stores and analyzes results of experiments ranging from solubility screening arrays to individual protein purification and structure solution. External databases and algorithms are referenced and evaluated for target selection in the human, C.elegans and Pneumocystis carinii genomes. The flexible and reusa...
Placidi, Giuseppe; Petracca, Andrea; Spezialetti, Matteo; Iacoviello, Daniela
A Brain Computer Interface (BCI) allows communication for impaired people unable to express their intention with common channels. Electroencephalography (EEG) represents an effective tool to allow the implementation of a BCI. The present paper describes a modular framework for the implementation of the graphic interface for binary BCIs based on the selection of symbols in a table. The proposed system is also designed to reduce the time required for writing text. This is made by including a motivational tool, necessary to improve the quality of the collected signals, and by containing a predictive module based on the frequency of occurrence of letters in a language, and of words in a dictionary. The proposed framework is described in a top-down approach through its modules: signal acquisition, analysis, classification, communication, visualization, and predictive engine. The framework, being modular, can be easily modified to personalize the graphic interface to the needs of the subject who has to use the BCI and it can be integrated with different classification strategies, communication paradigms, and dictionaries/languages. The implementation of a scenario and some experimental results on healthy subjects are also reported and discussed: the modules of the proposed scenario can be used as a starting point for further developments, and application on severely disabled people under the guide of specialized personnel.
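The predictive module described above ranks candidate words by their frequency of occurrence so that fewer selections are needed per word. A minimal sketch of such a frequency-based predictor follows; the tiny corpus is invented, and a real BCI would use a full language dictionary.

```python
def build_predictor(corpus):
    """Rank words by frequency in a training corpus and return a
    function that predicts completions for a typed prefix."""
    freq = {}
    for word in corpus.lower().split():
        freq[word] = freq.get(word, 0) + 1
    # Most frequent first; ties broken alphabetically for determinism.
    ranked = sorted(freq, key=lambda w: (-freq[w], w))
    def predict(prefix, n=3):
        return [w for w in ranked if w.startswith(prefix)][:n]
    return predict

predict = build_predictor("the cat sat on the mat the cat ran")
# predict("th") == ["the"]
```

In the BCI setting, offering the top few predictions as selectable symbols reduces the number of binary choices the user must make to write a word.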
Bartlett, Andrew; Lewis, Jamie; Williams, Matthew L.
Bioinformatics, a specialism propelled into relevance by the Human Genome Project and the subsequent -omic turn in the life sciences, is an interdisciplinary field of research. Qualitative work on the disciplinary identities of bioinformaticians has revealed the tensions involved in work in this "borderland." As part of our ongoing work on the emergence of bioinformatics, between 2010 and 2011, we conducted a survey of United Kingdom-based academic bioinformaticians. Building on insights drawn from our fieldwork over the past decade, we present results from this survey relevant to a discussion of disciplinary generation and stabilization. Not only is there evidence of an attitudinal divide between the different disciplinary cultures that make up bioinformatics, but there are distinctions between the forerunners, founders and the followers; as inter/disciplines mature, they face challenges that are both inter-disciplinary and inter-generational in nature. PMID:27453689
Stevens, R; Miller, C
Bioinformaticians seeking to provide services to working biologists are faced with the twin problems of distribution and diversity of resources. Bioinformatics databases are distributed around the world and exist in many kinds of storage forms, platforms and access paradigms. To provide adequate services to biologists, these distributed and diverse resources have to interoperate seamlessly within single applications. The Common Object Request Broker Architecture (CORBA) offers one technical solution to these problems. The key component of CORBA is its use of object orientation as an intermediate form to translate between different representations. This paper concentrates on an explanation of object orientation and how it can be used to overcome the problems of distribution and diversity by describing the interfaces between objects.
Bioinformatics, by its very nature, is devoted to a set of targets that constantly evolve. Training is probably the best response to the constant need for the acquisition of bioinformatics skills. It is interesting to assess the effects of training on the different sets of researchers that make use of it. While training bench experimentalists in the life sciences, we have observed instances of changes in their attitudes to research that, if well exploited, can have beneficial impacts on the dialogue with professional bioinformaticians and influence the conduct of the research itself.
With the rapid expansion and development of the Internet and the WWW (World Wide Web, or Web), Web GIS (Web Geographical Information System) is becoming ever more popular, and as a result numerous sites have added GIS capability to their Web sites. In this paper, the reasons behind developing a Web GIS instead of a "traditional" GIS are first outlined. Then the current status of Web GIS is reviewed and implementation methodologies are explored. The underlying technologies for developing Web GIS, such as the Web server, Web browser, CGI (Common Gateway Interface), Java and ActiveX, are discussed, and some typical implementation tools from both the commercial and public domains are given. Finally, the future direction of Web GIS development is predicted.
Based on a comparative study of Cambridge Scientific Abstracts' Internet Database Service and OCLC's FirstSearch, this paper discusses the user-friendly interfaces of Web-based databases according to their characteristics such as database selection, search strategy formulation and reformulation, online help and result output.
茅琴娇; 冯博琴; 潘善亮
To further enhance the efficiency of search engines and achieve the capability of searching, indexing and locating the large amount of useful information contained in the deep web, latent semantic analysis is a simple and effective method. Through latent semantic analysis of the form attributes in the query interfaces that serve as entrances to deep web sites, the hidden semantic structure can be mined from the form attributes and a degree of dimension reduction achieved. Using this latent semantic structure, the data content of the corresponding site can be inferred and the similarity computation between different sites improved. Experimental results show that latent semantic analysis revises and improves the semantic understanding of the form attributes of deep web sites, making up for some of the shortcomings of pure keyword matching. This method can be used to find sites on the network with high similarity to a given site and, from typed-in form attributes, to produce a list of sites with similar forms.
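The core of the latent-semantic-analysis step above is a low-rank decomposition of the site-by-attribute matrix. The sketch below reduces to a single latent dimension via power iteration on A^T A (the dominant right singular vector), using an invented toy matrix; a real system would compute a full truncated SVD over many sites and attributes.

```python
def top_latent_direction(matrix, iters=100):
    """Power iteration on A^T A: the dominant right singular vector of
    the site-by-attribute matrix A, i.e. a one-dimensional LSA sketch."""
    m = len(matrix[0])
    v = [1.0] * m
    for _ in range(iters):
        av = [sum(row[j] * v[j] for j in range(m)) for row in matrix]  # A v
        w = [sum(matrix[i][j] * av[i] for i in range(len(matrix)))     # A^T (A v)
             for j in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Rows: deep-web sites; columns: form attributes (title, author, price, isbn).
A = [
    [1, 1, 0, 1],   # book store 1
    [1, 1, 1, 1],   # book store 2
    [0, 0, 1, 0],   # price-comparison site
]
v = top_latent_direction(A)
scores = [sum(a * b for a, b in zip(row, v)) for row in A]
# the two book stores project to nearby scores; the third site stands apart
```

Comparing sites by their projections in this reduced space, rather than by raw keyword overlap, is what lets near-synonymous forms be recognized as similar.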
Meganck, B.; Mergen, P.; Meirte, D.
The following text presents some personal ideas about the way (bio)informatics is heading, along with some examples of how our institution – the Royal Museum for Central Africa (RMCA) – is gearing up for these new times ahead. It tries to find the important trends amongst the buzzwords, and to demo
Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia
One of the hottest areas of science today is the field in which biology, information technology, and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…
This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
Since remote participation in ITER experiments is planned, it is expected to demonstrate that the JT-60SA experiment can be controlled from a Japanese remote experiment center located in Rokkasho-mura, Aomori-ken, Japan, as a part of the ITER-BA project. The functions required for this experiment are monitoring of the discharge sequence status, handling of the discharge parameters, checking of experiment data, and monitoring of plant data, all of which are included in the existing JT-60 Man-Machine Interfacing System (MMIF). The MMIF is currently available only to on-site users at the Naka site for reasons of network safety, so the central challenge for a remote MMIF is achieving remote access while remaining compatible with network safety. The Java language has been chosen to implement this task. This paper deals with details of the JT-60 MMIF for remote experiments that has evolved using the Java language
Chau, M; H Chen; Li, X; Ho, YJ; Tseng, C
With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems curricula. This paper reports on an experience using open Web Application Programming Interfaces (APIs) that have been made available by major Inter...
Rinaldelli, Mauro; Carlon, Azzurra; Ravera, Enrico; Parigi, Giacomo, E-mail: email@example.com; Luchinat, Claudio, E-mail: firstname.lastname@example.org [University of Florence, CERM and Department of Chemistry “Ugo Schiff” (Italy)
Pseudocontact shifts (PCSs) and residual dipolar couplings (RDCs) arising from the presence of paramagnetic metal ions in proteins as well as RDCs due to partial orientation induced by external orienting media are nowadays routinely measured as a part of the NMR characterization of biologically relevant systems. PCSs and RDCs are becoming more and more popular as restraints (1) to determine and/or refine protein structures in solution, (2) to monitor the extent of conformational heterogeneity in systems composed of rigid domains which can reorient with respect to one another, and (3) to obtain structural information in protein–protein complexes. The use of both PCSs and RDCs proceeds through the determination of the anisotropy tensors which are at the origin of these NMR observables. A new user-friendly web tool, called FANTEN (Finding ANisotropy TENsors), has been developed for the determination of the anisotropy tensors related to PCSs and RDCs and has been made freely available through the WeNMR ( http://fanten-enmr.cerm.unifi.it:8080 ) gateway. The program has many new features not available in other existing programs, among which the possibility of a joint analysis of several sets of PCS and RDC data and the possibility to perform rigid body minimizations.
Bioinformatics is the use of information technology to help solve biological problems by designing novel and incisive algorithms and methods of analysis. Bioinformatics has become a vital discipline in the post-genomics era. In this review article, the application of bioinformatics in tropical medicine is presented and discussed.
With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating a larger number of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented, multi-platform software package for expressed sequence tag (EST) annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: (1) searching local or remote biological databases for sequence similarities using BLAST services, (2) predicting protein coding sequences from EST data and (3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics.
Background: The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. Results: The software guides the user through the configuration steps required for the analysis of single- or multi-channel experiments. The web application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. Conclusions: The implemented web application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under the GPL.
Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693
Smita, Shuchi; Lenka, Sangram Keshari; Katiyar, Amit; Jaiswal, Pankaj; Preece, Justin; Bansal, Kailash Chander
The QlicRice database is designed to host publicly accessible, abiotic stress responsive quantitative trait loci (QTLs) in rice (Oryza sativa) and their corresponding sequenced gene loci. It provides a platform for data mining of abiotic stress responsive QTLs, as well as for browsing and annotating associated traits, their location on the sequenced genome, mapped expressed sequence tags (ESTs), and tissue- and growth-stage-specific expression across the whole genome. Information on QTLs related to abiotic stresses and their corresponding loci from a genomic perspective had not previously been integrated on an accessible, user-friendly platform. QlicRice offers a client-responsive architecture to retrieve meaningful biological information, integrated and named 'Qlic Search', embedded in a query-phrase autocomplete feature coupled with multiple search options that include trait names, genes and QTL IDs. A comprehensive physical and genetic map and vital statistics are provided graphically for deciphering the position of QTLs on different chromosomes. A convenient and intuitive user interface has been designed to help users retrieve associations to agronomically important QTLs on abiotic stress response in rice. Database URL: http://nabg.iasri.res.in:8080/qlic-rice/. PMID:21965557
李雪玲; 施化吉; 兰均; 李星毅
To address the shortcomings of existing Deep Web query-interface identification methods, which produce many false positives and cannot effectively distinguish search-engine interfaces, a Deep Web query interface identification method based on decision trees and link similarity is proposed. The method uses the information gain ratio to select important attributes and builds a decision tree to pre-classify interface forms, identifying the interfaces with distinct features. A link-similarity-based method then performs a second pass over the interfaces not yet identified, accurately recognizing true query interfaces and excluding search-engine interfaces. Experimental results show that the method can effectively distinguish search-engine interfaces and improves both the precision and recall of the classification.
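The pre-classification step above selects form attributes by information gain ratio (the C4.5-style splitting criterion). A minimal sketch of that computation follows; the toy form feature and labels are invented for illustration and are not taken from the paper.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Information gain ratio of one candidate attribute over the training set."""
    n = len(labels)
    base = entropy(labels)          # entropy before splitting
    cond = 0.0                      # conditional entropy after splitting on the attribute
    split = 0.0                     # split information (penalizes many-valued attributes)
    for val, cnt in Counter(values).items():
        subset = [lab for v, lab in zip(values, labels) if v == val]
        p = cnt / n
        cond += p * entropy(subset)
        split -= p * math.log2(p)
    return (base - cond) / split if split else 0.0

# Toy training data: does the form contain a free-text box, vs. its class.
has_textbox = ["yes", "yes", "no", "yes", "no", "no"]
label       = ["query", "query", "other", "query", "other", "other"]
print(gain_ratio(has_textbox, label))   # → 1.0 (the attribute separates the classes perfectly)
```

A real decision-tree builder would compute this ratio for every candidate attribute at every node and split on the best one.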
As the power industry develops, there is a desire to integrate power marketing and data acquisition into a unified information platform, in order to provide centralized management and realize data sharing among systems. This paper introduces a Web Service-based method for exchanging power marketing and acquisition data in a heterogeneous environment. Taking the upload of a county's power acquisition data to the provincial center as a case study, it is shown that this scheme can realize the interface between power marketing and acquisition services.
Recently, a new web development technique for creating interactive web applications, dubbed AJAX, has emerged in response to the limited degree of interactivity in large-grain stateless web interactions. In this new model, the web interface is composed of individual components which can be updated/replaced independently.
Schweighofer, Karl; Pohorille, Andrew
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Jones, Andrew R; Hubbard, Simon J
This book is part of the Methods in Molecular Biology series, and provides a general overview of computational approaches used in proteome research. In this chapter, we give an overview of the scope of the book in terms of current proteomics experimental techniques and the reasons why computational approaches are needed. We then give a summary of each chapter, which together provide a picture of the state of the art in proteome bioinformatics research.
Background: High-throughput bioinformatic analysis tools are needed to mine the large amount of structural data via knowledge-based approaches. The development of such tools requires a robust interface to access the structural data in an easy way. For this the Python scripting language is the optimal choice, since its philosophy is to write understandable source code. Results: p3d is an object-oriented Python module that adds a simple yet powerful interface to the Python interpreter to process and analyse three-dimensional protein structure files (PDB files). p3d's strength arises from the combination of (a) very fast spatial access to the structural data due to the implementation of a binary space partitioning (BSP) tree, (b) set theory and (c) functions that allow combining (a) and (b) and that use human-readable language in the search queries rather than complex computer language. All these factors combined facilitate the rapid development of bioinformatic tools that can perform quick and complex analyses of protein structures. Conclusion: p3d is the perfect tool to quickly develop tools for structural bioinformatics using the Python scripting language.
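To illustrate the kind of fast within-radius atom query that p3d's BSP tree makes possible, here is a rough sketch using a uniform spatial hash grid instead of a BSP tree (same "only look at nearby partitions" idea, simpler to write); the class, atom names and coordinates are invented for illustration and this is not the p3d API.

```python
import math
from collections import defaultdict

class AtomGrid:
    """Uniform spatial hash over atom coordinates for fast within-radius queries.
    (p3d itself uses a BSP tree; a grid conveys the same partitioning idea.)"""
    def __init__(self, atoms, cell=4.0):
        self.cell = cell
        self.grid = defaultdict(list)
        for name, (x, y, z) in atoms:
            key = (int(x // cell), int(y // cell), int(z // cell))
            self.grid[key].append((name, (x, y, z)))

    def within(self, center, radius):
        """Return names of all atoms within `radius` of `center` (angstroms)."""
        cx, cy, cz = (int(c // self.cell) for c in center)
        reach = int(radius // self.cell) + 1   # how many cells the radius can span
        hits = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for name, p in self.grid.get((cx + dx, cy + dy, cz + dz), []):
                        if math.dist(p, center) <= radius:
                            hits.append(name)
        return hits

# Hypothetical C-alpha atoms of a tiny structure.
atoms = [("CA:1", (0.0, 0.0, 0.0)), ("CA:2", (3.0, 0.0, 0.0)), ("CA:3", (12.0, 0.0, 0.0))]
g = AtomGrid(atoms)
print(sorted(g.within((0.0, 0.0, 0.0), 5.0)))   # → ['CA:1', 'CA:2']
```

Only cells that can intersect the query sphere are scanned, which is what makes such queries fast on large PDB files.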
Antonio d'Acierno; Angelo Facchiano; Anna Marabotti
We describe the GALT-Prot database and its related web-based application, developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to structural information about the wild-type enzyme, the database also includes the structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analyzed with several bioinformatics programs in order to investigate the effect of the mutation. The web-based interface allows querying of the database, and several links are also provided in order to guarantee high integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.
Holtzclaw, J. David; Eisen, Arri; Whitney, Erika M.; Penumetcha, Meera; Hoey, J. Joseph; Kimbro, K. Sean
Many students at minority-serving institutions are underexposed to Internet resources such as the human genome project, PubMed, NCBI databases, and other Web-based technologies because of a lack of financial resources. To change this, we designed and implemented a new bioinformatics component to supplement the undergraduate Genetics course at…
Tusch, Guenter; Bretl, Chris; O'Connor, Martin; Connor, Martin; Das, Amar
Mining large clinical and bioinformatics databases often includes exploration of temporal data. For example, in liver transplantation, researchers might look for patients with an unusual time pattern of potential complications of the liver. In knowledge-based temporal abstraction, time-stamped data points are transformed into an interval-based representation. We extended this framework by creating an open-source platform, SPOT. It supports the R statistical package and knowledge representation standards (OWL, SWRL) using the open-source Semantic Web tool Protégé-OWL. PMID:18999225
Lopez, Rodrigo; Silventoinen, Ville; Robinson, Stephen; Kibria, Asif; Gish, Warren
Since 1995, the WU-BLAST programs (http://blast.wustl.edu) have provided a fast, flexible and reliable method for similarity searching of biological sequence databases. The software is in use at many locales and web sites. The European Bioinformatics Institute's WU-Blast2 (http://www.ebi.ac.uk/blast2/) server has been providing free access to these search services since 1997 and today supports many features that both enhance the usability and expand on the scope of the software.
Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C. Victor
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a ‘node’, a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets bio...
Despite all of the UI toolkits available today, it's still not easy to design good application interfaces. This bestselling book is one of the few reliable sources to help you navigate through the maze of design options. By capturing UI best practices and reusable ideas as design patterns, Designing Interfaces provides solutions to common design problems that you can tailor to the situation at hand. This updated edition includes patterns for mobile apps and social media, as well as web applications and desktop software. Each pattern contains full-color examples and practical design advice th
Thomas K Karikari
Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.
In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.
Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude;
Owing to the development of 'high-throughput biology', the need for training in the field of bioinformatics, in particular, is seeing a resurgence: it has been defined as a key priority by many institutions and research programmes and is now an important component of many grant proposals. Nevertheless, when it comes to planning and preparing to meet such training needs, tension arises between the reward structures that predominate in the scientific community, which compel individuals to publish or perish, and the time that must be devoted to the design, delivery and maintenance of high-quality training materials. Conversely, there is much relevant teaching material and training expertise available worldwide that, were it properly organized, could be exploited by anyone who needs to provide training or needs to set up a new course. To do this, however, the materials would have to be centralized in a database...
US Agency for International Development — QWICR is a secure, online Title II commodity reporting system accessible to USAID Missions, PVO Cooperating Sponsors and Food for Peace Officers. QWICR provides PVO...
Carlisle, W. H.
This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms, SUN, Intel, and Motorola processors, and their most common operating systems: UNIX, Windows NT, Windows for Workgroups, and MacOS. The SPARC 20 runs SUN Solaris 2.4; an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory; a Pentium PC runs Windows for Workgroups; two Intel 386 machines run Windows 3.1; and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.
Web 2.0 technologies enable users to produce and distribute their own content. The variety of motives for taking part in these communication processes leads to considerable differences in levels of quality. While social media contexts have developed features for evaluating contributions, user-generated maps frequently do not offer tools to question or examine the origin and elements of user-generated content. This paper discusses the effects of the integration of Web 2.0 features with web map...
In this era of a digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. The web databases are hidden behind query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from Hidden Web databases. Since the only "entry point" to a Hidden Web site is a query interface, the main challenge that a Hidden Web Extractor has to face is how to automatically generate meaningful queries for the unlimited number of website pages.
Ku, David Tawei; Chang, Chia-Chi
By conducting usability testing on a multilanguage Web site, this study analyzed the cultural differences between Taiwanese and American users in the performance of assigned tasks. To provide feasible insight into cross-cultural Web site design, Microsoft Office Online (MOO) that supports both traditional Chinese and English and contains an almost…
Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, which has many different uses, for use in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…
This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR RLK) genetic…
Heyer, Laurie J.
This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
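The optimization described in this abstract can be made concrete with a minimal Needleman-Wunsch sketch for global alignment; the scoring scheme (match +1, mismatch -1, gap -1) is an illustrative choice, not taken from the article.

```python
def global_align(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score of strings a and b via dynamic programming."""
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning the prefixes a[:i] and b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap          # a[:i] aligned entirely against gaps
    for j in range(1, m + 1):
        score[0][j] = j * gap          # b[:j] aligned entirely against gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag,                    # align a[i-1] with b[j-1]
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
    return score[n][m]

print(global_align("GATTACA", "GCATGCU"))   # → 0
```

The full algorithm also records which of the three cases won at each cell, so the optimal alignment itself can be recovered by backtracking from `score[n][m]`.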
Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
Zhong, Yang; Zhang, Xiaoyan; Ma, Jian; Zhang, Liang
As the Human Genome Project experiences remarkable success and a flood of biological data is produced, bioinformatics becomes a very "hot" cross-disciplinary field, yet experienced bioinformaticians are urgently needed worldwide. This paper summarises the rapid development of bioinformatics education in China, especially related undergraduate…
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap forward from in-house computing infrastructure to utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.
Designers have a saying that "the joy of an early release lasts but a short time. The bitterness of an unusable system lasts for years." It is indeed disappointing to discover that your data resources are not being used to their full potential. Not only have you invested your time, effort, and research grant on the project, but you may face costly redesigns if you want to improve the system later. This scenario would be less likely if the product were designed to provide users with exactly what they need, so that it is fit for purpose before its launch. We work at the EMBL-European Bioinformatics Institute (EMBL-EBI), and we consult extensively with life science researchers to find out what they need from biological data resources. We have found that although users believe that the bioinformatics community is providing accurate and valuable data, they often find the interfaces to these resources tricky to use and navigate. We believe that if you can find out what your users want even before you create the first mock-up of a system, the final product will provide a better user experience. This would encourage more people to use the resource and give them greater access to the data, which could ultimately lead to more scientific discoveries. In this paper, we explore the need for a user-centred design (UCD) strategy when designing bioinformatics resources and illustrate this with examples from our work at EMBL-EBI. Our aim is to introduce the reader to how selected UCD techniques may be successfully applied to software design for bioinformatics.
石龙; 强保华; 谌超; 吴春明
With the rapid development of Internet technology, Web databases have multiplied and their number continues to grow rapidly. To effectively organise and utilise the information hidden deep within Web databases, they need to be classified and integrated by domain. Since a Web page's query interface is the only channel through which users access the underlying Web database, Deep Web data sources can be classified by classifying their query interfaces. This paper proposes a classification method based on a text vector space model (VSM) of the query interface. The basic idea is first to build a vector space model from the query interface's text, and then to train one or more classifiers with a typical data-mining classification algorithm, thereby determining the domains to which the query interfaces belong. Experimental results show that the proposed approach has excellent classification performance.
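The VSM-plus-classifier idea above can be sketched as a toy nearest-centroid classifier over term-frequency vectors; the training interfaces and domain labels here are invented examples, and the paper's actual mining algorithm and feature weighting may well differ.

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector (a simple VSM representation) of interface text."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def train_centroids(labelled):
    """Average the TF vectors of each domain's training interfaces."""
    centroids = {}
    for label, texts in labelled.items():
        total = Counter()
        for t in texts:
            total.update(tf_vector(t))
        centroids[label] = Counter({term: c / len(texts) for term, c in total.items()})
    return centroids

def classify(text, centroids):
    """Assign the interface text to the domain with the most similar centroid."""
    v = tf_vector(text)
    return max(centroids, key=lambda label: cosine(v, centroids[label]))

# Invented training interfaces for two domains.
training = {
    "books":   ["title author isbn publisher search", "author title keyword search"],
    "flights": ["departure city arrival city date passengers", "from airport to airport depart return"],
}
cent = train_centroids(training)
print(classify("search by title or author name", cent))   # → books
```

A production system would replace the raw term frequencies with TF-IDF weights and the nearest-centroid rule with a trained classifier such as SVM or naive Bayes.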
Web Dynpro ABAP, a NetWeaver web application user interface tool from SAP, enables web programming connected to SAP Systems. The authors' main focus was to create a book based on their own practical experience. Each chapter includes examples which lead through the content step-by-step and enable the reader to gradually explore and grasp the Web Dynpro ABAP process. The authors explain in particular how to design Web Dynpro components, the data binding and interface methods, and the view controller methods. They also describe the other SAP NetWeaver Elements (ABAP Dictionary, Authorization) and
Neculae, Alina Georgiana
The aim of this project is to develop a web interface to be used by the Icinga monitoring system to manage the CMS online cluster at the experimental site. The interface will allow users to visualize the information in a compressed and intuitive way, as well as to modify the information of each individual object and edit the relationships between classes.
Segun A Fatumo
Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.
Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong
In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
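The MapReduce model introduced above can be illustrated with a toy single-machine sketch that counts k-mers in sequencing reads; the function names and reads are invented for illustration, and a real deployment would distribute the map and reduce phases across a cluster (e.g. via Hadoop).

```python
from collections import defaultdict
from itertools import chain

def mapper(read, k=3):
    """Map step: emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    """Shuffle step: group emitted values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, val in pairs:
        groups[key].append(val)
    return groups

def reducer(key, values):
    """Reduce step: sum the counts collected for one k-mer."""
    return key, sum(values)

reads = ["GATTACA", "TTACAGA"]
mapped = chain.from_iterable(mapper(r) for r in reads)
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts["TTA"])   # → 2 (the 3-mer TTA occurs once in each read)
```

Because each map call sees only one read and each reduce call sees only one key's values, both phases parallelize trivially, which is what makes the model attractive for large sequencing data sets.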
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in the discovery and management of new knowledge relating to health and disease. This article details three projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations.
Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar
We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, while only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was...
Niriksha Bhojaraj Kabbin
Web applications in today's world have a great impact on businesses and are popular because they deliver business benefits and are widely deployable. Developing such efficient web applications using leading-edge web technologies that promise an upgraded user interface, greater scalability and interoperability, and improved performance and usability across different systems is a challenge. Google Web Toolkit (GWT) is one such framework, helping to build Rich Internet Applications (RIAs) and enabling productive development of high-performance web applications. This paper puts forward an effective solution for developing quality web-based applications with an added layer of security.
Pettifer, S.; Ison, J.; Kalas, M.;
The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that, in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology and for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection.
Alva, Vikram; Nam, Seung-Zin; Söding, Johannes; Lupas, Andrei N
The MPI Bioinformatics Toolkit (http://toolkit.tuebingen.mpg.de) is an open, interactive web service for comprehensive and collaborative protein bioinformatic analysis. It offers a wide array of interconnected, state-of-the-art bioinformatics tools to experts and non-experts alike, developed both externally (e.g. BLAST+, HMMER3, MUSCLE) and internally (e.g. HHpred, HHblits, PCOILS). While a beta version of the Toolkit was released 10 years ago, the current production-level release has been available since 2008 and has serviced more than 1.6 million external user queries. The usage of the Toolkit has continued to increase linearly over the years, reaching more than 400 000 queries in 2015. In fact, through the breadth of its tools and their tight interconnection, the Toolkit has become an excellent platform for experimental scientists as well as a useful resource for teaching bioinformatic inquiry to students in the life sciences. In this article, we report on the evolution of the Toolkit over the last ten years, focusing on the expansion of the tool repertoire (e.g. CS-BLAST, HHblits) and on infrastructural work needed to remain operative in a changing web environment. PMID:27131380
Simi, Manuele; Campagne, Fabien
Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is
Systems integration is becoming the driving force for 21st century biology. Researchers are systematically tackling gene functions and complex regulatory processes by studying organisms at different levels of organization, from genomes and transcriptomes to proteomes and interactomes. To fully realize the value of such high-throughput data requires advanced bioinformatics for integration, mining, comparative analysis, and functional interpretation. We are developing a bioinformatics research ...
Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.
RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure-preserving) base changes, as well as methods for interactions between RNA and proteins. Here, we introduce the basic concepts of predicting RNA secondary structure relevant to the further analysis of RNA sequences. We also provide pointers to methods addressing various aspects of RNA bioinformatics and computational RNA biology.
Ghulam A. PARRAY; Abdul G. Rather; Parvez Sofi; Shafiq A. Wani; Amjad M. Husaini; Asif B. Shikari; Javid I. Mir
Saffron (Crocus sativus L.) is a sterile triploid plant and belongs to the Iridaceae (Liliales, Monocots). Its genome is of relatively large size and is poorly characterized. Bioinformatics can play an enormous technical role in the sequence-level structural characterization of saffron genomic DNA. Bioinformatics tools can also help in appreciating the extent of diversity of various geographic or genetic groups of cultivated saffron, to infer relationships between groups and accessions. The characterization of the transcriptome of saffron stigmas is the most vital for throwing light on the molecular basis of flavor, color biogenesis, genomic organization and the biology of the gynoecium of saffron. The information derived can be utilized for constructing the biological pathways involved in the biosynthesis of the principal components of saffron, i.e., crocin, crocetin, safranal, picrocrocin and safchiA.
The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream of the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera exist: Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10^-4 to 2×10^-2 substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias; cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.
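The G + C content figures quoted above (32% to 43%) come from a simple base-composition computation over the genome sequence, which can be sketched as:

```python
def gc_content(sequence):
    """Percentage of G and C bases in a nucleotide sequence."""
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

# Toy sequence for illustration; a coronavirus genome is ~26-32 kb long.
example = "GGCCAATT"
pct = gc_content(example)
```

Production code would additionally handle ambiguity codes (N, R, Y, ...) and validate the alphabet, which this sketch omits.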
邓中亮; 林清; 李来新
To address the current demand for a unified, service-oriented 3G service platform, this paper proposes an implementation of a value-added service portal based on two technologies, Portal and Web Service. It describes the concrete application of Web Services as the interface technology for interconnecting the value-added service portal with the service management platform and, to meet the requirement of ordering service products/product bundles, presents the design and implementation of an interface based on Apache Axis. Its unified, customizable management approach provides a new design concept and a practical foundation for operators' future portals.
Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs.
The aim of this research study is to design and implement a Wi-Fi-based control panel for the remote control of lights and electrical appliances, with a web functionality that allows wide-area control via the intranet or Internet. This eliminates the inconvenience of moving from one switch to another for analog operation of light fixtures and appliances in home, office and campus environments. The wireless technology we adopted is IEEE 802.11 b/g (2008), also called Wi-Fi (Wireless Fidelity), which operates in a free band and is easily accessible. Wi-Ap (Wi-Fi Automated Appliance control system) contains a web portal that allows for management and control via the intranet or Internet. We built a standalone Wi-Ap console that allows the wireless switching on and off of any appliance(s) plugged into it. The prototype we built was tested within the intranet of the Electrical and Information Engineering department, Covenant University, Nigeria, and the test achieved our aim of remote appliance control from a web portal via the intranet.
Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan;
…are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected to a local mirror of the UCSC genome browser database, which enables users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at http://rth.dk/resources/rnasnp/.
Russell J. KOHEL; John Z. YU; Piyush GUPTA; Rajeev AGRAWAL
There are several web sites for which information is available to the cotton research community. Most of these sites relate to resources developed by or available to the research community. Few provide bioinformatic tools, which usually relate to the specific data sets and materials presented in the database. Just as the bioinformatics area is evolving, the available resources reflect this evolution.
Reddy, Ch Ram Mohan; Geetha, D. Evangelin; Srinivasa, K. G.; Kumar, T. V. Suresh; Kanth, K. Rajani
A web service is an interface that implements business logic. Performance is an important quality aspect of web services because of their distributed nature, and predicting the performance of web services during the early stages of software development is significant. In this paper we model a web service using Unified Modeling Language use case, sequence and deployment diagrams. We obtain the performance metrics by simulating the web services model using the simulation tool Simulation of Mult...
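The paper predicts performance by simulating UML models; as a rough analytic counterpart (not the authors' method), a single web service endpoint can be approximated as an M/M/1 queue, where mean response time grows sharply as utilization approaches 1:

```python
def mm1_response_time(service_time_s, arrival_rate_per_s):
    """Mean response time of an M/M/1 queue: R = S / (1 - rho),
    where rho = lambda * S is the server utilization.
    Valid only for a stable system (rho < 1)."""
    rho = arrival_rate_per_s * service_time_s
    if rho >= 1.0:
        raise ValueError("system is unstable (utilization >= 1)")
    return service_time_s / (1.0 - rho)

# 100 ms service time, 5 requests/s -> 50% utilization, 200 ms mean response.
r = mm1_response_time(0.1, 5)
```

Such closed-form estimates are useful early sanity checks, while simulation (as in the paper) captures effects the formula cannot, such as multi-tier contention.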
Klatt, Edward C.
We present SMPDF Web, a web interface for the construction of parton distribution functions (PDFs) with a minimal number of error sets needed to represent the PDF uncertainty of specific processes (SMPDF).
Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.
The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…
Ketterl, Markus; Mertens, Robert; Vornberger, Oliver
Purpose: At many universities, web lectures have become an integral part of the e-learning portfolio over the last few years. While many aspects of the technology involved, like automatic recording techniques or innovative interfaces for replay, have evolved at a rapid pace, web lecturing has remained independent of other important developments…
Background: The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web-services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results: The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions: We present a novel approach to access the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
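Programmatic access of the kind described typically reduces to constructing a method-call URL and parsing the response. The sketch below builds request URLs for a hypothetical REST-style endpoint; the base URL and method name are invented for illustration and do not reflect the actual SEED API.

```python
from urllib.parse import urlencode

class AnnotationServiceClient:
    """Minimal sketch of a programmatic client for a genome-annotation
    web service. Base URL and method names are hypothetical."""

    def __init__(self, base_url="http://servers.example.org/seed/api"):
        self.base_url = base_url

    def build_request(self, method, **params):
        """Build the request URL for one API method call.
        Parameters are sorted so URLs are deterministic and cacheable."""
        query = urlencode(sorted(params.items()))
        return f"{self.base_url}/{method}?{query}"

client = AnnotationServiceClient()
url = client.build_request("genomes_of_subsystem", subsystem="Glycolysis")
```

A real client would then issue the request (e.g. with `urllib.request`) and decode the returned payload; keeping URL construction separate, as here, makes that layer easy to test without network access.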
Reviews Equipment: BioLite Camp Stove Game: Burnout Paradise Equipment: 850 Universal interface and Capstone software Equipment: xllogger Book: Science Magic Tricks and Puzzles Equipment: Spinthariscope Equipment: DC Power Supply HY5002 Web Watch
WE RECOMMEND
BioLite CampStove: robust and multifaceted stove illuminates physics concepts
850 Universal Interface and Capstone software: powerful data-acquisition system offers many options for student experiments and demonstrations
xllogger: obtaining results is far from an uphill struggle with this easy-to-use datalogger
Science Magic Tricks and Puzzles: small but perfectly formed and inexpensive book packed with 'magic-of-science' demonstrations
Spinthariscope: kit for older students to have the memorable experience of 'seeing' radioactivity
WORTH A LOOK
DC Power Supply HY5002: solid and effective, but noisy and lacks portability
HANDLE WITH CARE
Burnout Paradise: car computer game may be quick off the mark, but goes nowhere fast when it comes to lab use
WEB WATCH
'Live' tube map and free apps would be a useful addition to school physics, but maths-questions website of no more use than a textbook
Zhao, Shanrong; Lu, Jin
, including de novo library design in selection of favorable germline V gene scaffolds and CDR lengths. In addition, we have also developed a web application framework to present our knowledge database, and the web interface can help people to easily retrieve a variety of information from the knowledge database.
The thesis focuses on two bioinformatics research topics: the development of tools for the efficient and reliable identification of single nucleotide polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapters 2, 3 and 4), and the subsequent implementation...
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP). The…
Biological sequence alignment is an important and challenging task in bioinformatics. An alignment may be defined as an arrangement of two or more DNA or protein sequences that highlights the regions of their similarity. Sequence alignment is used to infer the evolutionary relationship between a set of proteins...
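The standard way to compute such an alignment is dynamic programming. The following is a minimal sketch of the Needleman-Wunsch global alignment score with linear gap penalties; the scoring values (match +1, mismatch -1, gap -1) are illustrative defaults, not a fixed convention.

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score via dynamic programming.
    Keeps only two DP rows, so memory is O(len(b)) rather than O(len(a)*len(b))."""
    prev = [j * gap for j in range(len(b) + 1)]  # row 0: all-gap prefix
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * len(b)          # column 0: all-gap prefix
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag,                  # align a[i-1] with b[j-1]
                          prev[j] + gap,         # gap in b
                          curr[j - 1] + gap)     # gap in a
        prev = curr
    return prev[-1]

score = nw_score("ACGT", "AGT")   # best alignment: A-C/GT gap -> 3 matches, 1 gap
```

Recovering the alignment itself (not just its score) requires keeping the full table and tracing back from the bottom-right cell, which this score-only sketch omits.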
Ondrej, Vladan; Dvorak, Petr
Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
Boyle, John A.
Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
Constant improvements in the fields of surveying, computing and distribution of digital content are reshaping the way Cultural Heritage can be digitised and virtually accessed, even remotely via the web. A traditional 2D approach to data access, exploration and retrieval may generally suffice; however, more complex analyses concerning spatial and temporal features require 3D tools, which, in some cases, have not yet been implemented or are not yet generally commercially available. Efficient organisation and integration strategies applicable to the wide array of heterogeneous data in the field of Cultural Heritage represent a hot research topic nowadays. This article presents a visualisation and query tool (QueryArch3D) conceived to deal with multi-resolution 3D models. Geometric data are organised in successive levels of detail (LoD), provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. The visualisation and query front-end enables 3D navigation of the models in a virtual environment, as well as interaction with the objects by means of queries based on attributes or on geometries. The tool can be used as a standalone application, or served through the web. The characteristics of the research work, along with some implementation issues and the developed QueryArch3D tool, are discussed and presented.
Background: In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment, are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method, with its iterative development phases after trial and error. Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings: We implemented Pwrake workflows to process next-generation sequencing data using the Genome Analysis Toolkit (GATK) and Dindel. The GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that, in practice, actual scientific workflow development iterates over two phases: the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate the modularity of the GATK and Dindel workflows. Conclusions: Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain-specific language design built on Ruby gives the flexibility of rakefiles for writing scientific workflows.
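Pwrake workflows are written as Ruby rakefiles, where each task declares its prerequisites and the tool runs them in dependency order. The same rake-style model can be sketched in Python; this illustrates the dependency-driven execution idea only, not Pwrake's actual API.

```python
def run_workflow(tasks, target, done=None):
    """Run `target` after all of its prerequisites, rake-style.
    `tasks` maps a task name to (list of prerequisite names, action callable).
    Returns the completed task names in execution order."""
    if done is None:
        done = []
    if target in done:
        return done                      # already built, skip (like make/rake)
    prereqs, action = tasks[target]
    for dep in prereqs:
        run_workflow(tasks, dep, done)   # build dependencies first
    action()
    done.append(target)
    return done

# Hypothetical three-step sequencing pipeline: align -> dedup -> call variants.
log = []
tasks = {
    "align": ([],        lambda: log.append("align")),
    "dedup": (["align"], lambda: log.append("dedup")),
    "call":  (["dedup"], lambda: log.append("call")),
}
order = run_workflow(tasks, "call")
```

Pwrake's contribution on top of this basic model is running independent prerequisites in parallel across cluster nodes, which a sequential sketch like this cannot show.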
饶志敏; 余阳; 李长森
Tjin-Kam-Jet, Kien-Tsoi T.E.
This proposal identifies two main problems related to deep web search, and proposes a step-by-step solution for each of them. The first problem is about searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input fields)...
Schneider, M.V.; Watson, J.; Attwood, T.;
As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics...
Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.
There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…
Munteanu, Cristian R; Magalhães, Alexandre L
Prot-2S is a bioinformatics web application devised to analyse protein chain secondary structures (2S) (http://www.requimte.pt:8080/Prot-2S/). The tool is built on the RCSB Protein Data Bank (PDB) and DSSP application/files, and includes calculation and graphical display of amino acid propensities in 2S motifs based on any user amino acid classification/code (for any particular protein chain list). The interface can calculate the 2S composition, display the 2S subsequences, and search for DSSP non-standard residues and for pairs/triplets/quadruplets (amino acid patterns in 2S motifs). This work presents some Prot-2S applications, showing its usefulness in protein research and as an e-learning tool as well. PMID:19640828
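The 2S composition such a tool reports is essentially a per-class frequency over a secondary-structure string. A minimal sketch, using a simplified three-class alphabet (H = helix, E = strand, C = coil) rather than the full eight-state DSSP code:

```python
def ss_composition(ss_string):
    """Fraction of residues in each secondary-structure class for a
    simplified three-state string (H = helix, E = strand, C = coil)."""
    n = len(ss_string)
    if n == 0:
        raise ValueError("empty secondary-structure string")
    return {cls: ss_string.count(cls) / n for cls in "HEC"}

# Hypothetical 8-residue chain: a short helix, one strand residue, a coil tail.
composition = ss_composition("HHHHECCC")
```

Mapping the eight DSSP states down to three classes (e.g. G/H/I to helix, E/B to strand, the rest to coil) is a common preprocessing step that this sketch assumes has already been done.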
Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C Victor
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.
Yang Woo Ick
Background: For the past few years, scientific controversy has surrounded the large number of errors in forensic and literature mitochondrial DNA (mtDNA) data. However, recent research has shown that using mtDNA phylogeny and referring to known mtDNA haplotypes can be useful for checking the quality of sequence data. Results: We developed a Web-based bioinformatics resource, "mtDNAmanager", that offers a convenient interface supporting the management and quality analysis of mtDNA sequence data. mtDNAmanager performs computations on mtDNA control-region sequences to estimate the most-probable mtDNA haplogroups and retrieves similar sequences from a selected database. By the phased designation of the most-probable haplogroups (both expected and estimated haplogroups), mtDNAmanager enables users to systematically detect errors whilst allowing for confirmation of the presence of clear key diagnostic mutations and accompanying mutations. The query tools of mtDNAmanager also facilitate database screening with the two options "match" and "include the queried nucleotide polymorphism". In addition, mtDNAmanager provides Web interfaces for users to manage and analyse their own data in batch mode. Conclusion: mtDNAmanager provides systematic routines for mtDNA sequence data management and analysis via easily accessible Web interfaces, and thus should be very useful for population, medical and forensic studies that employ mtDNA analysis. mtDNAmanager can be accessed at http://mtmanager.yonsei.ac.kr.
Research in signaling networks contributes to a deeper understanding of the living activities of organisms. With the development of experimental methods in the signal transduction field, more and more mechanisms of signaling pathways have been discovered. This paper introduces popular bioinformatics analysis methods for signaling networks, such as the common mechanisms of signaling pathways and database resources on the Internet; summarizes methods for analyzing the structural properties of networks, including structural motif finding and automated pathway generation; and discusses the modeling and simulation of signaling networks in detail, as well as the research situation and tendencies in this area. The investigation of signal transduction is developing from small-scale experiments to large-scale network analysis, and dynamic simulation of networks is coming closer to the real system. As the investigation goes deeper than ever, the bioinformatics analysis of signal transduction will have immense scope for development and application.
The human microbiome has received much attention because many studies have reported that the human gut microbiome is associated with several diseases. The very large datasets produced by these kinds of studies mean that bioinformatics approaches are crucial for their analysis. Here, we systematically reviewed bioinformatics tools that are commonly used in microbiome research, including a typical pipeline and software for sequence alignment, abundance profiling, enterotype determination, taxonomic diversity, identification of differentially abundant species/genes, gene cataloging, and functional analyses. We also summarized the algorithms and methods used to define metagenomic species and co-abundance gene groups to expand our understanding of unclassified and poorly understood gut microbes that are undocumented in current genome databases. Additionally, we examined the methods used to identify metagenomic biomarkers based on the gut microbiome, which might help to expand the knowledge of, and approaches for, disease detection and monitoring.
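Of the analyses listed, taxonomic diversity is the most compact to illustrate: the Shannon index summarizes how evenly abundance is spread across taxa. A minimal sketch computing it from raw abundance counts:

```python
import math

def shannon_index(abundances):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxon proportions.
    Zero-count taxa are skipped, since lim p->0 of p*ln(p) is 0."""
    total = sum(abundances)
    proportions = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical counts for four taxa in one gut sample.
even_sample   = [25, 25, 25, 25]   # maximally even -> H' = ln(4)
skewed_sample = [97, 1, 1, 1]      # dominated by one taxon -> H' near 0
```

Real microbiome pipelines compute this per sample after taxonomic profiling, often alongside richness and evenness metrics, and compare distributions across cohorts.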
Full Text Available Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease, especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed.
Ghulam A. Parray
Full Text Available Saffron (Crocus sativus L.) is a sterile triploid plant and belongs to the Iridaceae (Liliales, Monocots). Its genome is of relatively large size and is poorly characterized. Bioinformatics can play an enormous technical role in the sequence-level structural characterization of saffron genomic DNA. Bioinformatics tools can also help in appreciating the extent of diversity of various geographic or genetic groups of cultivated saffron to infer relationships between groups and accessions. The characterization of the transcriptome of saffron stigmas is most vital for shedding light on the molecular basis of flavor, color biogenesis, genomic organization and the biology of the gynoecium of saffron. The information derived can be utilized for constructing biological pathways involved in the biosynthesis of the principal components of saffron, i.e., crocin, crocetin, safranal, picrocrocin and safchiA
Yuen Macaire MS
Full Text Available Abstract Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, Human Protein Reference Database (HPRD), Biomolecular Interaction Network Database (BIND), Database of Interacting Proteins (DIP), Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
Biological sequence alignment is an important and challenging task in bioinformatics. Alignment may be defined as an arrangement of two or more DNA or protein sequences to highlight the regions of their similarity. Sequence alignment is used to infer the evolutionary relationship between a set of protein or DNA sequences. An accurate alignment can provide valuable information for experimentation on the newly found sequences. It is indispensable in basic research as well as in practical applic...
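The dynamic-programming idea behind pairwise alignment described in the abstract above can be sketched as follows. This is a minimal, illustrative Needleman-Wunsch global-alignment scorer; the scoring values (match=1, mismatch=-1, gap=-1) are assumptions for the example, not taken from the cited work.

```python
# Minimal global alignment (Needleman-Wunsch) sketch: fill a DP table where
# dp[i][j] is the best score for aligning a[:i] with b[:j].

def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Return the optimal global alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap          # prefix of a aligned entirely to gaps
    for j in range(1, cols):
        dp[0][j] = j * gap          # prefix of b aligned entirely to gaps
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

print(needleman_wunsch("GATTACA", "GCATGCU"))  # → 0
```

A full aligner would also traceback through the table to recover the alignment itself; the score alone suffices to illustrate the recurrence.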
In the past two decades genome sequencing has developed from a laborious and costly technology employed by large international consortia to a widely used, automated and affordable tool used worldwide by many individual research groups. Genome sequences of many food animals and crop plants have been deciphered and are being exploited for fundamental research and applied to improve their breeding programs. The developments in sequencing technologies have also impacted the associated bioinformat...
Lakhno, V D
Mathematical biology and bioinformatics represent a new and rapidly progressing line of investigation which emerged in the course of work on the Human Genome Project. The main applied problems of these sciences are drug design, patient-specific medicine and nanobioelectronics. It is shown that progress in the technology of mass sequencing of the human genome has set the stage for starting the national program on patient-specific medicine.
Fang, Wai-Chi; Lue, Jaw-Chyng
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469
Ong, Quang; Nguyen, Phuc; Thao, Nguyen Phuong; Le, Ly
The advance in genomics technology has led to a dramatic change in plant biology research. Plant biologists now have easy access to enormous genomic data, enabling in-depth study of plant high-density genetic variation at the molecular level. Therefore, fully understanding and skillfully using bioinformatics tools to manage and analyze these data is essential in current plant genome research. Many plant genome databases have been established and have continued expanding recently. Meanwhile, analytical methods based on bioinformatics are also well developed in many aspects of plant genomic research, including comparative genomic analysis, phylogenomics and evolutionary analysis, and genome-wide association studies. However, the constant upgrading of computational infrastructure, such as high-capacity data storage and high-performance analysis software, is a real challenge for plant genome research. This review paper focuses on the challenges and opportunities that knowledge and skills in bioinformatics can bring to plant scientists in the present plant genomics era, as well as future needs for effective tools to facilitate the translation of knowledge from new sequencing data into enhancement of plant productivity. PMID:27499685
Orton, R J; Gu, Q; Hughes, J; Maabar, M; Modha, S; Vattipally, S B; Wilkie, G S; Davison, A J
The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing.
Translational bioinformatics plays an indispensable role in transforming psychoneuroimmunology (PNI) into personalized medicine. It provides a powerful method to bridge the gaps between various knowledge domains in PNI and systems biology. Translational bioinformatics methods at various systems levels can facilitate pattern recognition, and expedite and validate the discovery of systemic biomarkers to allow their incorporation into clinical trials and outcome assessments. Analysis of the correlations between genotypes and phenotypes including the behavioral-based profiles will contribute to the transition from the disease-based medicine to human-centered medicine. Translational bioinformatics would also enable the establishment of predictive models for patient responses to diseases, vaccines, and drugs. In PNI research, the development of systems biology models such as those of the neurons would play a critical role. Methods based on data integration, data mining, and knowledge representation are essential elements in building health information systems such as electronic health records and computerized decision support systems. Data integration of genes, pathophysiology, and behaviors are needed for a broad range of PNI studies. Knowledge discovery approaches such as network-based systems biology methods are valuable in studying the cross-talks among pathways in various brain regions involved in disorders such as Alzheimer's disease.
Cohen, K Bretonnel; Hunter, Lawrence E
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. Potential use cases include better phenotyping of research subjects, and pharmacogenomic research. A variety of methods for evaluating text mining applications exist, including corpora, structured test suites, and post hoc judging. Two basic principles of linguistic structure are relevant for building text mining applications. One is that linguistic structure consists of multiple levels. The other is that every level of linguistic structure is characterized by ambiguity. There are two basic approaches to text mining: rule-based, also known as knowledge-based; and machine-learning-based, also known as statistical. Many systems are hybrids of the two approaches. Shared tasks have had a strong effect on the direction of the field. Like all translational bioinformatics software, text mining software for translational bioinformatics can be considered health-critical and should be subject to the strictest standards of quality assurance and software testing.
董永权; 李庆忠; 丁艳辉; 张永新
To address the limitations of existing query interface matching methods, which struggle with setting matcher weights and lack effective processing of matching decisions, a Deep Web query interface matching approach based on evidence theory and task assignment, called ETTA-IM (Evidence Theory and Task Assignment based query Interface Matching), is proposed. First, an improved D-S evidence theory is used to automatically combine the results of multiple matchers, so that the weight of each matcher need not be set by hand and human involvement is reduced. Then, by extending the task assignment problem, the one-to-one matching decision for query interfaces is converted into an extended task assignment problem, selecting a suitable match for each attribute of the source query interface. Finally, based on the one-to-one matching results, heuristic rules over the tree structure are used to make one-to-many matching decisions. Experimental results show that the ETTA-IM approach achieves high precision and recall.
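The evidence-fusion step this approach relies on is Dempster's rule of combination from D-S (Dempster-Shafer) theory. The sketch below shows the standard rule over a two-hypothesis frame; the mass values and the "match"/"nomatch" frame are illustrative assumptions, not the paper's actual matchers or data.

```python
# Dempster's rule: multiply masses over all pairs of hypotheses, keep mass on
# non-empty intersections, and renormalise by the non-conflicting mass.

def combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)."""
    combined = {}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2   # mass falling on contradictory evidence
    # Dempster normalisation (assumes conflict < 1, i.e. sources not fully opposed)
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Two matchers each distribute belief over {match}, {nomatch}, and the full frame.
MATCH, NOMATCH = frozenset({"match"}), frozenset({"nomatch"})
FRAME = MATCH | NOMATCH
m1 = {MATCH: 0.6, NOMATCH: 0.1, FRAME: 0.3}
m2 = {MATCH: 0.5, NOMATCH: 0.2, FRAME: 0.3}
fused = combine(m1, m2)
print(round(fused[MATCH], 3))  # → 0.759
```

Note how fusing two weakly agreeing matchers yields a belief in "match" higher than either matcher assigned alone, which is exactly why no hand-tuned weights are needed.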
Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...
Gurjar, Anoop Kishor Singh; Panwar, Abhijeet Singh; Gupta, Rajinder; Mantri, Shrikant S
High-throughput small RNA (sRNA) sequencing technology enables an entirely new perspective for plant microRNA (miRNA) research and has immense potential to unravel regulatory networks. Novel insights gained through data mining in the publicly available rich resource of sRNA data will help in designing biotechnology-based approaches for crop improvement to enhance plant yield and nutritional value. Bioinformatics resources enabling meta-analysis of miRNA expression across multiple plant species are still evolving. Here, we report PmiRExAt, a new online database resource that caters a plant miRNA expression atlas. The web-based repository comprises a miRNA expression profile and query tool for 1859 wheat, 2330 rice and 283 maize miRNAs. The database interface offers open and easy access to miRNA expression profiles and helps in identifying tissue-preferential, differential and constitutively expressing miRNAs. A feature enabling expression study of conserved miRNAs across multiple species is also implemented. A custom expression analysis feature enables expression analysis of novel miRNAs in a total of 117 datasets. New sRNA datasets can also be uploaded for analysing miRNA expression profiles for 73 plant species. The PmiRExAt application programming interface, a Simple Object Access Protocol (SOAP) web service, allows other programmers to remotely invoke the methods written for programmatic search operations on the PmiRExAt database. Database URL: http://pmirexat.nabi.res.in. PMID:27081157
Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven
Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, humanly readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760
Mei, Yongguo; Carbo, Adria; Hoops, Stefan; Hontecillas, Raquel; Bassaganya-Riera, Josep
Modeling and simulation approaches have been widely used in computational biology, mathematics, bioinformatics and engineering to represent complex existing knowledge and to effectively generate novel hypotheses. While deterministic modeling strategies are widely used in computational biology, stochastic modeling techniques are not as popular due to a lack of user-friendly tools. This paper presents ENISI SDE, a novel web-based modeling tool with stochastic differential equations. ENISI SDE provides user-friendly web user interfaces to facilitate adoption by immunologists and computational biologists. This work provides three major contributions: (1) discussion of SDE as a generic approach for stochastic modeling in computational biology; (2) development of ENISI SDE, a web-based user-friendly SDE modeling tool that closely resembles regular ODE-based modeling; (3) applying the ENISI SDE modeling tool through a use case for studying stochastic sources of cell heterogeneity in the context of CD4+ T cell differentiation. The CD4+ T cell differentiation ODE model has been published and can be downloaded from biomodels.net. The case study reproduces a biological phenomenon that is not captured by the previously published ODE model and shows the effectiveness of SDE as a stochastic modeling approach in biology in general and immunology in particular, and the power of ENISI SDE.
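The numerical core of SDE simulation in tools like the one above is typically an Euler-Maruyama scheme. The sketch below simulates a simple mean-reverting (Ornstein-Uhlenbeck-like) process; the drift, noise level and parameters are illustrative assumptions, not the CD4+ T cell model itself.

```python
# Euler-Maruyama: step dX = drift(X) dt + diffusion(X) dW, where each Brownian
# increment dW is drawn from N(0, dt).
import random

def euler_maruyama(x0, drift, diffusion, dt, steps, seed=0):
    """Simulate one sample path of the SDE with a fixed random seed."""
    rng = random.Random(seed)
    x = x0
    path = [x]
    for _ in range(steps):
        dw = rng.gauss(0.0, dt ** 0.5)              # Brownian increment
        x = x + drift(x) * dt + diffusion(x) * dw
        path.append(x)
    return path

# Mean-reverting process: pulled toward 1.0, with small constant noise.
path = euler_maruyama(x0=0.0,
                      drift=lambda x: 2.0 * (1.0 - x),
                      diffusion=lambda x: 0.1,
                      dt=0.01, steps=1000)
print(round(path[-1], 2))
```

Re-running with different seeds yields different paths around the same mean, which is precisely the cell-to-cell heterogeneity that a deterministic ODE model cannot capture.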
Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer;
The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially within convenient, integrated "workbench" environments. Resource descriptions are the core element of registry and workbench systems, which are used both to help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them…
Robbins Kay A
Full Text Available Abstract Background Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology where new information is being continuously generated and the latest information is important. SideCache provides several types of services including proxy access and rate control, local caching, and automatic web service updating. Conclusions We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework also has been used to share research results through the use of a SideCache derived web service.
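The local-caching idea behind a framework like SideCache can be illustrated with a small time-to-live cache wrapped around an upstream fetch: repeated requests are served locally instead of hitting the rate-limited source. The class, function names and TTL below are illustrative assumptions, not SideCache's actual interface.

```python
# A TTL cache in front of an expensive upstream fetch: serve a fresh cached
# copy if one exists, otherwise fall through to the upstream and remember it.
import time

class TTLCache:
    def __init__(self, fetch, ttl_seconds=3600):
        self._fetch = fetch                  # upstream fetch function
        self._ttl = ttl_seconds
        self._store = {}                     # key -> (timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[0] < self._ttl:
            return entry[1]                  # fresh cached copy, no upstream hit
        value = self._fetch(key)             # cache miss or stale: go upstream
        self._store[key] = (time.monotonic(), value)
        return value

# Count upstream hits to show the cache absorbs repeated requests.
calls = []
cache = TTLCache(fetch=lambda k: calls.append(k) or f"record-{k}")
print(cache.get("NM_000546"), cache.get("NM_000546"), len(calls))
```

A production proxy would add what the abstract describes on top of this: rate control on the upstream calls and periodic invalidation when the primary source publishes an update.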
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
The Space Physics Data Facility (SPDF) Web services provides a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer s kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
Burch, Randall O.
Discussion of Web-based distance education focuses on communication issues. Highlights include Internet communications; components of a Web site, including site architecture, user interface, information delivery method, and mode of feedback; elements of Web design, including conceptual design, sensory design, and reactive design; and a Web…
Jessica D. Tenenbaum
Though a relatively young discipline, translational bioinformatics (TBI) has become a key component of biomedical research in the era of precision medicine. Development of high-throughput technologies and electronic health records has caused a paradigm shift in both healthcare and biomedical research. Novel tools and methods are required to convert increasingly voluminous datasets into information and actionable knowledge. This review provides a definition and contextualization of the term TBI, describes the discipline's brief history and past accomplishments, as well as current foci, and concludes with predictions of future directions in the field.
The general audience for these lectures is mainly physicists, computer scientists, engineers or the general public wanting to know more about what's going on in the biosciences. What's bioinformatics and why is all this fuss being made about it? What's this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.
Lue, Jaw-Chyng L.; Fang, Wai-Chi
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis means such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm, using a new sigmoid-logarithmic transfer function based on the error backpropagation (EBP) algorithm, has been invented. Our results show that the trained new ANN can recognize low-fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip are designed, fabricated and characterized.
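The EBP baseline that the abstract's sigmoid-logarithmic variant builds on can be sketched with a single logistic neuron trained by gradient descent on squared error. This uses the standard sigmoid transfer function; the paper's modified transfer function is not reproduced, and the toy "fluorescence" data below is an assumption for illustration.

```python
# One sigmoidal neuron trained by error backpropagation: forward pass through
# the logistic function, then a gradient step on squared error per sample.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_neuron(samples, epochs=2000, lr=0.5):
    """samples: list of (inputs, target). Returns learned weights and bias."""
    w, b = [0.0] * len(samples[0][0]), 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            delta = (y - t) * y * (1.0 - y)       # dE/dz for squared error
            w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
            b -= lr * delta
    return w, b

# Learn a simple OR-like pattern from two binary intensity channels.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_neuron(data)
preds = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(preds)
```

The abstract's point is that the y(1-y) factor in delta vanishes for very small activations, which is why a modified transfer function helps with low-fluorescence patterns.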
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
Asem KASEM; Tetsuo IDA
We present a computing environment for origami on the web. The environment consists of the computational origami engine Eos for origami construction, visualization, and geometrical reasoning; WEBEOS for providing a web interface to the functionalities of Eos; and the web service system SCORUM for symbolic computing web services. WEBEOS is developed using Web 2.0 technologies, and provides a graphical interactive web interface for origami construction and proving. In SCORUM, we are preparing web services for a wide range of symbolic computing systems, and are using these services in our origami environment. We explain the functionalities of this environment, and discuss its architectural and technological features.
A kinetic interface for orientation detection in a video training system is disclosed. The interface includes a balance platform instrumented with inertial motion sensors. The interface engages a participant's sense of balance in training exercises.
Pravin Dudhagara; Sunil Bhavsar; Chintan Bhagat; Anjana Ghelani; Shreyas Bhatt; Rajesh Patel
The development of next-generation sequencing (NGS) platforms spawned an enormous volume of data. This explosion in data has unearthed new scalability challenges for existing bioinformatics tools. The analysis of metagenomic sequences using bioinformatics pipelines is complicated by the substantial complexity of these data. In this article, we review several commonly-used online tools for metagenomics data analysis with respect to their quality and detail of analysis using simulated metagenomics data. There are at least a dozen such software tools presently available in the public domain. Among them, MG-RAST, IMG/M, and METAVIR are the most well-known tools according to the number of citations by peer-reviewed scientific media up to mid-2015. Here, we describe 12 online tools with respect to their web link, annotation pipelines, clustering methods, online user support, and availability of data storage. We also rated each tool to screen out the most capable and preferable ones, and evaluated the five best tools using a synthetic metagenome. The article comprehensively deals with the contemporary problems and the prospects of metagenomics from a bioinformatics viewpoint.
List, Markus; Alcaraz, Nicolas; Dissing-Hansen, Martin;
We present KeyPathwayMinerWeb, the first online platform for de novo pathway enrichment analysis directly in the browser. Given a biological interaction network (e.g. protein-protein interactions) and a series of molecular profiles derived from one or multiple OMICS studies (gene expression, for instance), KeyPathwayMiner extracts connected sub-networks containing a high number of active or differentially regulated genes (proteins, metabolites) in the molecular profiles. The web interface at http://keypathwayminer.compbio.sdu.dk implements all core functionalities of the KeyPathwayMiner tool set, such as data integration, input of background knowledge, batch runs for parameter optimization and visualization of extracted pathways. In addition to an intuitive web interface, we also implemented a RESTful API that now enables other online developers to integrate network enrichment as a web service...
Sprimont, P.-G.; Ricci, D.; Nicastro, L.
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as Twitter and other social networks. In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies.
The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical
Fuertes Castro, José Luis; Pérez Pérez, Aurora
Many websites have a significant accessibility problem, because their design has not taken into account the wide functional diversity of their potential users. The Web Content Accessibility Guidelines, developed by the Web Consortium, consist of a series of recommendations so that a web page can be used by any person. One of the main problems arises when checking the accessibility of a web page, given that,...
Hall, Wendy; Tiropanis, Thanassis
This paper examines the evolution of the World Wide Web as a network of networks and discusses the emergence of Web Science as an interdisciplinary area that can provide us with insights on how the Web developed, and how it has affected and is affected by society. Through its different stages of evolution, the Web has gradually changed from a technological network of documents to a network where documents, data, people and organisations are interlinked in various and often unexpected ways. It...
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Nomi L Harris
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science"; "Standards and Interoperability"; "Open Science and Reproducibility"; "Translational Bioinformatics"; "Visualization"; and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
The Studio of Computational Biology & Bioinformatics (SCBB), IHBT, CSIR, Palampur, India organized one of the very first national workshops, funded by DBT, Govt. of India, on the bioinformatics issues associated with next-generation sequencing approaches. The course structure was designed by SCBB, IHBT. The workshop took place on the IHBT premises on 17 and 18 June 2010.
Jungck, John R; Donovan, Samuel S; Weisstein, Anton E; Khiripet, Noppadon; Everse, Stephen J
Bioinformatics is central to biology education in the 21st century. With the generation of terabytes of data per day, the application of computer-based tools to stored and distributed data is fundamentally changing research and its application to problems in medicine, agriculture, conservation and forensics. In light of this 'information revolution,' undergraduate biology curricula must be redesigned to prepare the next generation of informed citizens as well as those who will pursue careers in the life sciences. The BEDROCK initiative (Bioinformatics Education Dissemination: Reaching Out, Connecting and Knitting together) has fostered an international community of bioinformatics educators. The initiative's goals are to: (i) Identify and support faculty who can take leadership roles in bioinformatics education; (ii) Highlight and distribute innovative approaches to incorporating evolutionary bioinformatics data and techniques throughout undergraduate education; (iii) Establish mechanisms for the broad dissemination of bioinformatics resource materials and teaching models; (iv) Emphasize phylogenetic thinking and problem solving; and (v) Develop and publish new software tools to help students develop and test evolutionary hypotheses. Since 2002, BEDROCK has offered more than 50 faculty workshops around the world, published many resources and supported an environment for developing and sharing bioinformatics education approaches. The BEDROCK initiative builds on the established pedagogical philosophy and academic community of the BioQUEST Curriculum Consortium to assemble the diverse intellectual and human resources required to sustain an international reform effort in undergraduate bioinformatics education. PMID:21036947
Probabilistic topic models have been developed for applications in various domains, such as text mining, information retrieval, computer vision, and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…
Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.
Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…
Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.
At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…
Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software
ACADEMIC TRAINING LECTURE SERIES 27, 28 February 1, 2, 3 March 2006 from 11:00 to 12:00 - Auditorium, bldg. 500 Decoding the Genome A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
Brooksbank, Catherine; Camon, Evelyn; Harris, Midori A; Magrane, Michele; Martin, Maria Jesus; Mulder, Nicola; O'Donovan, Claire; Parkinson, Helen; Tuli, Mary Ann; Apweiler, Rolf; Birney, Ewan; Brazma, Alvis; Henrick, Kim; Lopez, Rodrigo; Stoesser, Guenter; Stoehr, Peter; Cameron, Graham
As the amount of biological data grows, so does the need for biologists to store and access this information in central repositories in a free and unambiguous manner. The European Bioinformatics Institute (EBI) hosts six core databases, which store information on DNA sequences (EMBL-Bank), protein sequences (SWISS-PROT and TrEMBL), protein structure (MSD), whole genomes (Ensembl) and gene expression (ArrayExpress). But just as a cell would be useless if it couldn't transcribe DNA or translate RNA, our resources would be compromised if each existed in isolation. We have therefore developed a range of tools that not only facilitate the deposition and retrieval of biological information, but also allow users to carry out searches that reflect the interconnectedness of biological information. The EBI's databases and tools are all available on our website at www.ebi.ac.uk. PMID:12519944
Surangi W. Punyasena
Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
Handel, Adam E.
Estrogen is a steroid hormone that plays critical roles in a myriad of intracellular pathways. The expression of many genes is regulated through the steroid hormone receptors ESR1 and ESR2. These bind to DNA and modulate the expression of target genes. Identification of estrogen target genes is greatly facilitated by the use of transcriptomic methods, such as RNA-seq and expression microarrays, and chromatin immunoprecipitation with massively parallel sequencing (ChIP-seq). Combining transcriptomic and ChIP-seq data enables a distinction to be drawn between direct and indirect estrogen target genes. This chapter will discuss some methods of identifying estrogen target genes that do not require any expertise in programming languages or complex bioinformatics. PMID:26585125
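The distinction the chapter draws between direct and indirect estrogen target genes reduces to simple set logic over two gene lists. A minimal Python sketch follows; the gene names and lists are invented placeholders, not data from any real experiment.

```python
# Sketch: classify estrogen target genes by combining two evidence sources.
# Direct targets: differentially expressed AND bound by the receptor (ChIP-seq peak).
# Indirect targets: differentially expressed but with no receptor binding evidence.

differentially_expressed = {"GREB1", "PGR", "TFF1", "MYC"}  # e.g. from RNA-seq
receptor_bound = {"GREB1", "TFF1", "CCND1"}                 # e.g. from ESR1 ChIP-seq

direct_targets = differentially_expressed & receptor_bound    # changed and bound
indirect_targets = differentially_expressed - receptor_bound  # changed, not bound
```

Real analyses would of course add peak-to-gene assignment and significance thresholds before this step; the intersection itself is the conceptual core.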
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
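A minimal sketch of the ECOC idea described above, assuming toy codewords and already-computed binary predictions; the weighting and probabilistic bit-inversion variants from the article are not reproduced here.

```python
# ECOC sketch: each class is assigned a binary codeword (one row of the code
# matrix); each column corresponds to one binary classifier. At prediction
# time, the binary outputs form an observed bit vector, which is decoded to
# the class with the nearest codeword in Hamming distance.

CODE = {            # hypothetical 3-class, 3-bit code matrix
    "classA": (0, 0, 1),
    "classB": (0, 1, 0),
    "classC": (1, 0, 0),
}

def hamming(a, b):
    """Number of positions where two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(bits):
    """Assign the class whose codeword is nearest to the observed bits."""
    return min(CODE, key=lambda c: hamming(CODE[c], bits))
```

With longer, well-separated codewords the decoder can correct individual binary classifiers that answer wrongly, which is the error-correcting property the framework is named for.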
Next-generation sequencing (NGS) has revolutionized the field of genomics and its wide range of applications has resulted in the genome-wide analysis of hundreds of species and the development of thousands of computational tools. This thesis represents my work on NGS analysis of four species: Lotus japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant agricultural and biological importance. Its capacity to form symbiotic relationships with rhizobia and microrrhizal fungi has fascinated researchers for years. Lotus has a small genome of approximately 470 Mb and a short life cycle of 2 to 3 months, which has made Lotus a model legume plant for many molecular...
Ch Ram Mohan Reddy
Web Service is an interface which implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of web services during the early stages of software development is significant. In this paper we model a web service using Unified Modeling Language use case and sequence diagrams. We obtain the performance metrics by simulating the web services model using the simulation tool Simulation of Multi-Tier Queuing Architecture. We have identified the bottleneck resources.
Kristina M. Obom
The completely online Master of Science in Bioinformatics program differs from the onsite program only in the mode of content delivery. Analysis of student satisfaction indicates no statistically significant difference between most online and onsite student responses; however, online and onsite students do differ significantly in their responses to a few questions on the course evaluation queries. Analysis of student exam performance using three assessments indicates that there was no significant difference in grades earned by students in online and onsite courses. These results suggest that our model for online bioinformatics education provides students with a rigorous course of study that is comparable to onsite course instruction and possibly provides a more rigorous course load and more opportunities for participation.
Jose M. Ferreira
Remote Experimentation is an educational resource that allows teachers to strengthen the practical contents of science and engineering courses. However, building the interfaces to remote experiments is not a trivial task. Although teachers normally master the practical contents addressed by a particular remote experiment, they usually lack the programming skills required to quickly build the corresponding web interface. This paper describes the automatic generation of experiment interfaces through a web-accessible Java application. The application displays a list of existing modules and, once the requested modules have been selected, it generates the code that enables the browser to display the experiment interface. The tool's main advantage is enabling non-technical teachers to create their own remote experiments.
Web Services are emerging as an innovative mechanism for rendering services to arbitrary devices over the WWW. As a consequence of the rapid growth of Web Services applications and the abundance of Service Providers, the consumer is faced with the necessity of selecting the "right" Service Provider. In such a scenario the Quality of Service (QoS) serves as a criterion to differentiate Service Providers. To select the best Web Services and Service Providers, ranking and optimization of Web Service compositions are challenging areas of research with significant implications for the realization of the "Web of Services" vision. "Semantic Web Services" use formal semantic descriptions of Web Service functionality and interfaces to enable automated reasoning over Web Service compositions. This study's experimental results revealed that existing Semantic Web Services face a few challenging issues, such as poor prediction of the best Web Services and optimized Service Providers, which leads to QoS degradation of the Semantic Web. To address and overcome these issues, this work calculates the semantic similarities and the utilization of the various Web Services and Service Providers. After measuring these parameters, all the Web Services are ranked based on their utilization. Finally, our proposed technique selects the best Web Services based on their ranking and places them in the Web Services composition. The experimental results establish that our proposed mechanism improves the performance of the Semantic Web in terms of execution time, processor utilization and memory management.
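The ranking step described in the abstract above can be sketched as a simple weighted score over per-service measurements; the service records, field names and the equal weighting below are illustrative assumptions, not the paper's actual formula.

```python
# Sketch: rank candidate web services by combining semantic similarity to the
# request with observed utilization, then pick the top-ranked one for the
# composition. All numbers are invented for illustration.

services = [
    {"name": "S1", "similarity": 0.9, "utilization": 0.4},
    {"name": "S2", "similarity": 0.7, "utilization": 0.9},
    {"name": "S3", "similarity": 0.5, "utilization": 0.2},
]

def score(svc, w_sim=0.5, w_util=0.5):
    """Weighted combination of the two quality signals (weights are assumed)."""
    return w_sim * svc["similarity"] + w_util * svc["utilization"]

ranked = sorted(services, key=score, reverse=True)  # best first
best = ranked[0]["name"]
```

In a real QoS-aware broker the weights would themselves be tuned (or exposed to the consumer), and further QoS dimensions such as latency and cost would enter the score.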
Herman, I.; Gylling, M.
Although using advanced Web technologies at their core, e-books represent a parallel universe to everyday Web documents. Their production workflows, user interfaces, their security, access, or privacy models, etc, are all distinct. There is a lack of a vision on how to unify Digital Publishing and t
Huurdeman, H.C.; Ben David, A.; Samar, T.
Web archives provide access to snapshots of the Web of the past, and could be valuable for research purposes. However, access to these archives is often limited, both in terms of data availability, and interfaces to this data. This paper explores new methods to overcome these limitations. It present
Camacho Castro, Juan; Guimerà, Roger; Nunes Amaral, Luís A.
We analyze the properties of seven community food webs from a variety of environments, including freshwater, marine-freshwater interfaces, and terrestrial environments. We uncover quantitative unifying patterns that describe the properties of the diverse trophic webs considered and suggest that statistical physics concepts such as scaling and universality may be useful in the description of ecosystems. Specifically, we find that several quantities characterizing these diverse food webs obey f...
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion experiments. Recently in other areas, web portals have begun to be deployed. These portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. The users can create a unique personalized working environment to fit their own needs and interests. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as
Picardi, Ernesto; Regina, Teresa M R; Verbitskiy, Daniil; Brennicke, Axel; Quagliariello, Carla
RNA editing is a post-transcriptional molecular process whereby the information in a genetic message is modified from that in the corresponding DNA template by means of nucleotide substitutions, insertions and/or deletions. It occurs mostly in organelles, by clade-specific, diverse and unrelated biochemical mechanisms. RNA editing events have been annotated in primary databases such as GenBank and, at a more sophisticated level, in the specialized databases REDIdb, dbRES and EdRNA. At present, REDIdb is the only freely available database that focuses on the organellar RNA editing process and annotates each editing modification in its biological context. Here we present an updated and upgraded release of REDIdb with a web interface refurbished with graphical and computational facilities that improve RNA editing investigations. Details of the REDIdb features and novelties are illustrated and compared to other RNA editing databases. REDIdb is freely queried at http://biologia.unical.it/py_script/REDIdb/.
The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the time needed to identify and deliver a commercial health ingredient that reduces disease symptoms can be anything between 5 and 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has been emerging as the long-awaited solution to this problem. By quickly mining food genomes for the characteristics of certain therapeutic food ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics in mining for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving a higher success rate.
Brazas, Michelle D; Ouellette, B F Francis
Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression. PMID:27281025
Ravn, Anders P.; Staunstrup, Jørgen
This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two. The model describes both functional and timing properties of an interface...
Ponyik, Joseph G.; York, David W.
Embedded Systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost in terms of development time and maintenance effort. World Wide Web standards have been developed over the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but World Wide Web standards allow them to interface without knowing the details of the system at the other end. Embedded Web Technology is the merging of Embedded Systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interact with the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an Embedded System's internal network.
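The round trip this abstract describes, a browser or script reading an embedded device's state over plain HTTP, can be sketched in a few lines of Python using only the standard library. The sensor names and values below are invented placeholders; a real device would read them from hardware registers.

```python
# Sketch of Embedded Web Technology: a device exposes its status as JSON
# over HTTP, so any web client can act as the user interface.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

SENSORS = {"temperature_c": 21.5, "fan_rpm": 3200}  # placeholder telemetry

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serialize the device state for any browser or HTTP client.
        body = json.dumps(SENSORS).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

def read_status() -> dict:
    # Start the server on an ephemeral port, fetch the status once, shut down.
    server = ThreadingHTTPServer(("127.0.0.1", 0), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    url = f"http://127.0.0.1:{server.server_port}/status"
    with urllib.request.urlopen(url) as resp:
        data = json.loads(resp.read())
    server.shutdown()
    return data
```

Because only web standards are involved, the client needs no knowledge of the device's internals, which is exactly the decoupling the abstract argues for.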
Drijfhout, Wanno; Oliver, Jundt; Wevers, Lesley; Hiemstra, Djoerd
We use Common Crawl's 25TB data set of web pages to construct a database of associated concepts using Hadoop. The database can be queried through a web application with two query interfaces. A textual interface allows searching for similarities and differences between multiple concepts using a query
Gentleman, R.C.; Carey, V.J.; Bates, D.M.;
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in the next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as the future works on parallel computing in bioinformatics.
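As a minimal illustration of the MapReduce pattern the authors discuss, here is a pure-Python sketch of k-mer counting over sequencing reads with explicit map, shuffle, and reduce phases. Plain Python stands in for Hadoop, and the read strings are invented examples.

```python
# MapReduce in miniature: map emits (k-mer, 1) pairs, shuffle groups
# pairs by key (as Hadoop does between phases), reduce sums the counts.
from collections import defaultdict

def map_phase(read: str, k: int = 3):
    # Emit one (k-mer, 1) pair per position in the read.
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def shuffle_phase(pairs):
    # Group values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Sum the counts per k-mer.
    return {kmer: sum(counts) for kmer, counts in groups.items()}

def count_kmers(reads, k=3):
    pairs = (pair for read in reads for pair in map_phase(read, k))
    return reduce_phase(shuffle_phase(pairs))
```

In a real Hadoop deployment the map and reduce functions would run in parallel across a cluster, with the framework handling the shuffle; the logical structure is the same.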
LIU Di; LIU Quan-He; WU Lin-Huan; LIU Bin; WU Jun; LAO Yi-Mei; LI Xiao-Jing; GAO George Fu; MA Jun-Cai
Highly pathogenic influenza A virus H5N1 has spread worldwide and raised public concern, increasing the output of influenza virus sequence data as well as research publications and other reports. In order to fight H5N1 avian flu in a comprehensive way, we designed and began setting up the Website for Avian Flu Information (http://www.avian-flu.info) in 2004. Beyond the influenza virus database, the website aims to integrate diverse information for both researchers and the public. From 2004 to 2009, we collected information on all aspects, i.e. outbreak reports, scientific publications and editorials, prevention policies, medicines and vaccines, and clinical diagnosis. Except for publications, all information is in Chinese. By April 15, 2009, cumulative news entries numbered over 2000 and research papers were approaching 5000. Using curated data from the Influenza Virus Resource, we have set up an influenza virus sequence database and a bioinformatic platform providing the basic functions for influenza virus sequence analysis. We will focus on the collection of experimental data and results, as well as the integration of data from the geographic information system and avian influenza epidemiology.
One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. It is no wonder that the etiology of malignant growth is linked to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in the development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: the EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, the appearance of receptor mutants with a changed ability for protein-protein interactions, and increased tyrosine kinase activity have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently used in investigations on the design and selection of drugs that can alter receptor structure or competitively bind to receptors and so display antagonistic characteristics. (authors)
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for delivering relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions, and web pages. In this paper we aim at providing a framework for web page recommendation. First, we describe the basics of web mining and the types of web mining; second, we give details of each web mining technique; third, we propose an architecture for personalized web page recommendation.
de Bruijne, M.A.
The rise of the mobile internet has rapidly changed the landscape for fielding web surveys. The devices that respondents use to take a web survey vary greatly in size and user interface. This diversity in the interaction between survey and respondent makes it challenging to design a web survey for t
Dragut, Eduard Constantin
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
Obrenovic, Z.; Ossenbruggen, J.R. van
A Web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities,
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
Hiraoka, Satoshi; Yang, Ching-Chia; Iwasaki, Wataru
Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives. PMID:27383682
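The sequence similarity searches the abstract mentions can be illustrated, in heavily simplified form, by an alignment-free k-mer comparison in Python. Real pipelines use tools such as BLAST or sketching methods like MinHash; the k value and sequences here are arbitrary choices for illustration.

```python
# Toy alignment-free similarity: Jaccard index over the k-mer sets of
# two sequences. 1.0 means identical k-mer content, 0.0 means disjoint.
def kmer_set(seq: str, k: int = 4) -> set:
    # All substrings of length k in the sequence.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: str, b: str, k: int = 4) -> float:
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    if not ka and not kb:
        return 1.0  # two too-short sequences are trivially identical
    return len(ka & kb) / len(ka | kb)
```

Against a reference database, such a score would be computed between a query read and each reference sequence, with the best hits taken as candidate matches.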
Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics; includes numerous examples and experimental results to support the theoretical concepts described; concludes each chapter with directions for future research and a comprehensive bibliography.
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows generally require access to multiple, distributed data sources and analytic tools. The requisite data sources may include large public data repositories, community...
Informatics methods, such as text mining and natural language processing, are often involved in bioinformatics research. In this study, we discuss text mining and natural language processing methods in bioinformatics from two perspectives. First, we aim to search for biological knowledge, retrieve references using text mining methods, and reconstruct databases; for example, protein-protein interactions and gene-disease relationships can be mined from PubMed. Then, we analyze the applications of text mining and natural language processing techniques in bioinformatics, including predicting protein structure and function and detecting noncoding RNAs. Finally, numerous methods and applications, as well as their contributions to bioinformatics, are discussed for future use by text mining and natural language processing researchers.
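The sentence-level co-occurrence idea behind much biomedical text mining can be sketched in Python as follows. The gene and disease lexicons and the example text are invented for illustration; real systems mine PubMed with curated lexicons and full NLP parsing rather than substring matching.

```python
# Minimal co-occurrence mining: flag sentence-level co-mentions of a gene
# and a disease as candidate gene-disease relations.
import re
from itertools import product

GENES = {"BRCA1", "TP53"}             # hypothetical gene lexicon
DISEASES = {"breast cancer", "lung cancer"}  # hypothetical disease lexicon

def candidate_relations(text: str):
    pairs = set()
    # Naive sentence split on whitespace following end punctuation.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        low = sentence.lower()
        genes = {g for g in GENES if g.lower() in low}
        diseases = {d for d in DISEASES if d in low}
        # Every gene-disease pair mentioned together is a candidate relation.
        pairs.update(product(genes, diseases))
    return pairs
```

Candidate pairs extracted this way would then be filtered or ranked by more sophisticated methods before being added to a database.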
Mayr, Philipp; Tosques, Fabio
This report describes the possibilities and restrictions of the Google Web APIs (Google API). The implementation of the Google API in the context of information science studies from the webometrics field shows that the Google API can be used, with restrictions, for internet-based studies. The comparison of hit results from the two Google interfaces, the Google API and the standard web interface Google.com (Google Web), shows differences concerning range, structure and availability. The study is based on si...
Laranjeiro, Nuno; Vieira, Marco
Web services represent a powerful interface for back-end systems that must provide a robust interface to client applications, even in the presence of invalid inputs. However, developing robust services is a difficult task. In this paper we demonstrate wsrbench, an online tool that facilitates web services robustness testing. Additionally, we present two scenarios to motivate robustness testing and to demonstrate the power of robustness testing in web services environments.
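The robustness-testing idea behind wsrbench, feeding an interface malformed inputs and checking that it fails gracefully rather than crashing, can be sketched in Python. `parse_age` is a hypothetical service operation invented here, not part of wsrbench.

```python
# Robustness testing in miniature: drive an operation with invalid inputs
# and classify each outcome as accepted, cleanly rejected, or a crash.
def parse_age(value) -> int:
    # A robust operation validates its input and raises a clear error.
    if not isinstance(value, str) or not value.isdigit():
        raise ValueError(f"invalid age: {value!r}")
    return int(value)

INVALID_INPUTS = [None, "", "abc", "-1", "1e9999", "42.5", [], {}]

def robustness_report(fn, inputs):
    report = {}
    for bad in inputs:
        try:
            fn(bad)
            report[repr(bad)] = "accepted"
        except ValueError:
            report[repr(bad)] = "rejected cleanly"
        except Exception as exc:  # anything else is a robustness failure
            report[repr(bad)] = f"CRASH: {type(exc).__name__}"
    return report
```

A web-service version of this loop would send the malformed values over SOAP or HTTP and inspect the fault messages, but the classification logic is the same.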
Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D
Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
Background: Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results: We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conforms to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion: This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large-scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
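A concrete instance of metamorphic testing can be given in Python. The program under test below counts pattern occurrences in a DNA sequence; it is a stand-in for the sequence-mapping tools the abstract tests, not the actual GNLab or SeqMap code. The metamorphic relation: counting the reverse complement of the pattern in the reverse complement of the sequence must give the same answer, so we never need to know the "correct" count.

```python
# Metamorphic testing (MT): check a relation between two outputs instead
# of verifying a single output against a known-correct answer.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    # Reverse complement of a DNA sequence.
    return seq.translate(COMP)[::-1]

def count_occurrences(seq: str, pattern: str) -> int:
    # Program under test: exact (possibly overlapping) pattern count.
    k = len(pattern)
    return sum(seq[i:i + k] == pattern for i in range(len(seq) - k + 1))

def check_mr(seq: str, pattern: str) -> bool:
    # The MR must hold for ANY (seq, pattern); a violation reveals a fault.
    return count_occurrences(seq, pattern) == count_occurrences(
        revcomp(seq), revcomp(pattern))
```

Because the relation holds for arbitrary inputs, large numbers of random test cases can be generated without an oracle, which is the key advantage MT offers for hard-to-verify bioinformatics programs.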
Ilzins, Olaf; Isea, Raul; Hoebeke, Johan
The objective of this short report is to reconsider the view of bioinformatics as merely a tool of experimental biological science. To that end, we introduce three examples showing how bioinformatics could be considered an experimental science. These examples show how the development of theoretical biological models generates experimentally verifiable computational hypotheses, which must necessarily be validated by experiments in vitro or in vivo.
Romano, D.; Pinzger, M.
Preprint of paper published in: ICWS 2012 - IEEE 19th International Conference on Web Services, 24-29 June 2012; doi:10.1109/ICWS.2012.29 In the service-oriented paradigm, web service interfaces are considered contracts between web service subscribers and providers. However, these interfaces are co
Cohen, Andrew; Vitányi, Paul
Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale from 0 (identical) to 1 (completely different). The NWD approximates the similarity according to all (upper semi)computable properties. We develop the theory and give applications. The derivation of ...
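The NWD formula itself is easy to state in code. The sketch below takes hypothetical page counts as arguments in place of live search engine queries, with `n` as the assumed total number of indexed pages.

```python
# Normalized web distance from aggregate page counts:
#   NWD(x, y) = (max(log f(x), log f(y)) - log f(x, y))
#               / (log N - min(log f(x), log f(y)))
# where f(x) is the hit count for x, f(x, y) for x AND y together,
# and N is the total number of indexed pages.
from math import log

def nwd(f_x: float, f_y: float, f_xy: float, n: float) -> float:
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))
```

Terms that always appear together score 0 (identical), while terms that rarely co-occur relative to their individual frequencies score close to 1 (completely different).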
Chavez, J.; Wu, X.; Roby, W.; Hoac, A.; Goldina, T.; Hartley, B.
The Spitzer Science Center (SSC) provides a set of user tools to support search and retrieval of Spitzer Archive (SA) data via the Internet. This presentation describes the software architecture and design principles that support the Archive Interface subsystem of the SA (Handley 2007). The Archive Interface is an extension of the core components of the Uplink subsystem and provides a set of web services that allow open access to the SA data set. Web services technology provides a basis for searching the archive and retrieving data products. The archive interface provides three modes of access: a rich client, a Web browser, and scripts (via Web services). The rich client allows the user to perform complex queries and submit requests for data that are asynchronously downloaded to the local workstation. Asynchronous download is a critical feature given the large volume of a typical data set (on the order of 40 GB). For basic queries and retrieval of data, the Web browser interface is provided. For advanced users, scripting languages with web services capabilities (e.g., Perl) can be used to query and download data from the SA. The archive interface subsystem is the primary means for searching and retrieving data from the SA and is critical to the success of the Spitzer Space Telescope.
Mathe, Z.; Casajus Ramo, A.; Lazovsky, N.; Stagni, F.
For many years the DIRAC interware (Distributed Infrastructure with Remote Agent Control) has had a web interface, allowing users to monitor DIRAC activities and also interact with the system. Since then, many new web technologies have emerged; therefore a redesign and a new implementation of the DIRAC Web portal were necessary, taking into account the lessons learnt using the old portal. These new technologies made it possible to build a more compact, robust and responsive web interface that gives users better control over the whole system while keeping the interface simple. The web framework provides a large set of "applications", each of which can be used for interacting with various parts of the system. Communities can also create their own sets of personalised web applications, and can easily extend existing ones with minimal effort. Each user can configure and personalise the view for each application and save it using the DIRAC User Profile service as a RESTful state provider, instead of using cookies. The owner of a view can share it with other users or within a user community. Compatibility between different browsers is assured, as well as with mobile versions. In this paper, we present the new DIRAC Web framework as well as the LHCb extension of the DIRAC Web portal.
Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C E; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR, the European infrastructure for biological information, that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599
Castillo, Luis F; López-Gartner, Germán; Isaza, Gustavo A; Sánchez, Mariana; Arango, Jeferson; Agudelo-Valencia, Daniel; Castaño, Sergio
The need to process large quantities of data generated from genomic sequencing has resulted in a difficult task for life scientists who are not familiar with the use of command-line operations or developments in high performance computing and parallelization. This knowledge gap, along with unfamiliarity with necessary processes, can hinder the execution of data processing tasks. Furthermore, many of the commonly used bioinformatics tools for the scientific community are presented as isolated, unrelated entities that do not provide an integrated, guided, and assisted interaction with the scheduling facilities of computational resources or distribution, processing and mapping with runtime analysis. This paper presents the first approximation of a Web Services platform-based architecture (GITIRBio) that acts as a distributed front-end system for autonomous and assisted processing of parallel bioinformatics pipelines that has been validated using multiple sequences. Additionally, this platform allows integration with semantic repositories of genes for search annotations. GITIRBio is available at: http://c-head.ucaldas.edu.co:8080/gitirbio. PMID:26527189
McKenna, Neil J
Nuclear receptors (NRs) are a superfamily of ligand-regulated transcription factors that interact with coregulators and other transcription factors to direct tissue-specific programs of gene expression. Recent years have witnessed a rapid acceleration of the output of high-content data platforms in this field, generating discovery-driven datasets that have collectively described: the organization of the NR superfamily (phylogenomics); the expression patterns of NRs, coregulators and their target genes (transcriptomics); ligand- and tissue-specific functional NR and coregulator sites in DNA (cistromics); the organization of nuclear receptors and coregulators into higher order complexes (proteomics); and their downstream effects on homeostasis and metabolism (metabolomics). Significant bioinformatics challenges lie ahead both in the integration of this information into meaningful models of NR and coregulator biology, as well as in the archiving and communication of datasets to the global nuclear receptor signaling community. While holding great promise for the field, the ascendancy of discovery-driven research in this field brings with it a collective responsibility for researchers, publishers and funding agencies alike to ensure the effective archiving and management of these data. This review will discuss factors lying behind the increasing impact of discovery-driven research, examples of high-content datasets and their bioinformatic analysis, as well as a summary of currently curated web resources in this field. This article is part of a Special Issue entitled: Translating nuclear receptors from health to disease. PMID:21029773
The main purpose of the project was to make use of elements of interface design to create an application. Another purpose was to see how Enoro's Generis system (the customer's internal system) merges with the web in a particular application. The goal was to create a web interface for the existing System Monitoring application. The ASP.NET framework with the C# programming language, the Enoro Generis system, and user interface design elements were used to create the application. The app...
Ong, Kenneth R
"Just-in-time" database-driven Web applications are inexpensive, quickly-developed software that can be put to many uses within a health care organization. Database-driven Web applications garnered 73873 hits on our system-wide intranet in 2002. They enabled collaboration and communication via user-friendly Web browser-based interfaces for both mission-critical and patient-care-critical functions. Nineteen database-driven Web applications were developed. The application categories that compri...
Vaccine adjuvants are compounds that enhance host immune responses to co-administered antigens in vaccines. Vaxjo is a web-based central database and analysis system that curates, stores, and analyzes vaccine adjuvants and their usages in vaccine development. The basic information for a vaccine adjuvant stored in Vaxjo includes the adjuvant name, components, structure, appearance, storage, preparation, function, safety, and vaccines that use this adjuvant. Reliable references are curated and cited. Bioinformatics scripts are developed and used to link vaccine adjuvants to the different adjuvanted vaccines stored in the general VIOLIN vaccine database. Presently, 103 vaccine adjuvants have been curated in Vaxjo. Among these adjuvants, 98 have been used in 384 vaccines stored in VIOLIN against over 81 pathogens, cancers, or allergies. All these vaccine adjuvants are categorized and analyzed based on adjuvant types, pathogens used, and vaccine types. As a use case study of vaccine adjuvants in infectious disease vaccines, the adjuvants used in Brucella vaccines are specifically analyzed. A user-friendly web query and visualization interface has been developed for interactive vaccine adjuvant search. To support data exchange, the information on vaccine adjuvants is stored in the Vaccine Ontology (VO) in the Web Ontology Language (OWL) format.
Martinez Ruiz, Francisco Javier
Current web development remains attached to the classic web page paradigm. However, new ways of using known technologies, besides the asynchronous communication, have evolved into a new generation of web applications: Rich Internet Applications (RIA). RIAs are web applications that achieve user expectations in terms of usability, reliability, quality, maintainability and performance. This work presents a methodology for developing User Interfaces of RIAs. This methodology follows a model dri...
Nagino, Norikatsu; Yamada, Seiji
In this paper, we propose a Future View system that assists a user's usual Web browsing. The Future View prefetches Web pages based on the user's browsing strategies and presents them to the user in order to assist Web browsing. To learn the user's browsing strategy, the Future View uses two types of learning classifier systems: a content-based classifier system for content change patterns and an action-based classifier system for user action patterns. The results of learning are applied to crawling by Web robots, and the gathered Web pages are presented to the user through a Web browser interface. We experimentally show the effectiveness of navigation using the Future View.
Holbøll, Joachim T.; Henriksen, Mogens; Nilson, Jesper K.;
The wide use of combinations of solid insulating materials has introduced problems in the interfaces between components. The most common insulating materials are cross-linked polyethylene (XLPE), silicone rubber (SIR) and ethylene-propylene rubbers (EPR). Assemblies of these materials...... have caused major failures. In the Netherlands, a major blackout was caused by interface problems in 150 kV cable terminations, causing a cascade of breakdowns. There is a need to investigate the reasons for this and other similar breakdowns. The major problem is expected to lie in the interface between...... two different materials. Environmental influence, surface treatment, defects in materials and interfaces, design, pressure and rubbing are believed to have an effect on interface degradation. These factors are believed to increase the possibility of partial discharges (PD). PD will, with time, destroy...
The perfect place to learn how to design Web sites for mobile devices! With the popularity of Internet access via cell phones and other mobile devices, Web designers now have to consider as many as eight operating systems, several browsers, and a slew of new devices as they plan a new site, a new interface, or a new sub-site. This easy-to-follow, friendly book guides you through this brave new world with a clear look at the fundamentals and offers practical techniques and tricks you may not have considered: Explores all the issues to consider in planning a mobile site; Covers the tools needed for
Abstract Background Peach is being developed as a model organism for Rosaceae, an economically important family that includes fruit and ornamental plants such as apple, pear, strawberry, cherry, almond and rose. The genomics and genetics data of peach can play a significant role in gene discovery and in the genetic understanding of related species. The effective utilization of these peach resources, however, requires the development of an integrated and centralized database with associated analysis tools. Description The Genome Database for Rosaceae (GDR) is a curated and integrated web-based relational database. GDR contains comprehensive data of the genetically anchored peach physical map, an annotated peach EST database, Rosaceae maps and markers and all publicly available Rosaceae sequences. Annotations of ESTs include contig assembly, putative function, simple sequence repeats, and anchored position on the peach physical map where applicable. Our integrated map viewer provides a graphical interface to the genetic, transcriptome and physical mapping information. ESTs, BACs and markers can be queried by various categories, and the search result pages are linked to the integrated map viewer or to the WebFPC physical map sites. In addition to browsing and querying the database, users can compare their sequences with the annotated GDR sequences via a dedicated sequence similarity server running either the BLAST or FASTA algorithm. To demonstrate the utility of the integrated and fully annotated database and analysis tools, we describe a case study in which we anchored Rosaceae sequences to the peach physical and genetic map by sequence similarity. Conclusions The GDR has been initiated to meet a major deficiency in Rosaceae genomics and genetics research, namely a centralized web database and bioinformatics tools for data storage, analysis and exchange. GDR can be accessed at http://www.genome.clemson.edu/gdr/.
Abstract Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.
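A COEUS-style SPARQL endpoint returns results in the standard SPARQL JSON results format, which any client can flatten into plain records. The sketch below parses a hypothetical payload using only Python's standard library; the variable names and example URI are invented, and a real client would fetch the document over HTTP from the endpoint rather than hard-code it.

```python
import json

# Hypothetical payload in the standard application/sparql-results+json
# format; a real SPARQL endpoint returns a document with this shape.
payload = json.dumps({
    "head": {"vars": ["gene", "label"]},
    "results": {"bindings": [
        {"gene": {"type": "uri", "value": "http://example.org/gene/BRCA2"},
         "label": {"type": "literal", "value": "BRCA2"}},
    ]},
})

def rows(sparql_json):
    """Flatten SPARQL JSON bindings into plain dicts of var -> value."""
    doc = json.loads(sparql_json)
    return [{var: binding[var]["value"] for var in binding}
            for binding in doc["results"]["bindings"]]

print(rows(payload))
```

The same flattening works for any endpoint that follows the W3C SPARQL results format, which is what makes such frameworks interoperable.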
Dalpé, Gratien; Joly, Yann
Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services. PMID:25195583
A web spider is an automated program or script that independently crawls websites on the internet. Its job is to pinpoint and extract desired data from websites. The data is then saved in a database and later used for different purposes. Some spiders download whole websites, which are then saved into large repositories, while others search for more specific data, such as email addresses or phone numbers. The most well-known and most important application of web crawlers is crawling ...
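The extraction step a spider performs on each fetched page can be sketched with Python's standard-library HTML parser; fetching and database storage are omitted here, and the example page is invented for illustration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags -- the core extraction
    step a spider performs on each page it has fetched."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/about">About</a> <a href="mailto:x@y.z">mail</a></body></html>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # prints ['/about', 'mailto:x@y.z']
```

A crawler would enqueue the extracted page links for further crawling while routing data such as `mailto:` addresses to its database.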
Obrenovic, Z.; Ossenbruggen, J.R. van
A web browser provides a uniform user interface to different types of information. Making this interface universally accessible and more interactive is a long term goal still far from being achieved. Universally accessible browsers require novel interaction modalities and additional functionalities,
Marylene S. Eder
Abstract The interactive campus map is a web-based application that can be accessed through a web browser. With the Google Map Application Programming Interface, the availability of the overlay function has been taken advantage of to create custom map functionalities. A collection of building points was gathered for routing and to create polygons which serve as representations of each building. The previous campus map provides a static visual representation of the campus. It uses legends, building names and their corresponding building numbers in providing information. Due to its limited capabilities, it became a realization to the researchers to create an interactive campus map. Storing data about the buildings, rooms and staff information, university events and a campus guide are among the primary features that this study has to offer. The Interactive Web-based Campus Information System is intended to provide a Campus Information System. It is open to constant updates, user-friendly for both trained and untrained users, and capable of responding to all needs of users and carrying out analyses. Based on the data gathered through questionnaires, the researchers analyzed the results of the test survey and proved that the system is user-friendly, delivers information to users, and has the important features that students expect.
Vears, R E
Microprocessor Interfacing provides coverage of the Business and Technician Education Council level NIII unit in Microprocessor Interfacing (syllabus U86/335). Composed of seven chapters, the book explains the foundations of microprocessor interfacing techniques in hardware and software that can be used for problem identification and solving. The book focuses on the 6502, Z80, and 6800/02 microprocessor families. The technique starts with signal conditioning, filtering, and cleaning before the signal can be processed. The signal conversion, from analog to digital or vice versa, is expl
Abstract Background Expression levels for 47,294 transcripts in lymphoblastoid cell lines from all 270 HapMap phase II individuals, and genotypes (both HapMap phase II and III) of 3.96 million single nucleotide polymorphisms (SNPs) in the same individuals, are publicly available. We aimed to generate a user-friendly web-based tool for visualization of the correlation between SNP genotypes within a specified genomic region and a gene of interest, which is also well known as expression quantitative trait locus (eQTL) analysis. Results SNPexp is implemented as a server-side script, and is publicly available on this website: http://tinyurl.com/snpexp. Correlations between genotype and transcript expression levels are calculated by performing linear regression and the Wald test as implemented in PLINK and are visualized using the UCSC Genome Browser. Validation of SNPexp using previously published eQTLs yielded comparable results. Conclusions SNPexp provides a convenient and platform-independent way to calculate and visualize the correlation between HapMap genotypes within a specified genetic region anywhere in the genome and gene expression levels. This allows for investigation of both cis and trans effects. The web interface and the utilization of publicly available and widely used software resources make it an attractive supplement to more advanced bioinformatic tools. For the advanced user, the program can be used on a local computer on custom datasets.
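The genotype-expression correlation described above boils down to regressing expression levels on minor-allele counts. A minimal ordinary-least-squares sketch (the data are toy values, not HapMap measurements; SNPexp itself delegates this step to PLINK's linear regression and Wald test):

```python
def ols_slope(x, y):
    """Least-squares slope and intercept of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Toy data: genotypes coded as minor-allele counts (0/1/2) and one
# transcript's expression level for each of six individuals.
genotypes = [0, 0, 1, 1, 2, 2]
expression = [1.0, 1.2, 2.1, 1.9, 3.0, 3.2]
slope, intercept = ols_slope(genotypes, expression)
print(round(slope, 2))  # prints 1.0: expression rises ~1 unit per allele
```

A nonzero slope (tested against zero, e.g. with a Wald test) is what flags the SNP as a candidate eQTL for that transcript.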
DENG You-ping; AI Jun-mei; XIAO Pei-gen
One important purpose of investigating medicinal plants is to understand the genes and enzymes that govern the biological metabolic processes that produce bioactive compounds. Genome-wide high-throughput technologies such as genomics, transcriptomics, proteomics and metabolomics can help reach that goal. Such technologies can produce a vast amount of data which desperately needs bioinformatics and systems biology to process, manage, distribute and understand. By dealing with the "omics" data, bioinformatics and systems biology can also help improve the quality of traditional medicinal materials, develop new approaches for the classification and authentication of medicinal plants, identify new active compounds, and cultivate medicinal plant species that tolerate harsh environmental conditions. In this review, the application of bioinformatics and systems biology to medicinal plants is briefly introduced.
Bülow, Lorenz; Hehl, Reinhard
Bioinformatics tools can be employed to identify conserved cis-sequences in sets of coregulated plant genes as more and more gene expression and genomic sequence data become available. Knowledge of the specific cis-sequences, their enrichment and arrangement within promoters, facilitates the design of functional synthetic plant promoters that are responsive to specific stresses. The present chapter illustrates an example of the bioinformatic identification of conserved Arabidopsis thaliana cis-sequences enriched in drought stress-responsive genes. This workflow can be applied to the identification of cis-sequences in any set of coregulated genes. The workflow includes detailed protocols to determine sets of coregulated genes, to extract the corresponding promoter sequences, and to install and run a software package to identify overrepresented motifs. Further bioinformatic analyses that can be performed with the results are discussed. PMID:27557771
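The motif-overrepresentation step of such a workflow reduces to comparing how often a candidate cis-sequence occurs in the coregulated promoter set versus a background set. A toy sketch follows, with invented promoter fragments and an ABRE-like core motif as the example; a real analysis would use the chapter's software package and a proper statistical test (e.g. hypergeometric) rather than a bare ratio.

```python
def motif_count(seqs, motif):
    """Number of sequences containing the motif at least once."""
    return sum(motif in s for s in seqs)

def enrichment(foreground, background, motif):
    """Fraction of motif-containing promoters in the coregulated set
    divided by the fraction in the background set."""
    f = motif_count(foreground, motif) / len(foreground)
    b = motif_count(background, motif) / len(background)
    return f / b if b else float("inf")

# Invented toy promoter fragments; "ACGTG" stands in for an
# ABRE-like core motif associated with drought responsiveness.
drought = ["TTACGTGGC", "ACGTGTC", "GGACGTGA", "TTTTTTT"]
background = ["ACGTGTC", "GCGCGCG", "TTTATAT", "CCCCCCC"]
print(enrichment(drought, background, "ACGTG"))  # prints 3.0
```

A ratio well above 1 would make the motif a candidate for inclusion in a synthetic stress-responsive promoter, pending statistical validation.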
Structural bioinformatics is concerned with the molecular structure of biomacromolecules on a genomic scale, using computational methods. Classic problems in structural bioinformatics include the prediction of protein and RNA structure from sequence, the design of artificial proteins or enzymes......, and the automated analysis and comparison of biomacromolecules in atomic detail. The determination of macromolecular structure from experimental data (for example coming from nuclear magnetic resonance, X-ray crystallography or small angle X-ray scattering) has close ties with the field of structural bioinformatics...... These developments include generative models of protein structure, the estimation of the parameters of energy functions that are used in structure prediction, the superposition of macromolecules, and experimental determination of macromolecular structure that are based on such methods.
Web implementation is truly a multidisciplinary field, with influences from programming, the choice of scripting languages, graphic design, user interface design, and database design. The challenge for a Web designer/implementer lies in the ability to create an attractive and informative Web site. To work with the universal framework and link diagrams from the design process, as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web site's objective. In this article we discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We also discuss the advantages and disadvantages of HTML over its successor XML in designing and implementing a Web site. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML, to carry out the present investigation. (author)
João Carlos Sousa
The ability to manage the constantly growing information in genetics available on the internet is becoming crucial in biochemical education and medical practice. Therefore, developing students' skills in working with bioinformatics tools is a challenge for undergraduate courses in the molecular life sciences. The regulation of gene transcription by hormones and vitamins is a complex topic that influences all body systems. We describe a student-centered activity used in a multidisciplinary "Functional Organ System" course on the Endocrine System. By receiving, as teams, a nucleotide sequence of a hormone or vitamin-response element, students navigate through internet databases to find the gene to which it belongs. Subsequently, students search how the corresponding hormone/vitamin influences the expression of that particular gene and how a dysfunctional interaction might cause disease. This activity, proposed for 4 consecutive years to cohorts of 50-60 students/year enrolled in the 2nd year of our undergraduate medical degree, revealed that 90% of the students developed a better understanding of the usefulness of bioinformatics and that 98% intend to use it in the future. Since hormones and vitamins regulate genes of all body organ systems, this web-based activity successfully integrates the whole-body physiology of the medical curriculum and can be of relevance to other courses on molecular life sciences.
de Miranda Antonio B
Abstract Background BLAST is a widely used genetic research tool for the analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation to other computing-intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Results Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than with only one computer. Conclusion Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
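The near-linear speed-up reported above comes from splitting a large query set across nodes. The idea can be illustrated by a simple round-robin splitter for multi-record FASTA input; this is a hypothetical helper, not Squid's actual code, and each chunk would then be submitted to a grid node as a separate BLAST job.

```python
def split_fasta(fasta_text, n_chunks):
    """Split a multi-record FASTA string into n_chunks lists of
    records, round-robin, so each node gets a similar share."""
    records, current = [], []
    for line in fasta_text.strip().splitlines():
        if line.startswith(">") and current:
            records.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        records.append("\n".join(current))
    chunks = [[] for _ in range(n_chunks)]
    for i, rec in enumerate(records):
        chunks[i % n_chunks].append(rec)
    return chunks

# Toy query set: three short sequences distributed over two nodes.
fasta = ">q1\nACGT\n>q2\nGGCC\n>q3\nTTAA"
chunks = split_fasta(fasta, 2)
print([len(c) for c in chunks])  # prints [2, 1]
```

Fault tolerance of the kind Squid describes would then amount to re-queuing a chunk whenever its node fails to return results.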
Hajibabaei Mehrdad; Singer Gregory AC
Abstract Background DNA sequences have become a primary source of information in biodiversity analysis. For example, short standardized species-specific genomic regions, DNA barcodes, are being used as a global standard for species identification and biodiversity studies. Most DNA barcodes are generated by laboratories that have expertise in DNA sequencing but not in bioinformatics data analysis. Therefore, we have developed a web-based suite of tools to help the DNA barcode research...
Approaches in Integrative Bioinformatics provides a basic introduction to biological information systems, as well as guidance for the computational analysis of systems biology. This book also covers a range of issues and methods that reveal the multitude of omics data integration types and the relevance that integrative bioinformatics has today. Topics include biological data integration and manipulation, modeling and simulation of metabolic networks, transcriptomics and phenomics, and virtual cell approaches, as well as a number of applications of network biology. It helps to illustrat
Phillips, J. C.
Allosteric (long-range) interactions can be surprisingly strong in proteins of biomedical interest. Here we use bioinformatic scaling to connect prior results on nonsteroidal anti-inflammatory drugs to promising new drugs that inhibit cancer cell metabolism. Many parallel features are apparent, which explain how even one amino acid mutation, remote from active sites, can alter medical results. The enzyme twins involved are cyclooxygenase (aspirin) and isocitrate dehydrogenase (IDH). The IDH results are accurate to 1% and are overdetermined by adjusting a single bioinformatic scaling parameter. It appears that the final stage in optimizing protein functionality may involve leveling of the hydrophobic limits of the arms of conformational hydrophilic hinges.
Recent developments in computer science enable algorithms previously perceived as too time-consuming to now be efficiently used for applications in bioinformatics and life sciences. This work focuses on proteins and their structures, protein structure similarity searching at main representation levels and various techniques that can be used to accelerate similarity searches. Divided into four parts, the first part provides a formal model of 3D protein structures for functional genomics, comparative bioinformatics and molecular modeling. The second part focuses on the use of multithreading for
Sweeney, Edwina; Curran, Kevin; Xie, Ermai
A Web crawler or spider crawls through the Web looking for pages to index, and when it locates a new page it passes the page on to an indexer. The indexer identifies links, keywords, and other content and stores these within its database. This database is searched by entering keywords through an interface, and suitable Web pages are returned in a results page in the form of hyperlinks accompanied by short descriptions. The Web, however, is increasingly moving away from being a collection of documents to a multidimensional repository for sounds, images, audio, and other formats. This is leading to a situation where certain parts of the Web are invisible or hidden. The term "Deep Web" has emerged to refer to the mass of information that can be accessed via the Web but cannot be indexed by conventional search engines. The concept of the Deep Web makes searches quite complex for search engines. Google states that the claim that conventional search engines cannot find such documents as PDFs, Word, PowerPoint, Excel, or any non-HTML page is not fully accurate, and steps have been taken to address this problem by implementing procedures to search items such as academic publications, news, blogs, videos, books, and real-time information. However, Google still only provides access to a fraction of the Deep Web. This chapter explores the Deep Web and the current tools available for accessing it.
In the years since Jakob Nielsen's classic collection on interface consistency first appeared, much has changed, and much has stayed the same. On the one hand, there's been exponential growth in the opportunities for following or disregarding the principles of interface consistency-more computers, more applications, more users, and of course the vast expanse of the Web. On the other, there are the principles themselves, as persistent and as valuable as ever. In these contributed chapters, you'll find details on many methods for seeking and enforcing consistency, along with bottom-line analys
Ditty, Jayna L.; Kvaal, Christopher A.; Goodner, Brad; Freyermuth, Sharyn K.; Bailey, Cheryl; Britton, Robert A.; Gordon, Stuart G.; Heinhorst, Sabine; Reed, Kelynne; Xu, Zhaohui; Sanders-Lorenz, Erin R.; Axen, Seth; Kim, Edwin; Johns, Mitrick; Scott, Kathleen; Kerfeld, Cheryl A.
Undergraduate life sciences education needs an overhaul, as clearly described in the National Research Council of the National Academies publication BIO 2010: Transforming Undergraduate Education for Future Research Biologists. Among BIO 2010's top recommendations is the need to involve students in working with real data and tools that reflect the nature of life sciences research in the 21st century. Education research studies support the importance of utilizing primary literature, designing and implementing experiments, and analyzing results in the context of a bona fide scientific question in cultivating the analytical skills necessary to become a scientist. Incorporating these basic scientific methodologies in undergraduate education leads to increased undergraduate and post-graduate retention in the sciences. Toward this end, many undergraduate teaching organizations offer training and suggestions for faculty to update and improve their teaching approaches to help students learn as scientists, through design and discovery (e.g., Council of Undergraduate Research [www.cur.org] and Project Kaleidoscope [www.pkal.org]). With the advent of genome sequencing and bioinformatics, many scientists now formulate biological questions and interpret research results in the context of genomic information. Just as the use of bioinformatic tools and databases changed the way scientists investigate problems, it must change how scientists teach to create new opportunities for students to gain experiences reflecting the influence of genomics, proteomics, and bioinformatics on modern life sciences research. Educators have responded by incorporating bioinformatics into diverse life science curricula. While these published exercises in, and guidelines for, bioinformatics curricula are helpful and inspirational, faculty new to the area of bioinformatics inevitably need training in the theoretical underpinnings of the algorithms. Moreover, effectively integrating bioinformatics
Graham, Matthew; Grid,
This document describes the minimum interface that a (SOAP- or REST-based) web service requires to participate in the IVOA. Note that this is not required of standard VO services developed prior to this specification, although uptake is strongly encouraged on any subsequent revision. All new standard VO services, however, must feature a VOSI-compliant interface. This document has been produced by the Grid and Web Services Working Group. It has been reviewed by IVOA Members and other interested parties, and has been endorsed by the IVOA Executive Committee as an IVOA Recommendation. It is a stable document and may be used as reference material or cited as a normative reference from another document. IVOA's role in making the Recommendation is to draw attention to the specification and to promote its widespread deployment. This enhances the functionality and interoperability inside the Astronomical Community.
Mattmann, C. A.
Search has progressed through several stages due to the increasing size of the Web. Search engines first focused on text and its rate of occurrence; then on the notion of link analysis and citation; then on interactivity and guided search; and now on the use of social media - who we interact with, what we comment on, and who we follow (and who follows us). The next stage, referred to as "deep search," requires solutions that can bring together text, images, video, importance, interactivity, and social media to solve this challenging problem. The Apache Nutch project provides an open framework for large-scale, targeted, vertical search with capabilities to support all past and potential future search engine foci. Nutch is a flexible infrastructure allowing open access to ranking, to URL selection and filtering approaches, and to the link graph generated from search; Nutch has spawned entire sub-communities including Apache Hadoop and Apache Tika. It addresses many current needs with the capability to support new technologies such as image and video. On the DARPA Memex project, we are creating specific extensions to Nutch that will directly improve its overall technological superiority for search and that will allow us to address complex search problems including human trafficking. We are integrating state-of-the-art algorithms developed by Kitware for IARPA Aladdin, combined with work by Harvard, to provide image and video understanding support, allowing automatic detection of people and things and massive deployment via Nutch. We are expanding Apache Tika for scene understanding, object/person detection and classification in images/video. We are delivering an interactive and visual interface for initiating Nutch crawls. The interface uses Python technologies to expose Nutch data and to provide a domain-specific language for crawls. With the Bokeh visualization library, the interface delivers simple interactive crawl visualization and
Rocco, D; Liu, L; Critchlow, T
Dynamic Web data sources--sometimes known collectively as the Deep Web--increase the utility of the Web by providing intuitive access to data repositories anywhere that Web access is available. Deep Web services provide access to real-time information, like entertainment event listings, or present a Web interface to large databases or other data repositories. Recent studies suggest that the size and growth rate of the dynamic Web greatly exceed those of the static Web, yet dynamic content is often ignored by existing search engine indexers owing to the technical challenges that arise when attempting to search the Deep Web. To address these challenges, we present DynaBot, a service-centric crawler for discovering and clustering Deep Web sources offering dynamic content. DynaBot has three unique characteristics. First, DynaBot utilizes a service class model of the Web implemented through the construction of service class descriptions (SCDs). Second, DynaBot employs a modular, self-tuning system architecture for focused crawling of the Deep Web using service class descriptions. Third, DynaBot incorporates methods and algorithms for efficient probing of the Deep Web and for discovering and clustering Deep Web sources and services through SCD-based service matching analysis. Our experimental results demonstrate the effectiveness of the service class discovery, probing, and matching algorithms and suggest techniques for efficiently managing service discovery in the face of the immense scale of the Deep Web.
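The SCD-based matching idea can be caricatured as scoring a candidate web form against the inputs a service class requires. The sketch below is a hypothetical stand-in: the field names, the loose substring scoring rule, and the example SCD are all invented for illustration, and DynaBot's actual service matching analysis is considerably more sophisticated.

```python
def match_score(form_fields, scd_required):
    """Fraction of the service class description's required inputs
    that the candidate form appears to provide, matched by loose
    substring comparison of field names."""
    hits = 0
    for req in scd_required:
        if any(req in field or field in req for field in form_fields):
            hits += 1
    return hits / len(scd_required)

# Invented SCD for a BLAST-like sequence search service class,
# and an invented form discovered by a crawler.
scd = ["sequence", "database", "program"]
form = ["query_sequence", "database", "program", "email"]
score = match_score([f.lower() for f in form], scd)
print(score)  # prints 1.0: every required input has a plausible match
```

A crawler could cluster discovered sources by their best-scoring service class and hand high-scoring forms to a wrapper generator.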
This study presents usability testing as a method that can be used to improve web map sites. The study describes the basic principles of this method and particular usability tests of mapping sites. The paper identifies potential usability problems of the web sites Amapy.cz, Google Maps and Mapy.cz. The usability testing was focused on problems related to user interfaces, address searching and route planning on the map sites.
Ossenbruggen, van, Jacco; Amin, Alia; Hildebrand, Michiel
This position paper discusses our experience in evaluating our cultural search and annotation engine. We identify three aspects that determine the quality of a semantic web application as a whole, namely: the quality of the data set, the quality of the underlying search and inference software, and the quality of the user interface. We argue that evaluation of semantic web applications is particularly difficult because of the strong interdependency between these three aspects.
The semantic Web is a technology at the service of knowledge, aimed at accessibility and the sharing of content and facilitating interoperability between different systems; as such it is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme specific cooperation, of the seventh framework programme for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant i...
López, Elena; Wesselink, Jan-Jaap; López, Isabel; Mendieta, Jesús; Gómez-Puertas, Paulino; Muñoz, Sarbelio Rodríguez
Reversible protein phosphorylation is one of the most important forms of cellular regulation. Thus, phosphoproteomic analysis of protein phosphorylation in cells is a powerful tool to evaluate cell functional status. The importance of protein kinase-regulated signal transduction pathways in human cancer has led to the development of drugs that inhibit protein kinases at the apex or intermediary levels of these pathways. Phosphoproteomic analysis of these signalling pathways will provide important insights into the operation and connectivity of these pathways and facilitate identification of the best targets for cancer therapies. Enrichment of phosphorylated proteins or peptides from tissue or bodily fluid samples is required. The application of technologies such as phosphoenrichment and mass spectrometry (MS) coupled to bioinformatics tools is crucial for the identification and quantification of protein phosphorylation sites and for advancing such clinically relevant research. A combination of different phosphopeptide enrichment and quantitative techniques and bioinformatic tools is necessary to achieve good phospho-regulation data and good structural analysis in protein studies. The current and most useful proteomics and bioinformatics techniques are explained with research examples. Our aim in this article is to support cancer research by detailing proteomics and bioinformatics tools. PMID:21967744
Goto, N.; Prins, J.C.P.; Nakao, M.; Bonnal, R.; Aerts, J.; Katayama, A.
The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it suppor...
Rasmussen, Morten; Thaysen-Andersen, Morten; Højrup, Peter
We have developed "GLYCANthrope" - CROSSWORKS for glycans: a bioinformatics tool that assists in identifying N-linked glycosylated peptides, as well as their glycan moieties, from MS2 data of enzymatically digested glycoproteins. The program runs either as a stand-alone application or as a plug...
As the sequencing stage of the human genome project nears completion, work has begun on discovering novel genes from genome sequences and annotating their biological functions. Reviewed here are the current major bioinformatics tools and technologies available for large-scale gene discovery and annotation from human genome sequences. Some ideas about possible future developments are also provided.
Whyte, Barry James
The Virginia Bioinformatics Institute (VBI) at Virginia Tech has announced the appointment of Joao Setubal as the institute's deputy director. As deputy director, Setubal will act on behalf of VBI's executive and scientific director, Bruno Sobral, handling internal administrative functions, as well as scientific decision making.
Suplatov, Dmitry; Voevodin, Vladimir; Švedas, Vytas
The ability of proteins and enzymes to maintain a functionally active conformation under adverse environmental conditions is an important feature of biocatalysts, vaccines, and biopharmaceutical proteins. From an evolutionary perspective, robust stability of proteins improves their biological fitness and allows for further optimization. Viewed from an industrial perspective, enzyme stability is crucial for the practical application of enzymes under the required reaction conditions. In this review, we analyze bioinformatics-driven strategies that are used to predict structural changes that can be applied to wild-type proteins in order to produce more stable variants. The most commonly employed techniques can be classified into stochastic approaches, empirical or systematic rational design strategies, and design of chimeric proteins. We conclude that bioinformatic analysis can be efficiently used to study large protein superfamilies systematically as well as to predict particular structural changes which increase enzyme stability. Evolution has created a diversity of protein properties that are encoded in genomic sequences and structural data. Bioinformatics has the power to uncover this evolutionary code and provide a reproducible selection of hotspots, the key residues to be mutated in order to produce more stable and functionally diverse proteins and enzymes. Further development of systematic bioinformatic procedures is needed to organize and analyze sequences and structures of proteins within large superfamilies, to link them to function, and to provide knowledge-based predictions for experimental evaluation.
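The kind of sequence-based hotspot selection described in this abstract can be illustrated with a minimal, self-contained sketch (this is not any specific method from the review, and the toy alignment is hypothetical): per-column Shannon entropy over a multiple sequence alignment separates conserved positions from variable ones, and the most variable columns are naive candidates for mutagenesis.

```python
from collections import Counter
from math import log2

def column_entropies(alignment):
    """Shannon entropy per alignment column.

    Low entropy means the column is conserved across homologues;
    high entropy means it is variable (a naive hotspot candidate).
    All sequences must have equal length.
    """
    entropies = []
    for i in range(len(alignment[0])):
        counts = Counter(seq[i] for seq in alignment)
        total = sum(counts.values())
        # H = -sum(p * log2(p)) over residue frequencies in the column
        h = -sum((n / total) * log2(n / total) for n in counts.values())
        entropies.append(abs(h))  # abs() avoids -0.0 for fully conserved columns
    return entropies

# Toy alignment of four hypothetical homologous sequences
aln = ["MKTAY", "MKSAY", "MKTGY", "MKSAY"]
scores = column_entropies(aln)
hotspot = max(range(len(scores)), key=lambda i: scores[i])
print(scores, "most variable column:", hotspot)
```

Real pipelines would weight sequences, use substitution-matrix-aware conservation scores, and cross-check candidates against structural data, but the entropy scan above captures the basic idea of a reproducible, sequence-derived hotspot ranking.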
Feig, Andrew L.; Jabri, Evelyn
The field of bioinformatics is developing faster than most biochemistry textbooks can adapt. Supplementing the undergraduate biochemistry curriculum with data-mining exercises is an ideal way to expose the students to the common databases and tools that take advantage of this vast repository of biochemical information. An integrated collection of…