Ngu, A; Rocco, D; Critchlow, T; Buttler, D
The World Wide Web provides a vast resource to genomics researchers in the form of web-based access to distributed data sources--e.g. BLAST sequence homology search interfaces. However, the process of seeking the desired scientific information is still very tedious and frustrating. While there are several well-known servers for genomic data (e.g., GenBank, EMBL, NCBI) that are shared and accessed frequently, new data sources are created each day in laboratories all over the world. The sharing of these newly discovered genomics results is hindered by the lack of a common interface or data exchange mechanism. Moreover, the number of autonomous genomics sources and their rate of change outpace the speed at which they can be manually identified, meaning that the available data is not being utilized to its full potential. An automated system that can find, classify, describe and wrap new sources without tedious and low-level coding of source-specific wrappers is needed to assist scientists in accessing hundreds of dynamically changing bioinformatics web data sources through a single interface. A correct classification of any kind of Web data source must address both the capability of the source and the conversation/interaction semantics inherent in the design of the Web data source. In this paper, we propose an automatic approach to classifying Web data sources that takes into account both the capability and the conversational semantics of the source. The ability to discover the interaction pattern of a Web source leads to increased accuracy in the classification process. At the same time, it facilitates the extraction of process semantics, which is necessary for the automatic generation of wrappers that can interact correctly with the sources.
Rocco, D; Critchlow, T
The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.
Cheung David W
Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: (1) the platforms on which the applications run are heterogeneous, (2) their web interface is not machine-friendly, (3) they use a non-standard format for data input and output, (4) they do not exploit standards to define application interface and message exchange, and (5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard-coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates
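The document-style messaging this abstract describes can be illustrated with a small, self-contained sketch: a set of spot IDs is wrapped in an XML payload of the kind a document/literal message body might carry, and unwrapped on the receiving side. The element names and spot-ID values are invented for illustration, not taken from HAPI.

```python
import xml.etree.ElementTree as ET

# Hypothetical document-style request: a set of microarray spot IDs
# wrapped in an XML payload (element names are illustrative only).
def build_request(spot_ids):
    root = ET.Element("hapiRequest")
    for sid in spot_ids:
        ET.SubElement(root, "spotId").text = sid
    return ET.tostring(root, encoding="unicode")

def parse_request(xml_text):
    # The receiving service recovers the list of IDs from the document.
    root = ET.fromstring(xml_text)
    return [e.text for e in root.findall("spotId")]

msg = build_request(["AF00123", "AF00456"])
print(parse_request(msg))
```

Because the message is a self-describing XML document rather than an HTML page, the next service in the workflow can consume it directly.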
Abstract The availability of bioinformatics web-based services is rapidly proliferating, owing to their interoperability and ease of use. The next challenge is the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services alongside locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software, implemented in C, is available under GPL from http://www.g-language.org/kbws/ and the GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via the WSDL files at http://soap.g-language.org/kbws.wsdl (RPC/Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/Literal).
McWilliam, Hamish; Valentin, Franck; Goujon, Mickael; Li, Weizhong; Narayanasamy, Menaka; Martin, Jenny; Miyar, Teresa; Lopez, Rodrigo
The European Bioinformatics Institute (EMBL-EBI) has been providing access to mainstream databases and tools in bioinformatics since 1997. In addition to the traditional web-form-based interfaces, APIs exist for core data resources such as EMBL-Bank, Ensembl, UniProt, InterPro, PDB and ArrayExpress. These APIs are based on Web Services (SOAP/REST) interfaces that allow users to systematically access databases and analytical tools. From the user's point of view, these Web Services provide the same functionality as the browser-based forms. However, using the APIs frees the user from web page constraints and is ideal for the analysis of large batches of data, for performing text-mining tasks and for the casual or systematic evaluation of mathematical models in regulatory networks. Furthermore, these services are widespread and easy to use, requiring no prior knowledge of the technology and no more than basic experience in programming. In the following we describe new and updated services and briefly outline planned developments to be made available during the course of 2009-2010. PMID:19435877
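As a rough illustration of the batch-friendly style such REST APIs enable, the sketch below only constructs a job-submission URL from parameters; the base path, tool name and parameter names are invented for illustration and are not EBI's actual endpoint, which should be taken from their documentation.

```python
from urllib.parse import urlencode

# Illustrative only: this base path mimics the style of a REST tool
# endpoint but is NOT a documented EBI URL.
BASE = "https://www.ebi.ac.uk/Tools/example"

def job_submit_url(sequence, program="blastp", database="uniprotkb"):
    """Build a submission URL for one sequence in a batch."""
    params = {"sequence": sequence, "program": program, "database": database}
    return BASE + "/run?" + urlencode(params)

# A script can generate one request per sequence, something a browser
# form makes tedious for large batches.
for seq in ["MKVLAA", "MTEYKL"]:
    print(job_submit_url(seq))
```

The point is that the same parameters a web form collects become plain key-value pairs, so batch submission reduces to a loop.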
Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas
Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often require expert technical knowledge. Reasons include dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/. PMID:26882475
Trias Thireou; George Spyrou; Vassilis Atlamazoglou
The explosive growth of the bioinformatics field has led to a large amount of data and software applications publicly available as web resources. However, the lack of persistence of web references is a barrier to comprehensive shared access. We conducted a study of the current availability and other features of primary bioinformatics web resources (such as software tools and databases). The majority (95%) of the examined bioinformatics web resources were found to run on UNIX/Linux operating systems, and the most widely used web server was Apache (or Apache-related products). Of the 1,130 Uniform Resource Locators (URLs) examined, 91% were highly available (more than 90% of the time), while only 4% showed low accessibility (less than 50% of the time) during the survey. Furthermore, the most common URL failure modes are presented and analyzed.
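The availability measure behind these figures can be reconstructed as the fraction of successful probes per URL over the monitoring period; the 90% and 50% thresholds come from the text, while the probe data below are invented for illustration.

```python
def availability(probes):
    """probes: list of booleans, True = URL responded to the probe."""
    return sum(probes) / len(probes)

def classify(probes):
    # Thresholds taken from the survey's definitions:
    # "highly available" = responsive more than 90% of the time,
    # "low accessibility" = responsive less than 50% of the time.
    a = availability(probes)
    if a > 0.9:
        return "highly available"
    if a < 0.5:
        return "low accessibility"
    return "intermediate"

print(classify([True] * 19 + [False]))   # a URL up for 19 of 20 probes
print(classify([True] * 4 + [False] * 6))  # a URL up for 4 of 10 probes
```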
Kalas, M.; Puntervoll, P.; Joseph, A.;
Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of these resources offer programmatic web-service interfaces. However, efficient use...
Neerincx, P.B.T.; Leunissen, J.A.M.
Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformatic
Pettifer, S.; Thorne, D.; McDermott, P.; Attwood, T.; Baran, J.; Bryne, J.C.; Hupponen, T.; Mowbray, D.; Vriend, G.
SUMMARY: The EMBRACE Registry is a web portal that collects and monitors web services according to test scripts provided by their administrators. Users are able to search for, rank and annotate services, enabling them to select the most appropriate working service for inclusion in their bioinfor
Xiao Li; Yizheng Zhang
It is widely recognized that exchange, distribution, and integration of biological data are the keys to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW and are founded upon the open standards of the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force). This paper presents XML and Web services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
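As a minimal sketch of the XML-based exchange this paper advocates, the following round-trips a sequence record through XML, so two heterogeneous systems agreeing only on the document structure can exchange it. The element names are invented for illustration, not taken from any standard schema.

```python
import xml.etree.ElementTree as ET

# Serialize a record into a structured, self-describing XML document
# (invented element names, standing in for an agreed exchange schema).
def record_to_xml(acc, organism, seq):
    root = ET.Element("sequenceRecord")
    ET.SubElement(root, "accession").text = acc
    ET.SubElement(root, "organism").text = organism
    ET.SubElement(root, "sequence").text = seq
    return ET.tostring(root, encoding="unicode")

def xml_to_record(text):
    # The consuming side recovers the fields by tag name, independent
    # of the producing platform or language.
    root = ET.fromstring(text)
    return {child.tag: child.text for child in root}

doc = record_to_xml("X00001", "Escherichia coli", "ATGC")
print(xml_to_record(doc))
```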
Research on biological systems has reached the stage of clarifying an organism as a whole from its genome, the set of its genes. To accomplish such research, one must deal with large amounts of data derived from genome sequences. Conventional methods, namely experiments in wet labs, are however not designed for dealing with genome-scale data. For those data analyses, computer-based approaches, such as data mining and simulation, are suitable. As a result, bioinformatics, a new discipline combining expertise in biology and information science, is emerging. Here, we report the development of a database and a web-based system for application execution (Execution of Application on web system). These are among the efforts of the Office of ITBL Promotion to support bioinformatics research. (author)
Swainston Neil; Griffiths Tony; Hedeler Cornelia; Garwood Christopher; Garwood Kevin; Oliver Stephen G; Paton Norman W
Abstract Background The proliferation of data repositories in bioinformatics has resulted in the development of numerous interfaces that allow scientists to browse, search and analyse the data that they contain. Interfaces typically support repository access by means of web pages, but other means are also used, such as desktop applications and command line tools. Interfaces often duplicate functionality amongst each other, and this implies that associated development activities are repeated i...
Rocco, D; Critchlow, T J
The World Wide Web provides an incredible resource to genomics researchers in the form of dynamic data sources--e.g. BLAST sequence homology search interfaces. The growth rate of these sources outpaces the speed at which they can be manually classified, meaning that the available data is not being utilized to its full potential. Existing research has not addressed the problems of automatically locating, classifying, and integrating classes of bioinformatics data sources. This paper presents an overview of a system for finding classes of bioinformatics data sources and integrating them behind a unified interface. We examine an approach to classifying these sources automatically that relies on an abstract description format: the service class description. This format allows a domain expert to describe the important features of an entire class of services without tying that description to any particular Web source. We present the features of this description format in the context of BLAST sources to show how the service class description relates to Web sources that are being described. We then show how a service class description can be used to classify an arbitrary Web source to determine if that source is an instance of the described service. To validate the effectiveness of this approach, we have constructed a prototype that can correctly classify approximately two-thirds of the BLAST sources we tested. We then examine these results, consider the factors that affect correct automatic classification, and discuss future work.
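The service-class idea can be caricatured as a capability check: a class description lists the features every member service must exhibit, and a candidate source is classified by testing whether it exhibits them. The field names and feature sets below are invented for illustration and are not the paper's actual description format.

```python
# A toy "service class description" for BLAST-like sources: the class
# lists required inputs and outputs (invented fields, for illustration).
BLAST_CLASS = {
    "required_inputs": {"sequence"},
    "required_outputs": {"alignments", "e_value"},
}

def is_instance(source, service_class):
    """Classify a candidate source: it is an instance of the class if it
    accepts all required inputs and produces all required outputs."""
    return (service_class["required_inputs"] <= source["inputs"]
            and service_class["required_outputs"] <= source["outputs"])

candidate = {"inputs": {"sequence", "database"},
             "outputs": {"alignments", "e_value", "score"}}
print(is_instance(candidate, BLAST_CLASS))
```

The real system must first extract such features from an arbitrary Web page, which is where the classification errors discussed in the abstract arise.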
Repchevsky, Dmitry; Gelpi, Josep Ll
Despite the variety of available Web services registries specifically aimed at the Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web service descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web service registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license. PMID:25233118
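Programmatic access to such an RDF-based registry via SPARQL might look like the query builder below. The prefixes and property names are illustrative (the `wsdl:` namespace follows the W3C WSDL 2.0 RDF mapping, but BioSWR's actual vocabulary should be checked against its documentation).

```python
def services_by_keyword(keyword):
    """Build a SPARQL query selecting services whose label matches a
    keyword (illustrative vocabulary, not BioSWR's actual schema)."""
    return (
        "PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>\n"
        "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n"
        "SELECT ?service WHERE {\n"
        "  ?service a wsdl:Service ;\n"
        "           rdfs:label ?label .\n"
        f'  FILTER(CONTAINS(LCASE(?label), "{keyword.lower()}"))\n'
        "}"
    )

print(services_by_keyword("BLAST"))
```

The query string would then be POSTed to the registry's SPARQL endpoint; building it separately keeps the example free of network dependencies.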
Traditionally, interaction between users and the Grid has been through command-line tools. However, these tools are difficult for non-expert users to use, providing minimal help and generating output that is not always easy to understand, especially in case of errors. Graphical user interfaces are typically limited to providing access to monitoring or accounting information and concentrate on particular aspects, failing to cover the full spectrum of grid control tasks. To make the Grid more user-friendly, more complete graphical interfaces are needed. Within the DIRAC project we have constructed a Web-based user interface that provides means not only for monitoring the system's behavior but also for steering the main user activities on the grid. Using DIRAC's web interface, a user can easily track jobs and data. It provides access to job information and allows actions on jobs such as killing or deleting. Data managers can define and monitor file transfer activity as well as check requests set by jobs. Production managers can define and follow large data productions and react if necessary by stopping or starting them. The Web Portal is built following grid security standards and uses modern Web 2.0 technologies that achieve a user experience similar to that of desktop applications. Details of the DIRAC Web Portal architecture and user interface will be presented and discussed.
Baldi, Pierre; Brunak, Søren
medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged as a...
Willighagen Egon L
Abstract Background Life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad hoc solutions of varying quality that can be difficult to use. Results We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) comprising discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status; the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. Conclusion XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: (1) services are discoverable without the need of an external registry, (2) asynchronous invocation eliminates the need for ad hoc solutions like polling, and (3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics.
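An asynchronous XMPP invocation wraps its payload in an IQ stanza; the sketch below builds one with the standard library. The io-data element and its namespace string are placeholders and should be checked against the IO Data extension (XEP-0244); the JID and payload are invented.

```python
import xml.etree.ElementTree as ET

def make_invocation(to_jid, payload_xml):
    """Build an XMPP IQ 'set' stanza carrying an IO Data-style input
    payload (namespace string is a placeholder, not from XEP-0244)."""
    iq = ET.Element("iq", {"type": "set", "to": to_jid, "id": "job1"})
    io = ET.SubElement(iq, "io-data",
                       {"xmlns": "urn:xmpp:tmp:io-data", "type": "input"})
    io.append(ET.fromstring(payload_xml))
    return ET.tostring(iq, encoding="unicode")

stanza = make_invocation("service@example.org", "<sequence>ATGC</sequence>")
print(stanza)
```

Because XMPP is bidirectional, the result comes back later as a separate stanza addressed to the client, which is what makes polling unnecessary.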
In this thesis we developed a prototype robot, which can be controlled by the user via a web interface and is accessible through a web browser. The web interface updates sensor data and streams video captured with the web-cam mounted on the robot in real time. A Raspberry Pi computer runs the back-end code of the thesis. The general-purpose input-output header on the Raspberry Pi communicates with the motor driver and sensors. A wireless dongle and web-cam, connected through USB, ensure wireless communication and vid...
Computer Science This thesis examines methods for accessing information stored in a relational database from a Web Page. The stateless and connectionless nature of the Web's Hypertext Transport Protocol as well as the open nature of the Internet Protocol pose problems in the areas of database concurrency, security, speed, and performance. We examined the Common Gateway Interface, Server API, Oracle's Web/database architecture, and the Java Database Connectivity interface in terms of p...
Ueno; Asai; Arita
We have constructed a general framework for integrating application programs with control through a local Web browser. This method is based on a simple inter-process message function from an external process to application programs. Commands to a target program are prepared in a script file, which is parsed by a message dispatcher program. When it is used as a helper application to a Web browser, these messages will be sent from the browser by clicking a hyperlink in a Web document. Our framework also supports pluggable extension modules for application programs by means of dynamic linking. A prototype system is implemented on our molecular structure-viewer program, MOSBY. It successfully featured a function to load an extension module required for the docking study of molecular fragments from a Web page. Our simple framework facilitates the concise configuration of Web software without complicated knowledge of network computation and security issues. It is also applicable to a wide range of network computations processing private data using a Web browser. PMID:11072353
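The dispatcher mechanism described above can be sketched as a loop that parses commands from a script and routes them to registered handlers. The command names and handler behavior below are invented, not MOSBY's actual command set.

```python
def dispatch(script, handlers):
    """Parse a command script line by line and invoke the registered
    handler for each command (toy version of a message dispatcher)."""
    results = []
    for line in script.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        cmd, *args = line.split()
        results.append(handlers[cmd](*args))
    return results

# Handlers stand in for messages forwarded to the viewer application.
handlers = {
    "load": lambda path: f"loaded {path}",
    "dock": lambda a, b: f"docking {a} with {b}",
}
print(dispatch("load mol1.pdb\ndock mol1 frag2", handlers))
```

In the described framework the same commands would arrive via hyperlink clicks in the browser rather than from a literal string.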
Web services are used in the Experimental Physics and Industrial Control System (EPICS). Combined with the EPICS Channel Access protocol, the high usability, platform independence and language independence of Web services can be used to design a fully transparent and uniform software interface layer, which helps us complete channel data acquisition, modification and monitoring functions. This cross-platform, cross-language software interface layer has good interoperability and reusability. (authors)
Xiaoyu Zhang; Martin Gordon
One important problem in bioinformatics is to study pockets or tunnels within the protein structure. These pocket or tunnel regions are significant because they indicate areas of ligand binding or enzymatic reactions, and tunnels are often solvent ion conductance areas. The Protein Pocket Viewer (PPV) is a web interface that allows the user to extract and visualize the protein pockets in a browser, based on the algorithm in . The PPV packaged the pocket extraction executable as a web servi...
Emoto, M. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)], E-mail: firstname.lastname@example.org; Murakami, S. [Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501 (Japan); Yoshida, M.; Funaba, H.; Nagayama, Y. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)
There are many analysis codes that analyze various aspects of plasma physics. However, most of them are FORTRAN programs written to be run on supercomputers. On the other hand, many scientists use GUI (graphical user interface)-based operating systems. For those who are not familiar with supercomputers, running analysis codes on them is a difficult task, and they often hesitate to use these programs to substantiate their ideas. Furthermore, these analysis codes are written for personal use, and the programmers do not expect them to be run by other users. In order to make these programs widely usable, the authors developed user-friendly Web interfaces. Since the Web browser is one of the most common applications, it is useful for both users and developers. In order to realize an interactive Web interface, the AJAX technique is widely used, and the authors also adopted it. Ruby on Rails plays an important role in building such an AJAX-based Web system: since this application framework, which is written in Ruby, abstracts the Web interfaces necessary to implement AJAX and database functions, it enables programmers to develop Web-based applications efficiently. In this paper, the authors introduce the system and demonstrate the usefulness of this approach.
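The core of such wrapping, turning a web form's parameters into a batch invocation of an analysis code, can be sketched in a few lines. The program name and flag syntax below are invented; real codes typically take namelist or input files rather than flags.

```python
import shlex

def build_command(program, params):
    """Turn web-request parameters into an argv list for a batch
    analysis code (invented flag syntax, for illustration)."""
    argv = [program]
    for key, value in sorted(params.items()):
        argv.append(f"--{key}={value}")
    return argv

# Server-side code would pass this argv to a job scheduler or
# subprocess; here we just render it for inspection.
cmd = build_command("transport_code", {"shot": 12345, "time": 1.2})
print(shlex.join(cmd))
```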
Michel, L.; Bantzhaff, P.; Frère, C.; Mantelet, G.; Pineau, F. X.
Saada transforms a set of heterogeneous FITS files or VOtables into a powerful database deployed on the Web without writing code. Saada can mix data of various categories (images, tables, spectra…) in multiple collections. Data collections can be linked to each other, creating relevant browsing paths and allowing data-mining-oriented queries. Saada supports four Virtual Observatory (VO) services: spectra, images, sources, and TAP. Data collections can be published immediately after the deployment of the Web interface. The poster presents the new Web interface coming with Saada databases. It is based on Ajax, and key points are: I) heterogeneous dataset browsing; II) smart editors for complex queries; III) full integration of SAMP (WebSampConnector); IV) use of either SaadaQL or VO protocols for data selection.
Bassil, Youssef; 10.5121/ijwest.2012.3104
Recent advances in computing systems have led to a new digital era in which nearly every area of life is interrelated with information technology. However, with the trend towards large-scale IT systems, a new challenge has emerged. The complexity of IT systems is becoming an obstacle that hampers the manageability, operability, and maintainability of modern computing infrastructures. Autonomic computing has emerged to provide an answer to these ever-growing pitfalls. Fundamentally, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate complex IT processes without human intervention. This paper proposes an autonomic HTML web-interface generator based on XML Schema and Style Sheet specifications for self-configuring graphical user interfaces of web applications. The goal of this autonomic generator is to automate the process of customizing GUI web-interfaces according to the ever-changing business rules, policies, and operating environment with th...
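Self-configuring interface generation can be caricatured in a few lines: field definitions (plain dicts here, standing in for XML Schema elements) are rendered into an HTML form, so changing the definitions changes the interface without hand-editing HTML. The field vocabulary is invented, not the paper's actual schema.

```python
from html import escape

def render_form(fields):
    """Render a list of field definitions into an HTML form; each
    definition supplies a name, a label and an optional input type."""
    rows = []
    for fld in fields:
        rows.append(
            f'<label>{escape(fld["label"])} '
            f'<input name="{escape(fld["name"])}" type="{fld.get("type", "text")}"/>'
            f'</label>'
        )
    return "<form>\n" + "\n".join(rows) + "\n</form>"

print(render_form([{"name": "user", "label": "User name"},
                   {"name": "age", "label": "Age", "type": "number"}]))
```

In the proposed generator the field definitions would come from an XML Schema and the presentation rules from a stylesheet, so the same mechanism adapts the GUI when either changes.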
Katayama, T.; Arakawa, K.; Nakao, M; Prins, J.C.P.
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands for efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain speci...
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord; Bonnal, Raoul JP; Bruskiewich, Richard; Bryne, Jan C
Abstract Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands for efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited...
The present paper covers a generic and dynamic framework for the web publishing of bioinformatics databases based upon a metadata design, JavaBeans, Java Server Pages (JSP), the Extensible Markup Language (XML), the Extensible Stylesheet Language (XSL) and Extensible Stylesheet Language Transformations (XSLT). In this framework, the content is stored in a configurable and structured XML format, dynamically generated from an Oracle Relational Database Management System (RDBMS). The presentation is dynamically generated by transforming the XML document into HTML through XSLT. This clean separation between content and presentation makes web publishing more flexible; changing the presentation only requires a modification of the stylesheet (XSL).
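Python's standard library has no XSLT processor, so the sketch below mimics the content-to-presentation transformation step with ElementTree: the XML holds only content, and the rendering function plays the role of the stylesheet. The element names are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Content layer: structured XML, as the framework would generate from
# the RDBMS (invented element names).
CONTENT = "<entry><title>BLAST hit</title><score>98.5</score></entry>"

def to_html(xml_text):
    """Presentation layer: transform content XML into HTML, standing in
    for the XSLT stylesheet. Swapping this function changes the look
    without touching the content."""
    entry = ET.fromstring(xml_text)
    title = entry.findtext("title")
    score = entry.findtext("score")
    return f"<h2>{title}</h2><p>Score: {score}</p>"

print(to_html(CONTENT))
```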
Tomatis, N.; Moreau, B.
The Autonomous Systems Lab at the Swiss Federal Institute of Technology Lausanne (EPFL) is engaged in mobile robotics research. The lab's research focuses mainly on indoor localization and map building, outdoor locomotion and navigation, and micro mobile robotics. In the framework of a research project on mobile robot localization, a graphical web interface for our indoor robots has been developed. The purpose of this interface is twofold: it serves as a tool for task supervision for the rese...
WebFTS is a web-delivered file transfer and management solution which allows users to invoke reliable, managed data transfers on distributed infrastructures. The fully open source solution offers a simple graphical interface through which the power of the FTS3 service can be accessed without the installation of any special grid tools. Created following simplicity and efficiency criteria, WebFTS allows the user to access and interact with multiple grid and cloud storage systems. The “transfer engine” used is FTS3, the service responsible for distributing the majority of LHC data across the WLCG infrastructure. This provides WebFTS with reliable, multi-protocol, adaptively optimised data transfers. The talk will focus on the recent developments which allow transfers from/to Dropbox and CERNBox (the CERN ownCloud deployment).
and platform-independent web technology. This enables access to the RODOS system by remote users from all kinds of computer platforms with an Internet browser. The layout and content structure of this web interface have been designed and developed with a unique standardized interface layout and information structure under due consideration of the needs of the RODOS users. Two types of web-based interfaces have been realized. Category B: active users with access to the RODOS system via web browser. The interaction with RODOS is limited to levels (2) and (3) mentioned above: Category B users can only define interactive runs via input forms and select results from predefined information. They have no access to databases and cannot operate RODOS in its automatic mode. Category C: passive users with access via web browser and, if desired, via X-desktop, only to RODOS results produced by users of Category A or B. The Category B users define their requests to the RODOS system via an interactive web-based interface. The corresponding HTML file is sent to the RODOS web server, which transforms the information into RODOS-compatible input data, initiates the corresponding RODOS runs, produces an HTML results file, and returns it to the web browser. The web browser receives the HTML file, interprets the page content, and displays the page. The layout, content, and functions of the new web-based interface for Category B and Category C users will be demonstrated. Example interactive runs will show the interaction with the RODOS system. (author)
Recent advances in computing systems have led to a new digital era in which every area of life is nearly interrelated with information technology. However, with the trend towards large-scale IT systems, a new challenge has emerged. The complexity of IT systems is becoming an obstacle that hampers the manageability, operability, and maintainability of modern computing infrastructures. Autonomic computing popped up to provide an answer to these ever-growing pitfalls. Fundamentally, autonomic systems are self-configuring, self-healing, self-optimizing, and self-protecting; hence, they can automate all complex IT processes without human intervention. This paper proposes an autonomic HTML web-interface generator based on XML Schema and Style Sheet specifications for self-configuring graphical user interfaces of web applications. The goal of this autonomic generator is to automate the process of customizing GUI web-interfaces according to the ever-changing business rules, policies, and operating environment with the least IT labor involvement. The conducted experiments showed a successful automation of web interface customization that dynamically self-adapts to keep up with the always-changing business requirements. Future research can improve upon the proposed solution so that it supports the self-configuring of not only web applications but also desktop applications.
Dragut, Eduard C; Yu, Clement T
There are millions of searchable data sources on the Web and to a large extent their contents can only be reached through their own query interfaces. There is an enormous interest in making the data in these sources easily accessible. There are primarily two general approaches to achieve this objective. The first is to surface the contents of these sources from the deep Web and add the contents to the index of regular search engines. The second is to integrate the searching capabilities of these sources and support integrated access to them. In this book, we introduce the state-of-the-art tech
Kabisch, Thomas; Dragut, Eduard; Yu, Clement; Leser, Ulf
Much data in the Web is hidden behind Web query interfaces. In most cases the only means to "surface" the content of a Web database is by formulating complex queries on such interfaces. Applications such as Deep Web crawling and Web database integration require an automatic usage of these interfaces. Therefore, an important problem to be addressed is the automatic extraction of query interfaces into an appropriate model. We hypothesize the existence of a set of domain-independent "commonsense...
Pritychenko,B.; Sonzogni, A.A.
We present the Sigma Web interface, which provides user-friendly access for online analysis and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The interface includes advanced browsing and search capabilities, interactive plots of cross sections, angular distributions and spectra, nubars, comparisons between evaluated and experimental data, computations for cross section data sets, pre-calculated integral quantities, neutron cross section uncertainty plots, and visualization of covariance matrices. Sigma is publicly available at the National Nuclear Data Center website at http://www.nndc.bnl.gov/sigma.
A web-based interface dedicated to a cluster computer which is publicly accessible for free is introduced. The interface plays an important role in enabling secure public access, while providing a user-friendly computational environment for end-users and easy maintenance for administrators as well. The whole architecture, which integrates both hardware and software aspects, is briefly explained. It is argued that the public cluster is a globally unique approach, and could be a new kind of e-learning system, especially for parallel programming communities.
Baohua Qiang; Rui Zhang; Yufeng Wang; Qian He; Wei Li; Sai Wang
How to cluster different query interfaces effectively is one of the core issues in generating an integrated query interface in the Deep Web integration domain. However, with the rapid development of Internet technology, the number of Deep Web query interfaces shows an explosive growth trend. For this reason, traditional stand-alone Deep Web query interface clustering approaches encounter bottlenecks in terms of time complexity and space complexity. After further study of the Hadoop distrib...
The current paper proposes a smart web interface designed for monitoring the status of elderly people. There are four main user types in the web application: the administrator (who has power access to all the application’s functionalities), the patient (who has access to his own personal data, like parameter history and personal details), relatives of the patient (who have administrable access to the person in care, access that is defined by the patient), and the medic (who can view the medical history of the patient and prescribe different medications or interpret the received parameter data). The main purpose of this web application is to receive and analyze data received from body sensors like accelerometers, EKG or GSR sensors, or even ambient sensors like gas detectors, humidity, pressure or temperature sensors. After processing the harvested information, the web application decides if an alert has to be triggered and sends it to a specialized call center (for example, if the patient’s body temperature is over 40 degrees Celsius).
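The alert decision described above amounts to a threshold check over incoming sensor readings. A minimal sketch follows; the field names and the gas rule are illustrative assumptions, and only the 40 °C body-temperature threshold comes from the abstract:

```python
# Minimal sketch of the alert logic described above.
# Field names and the gas rule are illustrative assumptions.
BODY_TEMP_MAX_C = 40.0  # threshold mentioned in the abstract

def check_readings(readings):
    """Return a list of alert messages for out-of-range sensor values."""
    alerts = []
    temp = readings.get("body_temperature_c")
    if temp is not None and temp > BODY_TEMP_MAX_C:
        alerts.append(f"body temperature {temp} C exceeds {BODY_TEMP_MAX_C} C")
    if readings.get("gas_detected"):
        alerts.append("ambient gas detected")
    return alerts
```

In a system like the one described, a non-empty list would be forwarded to the call center rather than shown to the patient.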
Panichi, Giancarlo; Coro, Gianpaolo
In this document we describe the DataMiner Manager Web interface that allows interacting with the gCube DataMiner service. DataMiner is a cross-usage service that provides users and services with tools for performing data mining operations. Specifically, it offers a unique access to perform data mining and statistical operations on heterogeneous data, which may reside either at client side, in the form of comma-separated values files, or be remotely hosted, possibly in a database. The DataMin...
Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in (i) a workflow to annotate 100,000 sequences from an invertebrate species; (ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; (iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; (iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: (i) the absence of several useful data or analysis functions in the Web service "space"; (ii) the lack of documentation of methods; (iii) lack of
One important problem in bioinformatics is to study pockets or tunnels within the protein structure. These pocket or tunnel regions are significant because they indicate areas of ligand binding or enzymatic reactions, and tunnels are often solvent ion conductance areas. The Protein Pocket Viewer (PPV) is a web interface that allows the user to extract and visualize protein pockets in a browser, based on the algorithm in . The PPV packages the pocket extraction executable as a web service, making it accessible to all users with Internet access and a modern Java-enabled browser. The PPV employs the Model-2 design pattern, which leads to a loosely coupled implementation that is more robust and easier to maintain. It consists of a client web interface for user inputs and visualization, a middle layer for controlling the flow, and backend web services performing the actual CPU-intensive computation. The PPV web client consists of multiple window regions, with each region providing differing views of the protein, pockets and related information. For a more responsive user experience, the PPV web client employs AJAX for asynchronous execution of long-running tasks, like protein pocket extraction.
Topic management is the task of gathering, evaluating, organizing, and sharing a set of web sites for a specific topic. Current web tools do not provide adequate support for this task. We created and continue to develop the TopicShop system to address this need. TopicShop includes (1) a web crawler/analyzer that discovers relevant web sites and builds site profiles, and (2) user interfaces for information workspaces. We conducted an empirical pilot study comparing user performance with Topi...
Chen, Xihui [ORNL]; Kasemir, Kay [ORNL]
Face-to-face bioinformatics courses commonly include a weekly, in-person computer lab to facilitate active learning, reinforce conceptual material, and teach practical skills. Similarly, fully-online bioinformatics courses employ hands-on exercises to achieve these outcomes, although students typically perform this work offsite. Combining a…
Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson
This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…
Ad Gateway is a product which provides targeting of online advertisements. This thesis focuses on improving its web-based admin interface, used for defining targeting properties as well as creating the advertisements. First, the system is integrated with third-party web services used for storage of advertisement campaigns, called advertisement providers, allowing advertisements to be retrieved and uploaded directly to their servers. Later, the admin interface is extended with new functionality, i...
Dutta, S.; Prakash, S.; Estrada, D.; Pop, E.
A lightweight Web Service and a Web site interface have been developed, which enable remote measurements of electronic devices as a "virtual laboratory" for undergraduate engineering classes. Using standard browsers without additional plugins (such as Internet Explorer, Firefox, or even Safari on an iPhone), remote users can control a Keithley…
Lee, MW; Chen, SY; Liu, X.
Web-based technology has already been adopted as a tool to support teaching and learning in higher education. One criterion affecting the usability of such a technology is the design of web-based interface (WBI) within web-based learning programs. How different users access the WBIs has been investigated by several studies, which mainly analyze the collected data using statistical methods. In this paper, we propose to analyze users’ learning behavior using Data Mining (DM) techniques. Finding...
Katayama, Toshiaki; Arakawa, Kazuharu; Nakao, Mitsuteru; Ono, Keiichiro; Aoki-Kinoshita, Kiyoko F; Yamamoto, Yasunori; Yamaguchi, Atsuko; Kawashima, Shuichi; Chun, Hong-Woo; Aerts, Jan; Aranda, Bruno; Barboza, Lord Hendrix; Bonnal, Raoul Jp; Bruskiewich, Richard; Bryne, Jan C; Fernández, José M; Funahashi, Akira; Gordon, Paul Mk; Goto, Naohisa; Groscurth, Andreas; Gutteridge, Alex; Holland, Richard; Kano, Yoshinobu; Kawas, Edward A; Kerhornou, Arnaud; Kibukawa, Eri; Kinjo, Akira R; Kuhn, Michael; Lapp, Hilmar; Lehvaslaiho, Heikki; Nakamura, Hiroyuki; Nakamura, Yasukazu; Nishizawa, Tatsuya; Nobata, Chikashi; Noguchi, Tamotsu; Oinn, Thomas M; Okamoto, Shinobu; Owen, Stuart; Pafilis, Evangelos; Pocock, Matthew; Prins, Pjotr; Ranzinger, René; Reisinger, Florian; Salwinski, Lukasz; Schreiber, Mark; Senger, Martin; Shigemoto, Yasumasa; Standley, Daron M; Sugawara, Hideaki; Tashiro, Toshiyuki; Trelles, Oswaldo; Vos, Rutger A; Wilkinson, Mark D; York, William; Zmasek, Christian M; Asai, Kiyoshi; Takagi, Toshihisa
Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for an effective advance in bioinformatics web service technologies. PMID:20727200
Schmid Amy K
Abstract Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the
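The microformat idea mentioned above is to tag records in ordinary HTML with well-known class names so that a browser extension can scrape them reliably. A minimal sketch using only the Python standard library follows; the class names ("gaggle-data", "gaggle-name") are illustrative assumptions, not the exact Gaggle microformat:

```python
# Sketch of microformat-style extraction: pull structured records out of
# HTML by CSS class. Class names here are illustrative assumptions.
from html.parser import HTMLParser

class MicroformatParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self._capture = False
        self.names = []  # extracted record names

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if ("class", "gaggle-name") in attrs:
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.names.append(data.strip())
            self._capture = False

parser = MicroformatParser()
parser.feed('<div class="gaggle-data"><span class="gaggle-name">VNG1179C</span></div>')
```

A tool like Firegoose could then hand `parser.names` to a desktop application instead of re-parsing the page's layout.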
Aiftimiei, C; Pra, S D; Fantinel, S [INFN-Padova, Padova (Italy)]; Andreozzi, S; Fattibene, E; Misurelli, G [INFN-CNAF, Bologna (Italy)]; Cuscela, G; Donvito, G; Dudhalkar, V; Maggi, G; Pierro, A [INFN-Bari, Bari (Italy)]
A monitoring tool for a complex Grid system can gather a huge amount of information that has to be presented to users in the most comprehensive way. Moreover, different types of consumers could be interested in inspecting and analyzing different subsets of data. The main goal in designing a Web interface for the presentation of monitoring information is to organize the huge amount of data into a simple, user-friendly and usable structure. A further problem is to consider the different approaches, skills and interests that all the possible categories of users have in looking for the desired information. Starting from the Information Architecture guidelines for the Web, it is possible to design Web interfaces that offer a closer user experience and support advanced user interaction through the implementation of many Web standard technologies. In this paper, we present a number of principles for the design of Web interfaces for monitoring tools that provide a wider, richer range of possibilities with regard to user interaction. These principles are based on an extensive review of the current literature in Web design and on the experience of developing the GridICE monitoring tool. The described principles can drive the evolution of the Web interfaces of Grid monitoring tools.
Fox, Joanne A.; Butland, Stefanie L.; McMillan, Scott; Campbell, Graeme; Ouellette, B. F. Francis
The Bioinformatics Links Directory is an online community resource that contains a directory of freely available tools, databases, and resources for bioinformatics and molecular biology research. The listing of the servers published in this and previous issues of Nucleic Acids Research together with other useful tools and websites represents a rich repository of resources that are openly provided to the research community using internet technologies. The 166 servers highlighted in the 2005 We...
Carlisle, W. H.
This paper reports on investigations into how to extend the capabilities of the Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1996 Summer Faculty Fellowship program, and involved research into and prototype development of software components that provide documents and services for the World Wide Web (WWW). The WWW has become a de facto standard for sharing resources over the internet, primarily because web browsers are freely available for the most common hardware platforms and their operating systems. As a consequence of the popularity of the internet, tools and techniques associated with web browsers are changing rapidly. New capabilities are offered by companies that support web browsers in order to achieve or remain a dominant participant in internet services. Because a goal of the VRC is to build an environment for NASA centers, universities, and industrial partners to share information associated with Advanced Concepts Office activities, the VRC tracks new techniques and services associated with the web in order to determine their usefulness for distributed and collaborative engineering research activities. Most recently, Java has emerged as a new tool for providing internet services. Because the major web browser providers have decided to include Java in their software, investigations into Java were conducted this summer.
LIU Wei; LIN Can; MENG Xiaofeng
A vision-based query interface annotation method is used to relate attributes and form elements in form-based web query interfaces; this method can reach an accuracy of 82%. A user participation method is used to tune the result: users can answer "yes" or "no" for existing annotations, or manually annotate form elements. Mass feedback is added to the annotation algorithm to produce more accurate results. By this approach, query interface annotation can reach perfect accuracy.
Abstract Background High-throughput sequencing makes it possible to rapidly obtain thousands of 16S rDNA sequences from environmental samples. Bioinformatic tools for the analyses of large 16S rDNA sequence databases are needed to comprehensively describe and compare these datasets. Results FastGroupII is a web-based bioinformatics platform to dereplicate large 16S rDNA libraries. FastGroupII provides users with the option of four different dereplication methods, performs rarefaction analysis, and automatically calculates the Shannon-Wiener Index and Chao1. FastGroupII was tested on a set of 16S rDNA sequences from coral-associated Bacteria. The different grouping algorithms produced similar, but not identical, results. This suggests that 16S rDNA datasets need to be analyzed in multiple ways when being used for community ecology studies. Conclusion FastGroupII is an effective bioinformatics tool for the trimming and dereplication of 16S rDNA sequences. Several standard diversity indices are calculated, and the raw sequences are prepared for downstream analyses.
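The two diversity indices FastGroupII reports have standard textbook definitions, which can be computed directly from the per-group sequence counts produced by dereplication (this is the general formula, not FastGroupII's own code):

```python
# Standard diversity-index formulas over per-group sequence counts.
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln p_i) over groups with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 = S_obs + F1^2 / (2 * F2),
    where F1 = number of singleton groups, F2 = number of doubletons."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        # bias-corrected variant avoids division by zero
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)
```

For example, a library dereplicated into groups of sizes [2, 2, 1, 1, 1] has 5 observed groups, 3 singletons, and 2 doubletons, giving Chao1 = 5 + 9/4 = 7.25.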
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
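The appeal of a Semantic-JSON-style interface is that linked-data fragments arrive as plain JSON that scripting languages parse natively. A minimal client-side sketch follows; the response shape below is an illustrative assumption, not the actual SciNetS.org schema:

```python
# Sketch of consuming a Semantic-JSON-style response: parse a JSON
# fragment describing a linked record and collect its link targets.
# The record shape is an illustrative assumption.
import json

response_text = """
{
  "item": "example-record-1",
  "label": "example gene record",
  "links": [
    {"predicate": "memberOf", "target": "database-A"},
    {"predicate": "seeAlso",  "target": "database-B"}
  ]
}
"""

record = json.loads(response_text)
targets = [link["target"] for link in record["links"]]
```

A real client would fetch `response_text` over HTTP from the service endpoint and then follow `targets` to traverse the linked databases.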
Hediger, Martin R; De Vico, Luca
We present a web interface for the BioFET-SIM program. The web interface allows users to conveniently set up calculations based on the BioFET-SIM multiple-charges model. As an illustration, two case studies are presented. In the first case, a generic peptide with opposite charges on both ends is inverted in orientation on a semiconducting nanowire surface, leading to a corresponding change in sign of the computed sensitivity of the device. In the second case, the binding of an antibody/antigen complex on the nanowire surface is studied in terms of orientation and analyte/nanowire surface distance. We demonstrate how the BioFET-SIM web interface can aid in the understanding of experimental data and postulate alternative ways of antibody/antigen orientation on the nanowire surface.
España Bonet, Cristina; Vila Rigat, Marta; Rodríguez Hontoria, Horacio; Martí, Maria Antònia
CoCo is a collaborative web interface for the compilation of linguistic resources. In this demo we present one of its possible applications: paraphrase acquisition.
Fisher, S. J.; Levik, K. E.; Williams, M. A.; Ashton, A. W.; McAuley, K.E.
SynchWeb is a modern interface to the ISPyB database. It significantly simplifies sample registration and is targeted towards live data collection monitoring and remote access for macromolecular crystallography. It adds a variety of new features including project management, an integrated diffraction image viewer, and a map and model viewer, as well as displaying results from automated analysis pipelines. Virtually all aspects of an experiment can be monitored through the web browser and the ...
Daumke, Philipp; Schulz, Stefan; Markó, Kornél
Medical document retrieval presents a unique combination of challenges for the design and implementation of retrieval engines. We introduce a method to meet these challenges by implementing a multilingual retrieval interface for biomedical content in the World Wide Web. To this end we developed an automated method for interlingual query construction by which a standard Web search engine is enabled to process non-English queries from the biomedical domain in order to retrieve English documents. PMID:16779221
With the popularity of 3C devices, visual content is all around us, in online games, touch pads, video and animation; text-based web pages no longer satisfy users. As webcams, digital cameras, stereoscopic glasses and head-mounted displays spread, user interfaces become more visual and multi-dimensional. For 3D and visual display in research on web user interface design, Augmented Reality technology, which provides convenient tools and impressive effects, has become a hot topic. Augmented Reality enables users to overlay digital objects on top of the physical surroundings, and its easy operation with a webcam greatly improves the visual presentation of web pages, which is the interest of our research. We therefore applied Augmented Reality technology to develop a city-tour web site and collected the opinions of its users, taking website stickiness as an important measurement. The major tasks of the work include the exploration of Augmented Reality technology and the evaluation of its outputs; the feedback from users provides valuable references for improving AR applications. As a result, the AR-enhanced visual and interactive effects of the web pages encourage users to stay longer, and more than 80% of users are willing to return to the website soon. Several valuable conclusions about Augmented Reality technology in web user interface design are also provided for further practical reference.
The lecture will introduce the new functions and graphic design of WebSOD, the web interface of the Personal Dosimetry Service of VF, a.s., which will be updated in November 2014. The new interface will have a new graphic design and an intuitive control system, and will provide a range of new functions: - Personal doses - display of personal doses from personal, extremity and neutron dosimeters, including graphs and annual and electronic listings of doses; - Collective doses - display of group doses for selected periods of time; - Reference levels - setting and display of three reference levels; - Evidence - administration of monitored individuals: beginning and ending of monitoring, or editing the data of monitored persons and centers. (author)
Lin, Ling; Zhou, Lizhu
Web databases provide different types of query interfaces to access the data records stored in backend databases. While most existing work exploits complex query interfaces with multiple input fields to perform schema identification of Web databases, little attention has been paid to identifying the schema of Web databases through a simple query interface (SQI), which has only a single query text input field. This paper proposes a new method of instance-based query probing to identify a WDB's interface and result schema for an SQI. The interface schema identification problem is defined as generating the full-condition query of the SQI, and a novel query probing strategy is proposed. The result schema is identified from the result webpages of the SQI's full-condition query, and an extended identification of non-query attributes is proposed to improve the attribute recall rate. Experimental results on web databases of online shopping for books, movies and mobile phones show that our method is effective and efficient.
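The core idea of instance-based query probing can be sketched as follows: submit known instance values through the single text field and observe which result attributes echo them back. The mock book database and its field names are illustrative assumptions, not the paper's actual test data.

```python
# Mock backend standing in for a Web database reached through a
# simple query interface (one free-text field).
BOOKS = [
    {"title": "Dune", "author": "Herbert", "publisher": "Chilton"},
    {"title": "Emma", "author": "Austen", "publisher": "Murray"},
]

def sqi_search(keyword):
    """Mock SQI: the single keyword is matched against every attribute."""
    return [r for r in BOOKS if keyword in r.values()]

def probe_attribute(instance_value):
    """Probe with a known instance value and infer its attribute."""
    for record in sqi_search(instance_value):
        for attr, val in record.items():
            if val == instance_value:
                return attr
    return None

assert probe_attribute("Austen") == "author"
```

Repeating such probes with instances of different known types is what lets the schema of the interface and of the result pages be recovered without a multi-field form.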
Georgiana-Petruţa Fîntîneanu; Florentina Anica Pintea
The present article aims to describe a project consisting in designing a framework of applications used to create graphical interfaces with an Oracle distributed database. The development of the project supposed the use of the latest technologies: database Oracle server, Tomcat web server, JDBC (Java library used for accessing a database), JSP and Tag Library (for the development of graphical interfaces).
Invenio is an open source web-based application that implements a digital library or document server; it is used at CERN as the base of the CERN Document Server Institutional Repository and the Inspire High Energy Physics Subject Repository. The purpose of this work was to reimplement the administrative interface of the search engine in Invenio using new and proven open source technologies, to simplify the code base, and to lay the foundations for the work of porting the rest of the administrative interfaces to these newer technologies. In my time as a CERN openlab summer student I was able to implement some of the features of the WebSearch Admin Interfaces, enhance some of the existing code with new features, and find solutions to technical challenges that will be common when porting the other administrative interface modules.
Moreau, B.; Tomatis, N.; Arras, K.O.; Jensen, B.; Siegwart, R.
In this paper we present a multi-modal web interface for autonomous mobile robots. The purpose of this interface is twofold. It serves as a tool for task supervision for the researcher and task specification for the end-user. The applications envisaged are typical service scenarios like remote inspection, transportation tasks or tour guiding. Instead of post-processing a huge amount of data gathered and stored during operation, it is very desirable for the developer to monitor specific inter...
The macroscopic properties of materials can be significantly influenced by the presence of microscopic interfaces. The complexity of these interfaces coupled with the vast configurational space in which they reside has been a long-standing obstacle to the advancement of true bottom-up material behavior predictions. In this vein, atomistic simulations have proven to be a valuable tool for investigating interface behavior. However, before atomistic simulations can be utilized to model interface behavior, meaningful interface atomic structures must be generated. The generation of structures has historically been carried out disjointly by individual research groups, and thus, has constituted an overlap in effort across the broad research community. To address this overlap and to lower the barrier for new researchers to explore interface modeling, we introduce a web-based interface structure databank (www.isdb.cee.cornell.edu) where users can search, download and share interface structures. The databank is intended to grow via two mechanisms: (1) interface structure donations from individual research groups and (2) an automated structure generation algorithm which continuously creates equilibrium interface structures. In this paper, we describe the databank, the automated interface generation algorithm, and compare a subset of the autonomously generated structures to structures currently available in the literature. To date, the automated generation algorithm has been directed toward aluminum grain boundary structures, which can be compared with experimentally measured population densities of aluminum polycrystals. (paper)
Serban, Alexandru; Crisan-Vida, Mihaela; Mada, Leonard; Stoicu-Tivadar, Lacramioara
User interfaces are important for easy learning and operation of an IT application, especially in the medical world. An easy-to-use interface has to be simple and to accommodate the user's needs and mode of operation, and the underlying technology is an important tool to accomplish this. The present work aims to create a web interface using specific technology (HTML table design combined with CSS3) to provide an optimized responsive interface for a complex web application. In the first phase, the layout of the current icMED web medical application is analyzed, and its structure is designed using specific tools on source files. In the second phase, a new graphical interface adaptable to different mobile terminals is proposed (using HTML table design (TD) and the CSS3 method) that uses no source files, just lines of code for the layout design, improving the interaction in terms of speed and simplicity. For a complex medical software application, a new prototype layout was designed and developed using HTML tables. The method uses CSS code with only CSS classes applied to one or multiple HTML table elements, instead of CSS styles that can be applied to just one DIV tag at a time. The technique has the advantage of simplified CSS code and better adaptability to different media resolutions compared to the DIV-CSS style method. The presented work is proof that adaptive web interfaces can be developed just by using and combining different types of design methods and technologies, using HTML table design, resulting in an interface that is simpler to learn and use, suitable for healthcare services. PMID:27139407
LI Lin; SHEN Liren; ZHU Qing; WAN Tianmin
The accelerator database stores various static parameters and real-time data of the accelerator. SSRF (Shanghai Synchrotron Radiation Facility) adopts a relational database to save the data. We developed a data retrieval system based on XML Web Services for accessing the archive data. It includes a bottom-layer interface and an interface applicable to accelerator physics. Client samples exemplifying how to consume the interface are given; users can browse, retrieve and plot data with these client samples. We also give a method to test its stability. The test results and performance are described.
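A client consuming such an XML-based archive service might parse responses as sketched below. The element and attribute names (`sample`, `pv`, `time`, `value`) are hypothetical stand-ins, not the actual SSRF interface, and a literal string replaces the network call.

```python
import xml.etree.ElementTree as ET

# Canned response standing in for an XML Web Services reply with
# archived accelerator readings (schema is an illustrative assumption).
RESPONSE = """
<archive>
  <sample pv="SR:BeamCurrent" time="2010-01-01T00:00:00" value="200.1"/>
  <sample pv="SR:BeamCurrent" time="2010-01-01T00:00:10" value="199.8"/>
</archive>
"""

def parse_samples(xml_text):
    """Extract (timestamp, value) pairs from an archive response."""
    root = ET.fromstring(xml_text)
    return [(s.get("time"), float(s.get("value"))) for s in root.iter("sample")]

samples = parse_samples(RESPONSE)
```

From pairs like these, a client can directly browse, tabulate or plot the archived data, which is the usage pattern the client samples in the paper demonstrate.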
Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young
Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for both individuals and nations alike. Although the prevention and treatment of IA are getting more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanism of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not been developed yet. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, which is a guideline for medical information standards, and uses OAuth 2.0 for access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and be applied to genomic research on IA. PMID:27103887
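The OAuth 2.0 access-control step the IABio design relies on can be sketched as a toy in-memory authorization-code exchange. This simplifies the real protocol heavily (no redirect URIs, client secrets, or expiry), and all names are illustrative assumptions.

```python
import secrets

AUTH_CODES = {}   # one-time code -> (client_id, scope)
TOKENS = {}       # access token  -> granted scope

def issue_code(client_id, scope):
    """Authorization endpoint: hand out a short-lived one-time code."""
    code = secrets.token_hex(8)
    AUTH_CODES[code] = (client_id, scope)
    return code

def exchange_code(code, client_id):
    """Token endpoint: trade a one-time code for an access token."""
    stored = AUTH_CODES.pop(code, None)   # pop => codes are single-use
    if stored is None or stored[0] != client_id:
        return None
    token = secrets.token_hex(16)
    TOKENS[token] = stored[1]
    return token

code = issue_code("genome-app", "read:variants")
token = exchange_code(code, "genome-app")
assert TOKENS[token] == "read:variants"
assert exchange_code(code, "genome-app") is None  # replay is rejected
```

The single-use code and the scope attached to each token are the two properties that make this grant suitable for guarding access to sensitive genomic records.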
Thiel, William H; Giangrande, Paloma H
The development of DNA and RNA aptamers for research as well as diagnostic and therapeutic applications is a rapidly growing field. In the past decade, the process of identifying aptamers has been revolutionized with the advent of high-throughput sequencing (HTS). However, bioinformatics tools that enable the average molecular biologist to analyze these large datasets and expedite the identification of candidate aptamer sequences have been lagging behind the HTS revolution. The Galaxy Project was developed in order to efficiently analyze genome, exome, and transcriptome HTS data, and we have now applied these tools to aptamer HTS data. The Galaxy Project's public webserver is an open source collection of bioinformatics tools that are powerful, flexible, dynamic, and user friendly. The online nature of the Galaxy webserver and its graphical interface allow users to analyze HTS data without compiling code or installing multiple programs. Herein we describe how tools within the Galaxy webserver can be adapted to pre-process, compile, filter and analyze aptamer HTS data from multiple rounds of selection. PMID:26481156
The COOL database in ATLAS is primarily used for storing detector conditions data, but also status flags, which are uploaded summaries of information indicating detector reliability during a run. This paper introduces the use of CherryPy, a Python application server which acts as an intermediate layer between a web interface and the database, providing a simple means of storing to and retrieving from the COOL database that has found use in many web applications. The software layer is designed to be RESTful, implementing the common CRUD (Create, Read, Update, Delete) database methods by interpreting the HTTP method (POST, GET, PUT, DELETE) on the server along with a URL identifying the database resource to be operated on. The format of the data (text, XML, etc.) is also determined by the HTTP protocol. The details of this layer are described, along with a popular application demonstrating its use: the ATLAS run list web page.
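The HTTP-method-to-CRUD mapping described above can be sketched framework-independently as a small dispatcher over an in-memory store. The resource paths and payloads are illustrative; a real deployment would route these handlers through an application server such as CherryPy and back them with the COOL database.

```python
STORE = {}  # in-memory stand-in for the conditions/status-flag store

def handle(method, resource, body=None):
    """Map the HTTP verb onto the CRUD operation for one resource URL."""
    if method == "POST":                # Create
        STORE[resource] = body
        return 201, body
    if method == "GET":                 # Read
        return (200, STORE[resource]) if resource in STORE else (404, None)
    if method == "PUT":                 # Update
        STORE[resource] = body
        return 200, body
    if method == "DELETE":              # Delete
        STORE.pop(resource, None)
        return 204, None
    return 405, None                    # method not allowed

handle("POST", "/flags/run42", {"status": "green"})
status, flag = handle("GET", "/flags/run42")
```

Because the verb alone selects the operation, clients need to know only the resource URL and the HTTP protocol, which is what makes the layer reusable across many web applications.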
Kamlesh Sharma; Dr. S.V.A.V. Prasad; Prasad, Dr. T. V.
Aiming at increased system simplicity and flexibility, an audio-evoked system was developed by integrating a simplified headphone with a user-friendly software design. This paper describes a Hindi Speech Actuated Computer Interface for Web search (HSACIWS), which accepts spoken queries in the Hindi language and provides the search results on the screen. This system recognizes spoken queries by large-vocabulary continuous speech recognition (LVCSR) and retrieves relevant documents by text retrieval, ...
Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu
MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using human phenotype ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources. PMID:26919060
Manuel Juárez Pacheco
This paper describes a pilot study carried out to compare two Web interfaces used to support a collaborative learning design for science education. The study is part of a wider research project which aims to characterize computer software for collaborative learning in science education. The results of a questionnaire applied to teachers and researchers reveal the need to design technological tools based mainly on users' needs and to take into account the impact of these tools on the learning of curricular contents.
Pritychenko,B.; Sonzogni, A.A.
The authors present Sigma, a rich Web application that provides user-friendly access to the processing and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, and computations between different cross section sets. Interactive energy-angle and neutron cross section uncertainty plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma.
Ernst, D. R.
We are developing a web interface to connect plasma microturbulence simulation codes with experimental data. The website automates the preparation of gyrokinetic simulations utilizing plasma profile and magnetic equilibrium data from TRANSP analysis of experiments, read from MDSPLUS over the internet. This database-driven tool saves user sessions, allowing searches of previous simulations, which can be restored to repeat the same analysis for a new discharge. The website includes a multi-tab, multi-frame, publication-quality Java plotter, Webgraph, developed as part of this project. Input files can be uploaded as templates and edited with context-sensitive help. The website creates inputs for GS2 and GYRO using a well-tested and verified back-end, in use for several years for the GS2 code [D. R. Ernst et al., Phys. Plasmas 11(5) 2637 (2004)]. A centralized web site has the advantage that users receive bug fixes instantaneously, while avoiding the duplicated effort of local compilations. Possible extensions to the database to manage run outputs, toward prototyping for the Fusion Simulation Project, are envisioned. Much of the web development utilized support from the DoE National Undergraduate Fellowship program [e.g., A. Suarez and D. R. Ernst, http://meetings.aps.org/link/BAPS.2005.DPP.GP1.57].
The Internet, and in particular the World-Wide Web, has provided tremendous opportunities for enabling access and transfer of information. Traditionally, Internet services have relied on textual methods for delivery of information. The World-Wide Web (WWW) in its current (and ever-changing) form is primarily a method of communication which includes both graphical and textual information. The easy-to-use graphical interface, developed as part of the WWW, is based on the Hypertext Mark-up Language (HTML). More advanced interfaces can be developed by incorporating interactive documents, which can be updated depending upon the wishes of the user. The Common Gateway Interface (CGI) can be utilised to transfer information by utilising various programming and scripting languages (e.g. C, Perl). This paper describes the development of a WWW interface for the viewing of anatomical and radiographic information in the form of two-dimensional cross-sectional images and three-dimensional reconstruction images. HTML documents were prepared using a commercial software program (HotDog, Sausage Software Co., Australia). Forms were used to control user-selection parameters such as imaging modality and cross-sectional slice number. All documents were developed and tested using Netscape 2.0. Visual and radiographic images were processed using ANALYZE™ Version 7.5 (Biomedical Imaging Resource, Mayo Foundation, Rochester, USA). Perl scripting was used to process all requests passed to the WWW server. ANSI C programming was used to implement image processing operations performed in response to user-selected options. The interface which has been developed is easy to use, is independent of browsing software, is accessible by multiple users, and provides an example of how easily medical imaging data can be distributed amongst interested parties. Various imaging datasets, including the Visible Human Project™ (National Library of Medicine, USA) have been prepared
The material control and accountancy system for the Fuel Conditioning Facility (FCF) initially uses calculated values for the mass flows of irradiated EBR-II driver fuel to be processed in the electrorefiner. These calculated values are continually verified by measurements performed by the Analytical Laboratory (AL) on samples from the fuel element chopper retained for each chopper batch. Measured values include U and Pu masses, U and Pu isotopic fractions, and burnup (via La and Tc). When the measured data become available, it is necessary to determine whether the measured and calculated data are consistent. This verification involves accessing two databases and performing standard statistical analyses to produce control charts for these measurements. These procedures can now be invoked via a Web interface providing timely and efficient control of these measurements, a user-friendly interface, off-site remote access to the data, and a convenient means of studying correlations among the data. This paper presents the architecture of the interface and a description of the control procedures, as well as examples of the control charts and correlations.
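The control-chart check behind such a verification can be sketched as follows: historical values define limits at the mean plus or minus three standard deviations, and new measurements outside those limits are flagged. The sample data are illustrative, not FCF measurements, and real charts would use the facility's own statistical procedures.

```python
import statistics

# Illustrative history of a measured quantity (e.g. a mass ratio).
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]

def control_limits(values, k=3.0):
    """Mean +/- k standard deviations (classic Shewhart-style limits)."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return mu - k * sigma, mu + k * sigma

def out_of_control(value, limits):
    """Flag a new measurement that falls outside the control limits."""
    lo, hi = limits
    return not (lo <= value <= hi)

limits = control_limits(history)
assert not out_of_control(10.05, limits)   # consistent with history
assert out_of_control(12.0, limits)        # inconsistent: investigate
```

Serving such a computation behind a web form is what lets the consistency of measured and calculated values be checked remotely and on demand.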
Lu, Qiang; Hao, Pei; Curcin, Vasa; He, Weizhong; Li, Yuan-Yuan; Luo, Qing-Ming; Guo, Yi-Ke; Li, Yi-Xue
Bioinformatics is a dynamic research area in which a large number of algorithms and programs have been developed rapidly and independently, without much consideration so far of the need for standardization. The lack of such common standards, combined with unfriendly interfaces, makes it difficult for biologists to learn how to use these tools and to translate data formats from one to another. Consequently, the construction of an integrative bioinformatics platform to facilitate biologists' research is an urgent and challenging task. KDE Bioscience is a Java-based software platform that collects a variety of bioinformatics tools and provides a workflow mechanism to integrate them. Nucleotide and protein sequences from local flat files, web sites, and relational databases can be entered, annotated, and aligned. Several homemade or third-party viewers are built in to provide visualization of annotations or alignments. KDE Bioscience can also be deployed in client-server mode, where simultaneous execution of the same workflow is supported for multiple users. Moreover, workflows can be published as web pages that can be executed from a web browser. The power of KDE Bioscience comes from its integrated algorithms and data sources. With its generic workflow mechanism, other novel calculations and simulations can be integrated to augment the current sequence analysis functions. Because of this flexible and extensible architecture, KDE Bioscience makes an ideal integrated informatics environment for future bioinformatics or systems biology research. PMID:16260186
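A generic workflow mechanism of this kind reduces, at its core, to chaining independent tools so that each one's output feeds the next. The toy steps below (sequence cleanup, GC-content calculation) are illustrative stand-ins for real integrated tools, not KDE Bioscience components.

```python
def run_workflow(data, steps):
    """Run a pipeline: each step consumes the previous step's output."""
    for step in steps:
        data = step(data)
    return data

def cleanup(seq):
    """Normalize a raw sequence string."""
    return seq.strip().upper()

def gc_content(seq):
    """Fraction of G/C bases in the sequence."""
    return sum(1 for b in seq if b in "GC") / len(seq)

# Chain the two tools over a raw input sequence.
result = run_workflow("  atgcgc \n", [cleanup, gc_content])
```

Because steps are plain callables with a uniform data-in/data-out contract, new calculations can be slotted into the chain without changing the workflow engine, which is the extensibility argument made above.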
Firmenich, Sergio; Rossi, Gustavo; Winckler, Marco Antonio; Palanque, Philippe
Currently, many of the tasks users engage in over the Web involve dealing with multiple Web sites. Moreover, whilst Web navigation was considered a solitary activity in the past, a large proportion of users are nowadays engaged in collaborative activities over the Web. In this paper we argue that these two aspects, collaboration and tasks spanning multiple Web sites, call for a level of coordination that requires Distributed User Interfaces (DUI). In this context, DUIs would play a ma...
Bayan Abu Shawar
In this paper, we describe a way to access an Arabic Web Question Answering (QA) corpus using a chatbot, without the need for sophisticated natural language processing or logical inference. Any Natural Language (NL) interface to a Question Answering (QA) system is constrained to reply with the given answers, so there is no need for NL generation to recreate well-formed answers, or for deep analysis or logical inference to map user input questions onto a logical ontology; a simple (but large) set of pattern-template matching rules will suffice. In previous research, this approach worked properly with English and other European languages. In this paper, we examine how the same chatbot reacts with an Arabic Web QA corpus. Initial results show that 93% of answers were correct, but because of many characteristics peculiar to the Arabic language, changing Arabic questions into other forms may lead to no answers.
Migoya Orue, Yenca O.; Nava, Bruno; Radicella, Sandro M.; Alazo Cuartas, Katy; Ciraolo, Luigi
A web front-end has been recently developed and released to allow retrieving and plotting ionospheric parameters computed by the latest version of the model, NeQuick 2. NeQuick is a quick-run ionospheric electron density model particularly designed for trans-ionospheric propagation applications. It has been developed at the Aeronomy and Radiopropagation Laboratory (now T/ICT4D Laboratory) of the Abdus Salam International Centre for Theoretical Physics (ICTP) - Trieste, Italy with the collaboration of the Institute for Geophysics, Astrophysics and Meteorology (IGAM) of the University of Graz, Austria. To describe the electron density of the ionosphere up to the peak of the F2 layer, NeQuick uses a profile formulation which includes five semi-Epstein layers with modelled thickness parameters. Through a simple web interface users can exploit all the model features including the possibility of computing the electron density and visualizing the corresponding Total Electron Content (TEC) along any ground-to-satellite straight line ray-path. Indeed, the TEC is the ionospheric parameter retrieved from the GPS measurements. It complements the experimental data obtained with diverse kinds of sensors and can be considered a major source of ionospheric information. Since the TEC is not a direct measurement, a "de-biasing" procedure or calibration has to be applied to obtain the relevant values from the raw GPS observables. Using the observation and navigation RINEX files corresponding to a single receiver as input data, the web application allows the user to compute the slant and/or vertical TEC following the concept of the "arc-by-arc" offsets estimation. The combined use of both tools, freely available from the T/ICT4D Web site, will allow the comparison of experimentally derived slant and vertical TEC with modelled values. An online demonstration of the capabilities of the mentioned web services will be illustrated.
Boughamoura, Radhouane; Omri, Mohamed Nazih
Deep Web databases contain more than 90% of the pertinent information on the Web. Despite their importance, users do not profit from this treasure. Many deep web services offer competitive services in terms of price, quality of service, and facilities. As the number of services grows rapidly, users have difficulty querying many web services at the same time. In this paper, we imagine a system where users can formulate one query using one query interface, and the system then translates the query to the rest of the query interfaces. However, since interfaces are created by designers to be interpreted visually by users, machines cannot interpret a query from a given interface. We propose a new approach that emulates the interpretive capacity of users and extracts queries from deep web query interfaces. Our approach has shown good performance on two standard datasets.
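Once a query has been extracted from one interface, forwarding it to other services reduces to rewriting field names per target interface. The sketch below shows only that translation step; the canonical query, interface names and field mappings are illustrative assumptions, not the paper's method for extracting them.

```python
# A query expressed against one "canonical" interface.
CANONICAL_QUERY = {"title": "Dune", "max_price": 20}

# Per-interface field-name mappings (hypothetical target services).
INTERFACE_MAPPINGS = {
    "shopA": {"title": "book_title", "max_price": "price_ceiling"},
    "shopB": {"title": "q", "max_price": "pmax"},
}

def translate(query, interface):
    """Rewrite the query's field names for one target interface."""
    mapping = INTERFACE_MAPPINGS[interface]
    return {mapping[k]: v for k, v in query.items()}

query_a = translate(CANONICAL_QUERY, "shopA")
query_b = translate(CANONICAL_QUERY, "shopB")
```

The hard part the paper addresses is building such mappings automatically from visually oriented interfaces; with them in hand, one user query fans out to every service.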
Maury, Abigail; Critchfield, Anna; Langston, Jim; Adams, Cynthia
The Java-based spacecraft web interface to telemetry and command handling, Jswitch, is a prototype, platform-independent user interface to a spacecraft command and control system that uses Java technology, readily available security software, standard World Wide Web protocols, and commercial off-the-shelf (COTS) products. The Java-based science analysis and trending tool, Jsat, is a major element in Jswitch. Jsat is an inexpensive, Web-based, information-on-demand science data trend analysis ...
Efficient delivery of relevant product information is increasingly becoming the central basis of competition between firms, and interface design is the central component of successful information delivery to consumers. However, interface design for web-based information systems is probably more an art than a science at this point in time, and much research is needed to understand the properties of an effective interface for electronic commerce. This paper develops a framework identifying the relationships between user factors, the role of the user interface and overall system success for web-based electronic commerce. The paper argues that web-based systems for electronic commerce have some properties similar to decision support systems (DSS) and adapts an established DSS framework to the electronic commerce domain. Based on the limited amount of research studying web browser interface design, the framework identifies areas where research is needed and outlines possible relationships between consumer characteristics, interface design attributes and measures of overall system success.
Mesbah, A.; Van Deursen, A.; Lenselink, S.
Usability and user experience are the driving concepts behind web user interface design today. This thesis explores web user interface usability as a concept, as well as how to apply it to the wide range of multilayered and lengthy material of the MineHealth project. The objective of this thesis is a pleasant and useful design for the MineHealth training and education material website. This thesis presents the process of web interface design commissioned by the MineHealth Kolarctic ENPI CBC ...
Deep Web search interfaces are the interfaces to Web databases and are essential for the integration of Deep Web databases. The search interface is defined according to the structural characteristics of Web forms. For the non-submission query method, some rules for identifying Deep Web search interfaces are given. A submission query method is proposed that finds pages containing a search interface using features of their link properties. Web pages are classified with the C4.5 decision tree algorithm, and the Deep Web search interface system is implemented in Java.
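The kind of classifier a C4.5 run over form features might produce can be sketched as a hand-written decision procedure. The features and thresholds below are illustrative assumptions, not the rules learned in the paper.

```python
def is_search_interface(form):
    """Decide whether a form looks like a deep-web search interface.

    `form` is a dict of structural features extracted from the page:
    counts of text inputs and select boxes, plus a flag for search
    keywords (e.g. "search"/"query") near the form controls.
    """
    if form["text_inputs"] == 0:
        return False                     # no free-text entry at all
    if form["has_search_keyword"]:
        return True                      # labeled as a search form
    # Fallback rule: several inputs plus a filter dropdown.
    return form["text_inputs"] >= 2 and form["selects"] >= 1

assert is_search_interface(
    {"text_inputs": 1, "selects": 0, "has_search_keyword": True})
assert not is_search_interface(
    {"text_inputs": 0, "selects": 3, "has_search_keyword": False})
```

An actual C4.5 model would induce such branches automatically from labeled form examples rather than having them written by hand.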
Heiges, Mark; Wang, Haiming; Robinson, Edward; Aurrecoechea, Cristina; Gao, Xin; Kaluskar, Nivedita; Rhodes, Philippa; Wang, Sammy; He, Cong-Zhou; Su, Yanqi; Miller, John; Kraemer, Eileen; Kissinger, Jessica C
The database CryptoDB () is a community bioinformatics resource for the AIDS-related apicomplexan parasite Cryptosporidium. CryptoDB integrates whole genome sequence and annotation with expressed sequence tag and genome survey sequence data and provides supplemental bioinformatics analyses and data-mining tools. A simple, yet comprehensive web interface is available for mining and visualizing the data. CryptoDB is allied with the databases PlasmoDB and ToxoDB via ApiDB, an NIH/NIAID-funded...
Hong Wang; Qingsong Xu; Youyang Chen; Jinsong Lan
Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based lear...
Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian
Odier, J.; Albrand, S.; Fulachier, J.; Lambert, F.
Wilkinson Mark D; Kuo Byron; Kawas Edward A; Good Benjamin M
Abstract Background User-scripts are programs stored in Web browsers that can manipulate the content of websites prior to display in the browser. They provide a novel mechanism by which users can conveniently gain increased control over the content and the display of the information presented to them on the Web. As the Web is the primary medium by which scientists retrieve biological information, any improvements in the mechanisms that govern the utility or accessibility of this information m...
Full Text Available PDB ID: 2web (record consists of structure-viewer interface residue; no abstract text recoverable).
Wallick, Michael N.; Doubleday, Joshua R.; Shams, Khawaja S.
This software allows for the visualization and control of a network of sensors through a Web browser interface. It is currently being deployed for a network of sensors monitoring Mt. Saint Helens volcano; however, this innovation is generic enough that it can be deployed for any type of sensor Web. From this interface, the user is able to fully control and monitor the sensor Web. This includes, but is not limited to, sending "test" commands to individual sensors in the network, monitoring for real-world events, and reacting to those events.
Odier, Jerome; The ATLAS collaboration; Fulachier, Jerome; Lambert, Fabian
The ATLAS Metadata Interface (AMI) can be considered to be a mature application because it has existed for at least 10 years. Over the last year, we have been adapting the application to some recently available technologies. The web interface, which previously manipulated XML documents using XSL transformations, has been migrated to Asynchronous JavaScript (AJAX). Web development has been considerably simplified by the development of a framework for AMI based on jQuery and Twitter Bootstrap. Finally, there has been a major upgrade of the Python web service client.
Zain, Jasni Mohamad; Goh, Yingsoon
Aesthetics of a web page refers to how attractive the page is and how well it catches the user's attention to read through the information. In addition, the visual appearance is important in getting the attention of users. Moreover, screens that were perceived as aesthetically pleasing were found to have better usability. Usability may be a strong basis for applicability to learning, in this study Mandarin learning. It was also found that aesthetically pleasing layouts of web pages would motivate students in Mandarin learning. The Mandarin learning web pages were manipulated according to the desired aesthetic measurements. A GUI aesthetic measuring method was used for this purpose. The Aesthetics-Measurement Application (AMA), equipped with six aesthetic measures, was developed and used. On top of it, questionnaires were distributed to the users to gather information on the students' perceptions of the aesthetic and learning aspects. Respondent...
Fadhilah Mat Yamin; RAMAYAH, T.
The behaviour of the searcher when using the search engine, especially during query formulation, is crucial. Search engines capture users' activities in the search log, which is stored at the search engine server. Due to the difficulty of obtaining this search log, this paper proposes and develops an interface framework for the Google search engine. This interface captures users' queries before redirecting them to Google. The analysis of the search log will show that users are utili...
Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded available solutions. We
Nelson Rex T
Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded
Boughamoura, Radhouane; Hlaoua, Lobna; Omri, Mohamed Nazih
Deep Web databases contain more than 90% of the pertinent information on the Web. Despite their importance, users do not profit from this treasury. Many deep web services offer competitive services in terms of price, quality of service, and facilities. As the number of services is growing rapidly, users have difficulty querying many web services at the same time. In this paper, we imagine a system where users have the possibility to formulate one query using one query interface and then the sys...
Wilkinson Mark D
Full Text Available Abstract Background User-scripts are programs stored in Web browsers that can manipulate the content of websites prior to display in the browser. They provide a novel mechanism by which users can conveniently gain increased control over the content and the display of the information presented to them on the Web. As the Web is the primary medium by which scientists retrieve biological information, any improvements in the mechanisms that govern the utility or accessibility of this information may have profound effects. GreaseMonkey is a Mozilla Firefox extension that facilitates the development and deployment of user-scripts for the Firefox web-browser. We utilize this to enhance the content and the presentation of the iHOP (information Hyperlinked Over Proteins) website. Results The iHOPerator is a GreaseMonkey user-script that augments the gene-centred pages on iHOP by providing a compact, configurable visualization of the defining information for each gene and by enabling additional data, such as biochemical pathway diagrams, to be collected automatically from third party resources and displayed in the same browsing context. Conclusion This open-source script provides an extension to the iHOP website, demonstrating how user-scripts can personalize and enhance the Web browsing experience in a relevant biological setting. The novel, user-driven controls over the content and the display of Web resources made possible by user-scripts, such as the iHOPerator, herald the beginning of a transition from a resource-centric to a user-centric Web experience. We believe that this transition is a necessary step in the development of Web technology that will eventually result in profound improvements in the way life scientists interact with information.
Fadhilah Mat Yamin
Full Text Available The behaviour of the searcher when using the search engine, especially during query formulation, is crucial. Search engines capture users' activities in the search log, which is stored at the search engine server. Due to the difficulty of obtaining this search log, this paper proposes and develops an interface framework for the Google search engine. This interface captures users' queries before redirecting them to Google. The analysis of the search log shows that users utilize different types of queries. These queries are then classified as breadth and depth search queries.
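A minimal sketch of such a capture-then-redirect interface: log the query server-side, build the Google redirect URL, and apply a toy breadth/depth heuristic. The logging mechanism and the classification rule are assumptions for illustration; the paper does not specify its criteria here.

```python
from urllib.parse import urlencode

SEARCH_LOG = []  # stand-in for the server-side search log

def capture_and_redirect(query: str) -> str:
    """Record the user's query in the log, then build the URL the user
    is redirected to (hypothetical framework sketch, not the authors'
    implementation)."""
    SEARCH_LOG.append(query)
    return "https://www.google.com/search?" + urlencode({"q": query})

def classify_query(query: str) -> str:
    """Toy breadth/depth heuristic: short, few-term queries explore
    broadly; longer, more specific queries drill down."""
    return "breadth" if len(query.split()) <= 2 else "depth"

url = capture_and_redirect("deep web search interface classification")
print(url)
print(classify_query("deep web"))                                   # breadth
print(classify_query("deep web search interface classification"))  # depth
```

In a deployed framework the log would be persisted (file or database) and analysed offline, as the abstract describes.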
SOA offers solutions to the most intractable business problems faced by every enterprise, but getting the SOA service interface right requires the practical design knowledge this book uniquely delivers
Database interfaces define the way database functionality is exported to and utilized by end users, developers and programs. Publishing, integration and service-oriented architectures demand capable interfaces and a higher degree of database functionality utilization in order to realize their potential. In service-oriented architectures, applications need to provide integrated access to the data of multiple sources. Such applications typically support only a restricted set of queries over the...
Monrozier, F. Jocteur; Pesquet, T.
This paper presents the approach retained in a R&D CNES development to provide a configurable and generic request interface for operations, using new modeling and programming techniques (standards and tools) in the core of the resulting "Request Interface for Operations" (RIO) framework. This prototype will be based on object oriented and internet technologies and standards such as SOAP with Attachment1, UML2 State diagram and JAVA. The advantage of the approach presented in this paper is to have a customizable tool that can be configured and deployed depending on the target needs in order to provide a cross-support "request interface for operations". Once this work has been carried out and validated, it should be submitted for approval to the CCSDS Cross Support Services Area in order to extend the current SLE Service Request work, to provide a recommendation for a "Cross-support Request Interface for Operations". As this approach also provides a methodology to define a complete and pragmatic service interface specification (with static and dynamic views) focusing on the user point of view, it will be proposed to the CCSDS Systems Architecture Working group to complete the Reference Architecture methodology. Key-words: UML State diagrams, Dynamic Service interface description, formal notation, code generation, SOAP, CCSDS SLE Service Management, Cross-support.
Full Text Available This research extends the capability of the new technology platform of the Remote Data Inspection System (RDIS) server from Furukawa Co., Ltd., enabling the integration of standard Hypertext Markup Language (HTML) programming and RDIS tag programming to create a user-friendly "point-and-click" web-based control mechanism. The integration allows users to send commands to a mobile robot over the Internet. Web-based control enables humans to extend their actions and intelligence to remote locations. Three mechanisms for web-based control are developed: manual remote control, continuous operation events and autonomous navigational control. In manual remote control the user is fully responsible for the robot's actions, and the robot does not use any sophisticated algorithms. The continuous operation event is an extension of the basic movement of the manual remote control mechanism. In autonomous navigation control, the user has more flexibility to tell the robot to carry out specific tasks. Using this method, the mobile robot can be controlled via the web from any place connected to the network, without constructing specific communication infrastructures.
Full Text Available The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture
Sharma, Parichit; Mantri, Shrikant S
The function of a newly sequenced gene can be discovered by determining its sequence homology with known proteins. BLAST is the most extensively used sequence analysis program for sequence similarity search in large databases of sequences. With the advent of next generation sequencing technologies it has now become possible to study genes and their expression at a genome-wide scale through RNA-seq and metagenome sequencing experiments. Functional annotation of all the genes is done by sequence similarity search against multiple protein databases. This annotation task is computationally very intensive and can take days to obtain complete results. The program mpiBLAST, an open-source parallelization of BLAST that achieves superlinear speedup, can be used to accelerate large-scale annotation by using supercomputers and high performance computing (HPC) clusters. Although many parallel bioinformatics applications using the Message Passing Interface (MPI) are available in the public domain, researchers are reluctant to use them due to lack of expertise in the Linux command line and relevant programming experience. With these limitations, it becomes difficult for biologists to use mpiBLAST for accelerating annotation. No web interface is available in the open-source domain for mpiBLAST. We have developed WImpiBLAST, a user-friendly open-source web interface for parallel BLAST searches. It is implemented in Struts 1.3 using a Java backbone and runs atop the open-source Apache Tomcat Server. WImpiBLAST supports script creation and job submission features and also provides a robust job management interface for system administrators. It combines script creation and modification features with job monitoring and management through the Torque resource manager on a Linux-based HPC cluster. Use case information highlights the acceleration of annotation analysis achieved by using WImpiBLAST. Here, we describe the WImpiBLAST web interface features and architecture, explain design
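The script-creation step such a front end performs can be illustrated by generating a Torque/PBS job script for an mpiBLAST run. The directive set and the mpiblast flags below are generic illustrations, not WImpiBLAST's actual output; database and file names are placeholders.

```python
def make_torque_script(job_name: str, nodes: int, ppn: int,
                       db: str, query_file: str, out_file: str) -> str:
    """Build the text of a Torque/PBS job script that launches an
    mpiBLAST search across nodes*ppn MPI ranks (sketch; flag names and
    paths are illustrative assumptions)."""
    nproc = nodes * ppn
    return "\n".join([
        "#!/bin/bash",
        f"#PBS -N {job_name}",                 # job name shown by qstat
        f"#PBS -l nodes={nodes}:ppn={ppn}",    # resource request
        "#PBS -j oe",                          # merge stdout/stderr
        f"mpirun -np {nproc} mpiblast -p blastp "
        f"-d {db} -i {query_file} -o {out_file}",
    ])

script = make_torque_script("annot01", 4, 8, "nr", "genes.fasta", "hits.out")
print(script)
```

A web interface like the one described would write this text to a file and hand it to the resource manager (e.g. via `qsub`), then poll the job state for its monitoring view.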
Jensen, Casper Svenning; Møller, Anders; Su, Zhendong
Hong Wang; Qingsong Xu; Lifeng Zhou
To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (a searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to get, and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to...
Genaro Motti V.; Raggett D.; Van Cauwelaert S.; Vanderdonckt J.
Ensuring responsive design of web applications requires their user interfaces to be able to adapt to different contexts of use, which subsume the end users, the devices and platforms used to carry out the interactive tasks, and the environment in which they occur. To address the challenges posed by responsive design, and to simplify development by factoring out the common parts from the specific ones, this paper presents Quill, a web-based development environment that ...
Karp Peter D; Latendresse Mario
Abstract Background Displaying complex metabolic-map diagrams in Web browsers, and allowing users to interact with them for querying and for overlaying expression data, is challenging. Description We present a Web-based metabolic-map diagram, which can be interactively explored by the user, called the Cellular Overview. The main characteristic of this application is its zooming user interface, enabling the user to focus on appropriate granularities of the network at will. Various search...
Kaufmann, E.; Bernstein, A.
The need to make the contents of the Semantic Web accessible to end-users becomes increasingly pressing as the amount of information stored in ontology-based knowledge bases steadily increases. Natural language interfaces (NLIs) provide a familiar and convenient means of query access to Semantic Web data for casual end-users. While several studies have shown that NLIs can achieve high retrieval performance as well as domain independence, this paper focuses on usability and investi...
Lathan, Corinna E.; Newman, Dava J.; Sebrechts, Marc M.; Doarn, Charles R.
The objective is to introduce a usability engineering methodology, heuristic evaluation, to the design and development of a web-based telemedicine system. Using a set of usability criteria, or heuristics, one evaluator examined the Spacebridge to Russia web site for usability problems. Thirty-four usability problems were found in this preliminary study, and all were assigned a severity rating. Heuristic analysis is valuable in the iterative design of a system because problems can be fixed before deployment, and because they are of a different nature than those found by actual users of the system. It was therefore determined that heuristic evaluation paired with user testing has potential value as a strategy for designing for optimal system performance.
Toma, Daniel; Río Fernandez, Joaquín del; Jirka, Simon; Delory, Eric; Pearlman, Jay S
The objective of the European FP7 project NeXOS (Next generation Low-Cost Multifunctional Web Enabled Ocean Sensor Systems Empowering Marine, Maritime and Fisheries Management) is to develop cost-efficient innovative and interoperable in-situ sensors deployable from multiple platforms to support the development of a truly integrated Ocean Observing System. Therefore, several sensor systems will be developed in NeXOS project for specific technologies and monitoring stra...
Manaud, N.; Gonzalez, J.
We present a first prototype of a Web Map Interface that will serve as a proof of concept and design for ESA's future fully web-based Planetary Science Archive (PSA) User Interface. The PSA is ESA's planetary science archiving authority and central repository for all scientific and engineering data returned by ESA's Solar System missions. All data are compliant with NASA's Planetary Data System (PDS) Standards and are accessible through several interfaces: in addition to serving all public data via FTP and the Planetary Data Access Protocol (PDAP), a Java-based User Interface provides advanced search, preview, download, notification and delivery-basket functionality. It allows the user to query and visualise instrument observation footprints using a map-based interface (currently only available for the Mars Express HRSC and OMEGA instruments). During the last decade, the planetary mapping science community has increasingly adopted Geographic Information System (GIS) tools and standards, originally developed for and used in Earth science. There is an ongoing effort to produce and share cartographic products through Open Geospatial Consortium (OGC) Web Services, or as standalone data sets, so that they can be readily used in existing GIS applications [3,4,5]. Previous studies conducted at ESAC [6,7] have helped identify the needs of planetary GIS users and define key areas of improvement for the future Web PSA User Interface. Its web map interface will provide access to the full geospatial content of the PSA, including (1) observation geometry footprints of all remote sensing instruments, and (2) all georeferenced cartographic products, such as HRSC map-projected data or OMEGA global maps from Mars Express. It aims to provide a rich user experience for search and visualisation of this content using modern and interactive web mapping technology. A comprehensive set of built-in context maps from external sources, such as MOLA topography, TES
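The OGC services mentioned above are queried through plain HTTP requests. As a generic illustration (the PSA endpoint and layer name below are placeholders, not real services), a WMS 1.3.0 GetMap request URL can be assembled like this:

```python
from urllib.parse import urlencode

def wms_getmap_url(base: str, layer: str, bbox, width: int, height: int,
                   crs: str = "EPSG:4326") -> str:
    """Build an OGC WMS 1.3.0 GetMap request URL. The parameter names
    follow the WMS spec; the endpoint and layer are illustrative."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "STYLES": "",                       # default style
        "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("https://example.org/wms",       # placeholder endpoint
                     "mola_topography",               # placeholder layer
                     bbox=(-90, -180, 90, 180), width=512, height=256)
print(url)
```

A web map client issues such requests as the user pans and zooms, which is what makes OGC-published cartographic products directly usable in GIS applications.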
In recent years the user interfaces of the TV platform have been powered by HTML, but since the platform is starting to support new techniques it might be time to change the focus. HTML is a good choice for interface development because of its high abstraction level and platform independence; however, when performance is critical and the requirements are high, HTML can impose serious restrictions. WebGL is a technology released in 2011 that brings a low-level graphics API to the web. The API allows for de...
Bethel, Wes; Siegerist, Cristina; Shalf, John; Shetty, Praveenkumar; Jankun-Kelly, T.J.; Kreylos, Oliver; Ma, Kwan-Liu
The LBNL/NERSC Visportal effort explores ways to deliver advanced Remote/Distributed Visualization (RDV) capabilities through a Grid-enabled web-portal interface. The effort focuses on latency tolerant distributed visualization algorithms, GUI designs that are more appropriate for the capabilities of web interfaces, and refactoring parallel-distributed applications to work in a N-tiered component deployment strategy. Most importantly, our aim is to leverage commercially-supported technology as much as possible in order to create a deployable, supportable, and hence viable platform for delivering grid-based visualization services to collaboratory users.
Full Text Available Determining whether a site has a search interface is a crucial priority for further research of deep web databases. This study first reviews the current approaches employed in search interface identification for deep web databases. Then, a novel identification scheme using hybrid features and a feature-weighted instance-based learner is put forward. Experiment results show that the proposed scheme is satisfactory in terms of classification accuracy and our feature-weighted instance-based learner gives better results than classical algorithms such as C4.5, random forest and KNN.
Newman, R. L.; Clemesha, A.; Lindquist, K. G.; Reyes, J.; Steidl, J. H.; Vernon, F. L.
Cohen-Boulakia, Sarah; Valduriez, Patrick
The volumes of bioinformatics data available on the Web are constantly increasing. Access to and joint exploitation of these highly distributed data (i.e., available in distributed Web data sources) and highly heterogeneous data (in text or tabulated files including images, in different formats, described with different levels of detail and different levels of quality ...) is essential for biological knowledge to progress. The purpose of this short report is to present in a simple way the problems of the joint...
Burger, Melanie C
Zain, Jasni Mohamad; Goh, Yingsoon
This article describes the accuracy of our application, namely the Self-Developed Aesthetics Measurement Application (SDA), in measuring the aesthetics aspect, by comparing the results of our application with users' perceptions in measuring the aesthetics of web page interfaces. For this research, the positions of objects, image elements and text elements are defined as objects in a web page interface. Mandarin learning web pages are used in this research. These learning web pages comprise main pages, learning pages and exercise pages on the first author's E-portfolio web site. The objects of the web pages were manipulated in order to produce the desired aesthetic values. The six aesthetics-related elements used are balance, equilibrium, symmetry, sequence, rhythm, as well as order and complexity. Results from the research showed that the ranking of the aesthetic values of the web page interfaces measured by the users was congruent with the expected perceptions of our designed Mandarin learning web pag...
Tengku Siti Meriam Tengku Wook
Full Text Available There have been numerous studies on user interface guidelines, but only a few have considered specific guidelines for the design of children's interfaces. This paper reports research on specific guidelines for children, focusing on the criteria of graphic design. The objective of this research is to study guidelines for user interface design and to develop specific guidelines for children's graphic design. The criteria of graphic design are the priority of this research, since previous research has shown that graphic design is the main factor contributing to the usability problems of web application interfaces: the overall graphic layout not being in hierarchical order, the availability of space not being taken into account, inappropriate margins, improper type and font size selection, and too little attention to the use of colours. The research methodology compares and aligns the guidelines on children's interfaces with the specific guidelines on the graphic design of web application interfaces. The contribution of this research is a set of guidelines on the design of web application graphics specifically for children.
Jacobs, Joshua L
'Give a man a fish and you feed him for a day. Teach him how to fish, and you feed him for a lifetime…'. Although the exact origin of this proverb is unknown, its meaning is clear and its wisdom self-evident. In the field of health professions education, there are many websites that can be used as teaching aids, some of which have undergone peer review. Several organizations have created repositories of online teaching materials. You can certainly find a lot of 'fish' there to feed your appetite for high-quality teaching materials. For examples of repositories that contain online teaching materials, see Table 1. However, these repositories and other lists of websites cannot be comprehensive, so it is important to know the basic review skills needed to evaluate websites you come across in your journeys around the web that may be useful for your teaching. This article intends to teach you 'how to fish' for useful web-based teaching resources to help you succeed as a clinical teacher. PMID:22905659
Knipp, D.; Kilcommons, L. M.; Damas, M. C.
We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and the individual species and total densities of the neutral and ionized upper atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for the neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and to choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle?; 2) How do ionospheric layers change with the solar cycle?; 3) How does the composition of the SAIR vary between day and night at a fixed altitude?
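The "model as a black box" design can be sketched as a registry behind a uniform interface, so that adding a model never touches the application body. Everything below is an invented illustration: the class names, the registry, and the toy isothermal model (which stands in for real models like NRLMSISE00 or IRI).

```python
import math

MODEL_REGISTRY = {}

def register_model(name):
    """Class decorator: register a model under a name so the web app can
    list and instantiate it without code changes elsewhere (sketch of
    the black-box abstraction; names are invented)."""
    def deco(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return deco

class AtmosphericModel:
    """Uniform interface every model must implement."""
    def run(self, lat, lon, alt_km, when):
        raise NotImplementedError

@register_model("demo_isothermal")
class IsothermalDemo(AtmosphericModel):
    """Toy stand-in model: exponential density fall-off with a fixed
    scale height (not NRLMSISE00 or IRI)."""
    SCALE_HEIGHT_KM = 8.5
    RHO0 = 1.225  # sea-level density, kg/m^3

    def run(self, lat, lon, alt_km, when):
        return {"density": self.RHO0 * math.exp(-alt_km / self.SCALE_HEIGHT_KM)}

# The application only ever talks to the registry:
model = MODEL_REGISTRY["demo_isothermal"]()
out = model.run(40.0, -105.0, 8.5, "2015-03-17T12:00")
print(out)
```

A new model is then one new decorated class; the plotting layer iterates over `MODEL_REGISTRY` and never needs to know which models exist.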
Groenewegen, D.M.; Visser, E.
This paper is a pre-print of: Danny M. Groenewegen, Eelco Visser. Integration of Data Validation and User Interface Concerns in a DSL for Web Applications. In Mark G. J. van den Brand, Jeff Gray, editors, Software Language Engineering, Second International Conference, SLE 2009, Denver, USA, October,
Kim, Taewan; Sim, Chul-Min; Yuh, Sanghwa; Jung, Hanmin; Kim, Young-Kil; Choi, Sung-Kwon; Park, Dong-In; Choi, Key Sun
Describes the implementation of FromTo-CLIR, a Web-based natural-language interface for cross-language information retrieval that was tested with Korean and Japanese. Proposes a method that uses a semantic category tree and collocation to resolve the ambiguity of query translation. (Author/LRW)
We introduce the concept of a Fusion Data Grid and discuss the management of metadata within such a Grid. We describe a prototype application which serves fusion data over the internet together with metadata information which can be flexibly created and modified over time. The application interfaces with the MDSplus data acquisition system and has been designed to capture metadata generated by scientists from the post-processing of experimental data. The implementation of dynamic metadata tables using the Java programming language together with an object-relational mapping system, Hibernate, is described in the Appendix.
Joan Segura Mora
Full Text Available BACKGROUND: It is well established that only a portion of the residues that mediate protein-protein interactions (PPIs), the so-called hot spot, contributes the most to the total binding energy; its identification is therefore an important and relevant question with clear applications in drug discovery and protein design. The experimental identification of hot spots is, however, a lengthy and costly process, so there is interest in computational tools that can complement and guide experimental efforts. PRINCIPAL FINDINGS: Here, we present Presaging Critical Residues in Protein interfaces-Web server (http://www.bioinsilico.org/PCRPi), a web server that implements a recently described and highly accurate computational tool designed to predict critical residues in protein interfaces: PCRPi. PCRPi depends on the integration of structural, energetic and evolutionary-based measures by using Bayesian Networks (BNs). CONCLUSIONS: PCRPi-W has been designed to provide easy and convenient access to the broad scientific community. Predictions are readily available for download or presented in a web page that includes, among other information, links to relevant files, sequence information and a Jmol applet to visualize and analyze the predictions in the context of the protein structure.
Alonso Vega, Adrián
This project aims to present the "Responsive Web Design" philosophy, which is currently on the rise, as it attempts to solve the user-experience problems that arise from the variety of mobile devices connecting to the Internet. Through a tour of its most important points, it tries to give an overview of the problem and to find which solutions are most efficient at making our web pages adapt to any type of device...
Full Text Available To surface the Deep Web, one crucial task is to predict whether a given web page has a search interface (a searchable HyperText Markup Language (HTML) form) or not. Previous studies have focused on supervised classification with labeled examples. However, labeled data are scarce, hard to obtain, and require tedious manual work, while unlabeled HTML forms are abundant and easy to obtain. In this research, we consider the plausibility of using both labeled and unlabeled data to train better models to identify search interfaces more effectively. We present a semi-supervised co-training ensemble learning approach using both neural networks and decision trees to deal with the search-interface identification problem. We show that the proposed model outperforms previous methods using only labeled data. We also show that adding unlabeled data improves the effectiveness of the proposed model.
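The co-training scheme this abstract summarizes can be sketched in a few dozen lines. This is a minimal sketch, not the paper's implementation: the toy nearest-centroid learners stand in for its neural networks and decision trees, and the two feature "views" of each HTML form (say, tag-based vs. text-based features) are assumptions for illustration.

```python
import math


def centroid(points):
    """Mean vector of a list of equal-length feature vectors."""
    return [sum(p[i] for p in points) / len(points) for i in range(len(points[0]))]


class NearestCentroid:
    """Tiny stand-in learner for the paper's neural nets / decision trees."""

    def fit(self, X, y):
        self.cents = {lab: centroid([x for x, l in zip(X, y) if l == lab])
                      for lab in set(y)}
        return self

    def predict(self, x):
        dists = {lab: math.dist(x, c) for lab, c in self.cents.items()}
        lab = min(dists, key=dists.get)
        margin = max(dists.values()) - dists[lab]  # crude confidence score
        return lab, margin


def co_train(view_a, view_b, labels, unl_a, unl_b, rounds=5):
    """Each round, the two learners pseudo-label the unlabeled form they are
    jointly most confident about; that label then augments BOTH training sets."""
    Xa, Xb, y = list(view_a), list(view_b), list(labels)
    pool = list(range(len(unl_a)))
    for _ in range(rounds):
        if not pool:
            break
        ca = NearestCentroid().fit(Xa, y)
        cb = NearestCentroid().fit(Xb, y)
        best = max(pool, key=lambda i: max(ca.predict(unl_a[i])[1],
                                           cb.predict(unl_b[i])[1]))
        lab_a, m_a = ca.predict(unl_a[best])
        lab_b, m_b = cb.predict(unl_b[best])
        y.append(lab_a if m_a >= m_b else lab_b)
        Xa.append(unl_a[best])
        Xb.append(unl_b[best])
        pool.remove(best)
    return NearestCentroid().fit(Xa, y), NearestCentroid().fit(Xb, y)
```

The point of the sketch is the data flow, not the learners: unlabeled forms enter the training sets via the most confident learner, which is why adding unlabeled data can improve both ensemble members.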
Burger, Melanie C
Hendry Setyawans Sutedjo
Full Text Available Information in a website is expected to be conveyed to, and received by, information seekers easily. In education, information on the web should likewise reach its users; online communication media such as websites can help students absorb the knowledge delivered through online media. How easily that information is absorbed is indicated by how easy the website is to use (how usable it is). Usability analysis is used to measure this ease of use, and many methods exist to identify usability problems, especially on the web-interface side. Heuristic evaluation is one such technique, and it is used in this study to assess how easily the Institut Teknologi Sepuluh Nopember (ITS) website conveys its information. Quality Function Deployment (QFD) is also used to identify users' requirements for the appearance of the ITS web site.
Vali, Faisal; Hong, Robert
This paper presents a generic web-based database interface implemented in Prolog. We discuss the advantages of the implementation platform and demonstrate the system's applicability in providing access to integrated biochemical data. Our system exploits two libraries of SWI-Prolog to create a schema-transparent interface within a relational setting. As is expected in declarative programming, the interface was written with minimal programming effort due to the high level of the language and its suitability to the task. We highlight two of Prolog's features that are well suited to the task at hand: term representation of structured documents and relational nature of Prolog which facilitates transparent integration of relational databases. Although we developed the system for accessing in-house biochemical and genomic data the interface is generic and provides a number of extensible features. We describe some of these features with references to our research databases. Finally we outline an in-house library that...
Lassnig, Mario; The ATLAS collaboration; Barisits, Martin-Stefan; Serfon, Cedric; Vigne, Ralph; Garonne, Vincent
The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selective parts of the information both in constrained frontends like ...
Lassnig, Mario; The ATLAS collaboration; Vigne, Ralph; Barisits, Martin-Stefan; Garonne, Vincent; Serfon, Cedric
The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selective parts of the information both in constrained...
The purpose of this master thesis project was to investigate and analyse the main design and development approaches to creating a user interface of a cross-platform web application that is optimised for usage on both mobile and non-mobile devices. The additional goals were to analyse the main challenges in implementing such a user interface and find out whether it is feasible to achieve a consistent user experience both on mobile and desktop devices. The theoretical part of this paper ana...
Lange Ramos, Bruno; The ATLAS collaboration; Pommes, Kathy; Pavani Neto, Varlen; Vieira Arosa, Breno
In order to manage a heterogeneous and worldwide collaboration, the ATLAS experiment develops web systems that range from supporting the process of publishing scientific papers to monitoring equipment radiation levels. These systems are vastly supported by Glance, a technology that was put forward in 2004 to create an abstraction layer on top of varied databases that automatically recognizes their modeling and generates web search interfaces. Fence (Front ENd ENgine for glaNCE) assembles classes to build applications by making extensive use of configuration files. It produces templates of the core JSON files on top of which it is possible to create Glance-compliant search interfaces. Once the database, its schemas and tables are defined using Glance, its records can be incorporated into the templates by escaping the returned values with a reference to the column identifier wrapped around double enclosing brackets. The developer may also expand on available configuration files to create HTML forms and securely ...
Vernon, Frank; Newman, Robert; Lindquist, Kent
Rädle, Roman; Jetter, Hans-Christian; Reiterer, Harald
Although a Web search is typically regarded as a solitary activity, collaborative search approaches are becoming an increasingly relevant topic for HCI and distributed user interfaces (DUIs). Today’s collaborative search systems lack comprehensive search support that also involves pre- or post-search activities such as preparing for a search or making sense of search results. We believe that post-WIMP DUIs can help to better support social searches and have identified four design goals that ...
Weber, Tilmann; Kim, Hyun Uk
In this context, this review gives a summary of tools and databases that are currently available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http...
Buchan, Daniel W A; Minneci, Federico; Nugent, Tim C O; Bryson, Kevin; Jones, David T
Here, we present the new UCL Bioinformatics Group's PSIPRED Protein Analysis Workbench. The Workbench unites all of our previously available analysis methods into a single web-based framework. The new web portal provides a greatly streamlined user interface with a number of new features to allow users to better explore their results. We offer a number of additional services to enable computationally scalable execution of our prediction methods; these include SOAP and XML-RPC web server access and new HADOOP packages. All software and services are available via the UCL Bioinformatics Group website at http://bioinf.cs.ucl.ac.uk/. PMID:23748958
Lassnig, M.; Beermann, T.; Vigne, R.; Barisits, M.; Garonne, V.; Serfon, C.
The monitoring and controlling interfaces of the previous data management system DQ2 followed the evolutionary requirements and needs of the ATLAS collaboration. The new data management system, Rucio, has put in place a redesigned web-based interface based upon the lessons learnt from DQ2, and the increased volume of managed information. This interface encompasses both a monitoring and controlling component, and allows easy integration for user-generated views. The interface follows three design principles. First, the collection and storage of data from internal and external systems is asynchronous to reduce latency. This includes the use of technologies like ActiveMQ or Nagios. Second, analysis of the data into information is done massively parallel due to its volume, using a combined approach with an Oracle database and Hadoop MapReduce. Third, sharing of the information does not distinguish between human or programmatic access, making it easy to access selective parts of the information both in constrained frontends like web-browsers as well as remote services. This contribution will detail the reasons for these principles and the design choices taken. Additionally, the implementation, the interactions with external systems, and an evaluation of the system in production, both from a technological and user perspective, conclude this contribution.
Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...
Tungkasthan, Anucha; Jongsawat, Nipat; Poompuang, Pittaya; Intarasema, Sarayut; Premchaiswadi, Wichian
This paper presents a practical framework for automating the building of diagnostic BN models from data sources obtained from the WWW and demonstrates the use of a SMILE web-based interface to represent them. The framework consists of the following components: an RSS agent, a transformation/conversion tool, a core reasoning engine, and the SMILE web-based interface. The RSS agent automatically collects and reads the provided RSS feeds according to the agent's predefined URLs. A transformation/conve...
Licurse, Mindy Y.; Cook, Tessa S.
Radiology and imaging informatics education have rapidly evolved over the past few decades. With the increasing recognition that future growth and maintenance of radiology practices will rely heavily on radiologists with fundamentally sound informatics skills, the onus falls on radiology residency programs to properly implement and execute an informatics curriculum. In addition, the American Board of Radiology may choose to include even more informatics on the new board examinations. However, the resources available for didactic teaching and guidance, especially at the introductory level, are widespread and varied. Given the breadth of informatics, a centralized web-based interface designed to serve as an adjunct to standardized informatics curricula, as well as a stand-alone resource for other interested audiences, is desirable. We present the development of a curriculum using PearlTrees, an existing web interface based on the concept of a visual interest graph that allows users to collect, organize, and share any URL they find online, as well as to upload photos and other documents. For our purpose, the group of "pearls" includes informatics concepts linked by appropriate hierarchical relationships. The curriculum was developed using a combination of our institution's current informatics fellowship curriculum, the Practical Imaging Informatics textbook, and other useful online resources. Once the initial interface and curriculum have been developed and publicized, we anticipate that involvement by the informatics community will help promote collaborations and foster mentorships at all career levels.
Thomas A Schlacher
Full Text Available Food webs near the interface of adjacent ecosystems are potentially subsidised by the flux of organic matter across system boundaries. Such subsidies, including carrion of marine provenance, are predicted to be instrumental on open-coast sandy shores where in situ productivity is low and boundaries are long and highly permeable to imports from the sea. We tested the effect of carrion supply on the structure of consumer dynamics in a beach-dune system using broad-scale, repeated additions of carcasses at the strandline of an exposed beach in eastern Australia. Carrion inputs increased the abundance of large invertebrate scavengers (ghost crabs, Ocypode spp.), a numerical response most strongly expressed by the largest size-class in the population, and likely due to aggregative behaviour in the short term. Consumption of carrion at the beach-dune interface was rapid and efficient, driven overwhelmingly by facultative avian scavengers. This guild of vertebrate scavengers comprises several species of birds of prey (sea eagles, kites), crows and gulls, which reacted strongly to concentrations of fish carrion, creating hotspots of intense scavenging activity along the shoreline. Detection of carrion effects at several trophic levels suggests that feeding links arising from carcasses shape the architecture and dynamics of food webs at the land-ocean interface.
Karp Peter D
Lange Ramos, Bruno; The ATLAS collaboration; Pommes, Kathy; Pavani Neto, Varlen; Vieira Arosa, Breno; Abreu Da Silva, Igor
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, whilst ranging from supporting the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly prone to changes in requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. Fence assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology that was set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers around double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to their description thus ensuring that vi...
Jakobovits, R. M.; Brinkley, J. F.
This paper describes the Web-Interfacing Repository Manager (WIRM), a perl toolkit for managing and deploying multimedia data, which is built entirely from free, platform-independent components. The WIRM consists of an object-relational API layered over a relational database, with built-in support for file management and CGI programming. The basic underlying data structure for all WIRM data is the repository object, a perl associative array whose values are bound to a row of a table in the re...
This paper describes a new approach to the visualization of information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks, which are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, it visualizes the collected data using a flexible and interactive front-end web system. Structurally, the project comprises three main levels of the data collection cycle. Level 0 represents the information sources within ATLAS. These providers do not store information in a uniform fashion. The first step of the project was to define a common interface with which to expose stored data. The interface designed for the project originates from the Google Data Protocol API. The idea is to allow read-only access to data providers, through HTTP requests similar in format to the SQL query structure. This provides a standardized way to access these different information sources within ATLAS. Level 1 can be considered the engine of the system. The primary task of the Level 1 is to gather data from multiple data sources via the common interface, to correlate this data together, or over a defined time series, and expose the combined data as a whole to the Level 2 web
Trabant, C. M.; Ahern, T. K.; Stults, M.
At the IRIS Data Management Center (DMC) we have been developing web service data access interfaces for our, primarily seismological, repositories for five years. These interfaces have become the primary access mechanisms for all data extraction from the DMC. For the last two years the DMC has been a principal participant in the GeoWS project, which aims to develop common web service interfaces for data access across hydrology, geodesy, seismology, marine geophysics, atmospheric and other geoscience disciplines. By extending our approach we have converged, along with other project members, on a web service interface and presentation design appropriate for geoscience and other data. The key principles of the approach include using a simple subset of RESTful concepts, common calling conventions whenever possible, a common tabular text data set convention, human-readable documentation and tools to help scientific end users learn how to use the interfaces. The common tabular text format, called GeoCSV, has been incorporated into the DMC's seismic station and event (earthquake) services. In addition to modifying our existing services, we have developed prototype GeoCSV web services for data sets managed by external (unfunded) collaborators. These prototype services include interfaces for data sets at NGDC/NCEI (water level tides and meteorological satellite measurements), INTERMAGNET repository and UTEP gravity and magnetic measurements. In progress are interfaces for WOVOdat (volcano observatory measurements), NEON (ecological observatory measurements) and more. An important goal of our work is to build interfaces usable by non-technologist end users. We find direct usability by researchers to be a major factor in cross-discipline data use, which itself is a key to solving complex research questions. In addition to data discovery and collection by end users, these interfaces provide a foundation upon which federated data access and brokering systems are already being
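The common tabular text convention described above pairs '#'-prefixed key: value header lines with a delimited table. A minimal parser sketch follows; the particular header keys, the '|' delimiter, and the sample record are illustrative assumptions, not taken from the GeoCSV specification verbatim.

```python
import csv


def parse_geocsv(text: str, delimiter: str = "|"):
    """Split a GeoCSV-style response into metadata (from '#' comment
    lines of the form '# key: value') and a list of row dictionaries
    keyed by the column names in the first non-comment line."""
    meta, data_lines = {}, []
    for line in text.splitlines():
        if line.startswith("#"):
            if ":" in line:
                key, _, value = line[1:].partition(":")
                meta[key.strip()] = value.strip()
        elif line.strip():
            data_lines.append(line)
    reader = csv.reader(data_lines, delimiter=delimiter)
    header = next(reader)
    rows = [dict(zip(header, row)) for row in reader]
    return meta, rows


# Hypothetical station-service response, for illustration only.
sample = """# dataset: GeoCSV 2.0
# delimiter: |
Network|Station|Latitude|Longitude
IU|ANMO|34.9459|-106.4572
"""
meta, rows = parse_geocsv(sample)
```

The simplicity of the format is the point: a researcher can read the response in a text editor, yet a dozen lines of code turn it into structured records, which is what makes such interfaces usable across disciplines.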
Full Text Available Satellite data, radiative power of hot spots as measured with remote sensing, historical records, on-site geological surveys, digital elevation model data, and simulation results together provide a massive data source to investigate the behavior of active volcanoes like Mount Etna (Sicily, Italy) over recent times. The integration of these heterogeneous data into a coherent visualization framework is important for their practical exploitation. It is crucial to fill in the gap between experimental and numerical data, and the direct human perception of their meaning. Indeed, the people in charge of safety planning of an area need to be able to quickly assess hazards and other relevant issues even during critical situations. With this in mind, we developed LAV@HAZARD, a web-based geographic information system that provides an interface for the collection of all of the products coming from the LAVA project research activities. LAV@HAZARD is based on the Google Maps application programming interface, a choice motivated by its ease of use and the user-friendly interactive environment it provides. In particular, the web structure consists of four modules: satellite applications (time-space evolution of hot spots, radiant flux and effusion rate), hazard map visualization, a database of ca. 30,000 lava-flow simulations, and real-time scenario forecasting by MAGFLOW on Compute Unified Device Architecture (CUDA).
Hayashi, S.; Gopu, A.; Young, M. D.; Kotulla, R.
While some astronomical archives have begun serving standard calibrated data products, the process of producing stacked images remains a challenge left to the end-user. The benefits of astronomical image stacking are well established, and dither patterns are recommended for almost all observing targets. Some archives automatically produce stacks of limited scientific usefulness without any fine-grained user or operator configurability. In this paper, we present PPA Stack, a web based stacking framework within the ODI - Portal, Pipeline, and Archive system. PPA Stack offers a web user interface with built-in heuristics (based on pointing, filter, and other metadata information) to pre-sort images into a set of likely stacks while still allowing the user or operator complete control over the images and parameters for each of the stacks they wish to produce. The user interface, designed using AngularJS, provides multiple views of the input dataset and parameters, all of which are synchronized in real time. A backend consisting of a Python application optimized for ODI data, wrapped around the SWarp software, handles the execution of stacking workflow jobs on Indiana University's Big Red II supercomputer, and the subsequent ingestion of the combined images back into the PPA archive. PPA Stack is designed to enable seamless integration of other stacking applications in the future, so users can select the most appropriate option for their science.
Chen, J.; Pullman, S.; Hubbard, S. S.; Peterson, J.
The induced-polarization (IP) method has been used increasingly in environmental investigations because IP measurements are very sensitive to the low-frequency capacitive properties of rocks and soils. The Cole-Cole model has been very useful for interpreting spectral IP data in terms of parameters, such as chargeability and time constant, which are used to estimate various subsurface properties. However, conventional methods for estimating Cole-Cole parameters use an iterative Gauss-Newton-based deterministic method, for which it has been shown that the optimal solution obtained depends on the choice of initial values and that the estimated uncertainty information is often inaccurate or insufficient. Chen, Kemna, and Hubbard (2008) developed a Bayesian model for inverting spectral IP data for Cole-Cole parameters based on Markov chain Monte Carlo (MCMC) sampling methods. They have demonstrated that the MCMC-based inversion method provides extensive global information on unknown parameters, such as the marginal probability distribution functions, from which better estimates and tighter uncertainty bounds of the parameters can be obtained. Additionally, the results obtained with the MCMC method are almost independent of the choice of initial values. We have developed a web interface to the stochastic inversion software, which permits easy access to the code. The web interface allows users to upload their own spectral IP data, specify prior ranges of unknown parameters, and remotely run the code in real time. After running the code (a few minutes), the interface provides a data file with all the statistics of each unknown parameter, including the median, mean, standard deviation, and 95% predictive intervals, and provides a data misfit file. The interface also allows users to visualize the histogram and posterior probability density of each unknown parameter as well as data misfits. For advanced users, the interface provides an option of producing time-series plots of all
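The forward model that such an inversion repeatedly evaluates can be written compactly. Below is a minimal sketch of the standard Cole-Cole complex-resistivity form, not the project's actual code:

```python
def cole_cole(omega: float, rho0: float, m: float, tau: float, c: float) -> complex:
    """Complex resistivity of the Cole-Cole model:

        rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (1j*w*tau)**c)))

    rho0: DC resistivity, m: chargeability, tau: time constant,
    c: frequency exponent. As w -> 0 this tends to rho0; as
    w -> infinity it tends to rho0 * (1 - m)."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))
```

An MCMC sampler of the kind described would propose candidate tuples (rho0, m, tau, c), evaluate this model at the measured frequencies, and accept or reject each proposal based on the resulting data misfit, yielding samples from the posterior over the four parameters.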
Oakley, N.; Daudert, B.
Accessing scientific data through an online portal can be a frustrating task. The concept of making web interfaces easy to use, known as "usability", has been thoroughly researched in the field of e-commerce, but has not been explicitly addressed in the atmospheric sciences. As more observation stations are installed, satellite missions flown, models run, and field campaigns performed, large amounts of data are produced. Portals on the Internet have become the favored mechanisms to share this information and are ever increasing in number. Portals are often created without being tested for usability with the target audience, though the expenses of testing are low and the returns high. To remain competitive and relevant in the provision of atmospheric data, it is imperative that developers understand the design elements of a successful portal to make their product stand out among others. This presentation informs the audience of the benefits and basic principles of usability for web pages presenting atmospheric data. We will also share some of the best practices and recommendations we have formulated from the results of usability testing performed on two data-provision web sites hosted by the Western Regional Climate Center.
Scholl, I.; Girard, Y.; Bykowski, A.
This paper presents the architecture of a Java web-based graphical interface dedicated to the access of the SOHO data archive. This application allows local and remote users to search in the SOHO data catalog and retrieve the SOHO data files from the archive. It has been developed at MEDOC (Multi-Experiment Data and Operations Centre), located at the Institut d'Astrophysique Spatiale (Orsay, France), which is one of the European archives for the SOHO data. This development is part of a joint effort between ESA, NASA and IAS to implement long-term archive systems for the SOHO data. The software architecture is built as a client-server application using the Java language and SQL above a set of components such as an HTTP server, a JDBC gateway, a RDBMS server, a data server and a Web browser. Since HTML pages and CGI scripts are not powerful enough to allow user interaction during a multi-instrument catalog search, this requirement motivated the choice of Java as the main language. We also discuss performance issues, security problems and portability on different Web browsers and operating systems.
Arakawa, Kazuharu; Kido, Nobuhiro; Oshita, Kazuki; Tomita, Masaru
G-language genome analysis environment (G-language GAE) contains more than 100 programs that focus on the analysis of bacterial genomes, including programs for the identification of binding sites by means of information theory, analysis of nucleotide composition bias and the distribution of particular oligonucleotides, calculation of codon bias and prediction of expression levels, and visualization of genomic information. We have provided a collection of web services for these programs by utilizing REST and SOAP technologies. The REST interface, available at http://rest.g-language.org/, provides access to all 145 functions of the G-language GAE. These functions can be accessed from other online resources. All analysis functions are represented by unique universal resource identifiers. Users can access the functions directly via the corresponding universal resource locators (URLs), and biological web sites can readily embed the functions by simply linking to these URLs. The SOAP services, available at http://www.g-language.org/wiki/soap/, provide language-independent programmatic access to 77 analysis programs. The SOAP service Web Services Definition Language file can be readily loaded into graphical clients such as the Taverna workbench to integrate the programs with other services and workflows. PMID:20439313
Full Text Available Purpose: The aim of this paper is to present a prototype of a web-based programming interface for the Mitsubishi Movemaster RV-M1 robot. Design/methodology/approach: The web techniques have been selected due to the modularity of this solution and the possibility of using existing code fragments to elaborate new applications. The previous papers [11-14] have presented the off-line, remote programming system for the RV-M1 robot. The general idea of this system is a base for developing a web-based programming interface. Findings: A prototype of the system has been developed. Research limitations/implications: The presented system is in an early development stage and some functions are still lacking. In the future, a visualisation module will be elaborated and a trajectory translator intended to co-operate with CAD software will be included. Practical implications: The previous version of the system was intended for educational purposes. It is planned that the new version will be more flexible and adaptable to other devices, such as small PLCs or other robots. Originality/value: Remote supervision of machines during a manufacturing process is a topical issue. Most automation-system manufacturers produce supervising software for their PLCs and robots. The Movemaster RV-M1 robot is an old model and lacks high-tech software; on the other hand, programming and developing applications for this robot are very easy. The aim of the presented project is to develop a flexible, remote-programming environment.
Purpose: The aim of this paper is to present a prototype of a web-based programming interface for the Mitsubishi Movemaster RV-M1 robot. Design/methodology/approach: The previous papers [11-14] presented the off-line, remote programming system for the mentioned robot. It has been used as a base for developing a new branch: a web-based programming interface. The web techniques have been selected due to the possibility of reusing existing code fragments when elaborating new applications and the modularity of this solution. Findings: As a result, a prototype of the system has been developed. Research limitations/implications: Because the presented system is in an early development stage, some useful functions are still lacking. Future work will include elaboration of the robot's visualisation module and implementation of a trajectory translator intended to co-operate with CAD software. Practical implications: The elaborated system was previously intended for educational purposes, but it may be adapted for other devices, such as small PLCs or other robots. Originality/value: Remote supervision of machines during a manufacturing process is a topical issue. Most automation system manufacturers produce software for their PLCs and robots. The Mitsubishi Movemaster RV-M1 is an old model and very few programs are dedicated to this machine. On the other hand, programming and developing applications for this robot is very easy. The aim of the presented project is to develop a flexible remote-programming environment.
Lange, Bruno; Maidantchik, Carmen; Pommes, Kathy; Pavani, Varlen; Arosa, Breno; Abreu, Igor
The ATLAS Technical Coordination maintains 17 Web systems to support its operation. These applications, ranging from managing the process of publishing scientific papers to monitoring radiation levels in the equipment in the experimental cavern, are constantly subject to changing requirements due to the collaborative nature of the experiment and its management. In this context, a Web framework is proposed to unify the generation of the supporting interfaces. FENCE assembles classes to build applications by making extensive use of JSON configuration files. It relies heavily on Glance, a technology set forth in 2003 to create an abstraction layer on top of the heterogeneous sources that store the technical coordination data. Once Glance maps out the database modeling, records can be referenced in the configuration files by wrapping unique identifiers in double enclosing brackets. The deployed content can be individually secured by attaching clearance attributes to its description, thus ensuring that view/edit privileges are granted to eligible users only. The framework also provides tools for securely writing into a database. Fully HTML5-compliant multi-step forms can be generated from their JSON description to ensure that the submitted data comply with a series of constraints. Input validation is carried out primarily on the server side but, following progressive enhancement guidelines, verification may also be performed on the client side by enabling specific markup data attributes, which are then handed over to the jQuery validation plug-in. User monitoring is accomplished by thoroughly logging user requests along with any POST data. Documentation is built from the source code using the phpDocumentor tool and made readily available to developers online. FENCE therefore speeds up the implementation of Web interfaces and reduces the response time to requirement changes by minimizing maintenance overhead.
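The double-bracket referencing scheme can be pictured as plain template substitution over the JSON configuration. The snippet below is a toy sketch of that idea in Python, not FENCE's actual PHP implementation; the record identifiers and values are invented for illustration.

```python
import json
import re

# Hypothetical record store standing in for Glance's database mapping.
RECORDS = {"paper.title": "Search for exotica", "paper.status": "Published"}

def render(template: str, records: dict) -> str:
    """Replace {{identifier}} placeholders with mapped record values;
    unknown identifiers are left untouched."""
    return re.sub(
        r"\{\{([^}]+)\}\}",
        lambda m: str(records.get(m.group(1).strip(), m.group(0))),
        template,
    )

# A JSON configuration fragment referencing records by identifier.
config = json.loads('{"label": "Title: {{paper.title}} ({{paper.status}})"}')
print(render(config["label"], RECORDS))
# → Title: Search for exotica (Published)
```

The appeal of the approach is that interface text lives entirely in configuration, so a requirement change becomes a JSON edit rather than a code change.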
Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step-by-step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.
Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc
EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…
This thesis reports on user-interface design guidelines for usability and accessibility, their connection to human-computer interaction, and their implementation in web design. The goal is to study the theoretical background of the design rules and apply them in designing a real-world website. The analysis of Jakobson's communication theory applied in web design and its implications in the design guidelines of visibility, affordance, feedback, simplicity, structure, consisten...
Matthew L. Clark; T. Mitchell Aide
Web-based applications that integrate geospatial information, or the geoweb, offer exciting opportunities for remote sensing science. One such application is a Web-based system for automating the collection of reference data for producing and verifying the accuracy of land-use/land-cover (LULC) maps derived from satellite imagery. Here we describe the capabilities and technical components of the Virtual Interpretation of Earth Web-Interface Tool (VIEW-IT), a collaborative browser-based tool f...
Data standards, open data, and open software are also key parts of flow cytometry bioinformatics. Data standards include the widely adopted Flow Cytometry Standard (FCS), defining how data from cytometers should be stored, but also several new standards under development by the International Society for Advancement of Cytometry (ISAC) to aid in storing more detailed information about experimental design and analytical steps. Open data is slowly growing with the opening of the CytoBank database in 2010 and FlowRepository in 2012, both of which allow users to freely distribute their data, and the latter of which has been recommended as the preferred repository for MIFlowCyt-compliant data by ISAC. Open software is most widely available in the form of a suite of Bioconductor packages, but is also available for web execution on the GenePattern platform.
Harwood, A; The ATLAS collaboration; Lehmann Miotto, G
This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, there currently is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple diversely structured providers. It is capable of aggregating and correlating the data according to user-defined criteria. Finally it v...
Harwood, A; The ATLAS collaboration; Magnoni, L; Vandelli, W; Savu, D
This paper describes a new approach to the visualization of stored information about the operation of the ATLAS Trigger and Data Acquisition system. ATLAS is one of the two general-purpose detectors positioned along the Large Hadron Collider at CERN. Its data acquisition system consists of several thousand computers interconnected via multiple gigabit Ethernet networks that are constantly monitored via different tools. Operational parameters ranging from the temperature of the computers to the network utilization are stored in several databases for later analysis. Although the ability to view these data-sets individually is already in place, currently there is no way to view this data together, in a uniform format, from one location. The ADAM project has been launched in order to overcome this limitation. It defines a uniform web interface to collect data from multiple providers that have different structures. It is capable of aggregating and correlating the data according to user-defined criteria. Finally, ...
Golonka, Piotr; Fabian, Wojciech; Gonzalez-Berges, Manuel; Jasiun, Piotr; Varela-Rodriguez, Fernando
Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro
Global cloud frameworks for bioinformatics research databases have become huge and heterogeneous; solutions face various diametrically opposed challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life-sciences databases holding 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linke...
Palazov, A.; Stefanov, A.; Marinova, V.; Slabakova, V.
The speed and ease with which users can identify, locate, access, exchange and use oceanographic and marine data and information are fundamental to the success of a marine data and information management system and to effective support of marine and maritime economic activities. Many activities and bodies have been identified as marine data and information users, such as: science, government and local authorities, port authorities, shipping, marine industry, fishery and aquaculture, the tourist industry, environmental protection, coast protection, oil-spill response, search and rescue, national security, civil protection, and the general public. On the other hand, diverse sources of real-time and historical marine data and information exist; they are generally fragmented, distributed in different places and sometimes unknown to users. The marine web portal concept is to build a common web-based interface that provides users fast and easy access to all available marine data and information sources, both historical and real-time, such as: marine databases, observing systems, forecasting systems, atlases, etc. The service is regionally oriented to meet user needs. The main advantage of the portal is that it provides an at-a-glance overview of all available marine data and information and directs users to easily discover data and information of interest. It is planned to provide a personalization facility, which will give users the means to tailor the visualization to their personal needs.
Mandl, Daniel; Cappelaere, Pat; Frye, Stuart; Sohlberg, Rob; Ly, Vuong; Chien, Steve; Sullivan, Don
A SensorWeb is a set of sensors, which can consist of ground, airborne and space-based sensors, interoperating in an automated or autonomous collaborative manner. The NASA SensorWeb toolbox, developed at NASA/GSFC in collaboration with NASA/JPL, NASA/Ames and other partners, is a set of software and standards that (1) enables users to create virtual private networks of sensors over open networks; (2) provides the capability to orchestrate their actions; (3) provides the capability to customize the output data products; and (4) enables automated delivery of the data products to the user's desktop. A recent addition to the SensorWeb Toolbox is a new user interface, together with web services co-resident with the sensors, to enable rapid creation, loading and execution of new algorithms for processing sensor data. The web service, along with the user interface, follows the Open Geospatial Consortium (OGC) standard called Web Coverage Processing Service (WCPS). This presentation will detail the prototype that was built and how the WCPS was tested against a HyspIRI flight testbed and an elastic computation cloud on the ground with EO-1 data. HyspIRI is a future NASA decadal mission. The elastic computation cloud stores EO-1 data and runs software similar to Amazon online shopping.
As a newborn interdisciplinary field, bioinformatics is receiving increasing attention from biologists, computer scientists, statisticians, mathematicians and engineers. This paper briefly introduces the birth, importance, and extensive applications of bioinformatics in the different fields of biological research. A major challenge in bioinformatics - the unraveling of gene regulation - is discussed in detail.
The use of 3D graphics on the web has been limited by several factors, such as the inadequate quality of 3D web graphics and the inability of different web browsers to support the different 3D technologies. The development of modern web browsers and of 3D technologies and standards that do not demand the use of plug-ins, such as HTML5 and WebGL, has facilitated the use and development of 3D web applications. Although 3D web applications have been used in several fields, such as gaming, edu...
Colini, L.; Doumaz, F.; Spinetti, C.; Mazzarini, F.; Favalli, M.; Isola, I.; Buongiorno, M. F.; Ananasso, C.
In the frame of the future Italian Space Agency (ASI) space mission PRISMA (Precursore IperSpettrale della Missione Applicativa), the Istituto Nazionale di Geofisica e Vulcanologia (INGV) coordinates the scientific project ASI-AGI (Analisi Sistemi Iperspettrali per le Applicazioni Geofisiche Integrate), aimed at studying hyperspectral volcanic applications and at identifying and characterizing a vicarious validation and calibration site for hyperspectral space missions. PRISMA is an Earth observation system with innovative electro-optical instrumentation which combines a hyperspectral sensor with a panchromatic medium-resolution camera. These instruments offer the scientific community and users many applications in the fields of environmental monitoring, risk management, crop classification, pollution control, and security. In this context Mt. Etna (Italy) has been chosen as the main site for testing the sensor's capability to assess volcanic risk. The volcanic calibration and validation activities comprise the management of a large amount of in situ hyperspectral data collected during the last 10 years. The usability and interoperability of these datasets represent a task of the ASI-AGI project. For this purpose a database has been created to collect all the spectral signatures of the measured volcanic surfaces. This process began with the creation of a metadata structure compliant with those of standard spectral libraries such as the USGS ones. Each spectral signature is described in a table containing ancillary data such as a location map of where it was collected, a description of the target selected, etc. The relational database structure has been developed to be WOVOdat-compliant. Specific tables have been formatted for each type of measurement, instrument and target in order to query the database through a user-friendly web interface. The interface has an upload area to populate the database and a visualization tool that allows downloading the ASCII spectral
Li, Ping; Cunningham, Krystal
The APA Style Converter is a Web-based tool with which authors may prepare their articles in APA style according to the APA Publication Manual (5th ed.). The Converter provides a user-friendly interface that allows authors to copy and paste text and upload figures through the Web, and it automatically converts all texts, references, and figures to a structured article in APA style. The output is saved in PDF or RTF format, ready for either electronic submission or hardcopy printing. PMID:16171194
Abstract Background Expressed sequence tag (EST) collections are composed of a high number of single-pass, redundant, partial sequences, which need to be processed, clustered, and annotated to remove low-quality and vector regions, eliminate redundancy and sequencing errors, and provide biologically relevant information. In order to provide a suitable way of performing the different steps in the analysis of ESTs, flexible computation pipelines adapted to the local needs of specific EST projects have to be developed. Furthermore, EST collections must be stored in highly structured relational databases available to researchers through user-friendly interfaces which allow efficient and complex data mining, thus offering maximum capabilities for their full exploitation. Results We have created EST2uni, an integrated, highly configurable EST analysis pipeline and data mining software package that automates the pre-processing, clustering, annotation, database creation, and data mining of EST collections. The pipeline uses standard EST analysis tools and the software has a modular design to facilitate the addition of new analytical methods and their configuration. Currently implemented analyses include functional and structural annotation, SNP and microsatellite discovery, integration of previously known genetic marker data and gene expression results, and assistance in cDNA microarray design. It can be run in parallel on a PC cluster in order to reduce the time necessary for the analysis. It also creates a web site linked to the database, showing collection statistics, with complex query capabilities and tools for data mining and retrieval. Conclusion The software package presented here provides an efficient and complete bioinformatics tool for the management of EST collections which is very easy to adapt to the local needs of different EST projects. The code is freely available under the GPL license and can be obtained at http
This project is about developing a web-based interface for accessing the Marine Contamination database records. The system contains information pertaining to the occurrence of contaminants and natural elements in the marine ecosystem, based on samples taken at various locations within the shores of Malaysia in the form of sediment, seawater and marine biota. It represents a systematic approach to recording, storing and managing the vast amount of marine environmental data collected as output of the Marine Contamination and Transport Phenomena Research Project since 1990. The resultant collection of data is to form the background information (or baseline data) which could later be used to monitor the level of marine environmental pollution around the country. Data collected from the various sampling and related laboratory activities were previously kept in conventional forms such as Excel worksheets and other documents, in digital and/or paper form. With the help of modern database storage and retrieval techniques, the task of storage and retrieval of data has been made easier and more manageable. It can also provide easy access to other parties who are interested in the data. (author)
Abouelhoda, Mohamed; Ghanem, Moustafa
Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the term "data mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages, fighting spam mails, detecting plagiarism, and spotting duplication in software systems.
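The two notions of similarity described above can be illustrated with a toy string-mining sketch: k-mers repeated within one sequence (the intra-molecular case) and k-mers shared between two sequences (the inter-molecular case). This is an illustrative example only, with invented input sequences, not a method from the paper.

```python
from collections import Counter

def kmers(seq: str, k: int) -> set:
    """All length-k segments (k-mers) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def shared_kmers(a: str, b: str, k: int = 3) -> list:
    """Inter-molecular similarity: segments common to two sequences."""
    return sorted(kmers(a, k) & kmers(b, k))

def repeated_kmers(seq: str, k: int = 3) -> list:
    """Intra-molecular similarity: segments occurring more than once
    within a single sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return sorted(m for m, n in counts.items() if n > 1)

print(shared_kmers("ACGTACGT", "TTACGTTT"))  # → ['ACG', 'CGT', 'TAC']
print(repeated_kmers("ACGTACGT"))            # → ['ACG', 'CGT']
```

Real sequence analysis tools scale these ideas with suffix arrays and index structures, but the underlying question answered is the same.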
The World Wide Web is rapidly being adopted by libraries and database vendors as a front end for bibliographic databases, reflecting the fact that the Web browser is becoming a universal tool. When the Web is also used for bibliographic instruction about these Web-based resources, it is possible to build tutorials incorporating actual screens from a database. The result is a realistic, highly interactive simulation of database searching that can provide a very detailed level of instruction.
The integration of the Web with databases is currently the mainstream direction in the development of web and database technology. Based on an analysis of the B/S (browser/server) architecture, this paper studies several currently popular interface technologies, including CGI (Common Gateway Interface), Web API (Application Programming Interface), JDBC (Java Database Connectivity), and ASP (Active Server Pages).
Huang, Kuo Hung
Although Web-based instruction provides learners with sufficient resources for self-paced learning, previous studies have confirmed that browsing navigation-oriented Web sites possibly hampers users' comprehension of information. Web sites designed as "categories of materials" for navigation demand more cognitive effort from users to orient their…
Mesbah, A.; Van Deursen, A.; Lenselink, S.
Lemer, C.; Antezana, E; Couche, F; Fays, F; Santolaria, X; Janky, R.; Deville, Yves; Richelle, J; Wodak, SJ
The aMAZE LightBench (http://www.amaze.ulb.ac.be/) is a web interface to the aMAZE relational database, which contains information on gene expression, catalysed chemical reactions, regulatory interactions, protein assembly, as well as metabolic and signal transduction pathways. It allows the user to browse the information in an intuitive way, which also reflects the underlying data model. Moreover, links are provided to literature references and, whenever appropriate, to external databases.
Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh
In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current res...
Ms. Veena Singh Bhadauriya
The rapid growth of web applications has increased researchers' interest in this area. The whole world is now connected by computer networks, and applications accessed via a web browser over a network, called web applications, are widely used for communication and data transfer. Web caching is a well-known strategy for improving the performance of Web-based systems by keeping Web objects that are likely to be used in the near future in a location closer to the user. Web caching mechanisms are implemented at three levels: client level, proxy level and origin server level. Significantly, proxy servers play a key role between users and web sites in reducing the response time of user requests and saving network bandwidth. Therefore, to achieve better response times, an efficient caching approach should be built into a proxy server. This paper uses FP-growth, the weighted rule mining concept and a Markov model for fast and frequent web prefetching in order to improve the hit ratio of web pages and speed up users' visits.
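As a rough illustration of the Markov-model side of such prefetching, the sketch below builds first-order page-transition counts from past sessions and predicts the most likely next page, i.e. the prefetch candidate. It is a minimal toy, not the paper's combined FP-growth/weighted-rule scheme, and the session data are invented.

```python
from collections import Counter, defaultdict

def build_model(sessions):
    """First-order Markov model: count page-to-page transitions
    observed in past user sessions."""
    model = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            model[cur][nxt] += 1
    return model

def predict_next(model, page):
    """Most frequently following page -- the candidate to prefetch."""
    followers = model.get(page)
    return followers.most_common(1)[0][0] if followers else None

# Invented browsing sessions for illustration.
sessions = [["home", "news", "sport"],
            ["home", "news", "weather"],
            ["home", "shop"],
            ["news", "sport"]]
model = build_model(sessions)
print(predict_next(model, "news"))  # → sport
```

A proxy would consult such a model on each request and fetch the predicted page into its cache before the user asks for it, raising the hit ratio when the prediction is right.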
Thampi, Sabu M.
Bioinformatics is a new discipline that addresses the need to manage and interpret the data massively generated by genomic research in the past decade. This discipline represents the convergence of genomics, biotechnology and information technology, and encompasses analysis and interpretation of data, modeling of biological phenomena, and development of algorithms and statistics. This article presents an introduction to bioinformatics.
Hassanzadeh, Hamed; Keyvanpour, Mohammad Reza
In recent years, the Semantic Web has become a topic of active research in several fields of computer science and has been applied in a wide range of domains such as bioinformatics, life sciences, and knowledge management. The two fast-developing research areas, the Semantic Web and web mining, can complement each other, and their different techniques can be used jointly or separately to solve the issues in both areas. In addition, since shifting from the current web to the Semantic Web mainly depends on the enhance...
Abstract Background Recent advances in genomic sequencing have enabled the use of genome sequencing in standard biological and biotechnological research projects. The challenge is how to integrate the large amount of data in order to gain novel biological insights. One way to leverage sequence data is to use genome-scale metabolic models. We have therefore designed and implemented a bioinformatics platform which supports the development of such metabolic models. Results MEMOSys (MEtabolic MOdel research and development System) is a versatile platform for the management, storage, and development of genome-scale metabolic models. It supports the development of new models by providing a built-in version control system which offers access to the complete developmental history. Moreover, the integrated web board, the authorization system, and the definition of user roles allow collaborations across departments and institutions. Research on existing models is facilitated by a search system, references to external databases, and a feature-rich comparison mechanism. MEMOSys provides customizable data exchange mechanisms using the SBML format to enable analysis in external tools. The web application is based on the Java EE framework and offers an intuitive user interface. It currently contains six annotated microbial metabolic models. Conclusions We have developed a web-based system designed to provide researchers a novel application facilitating the management and development of metabolic models. The system is freely available at http://www.icbi.at/MEMOSys.
Kamel Boulos, Maged N
Abstract Background On 21 July 2004, the Healthcare Commission http://www.healthcarecommission.org.uk/ released its annual star ratings of the performance of NHS Primary Care Trusts (PCTs) in England for the year ending March 2004. The Healthcare Commission started work on 1 April 2004, taking over all the functions of the former Commission for Health Improvement http://www.chi.nhs.uk/, which had released the corresponding PCT ratings for 2002/2003 in July 2003. Results We produced two Web-based interactive maps of PCT star ratings, one for 2003 and the other for 2004 http://healthcybermap.org/PCT/ratings/, with handy functions like map search (by PCT name or part of it). The maps feature a colour-blind-friendly quadri-colour scheme to represent PCT star ratings. Clicking a PCT on any of the maps will display the detailed performance report of that PCT for the corresponding year. Conclusion Using our Web-based interactive maps, users can visually appreciate at a glance the distribution of PCT performance across England. They can visually compare the performance of different PCTs in the same year and also between 2003 and 2004 (by switching between the synchronised 'PCT Ratings 2003' and 'PCT Ratings 2004' themes). The performance of many PCTs improved in 2004, whereas some PCTs achieved lower ratings in 2004 compared to 2003. Web-based interactive geographical interfaces offer an intuitive way of indexing, accessing, mining, and understanding large healthcare information sets describing geographically differentiated phenomena. By acting as an enhanced alternative or supplement to purely textual online interfaces, interactive Web maps can further empower organisations and decision makers.
P. Šimek; J. Jarolímek; J. Masner
The paper treats the process of creating a web application output optimal for mobile devices in the form of a responsive layout, with focus on an agrarian web portal. The utilization and testing of user experience (UX) techniques in four steps - UX, research, design and testing - were of great benefit. Two groups of five people representing the task group were employed for the research and testing. The resulting responsive layout was developed with the emphasis on the ergonomic layout ...
Hendry Setyawans Sutedjo; Sritomo Wignjosoebroto; Arief Rahman
Information on a website is expected to be conveyed to and received by information seekers easily. In education, information on the web is likewise expected to reach its users, the aim being that online communication media such as websites can help students absorb the knowledge delivered through online media. How easily the information is grasped is indicated by how usable the website is. To find out...
Mitchell, John; Murray-Rust, Peter; Rzepa, Henry
Chemical information is now seen as critical for most areas of the life sciences. But unlike bioinformatics, where data is openly available and freely re-usable, most chemical information is closed and cannot be re-distributed without permission. This has led to a failure to adopt modern informatics and software techniques and therefore a paucity of chemistry in bioinformatics. New technology, however, offers the hope of making chemical data (compounds and properties) free during the auth...
The paper treats the process of creating a web application output optimal for mobile devices in the form of a responsive layout, with focus on an agrarian web portal. The utilization and testing of user experience (UX) techniques in four steps - UX, research, design and testing - were of great benefit. Two groups of five people representing the task group were employed for the research and testing. The resulting responsive layout was developed with the emphasis on the ergonomic layout of control elements and content, a conservative design, the securing of content accessibility for disabled users and the possibility of fast and simple updating. The resulting knowledge is applicable to web information sources in the agrarian sector (agriculture, food industry, forestry, water supply and distribution) and the development of rural areas. In wider context, this knowledge is valid in general.
Walaa Nagy; Hoda M.O. Mokhtar
Workflow systems are a typical fit for the explorative research of bioinformaticians. These systems can help bioinformaticians design and run their experiments and automatically capture and store the data generated at runtime. On the other hand, Web services are increasingly used as the preferred method for accessing and processing the information coming from the diverse life science sources. In this work we provide an efficient approach for creating bioinformatic workflows for all-serv...
Polkowski, Marcin; Grad, Marek
The passive seismic experiment "13BB Star" has been operated since mid-2013 in northern Poland and consists of 13 broadband seismic stations. One of the elements of this experiment is a dedicated on-line data acquisition system comprising both client (station) side and server side modules, with a web-based interface that allows monitoring of network status and provides tools for preliminary data analysis. The station side is controlled by an ARM Linux board that is programmed to maintain a 3G/EDGE internet connection, receive data from the digitizer, and send data to the central server along with additional auxiliary parameters such as temperatures, voltages and electric current measurements. The station side is managed by a set of easy-to-install PHP scripts. Data is transmitted securely over the SSH protocol to the central server, a dedicated Linux-based machine whose duty is receiving and processing all data from all stations, including auxiliary parameters. The server-side software is written in PHP and Python. Additionally, it allows remote station configuration and provides a web-based interface for user-friendly interaction. All collected data can be displayed for each day and station. It also allows manual creation of event-oriented plots with different filtering abilities and provides numerous status and statistic information. Our solution is very flexible and easy to modify. In this presentation we would like to share our solution and experience. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
Tengku Siti Meriam Tengku Wook; Siti Salwah Salim
There have been numerous studies on user interface guidelines, but only a few have considered specific guidelines for the design of children's interfaces. This paper reports research on specific guidelines for children, focusing on the criteria of graphic design. The objective of this research is to study guidelines for user interface design and to develop specific guidelines for children's graphic design. The criteria of graphic design are the priority of th...
Abstract Background Computational methods for problem solving need to interleave information access and algorithm execution in a problem-specific workflow. The structures of these workflows are defined by a scaffold of syntactic, semantic and algebraic objects capable of representing them. Despite the proliferation of GUIs (Graphic User Interfaces) in bioinformatics, only some of them provide workflow capabilities; surprisingly, no meta-analysis of workflow operators and components in bioinformatics has been reported. Results We present a set of syntactic components and algebraic operators capable of representing analytical workflows in bioinformatics. Iteration, recursion, the use of conditional statements, and management of suspend/resume tasks have traditionally been implemented on an ad hoc basis and hard-coded; by having these operators properly defined it is possible to use and parameterize them as generic re-usable components. To illustrate how these operations can be orchestrated, we present GPIPE, a prototype graphic pipeline generator for PISE that allows the definition of a pipeline, parameterization of its component methods, and storage of metadata in XML formats. This implementation goes beyond the macro capacities currently in PISE. As the entire analysis protocol is defined in XML, a complete bioinformatic experiment (linked sets of methods, parameters and results) can be reproduced or shared among users. Availability: http://if-web1.imb.uq.edu.au/Pise/5.a/gpipe.html (interactive), ftp://ftp.pasteur.fr/pub/GenSoft/unix/misc/Pise/ (download). Conclusion From our meta-analysis we have identified syntactic structures and algebraic operators common to many workflows in bioinformatics. The workflow components and algebraic operators can be assimilated into re-usable software components. GPIPE, a prototype implementation of this framework, provides a GUI builder to facilitate the generation of workflows and integration of heterogeneous
Berrios, Daniel C.; Keller, Richard M.
While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.
Perri, M. J.; Weber, S. H.
A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.
Groenewegen, D.M.; Visser, E.
Data validation rules constitute the constraints that data input and processing must adhere to in addition to the structural constraints imposed by a data model. Web modeling tools do not make all types of data validation explicit in their models, hampering full code generation and model expressivity.
Kitalong, Karla Saari; Hoeppner, Athena; Scharf, Meg
Library patrons familiar with Web searching conventions often find library searching to be less familiar and even intimidating. This article describes and evaluates a series of usability research studies employing two different and popular methodologies: user-centered redesign and usability testing. Card sorting and affinity mapping were used to…
Abstract Background To aid in bioinformatics data processing and analysis, an increasing number of web-based applications are being deployed. Although this is a positive circumstance in general, the proliferation of tools makes it difficult to find the right tool, or more importantly, the right set of tools that can work together to solve real complex problems. Results Magallanes (Magellan) is a versatile, platform-independent Java library of algorithms aimed at discovering bioinformatics web services and associated data types. A second important feature of Magallanes is its ability to connect available and compatible web services into workflows that can process data sequentially to reach a desired output given a particular input. Magallanes' capabilities can be exploited either through an API or directly through a graphic user interface. The Magallanes API is freely available for academic use, and together with the Magallanes application has been tested on MS-Windows™ XP and Unix-like operating systems. Detailed implementation information, including user manuals and tutorials, is available at http://www.bitlab-es.com/magallanes. Conclusion Different implementations of the same client (web page, desktop applications, web services, etc.) have been deployed and are currently in use in real installations such as the National Institute of Bioinformatics (Spain) and the ACGT-EU project. This demonstrates the utility and versatility of the software library, including the integration of novel tools in the domain, and strongly suggests its ability to facilitate the automatic discovery and composition of workflows.
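The workflow composition described in this record (chaining compatible services so that one service's output type feeds the next service's input type until a desired output is reached) can be sketched as a shortest-path search over a service graph. The service names and data types below are hypothetical, not taken from Magallanes:

```python
from collections import deque

# Toy registry: each service maps one input data type to one output data type.
# Names and types are illustrative only.
SERVICES = {
    "translate": ("DNA", "Protein"),
    "blastp":    ("Protein", "Alignment"),
    "parse":     ("Alignment", "HitTable"),
}

def compose(start_type, goal_type, services=SERVICES):
    """Breadth-first search for the shortest chain of compatible services
    that transforms start_type into goal_type; None if no chain exists."""
    queue = deque([(start_type, [])])
    seen = {start_type}
    while queue:
        dtype, chain = queue.popleft()
        if dtype == goal_type:
            return chain
        for name, (inp, out) in services.items():
            if inp == dtype and out not in seen:
                seen.add(out)
                queue.append((out, chain + [name]))
    return None

print(compose("DNA", "HitTable"))  # ['translate', 'blastp', 'parse']
```

Breadth-first search guarantees the chain found is the shortest one, which matters when several redundant services could bridge the same pair of types.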
With the advancement of the internet and web-based applications, surveys via the internet have been increasingly utilized due to their convenience and time savings. This article studied the influence of five web-design techniques (screen design, response format, logo type, progress indicator, and image display) on the interest of respondents. Two screen display designs from each design technique were made for selection. The focus group discussion technique was conducted on four groups of generation-Y participants with different characteristics. Open discussion was performed to identify additional design factors that affect interest in the questionnaire. The study found the degree of influence of all related design factors can be ranked as screen design, response format, font type, logo type, background color, progress indicator, and image display, respectively.
Arakawa, Kazuharu; Kido, Nobuhiro; Oshita, Kazuki; Tomita, Masaru
G-language genome analysis environment (G-language GAE) contains more than 100 programs that focus on the analysis of bacterial genomes, including programs for the identification of binding sites by means of information theory, analysis of nucleotide composition bias and the distribution of particular oligonucleotides, calculation of codon bias and prediction of expression levels, and visualization of genomic information. We have provided a collection of web services for these programs by uti...
Abstract Chemical information is now seen as critical for most areas of the life sciences. But unlike bioinformatics, where data is openly available and freely re-usable, most chemical information is closed and cannot be re-distributed without permission. This has led to a failure to adopt modern informatics and software techniques, and therefore a paucity of chemistry in bioinformatics. New technology, however, offers the hope of making chemical data (compounds and properties) free during the authoring process. We argue that the technology is already available; we require a collective agreement to enhance publication protocols.
The rapidly changing field of bioinformatics is fuelling the need for suitably trained personnel with skills in relevant biological "sub-disciplines" such as proteomics, transcriptomics and metabolomics, etc. But because of the complexity--and sheer weight of data--associated with these new areas of biology, many school teachers feel…
The integration of services is transparent, meaning that users no longer face millions of Web services, need not care where the required data is stored, and do not need to learn how to obtain these data. In this paper, we analyze the uncertainty of schema matching and then propose a series of similarity measures. To reduce the cost of execution, we propose a type-based optimization method and a schema-matching pruning method for numeric data. Based on the above analysis, we propose an uncertain schema matching method. Experiments prove the effectiveness and efficiency of our method.
Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
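The GC skew statistic this record mentions is conventionally computed per window as (G - C) / (G + C); a minimal sketch, with the window size and test sequence chosen purely for illustration:

```python
def gc_skew(seq, window=4):
    """Per-window GC skew, (G - C) / (G + C), over non-overlapping windows.
    Returns None for a window containing no G or C (skew undefined)."""
    skews = []
    for i in range(0, len(seq) - window + 1, window):
        w = seq[i:i + window].upper()
        g, c = w.count("G"), w.count("C")
        skews.append((g - c) / (g + c) if g + c else None)
    return skews

print(gc_skew("GGGCATATCCCG", window=4))  # [0.5, None, -0.5]
```

Plotting the cumulative skew along a bacterial chromosome is the classic way to locate the origin and terminus of replication, which is why an interface exposing this quantity is useful.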
The EPICS framework still uses command-line interfaces for distributed control systems development. Each EPICS application provides a command shell as part of its core functionality, which is used to parse and execute commands from the startup script. The EPICS shell is an invaluable tool, but becomes an obstacle once we want to deploy the application into a production environment. The application is required to run as a daemon, because we need to guarantee that it is independent of other p...
Maddox, Marlo; Zheng, Yihua; Rastaetter, Lutz; Taktakishvili, A.; Mays, M. L.; Kuznetsova, M.; Lee, Hyesook; Chulaki, Anna; Hesse, Michael; Mullinix, Richard; Berrios, David
The NASA GSFC Space Weather Center (http://swc.gsfc.nasa.gov) is committed to providing forecasts, alerts, research, and educational support to address NASA's space weather needs - in addition to the needs of the general space weather community. We provide a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, custom space weather alerts and products, weekly summaries and reports, and most recently - video casts. There are many challenges in providing accurate descriptions of past, present, and expected space weather events - and the Space Weather Center at NASA GSFC employs several innovative solutions to provide access to a comprehensive collection of both observational data, as well as space weather model/simulation data. We'll describe the challenges we've faced with managing hundreds of data streams, running models in real-time, data storage, and data dissemination. We'll also highlight several systems and tools that are utilized by the Space Weather Center in our daily operations, all of which are available to the general community as well. These systems and services include a web-based application called the Integrated Space Weather Analysis System (iSWA http://iswa.gsfc.nasa.gov), two mobile space weather applications for both IOS and Android devices, an external API for web-service style access to data, google earth compatible data products, and a downloadable client-based visualization tool.
Information and communication technology plays an essential role in people's day-to-day business activities. People receive most of their knowledge by processing, recording and transferring necessary information while surfing Internet websites. The Internet, as an essential part of information technology (IT), has grown remarkably. Nowadays, there has been a significant amount of effort in Iran to develop e-commerce. This paper studies the effects of internet environment features on internet purchase intention. The study divides the internet environment into demographic and technological parts and, for each, investigates features such as internet connection speed, connectivity model, web browser, type of payment, user's income, education and gender, frequency of online usage per week, and users' goal for using the internet. Using the logistic regression technique, the study has determined meaningful effects of income, education, connection type, browser and goal on consumers' behavior.
L Jegatha Deborah; R Sathiyaseelan; S Audithan; P Vijayakumar
The e-learners' excellence can be improved by recommending suitable e-contents available on e-learning servers, based on investigating their learning styles. The learning styles have to be predicted carefully, because psychological balance is variable in nature and e-learners are diversified by learning patterns, environment, time and mood. Moreover, the knowledge about the learners used for learning style prediction is uncertain in nature. This paper identifies the Felder-Silverman learning style model as a suitable model for learning style prediction, especially in web environments, and proposes to use fuzzy rules to handle the uncertainty in learning style predictions. The evaluation used Gaussian-membership-function-based fuzzy logic for 120 students, tested on learning the C programming language, and it was observed that the proposed model improved prediction accuracy significantly.
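The Gaussian membership function at the core of such a fuzzy classifier can be sketched as follows; the fuzzy set names, centers and widths below are illustrative assumptions (the Felder-Silverman model scores each dimension on a scale from -11 to +11), not the parameters used in the paper:

```python
import math

def gaussian_mf(x, center, sigma):
    """Degree of membership of x in a fuzzy set with the given center/width."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def classify(score):
    """Assign a Felder-Silverman dimension score in [-11, 11] to the fuzzy
    set with the highest membership degree. Set parameters are illustrative."""
    degrees = {
        "active":     gaussian_mf(score, -11, 5.0),
        "balanced":   gaussian_mf(score, 0, 5.0),
        "reflective": gaussian_mf(score, 11, 5.0),
    }
    return max(degrees, key=degrees.get), degrees

label, degrees = classify(-8)
print(label)  # 'active'
```

Because the Gaussians overlap, a borderline score contributes partial membership to two sets at once, which is exactly how fuzzy rules absorb the uncertainty the abstract describes.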
This diploma work has been done as a part of the EC funded projects, MUSIC VK1- CT-2000-00058 and SmartDoc IST-2000-28137. The objective was to create an intuitive and easy to use visualization of flood forecasting data provided in the MUSIC project. This visualization is focused on the Visual User Interface and is built on small, reusable components. The visualization, FloodViewer, is small enough to ensure the possibility of distribution via the Internet, yet capable of enabling collaborati...
Workflow systems are a typical fit for the explorative research of bioinformaticians. These systems can help bioinformaticians design and run their experiments and automatically capture and store the data generated at runtime. On the other hand, Web services are increasingly used as the preferred method for accessing and processing the information coming from the diverse life science sources. In this work we provide an efficient approach for creating bioinformatic workflows for all-service architecture systems (i.e., all system components are services). This architecture style simplifies user interaction with workflow systems and facilitates both the change of individual components and the addition of new components to adapt to other workflow tasks if required. We finally present a case study for the bioinformatics domain to elaborate the applicability of our proposed approach.
Alva, V.; Nam, S.; Söding, J.; Lupas, A.
The MPI Bioinformatics Toolkit (http://toolkit.tuebingen.mpg.de) is an open, interactive web service for comprehensive and collaborative protein bioinformatic analysis. It offers a wide array of interconnected, state-of-the-art bioinformatics tools to experts and non-experts alike, developed both externally (e.g. BLAST+, HMMER3, MUSCLE) and internally (e.g. HHpred, HHblits, PCOILS). While a beta version of the Toolkit was released 10 years ago, the current production-level release has been av...
Qiao, Li-An; Zhu, Jing; Liu, Qingyan; Zhu, Tao; Song, Chi; Lin, Wei; Wei, Guozhu; Mu, Lisen; Tao, Jiang; Zhao, Nanming; Yang, Guangwen; Liu, Xiangjun
The integration of bioinformatics resources worldwide is one of the major concerns of the biological community. We herein established the BOD (Bioinformatics on demand) system to use Grid computing technology to set up a virtual workbench via a web-based platform, to assist researchers performing customized comprehensive bioinformatics work. Users will be able to submit entire search queries and computation requests, e.g. from DNA assembly to gene prediction and finally protein folding, from ...
Wang, May Dongmei
During 2012, next-generation sequencing (NGS) attracted great attention in the biomedical research community, especially for personalized medicine. Also, third-generation sequencing has become available. Therefore, state-of-the-art sequencing technology and analysis are reviewed in this Bioinformatics spotlight on 2012. Next-generation sequencing (NGS) is a high-throughput nucleic acid sequencing technology with wide dynamic range and single-base resolution. The full promise of NGS depends on the optimization of NGS platforms, sequence alignment and assembly algorithms, data analytics, novel algorithms for integrating NGS data with existing genomic, proteomic, or metabolomic data, and quantitative assessment of NGS technology in comparison to more established technologies such as microarrays. NGS technology has been predicted to become a cornerstone of personalized medicine. It is argued that NGS is a promising field for motivated young researchers who are looking for opportunities in bioinformatics. PMID:23192635
Johnson, Kathy A.
For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.
Analyzing high-throughput genomics data is a complex and compute-intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best-practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet the demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise. We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic. This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints
Abstract This paper covers the use of depth sensors such as the Microsoft Kinect and ASUS Xtion to provide a natural user interface (NUI) for controlling 3-D (three-dimensional) virtual globes such as Google Earth (including its Street View mode), Bing Maps 3D, and NASA World Wind. The paper introduces the Microsoft Kinect device, briefly describing how it works (the underlying technology by PrimeSense), as well as its market uptake and application potential beyond its original intended purpose as a home entertainment and video game controller. The different software drivers available for connecting the Kinect device to a PC (Personal Computer) are also covered, and their comparative pros and cons briefly discussed. We survey a number of approaches and application examples for controlling 3-D virtual globes using the Kinect sensor, then describe Kinoogle, a Kinect interface for natural interaction with Google Earth, developed by students at Texas A&M University. Readers interested in trying out the application on their own hardware can download a Zip archive (included with the manuscript as additional files 1, 2 & 3) that contains a Kinoogle installation package for Windows PCs. Finally, we discuss some usability aspects of Kinoogle and similar NUIs for controlling 3-D virtual globes (including possible future improvements), and propose a number of unique, practical 'use scenarios' where such NUIs could prove useful in navigating a 3-D virtual globe, compared to conventional mouse/3-D mouse and keyboard-based interfaces.
Burr, Tom L [Los Alamos National Laboratory]
Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTUs). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for even a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on the aspect of bioinformatics that includes the study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
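The "huge number of possible trees" noted in this record is made concrete by the standard combinatorial result that the number of distinct unrooted binary trees on n taxa is the double factorial (2n - 5)!! for n >= 3; a short sketch:

```python
def num_unrooted_trees(n):
    """Number of distinct unrooted binary (bifurcating) trees on n taxa:
    (2n - 5)!! = 1 * 3 * 5 * ... * (2n - 5), for n >= 3."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in (4, 10, 20):
    print(n, num_unrooted_trees(n))
# 4 taxa -> 3 trees; 10 taxa -> 2,027,025 trees; 20 taxa -> ~2.2e20 trees
```

Even at 20 taxa exhaustive search is hopeless, which is why heuristic search over tree space dominates practical phylogenetics software.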
Delattre, Hadrien; Souiai, Oussema; Fagoonee, Khema; Guerois, Raphaël; Petit, Marie-Agnès
Distant homology search tools are of great help in predicting viral protein functions. However, due to the lack of profile databases dedicated to viruses, they can lack sensitivity. We constructed HMM profiles for more than 80,000 proteins from both phages and archaeal viruses, and performed all pairwise comparisons with the HHsearch program. The whole resulting database can be explored through a user-friendly "Phagonaute" interface to help predict functions. Results are displayed together with their genetic context, to strengthen inferences based on remote homology. Beyond function prediction, this tool permits detection of co-occurrences, often indicative of proteins completing a task together, and observation of conserved patterns across large evolutionary distances. As a test, Herpes simplex virus I was added to Phagonaute, and 25% of its proteome matched bacterial or archaeal viral protein counterparts. Phagonaute should therefore help virologists in their quest for protein functions and evolutionary relationships. PMID:27254594
Barth, A.; Alvera-Azcárate, A.; Troupin, C.; Ouberdous, M.; Beckers, J.-M.
Hidden Web databases contain much more searchable information than Surface Web databases. If the query interfaces on the Deep Web are integrated, the recall and precision of web information retrieval will be greatly improved. This paper discusses clustering analysis for the query schema integration problem. Integrating query interface schemas costs less than integrating Deep Web data sources directly.
Gelbart, Hadas; Yarden, Anat
Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…
Keith, J. Brandon; Fennick, Jacob R.; Junkermeier, Chad E.; Nelson, Daniel R.; Lewis, James P.
FIREBALL is an ab initio technique for fast local orbital simulations of nanotechnological, solid state, and biological systems. We have implemented a convenient interface for new users and software architects in the platform-independent Java language to access FIREBALL's unique and powerful capabilities. The graphical user interface can be run directly from a web server or from within a larger framework such as the Computational Science and Engineering Online (CSE-Online) environment or the Distributed Analysis of Neutron Scattering Experiments (DANSE) framework. We demonstrate its use for high-throughput electronic structure calculations and a multi-100-atom quantum molecular dynamics (MD) simulation. Program summary: Program title: FireballUI. Catalogue identifier: AECF_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECF_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 279,784. No. of bytes in distributed program, including test data, etc.: 12,836,145. Distribution format: tar.gz. Programming language: Java. Computer: PC and workstation. Operating system: the GUI will run under Windows, Mac and Linux; executables for Mac and Linux are included in the package. RAM: 512 MB. Word size: 32 or 64 bits. Classification: 4.14. Nature of problem: setting up and running many simulations (all of the same type) from the command line is a slow process, yet most research-quality codes, including the ab initio tight-binding code FIREBALL, are designed to run from the command line; a method is needed for quickly and efficiently setting up and running a host of simulations. Solution method: we have created a graphical user interface for use with the FIREBALL code. Once the user has created the files containing the atomic coordinates for each system that they are
Bioinformatics is an emerging interdisciplinary research field in which mathematics, computer science and biology meet. In this thesis, bioinformatic methods for analysis of functional and structural properties among proteins are presented. I have developed and applied bioinformatic methods to the enzyme superfamily of short-chain dehydrogenases/reductases (SDRs), coenzyme-binding enzymes of the Rossmann fold type, and amyloid-forming proteins and peptides. The basis...
Kim, Ju Han
Bioinformatics is a rapidly emerging field of biomedical research. A flood of large-scale genomic and postgenomic data means that many of the challenges in biomedical research are now challenges in computational science. Clinical informatics has long developed methodologies to improve biomedical research and clinical care by integrating experimental and clinical information systems. The informatics revolution in both bioinformatics and clinical informatics will eventually change the current practice of medicine, including diagnostics, therapeutics, and prognostics. Postgenome informatics, powered by high-throughput technologies and genomic-scale databases, is likely to transform our biomedical understanding forever, in much the same way that biochemistry did a generation ago. This paper describes how these technologies will impact biomedical research and clinical care, emphasizing recent advances in biochip-based functional genomics and proteomics. Basic data preprocessing with normalization and filtering, primary pattern analysis, and machine-learning algorithms are discussed. Use of integrative biochip informatics technologies, including multivariate data projection, gene-metabolic pathway mapping, automated biomolecular annotation, text mining of factual and literature databases, and the integrated management of biomolecular databases, are also discussed. PMID:12544491
de Groot Joost CW
Background: Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results: We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: (1) a web-based graphical user interface (GUI) that enables a pipeline operator to manage the system; (2) the Scheduler, which forms the functional core of the system, tracks what data enters the system and determines what jobs must be scheduled for execution; and (3) the Executor, which searches for scheduled jobs and executes them on a compute cluster. Conclusion: The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high-throughput, flexible bioinformatics pipelines.
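The Scheduler/Executor split described in that abstract can be sketched in miniature: a scheduler selects jobs whose inputs are all available, and an executor runs them and publishes their outputs. The following Python sketch is illustrative only; the names (`Job`, `schedule`, `execute`) are invented for the example and are not taken from the Cyrille2 code base.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    inputs: set   # data the job consumes
    outputs: set  # data the job produces
    done: bool = False

def schedule(jobs, available):
    """Scheduler step: jobs whose inputs are all available and that have not run."""
    return [j for j in jobs if not j.done and j.inputs <= available]

def execute(job, available):
    """Executor step: stand-in for cluster submission; publishes the job's outputs."""
    job.done = True
    available |= job.outputs

def run_pipeline(jobs, initial_data):
    """Alternate scheduling and execution until no job is ready."""
    available = set(initial_data)
    order = []
    ready = schedule(jobs, available)
    while ready:
        for job in ready:
            execute(job, available)
            order.append(job.name)
        ready = schedule(jobs, available)
    return order

jobs = [
    Job("normalize", {"raw"}, {"norm"}),
    Job("analyze", {"norm"}, {"stats"}),
    Job("report", {"stats", "norm"}, {"report"}),
]
print(run_pipeline(jobs, {"raw"}))  # jobs run in dependency order
```

A real system would persist job state in a database and poll a cluster queue, but the data-driven readiness test is the core of the design.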
Barth, A.; Alvera-Azcárate, A.; Troupin, C.; Ouberdous, M.; Beckers, J.-M.
Tolvanen, Martti; Vihinen, Mauno
Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…
With the completion of human genome sequencing, a new era of bioinformatics starts. On one hand, due to the advance of high-throughput DNA microarray technologies, functional genomics information such as gene expression data has increased exponentially and will continue to do so for the foreseeable future. Conventional means of storing, analysing and comparing related data are already overburdened. Moreover, the rich information in genes, their functions and their wide associated biological implications requires new data-analysis technologies that employ sophisticated statistical and machine learning algorithms, powerful computers and intensive interaction among different data sources, such as sequence data, gene expression data, proteomics data and metabolic pathway information, to discover complex genomic structures and functional patterns in other biological processes and to gain a comprehensive understanding of cell physiology.
Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...
Duarte Jose M
bioinformatics. We made the corresponding software implementation available to the community as an easy-to-use graphical web interface at http://www.eppic-web.org.
Om Prakash Sharma
This review article discusses the current national and international burden of lymphatic filariasis (LF) and describes the active LF elimination programmes and their achievements towards eradicating this most debilitating disease. Since bioinformatics is a rapidly growing field of biological study with an increasingly significant role in various fields of biology, we have reviewed its leading involvement in filarial research using different bioinformatics approaches, and have summarized the available existing drugs and their targets for re-examination and for avoiding drug-resistance conditions. Moreover, some novel drug targets have been assembled for further study in designing fresh and better pharmacological therapeutics. Various bioinformatics-based web resources and databases have been discussed, which may enrich filarial research.
Pallen, Mark J
Microbial bioinformatics in 2020 will remain a vibrant, creative discipline, adding value to the ever-growing flood of new sequence data, while embracing novel technologies and fresh approaches. Databases and search strategies will struggle to cope and manual curation will not be sustainable during the scale-up to the million-microbial-genome era. Microbial taxonomy will have to adapt to a situation in which most microorganisms are discovered and characterised through the analysis of sequences. Genome sequencing will become a routine approach in clinical and research laboratories, with fresh demands for interpretable user-friendly outputs. The "internet of things" will penetrate healthcare systems, so that even a piece of hospital plumbing might have its own IP address that can be integrated with pathogen genome sequences. Microbiome mania will continue, but the tide will turn from molecular barcoding towards metagenomics. Crowd-sourced analyses will collide with cloud computing, but eternal vigilance will be the price of preventing the misinterpretation and overselling of microbial sequence data. Output from hand-held sequencers will be analysed on mobile devices. Open-source training materials will address the need for the development of a skilled labour force. As we boldly go into the third decade of the twenty-first century, microbial sequence space will remain the final frontier! PMID:27471065
Global computing, the collaboration of idle PCs via the Internet in a SETI@home style, emerges as a new way of massive parallel multiprocessing with potentially enormous CPU power. Its relations to the broader, fast-moving field of Grid computing are discussed without attempting a review of the latter. This review (i) includes a short table of milestones in global computing history, (ii) lists opportunities global computing offers for bioinformatics, (iii) describes the structure of problems well suited for such an approach, (iv) analyses the anatomy of successful projects and (v) points to existing software frameworks. Finally, an evaluation of the various costs shows that global computing indeed has merit, if the problem to be solved is already coded appropriately and a suitable global computing framework can be found. Then, either significant amounts of computing power can be recruited from the general public, or--if employed in an enterprise-wide Intranet for security reasons--idle desktop PCs can substitute for an expensive dedicated cluster. PMID:12511066
Lawlor, Brendan; Walsh, Paul
There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians. PMID:25996054
Blaz, Jacquelyn W; Pearce, Patricia F
The world is becoming increasingly web-based. Health care institutions are utilizing the web for personal health records, surveillance, communication, and education; health care researchers are finding value in using the web for research subject recruitment, data collection, and follow-up. Programming languages, such as Java, require knowledge and experience usually found only in software engineers and consultants. The purpose of this paper is to demonstrate Ruby on Rails as a feasible alternative for programming questionnaires for use on the web. Ruby on Rails was specifically designed for the development, deployment, and maintenance of database-backed web applications. It is flexible, customizable, and easy to learn. With relatively little initial training, a novice programmer can create a robust web application in a small amount of time, without the need of a software consultant. The translation of the Children's Computerized Physical Activity Reporter (C-CPAR) from a local installation in Microsoft Access to a web-based format utilizing Ruby on Rails is given as an example. PMID:19592849
李决龙; 李亮; 邢建春; 杨启亮
In order to verify the quality of Web applications, this paper is the first to adopt a verification method for Web applications in intelligent building systems based on sociable interfaces and their tool TICC. A simple energy-management Web application system is used to illustrate the whole process of modeling, component-composition verification and system-property checking. The results show that the verification is carried out successfully, so the method is an appropriate verification method for Web applications.
Hall, Wendy; O'Hara, Kieron
The Semantic Web is a proposed extension to the World Wide Web (WWW) that aims to provide a common framework for sharing and reusing data across applications. The most common interfaces to the World Wide Web present it as a Web of Documents, linked in various ways including hyperlinks. But from the data point of view, each document is a black box – the data are not given independently of their representation in the document. This reduces its power, and also (as most information needs to be ex...
Background: The BioMoby project aims to identify and deploy standards and conventions that aid in the discovery, execution, and pipelining of distributed bioinformatics Web Services. As of August, 2006, approximately 680 bioinformatics resources were available through the BioMoby interoperability platform. There are a variety of clients that can interact with BioMoby-style services. Here we describe a Web-based browser-style client – Gbrowse Moby – that allows users to discover and "surf" from one bioinformatics service to the next using a semantically-aided browsing interface. Results: Gbrowse Moby is a low-throughput, exploratory tool specifically aimed at non-informaticians. It provides a straightforward, minimal interface that enables a researcher to query the BioMoby Central web service registry for data retrieval or analytical tools of interest, and then select and execute their chosen tool with a single mouse-click. The data is preserved at each step, thus allowing the researcher to manually "click" the data from one service to the next, with the Gbrowse Moby application managing all data formatting and interface interpretation on their behalf. The path of manual exploration is preserved and can be downloaded for import into automated, high-throughput tools such as Taverna. Gbrowse Moby also includes a robust data rendering system to ensure that all new data-types that appear in the BioMoby registry can be properly displayed in the Web interface. Conclusion: Gbrowse Moby is a robust, yet facile entry point for both newcomers to the BioMoby interoperability project who wish to manually explore what is known about their data of interest, as well as experienced users who wish to observe the functionality of their analytical workflows prior to running them in a high-throughput environment.
Muhammad Ali Masood
Dealing with data means grouping information into a set of categories, either in order to learn new artifacts or to understand new domains. For this purpose researchers have always looked for the hidden patterns in data that can be defined and compared with other known notions based on the similarity or dissimilarity of their attributes, according to well-defined rules. Data mining, with its tools of data classification and data clustering, is one of the most powerful techniques for handling data in such a manner that it can help researchers identify the required information. As a step towards addressing this challenge, experts have utilized clustering techniques as a means of exploring hidden structure and patterns in underlying data. With improved stability, robustness and accuracy of unsupervised data classification in many fields, including pattern recognition, machine learning, information retrieval, image analysis and bioinformatics, clustering has proven itself a reliable tool. To identify the clusters in datasets, algorithms are utilized to partition the data set into several groups based on the similarity within a group. There is no single clustering algorithm; various algorithms are utilized based on the domain of data that constitutes a cluster and the level of efficiency required. Clustering techniques are categorized based upon different approaches. This paper is a survey of a few of the many clustering techniques in data mining; five of the most common have been discussed: K-medoids, K-means, Fuzzy C-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Self-Organizing Map (SOM) clustering.
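As a concrete instance of one of the techniques surveyed there, K-means can be written in a few lines: alternate assigning each point to its nearest centroid, then moving each centroid to the mean of its cluster. This is a textbook Python sketch, not code from the surveyed paper; the sample points are invented.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal K-means on tuples of floats; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialise from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centroids, clusters

pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.8)]
centroids, clusters = kmeans(pts, 2)  # two well-separated groups of two points
```

K-medoids and Fuzzy C-means follow the same alternating scheme with a different update rule; DBSCAN and SOM are structurally different.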
This article highlights some of the basic concepts of bioinformatics and data mining. The major research areas of bioinformatics are highlighted. The application of data mining in the domain of bioinformatics is explained. It also highlights some of the current challenges and opportunities of data mining in bioinformatics.
Zhang, Chuanrong; Li, Weidong
This book covers key issues related to Geospatial Semantic Web, including geospatial web services for spatial data interoperability; geospatial ontology for semantic interoperability; ontology creation, sharing, and integration; querying knowledge and information from heterogeneous data source; interfaces for Geospatial Semantic Web, VGI (Volunteered Geographic Information) and Geospatial Semantic Web; challenges of Geospatial Semantic Web; and development of Geospatial Semantic Web applications. This book also describes state-of-the-art technologies that attempt to solve these problems such
Vancea, Andrei; Grossniklaus, Michael; Norrie, Moira C.
In most web mashup applications, the content is generated using either web feeds or an application programming interface (API) based on web services. Both approaches have limitations. Data models provided by web feeds are not powerful enough to permit complex data structures to be transmitted. APIs based on web services are usually different for each web application, and thus different implementations of the APIs are required for each web service that a web mashup application uses. We propose...
张伟; 王海立; 周杏鹏
In this paper, the current network topology of TCMS and its potential maintenance problems between heterogeneous systems are briefly described, and an advanced maintenance interface based on industrial Ethernet and Web Service technology, allowing seamless integration between train devices, TCMS and a remote train management system, is expounded. As an example, the implementation of such a maintenance interface for an electronic door control unit (EDCU) is also presented to demonstrate its real-world application.
Bioinformatics emerged as a new discipline dedicated to answering queries about life science using computational approaches. The basic aim of bioinformatics is to create databases, analyse data sets and manage data generated through large-scale projects such as the Human Genome Project (HGP). It covers a wide variety of traditional computer science domains, such as data modeling, data retrieval, data mining, data integration, data management, data warehousing, and simulation of biological information generated through laboratory and field experiments. Due to the varied forms, nature, and activities in the field of bioinformatics, presenting the information in a cohesive fashion is a major challenge. Bioinformatics information resources are heterogeneous in nature, and integration and interoperability of information is one of the biggest challenges in this field. Bioinformatics, as an emerging field, needs attention towards metadata application for resource discovery. This paper discusses a metadata element set description framework for the integration of various internet-accessible information resources related to the field of bioinformatics. A web-based tool, iBIRA (Integrated Bioinformatics Information Resources Access), has been designed and developed for the integration of bioinformatics information resources. The Dublin Core metadata element set has been used for the description of information resources, and XML Schema has been used for their interoperability with other resources. A database has been designed using structured query language (SQL) as the database management system and hypertext preprocessor (PHP) as the web programming language for the integration of bioinformatics resources. The database categorises various resources into biological databases, institutions, journals, patents, software tools, web servers, etc., and the search result is presented in the form of a 'tree view'. Each of these categories of resources has been analysed
Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude;
to the development of ‘high-throughput biology’, the need for training in the field of bioinformatics, in particular, is seeing a resurgence: it has been defined as a key priority by many Institutions and research programmes and is now an important component of many grant proposals. Nevertheless, when it comes...... and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...
Bartlett, Andrew; Lewis, Jamie; Williams, Matthew L.
Bioinformatics, a specialism propelled into relevance by the Human Genome Project and the subsequent -omic turn in the life science, is an interdisciplinary field of research. Qualitative work on the disciplinary identities of bioinformaticians has revealed the tensions involved in work in this “borderland.” As part of our ongoing work on the emergence of bioinformatics, between 2010 and 2011, we conducted a survey of United Kingdom-based academic bioinformaticians. Building on insights drawn from our fieldwork over the past decade, we present results from this survey relevant to a discussion of disciplinary generation and stabilization. Not only is there evidence of an attitudinal divide between the different disciplinary cultures that make up bioinformatics, but there are distinctions between the forerunners, founders and the followers; as inter/disciplines mature, they face challenges that are both inter-disciplinary and inter-generational in nature. PMID:27453689
This viewgraph presentation gives an overview of the Access to Space website, including information on the 'tool boxes' available on the website for access opportunities, performance, interfaces, volume, environments, 'wish list' entry, and educational outreach.
In Web application systems, the application environment and users' requirements are prone to change. To address this problem, a component-based flexible Web user interface (WUI) model with dynamic reconfiguration capability is presented, based on a method combining the flexible-software ideology with Web user interface development, which can dynamically reconfigure the display style and functionality of the WUI at runtime. It separates two different categories of information from the traditional component: the template, which describes the display style and is stored in an XML document, and the component-role, which adapts to changes in the operational data structure and is stored in a relational database, thereby solving the problems of flexibility and reusability of the WUI. Finally, a flexible WUI with a table-data display function is given to illustrate the model's effectiveness and availability.
Cao, Chang; Yao, Yu; Wang, Bo; Zhang, Yongjun; Gu, Wanyi
This paper introduces a novel method to design SNMP-based network management system of GE-PON and its management applications. Then it introduces how to establish a web server on GE-PON NMS platform, and methods to realize the system in the Manager and Agent. Finally, a simulation result is given to show the feasibility and superiority of this method.
This work presents the ScalaBLAST Web Application (SWA), a web based application implemented using the PHP script language, MySQL DBMS, and Apache web server under a GNU/Linux platform. SWA is an application built as part of the Data Intensive Computer for Complex Biological Systems (DICCBS) project at the Pacific Northwest National Laboratory (PNNL). SWA delivers accelerated throughput of bioinformatics analysis via high-performance computing through a convenient, easy-to-use web interface. This approach greatly enhances emerging fields of study in biology such as ontology-based homology, and multiple whole genome comparisons which, in the absence of a tool like SWA, require a heroic effort to overcome the computational bottleneck associated with genome analysis. The current version of SWA includes a user account management system, a web based user interface, and a backend process that generates the files necessary for the Internet scientific community to submit a ScalaBLAST parallel processing job on a dedicated cluster
Feier, Christina; Polleres, Axel; Dumitru, Roman; Domingue, John; Stollberg, Michael; Fensel, Dieter
The Semantic Web and the Semantic Web Services build a natural application area for Intelligent Agents, namely querying and reasoning about structured knowledge and semantic descriptions of services and their interfaces on the Web. This paper provides an overview of the Web Service Modeling Ontology, a conceptual framework for the semantical description of Web services.
Bushell Michael E
Background: Constraint-based approaches facilitate the prediction of cellular metabolic capabilities, based, in turn, on predictions of the repertoire of enzymes encoded in the genome. Recently, genome annotations have been used to reconstruct genome-scale metabolic reaction networks for numerous species, including Homo sapiens, which allow simulations that provide valuable insights into topics including predictions of gene essentiality of pathogens, interpretation of genetic polymorphism in metabolic disease syndromes, and suggestions for novel approaches to microbial metabolic engineering. These constraint-based simulations are being integrated with functional genomics portals, an activity that requires efficient implementation of the constraint-based simulations in a web-based environment. Results: Here, we present Acorn, an open source (GNU GPL) grid computing system for constraint-based simulations of genome-scale metabolic reaction networks within an interactive web environment. The grid-based architecture allows efficient execution of computationally intensive, iterative protocols such as Flux Variability Analysis, which can be readily scaled up as the numbers of models (and users) increase. The web interface uses AJAX, which facilitates efficient model browsing and other search functions, and intuitive implementation of appropriate simulation conditions. Research groups can install Acorn locally and create user accounts. Users can also import models in the familiar SBML format and link reaction formulas to major functional genomics portals of choice. Selected models and simulation results can be shared between different users and made publicly available. Users can construct pathway map layouts and import them into the server using a desktop editor integrated within the system. Pathway maps are then used to visualise numerical results within the web environment. To illustrate these features we have deployed Acorn and created a
DR. ANURADHA; BABITA AHUJA
In this era of a digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. The web databases are hidden behind query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from Hidden Web databases. ...
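A first step any hidden-web extractor must take is discovering the query attributes that a site's search form exposes. The following Python sketch shows only that step, using the standard-library HTML parser; the example form and its field names are invented, and this is not the HWE implementation.

```python
from html.parser import HTMLParser

class FormFieldExtractor(HTMLParser):
    """Collect the named input fields of a search form (candidate query attributes)."""

    def __init__(self):
        super().__init__()
        self.in_form = False
        self.fields = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.in_form = True
        elif self.in_form and tag in ("input", "select", "textarea"):
            name = attrs.get("name")
            if name and attrs.get("type") != "submit":  # skip the submit button
                self.fields.append(name)

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

# Invented example of a hidden-web query interface.
page = """
<form action="/search" method="get">
  <input type="text" name="keyword">
  <select name="category"><option>genes</option></select>
  <input type="submit" value="Go">
</form>
"""
parser = FormFieldExtractor()
parser.feed(page)
print(parser.fields)  # the candidate query attributes of the interface
```

An extractor would then probe these attributes with sample queries to learn which combinations the back-end database accepts.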
Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia
One of the hottest areas of science today is the field in which biology, information technology, and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…
This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...
Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael
Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…
Bioinformatics is the use of information technology to help solve biological problems by designing novel and incisive algorithms and methods of analysis. Bioinformatics has become a vital discipline in the era of post-genomics. In this review article, the application of bioinformatics in tropical medicine is presented and discussed.
The new interface of the Web of Science (of Thomson Reuters) enables users to retrieve sets larger than 100,000 documents in a single search. This makes it possible to compare publication trends for China, the USA, EU-27, and a number of smaller countries. China no longer grew exponentially during the 2000s, but linearly. Contrary to previous predictions on the basis of exponential growth or Scopus data, the cross-over of the lines for China and the USA is postponed to the next decade (after 2020) according to this data. These extrapolations, however, should be used only as indicators and not as predictions. Along with the dynamics in the publication trends, one also has to take into account the dynamics of the databases used for the measurement.
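The linear-versus-exponential distinction drawn in that abstract comes down to which model fits the annual publication counts better: a straight line, or a straight line in log space. A minimal sketch of that comparison in Python, on invented toy counts rather than actual Web of Science data:

```python
import math

def lstsq_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def sse(xs, ys, f):
    """Sum of squared residuals of model f on the data."""
    return sum((y - f(x)) ** 2 for x, y in zip(xs, ys))

years = list(range(10))
counts = [100 + 40 * t for t in years]  # toy counts that grow linearly

# Linear model: y = a*t + b, fitted directly.
a, b = lstsq_line(years, counts)
# Exponential model: y = e^c * e^(g*t), fitted as a line in log space.
g, c = lstsq_line(years, [math.log(y) for y in counts])

linear_err = sse(years, counts, lambda t: a * t + b)
exp_err = sse(years, counts, lambda t: math.exp(c + g * t))
# For linearly growing counts, the linear fit has the smaller error.
```

The abstract's caution applies equally to this sketch: a better in-sample fit supports an indicator of the trend, not a prediction of a cross-over year.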
元书俊; 朱守中; 金灵芝
Information in deep web data sources can be accessed by submitting queries, so analysing the query capability of a query interface is critical. Based on the notion of atomic queries, this paper proposes a method for estimating the query capability of a deep web interface by identifying all atomic queries on the query interface.
David H Johnson; Tsao, Jun; Luo, Ming; Carson, Mike
The SGCEdb () database/interface serves the primary purpose of reporting progress of the Structural Genomics of Caenorhabditis elegans project at the University of Alabama at Birmingham. It stores and analyzes results of experiments ranging from solubility screening arrays to individual protein purification and structure solution. External databases and algorithms are referenced and evaluated for target selection in the human, C.elegans and Pneumocystis carinii genomes. The flexible and reusa...
With the rapid expansion and development of the Internet and WWW (World Wide Web or Web), Web GIS (Web Geographical Information System) is becoming ever more popular, and as a result numerous sites have added GIS capability to their Web sites. In this paper, the reasons for developing a Web GIS instead of a "traditional" GIS are first outlined. Then the current status of Web GIS is reviewed, and implementation methodologies are explored as well. The underlying technologies for developing Web GIS, such as Web servers, Web browsers, CGI (Common Gateway Interface), Java, and ActiveX, are discussed, and some typical implementation tools from both the commercial and public domains are given as well. Finally, the future development direction of Web GIS is predicted.
Placidi, Giuseppe; Petracca, Andrea; Spezialetti, Matteo; Iacoviello, Daniela
A Brain Computer Interface (BCI) allows communication for impaired people unable to express their intention with common channels. Electroencephalography (EEG) represents an effective tool to allow the implementation of a BCI. The present paper describes a modular framework for the implementation of the graphic interface for binary BCIs based on the selection of symbols in a table. The proposed system is also designed to reduce the time required for writing text. This is made by including a motivational tool, necessary to improve the quality of the collected signals, and by containing a predictive module based on the frequency of occurrence of letters in a language, and of words in a dictionary. The proposed framework is described in a top-down approach through its modules: signal acquisition, analysis, classification, communication, visualization, and predictive engine. The framework, being modular, can be easily modified to personalize the graphic interface to the needs of the subject who has to use the BCI and it can be integrated with different classification strategies, communication paradigms, and dictionaries/languages. The implementation of a scenario and some experimental results on healthy subjects are also reported and discussed: the modules of the proposed scenario can be used as a starting point for further developments, and application on severely disabled people under the guide of specialized personnel. PMID:26573655
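The predictive module described in that abstract ranks symbols by their frequency of occurrence in a language and offers word completions from a dictionary. The idea can be illustrated with a toy Python sketch; the corpus and dictionary below are invented placeholders, not the paper's data or code.

```python
from collections import Counter

# Tiny invented corpus and dictionary standing in for real language statistics.
corpus = "the quick brown fox jumps over the lazy dog the end"
dictionary = ["the", "then", "there", "dog", "doing"]

letter_freq = Counter(c for c in corpus if c.isalpha())

def rank_letters(candidates):
    """Order the symbols offered by the BCI table, most frequent letter first,
    so fewer selections are needed for common letters."""
    return sorted(candidates, key=lambda c: -letter_freq[c])

def complete(prefix):
    """Offer dictionary completions for the text typed so far."""
    return [w for w in dictionary if w.startswith(prefix)]

print(rank_letters(["z", "e", "t"]))
print(complete("th"))
```

Arranging frequent symbols where they take the fewest binary selections, and letting the user accept a completion instead of spelling the word out, is what reduces writing time in such an interface.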
Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc
The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries. PMID:26510693
Tammi Martti; Ranganathan Shoba; Gribskov Michael; Tan Tin Wee
In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-...
Schweighofer, Karl; Pohorille, Andrew
Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.
Balatsoukas, Panos; Williams, Richard; Davies, Colin; Ainsworth, John; Buchan, Iain
Integrated care pathways (ICPs) define a chronological sequence of steps, most commonly diagnostic or treatment, to be followed in providing care for patients. Care pathways help to ensure quality standards are met and to reduce variation in practice. Although research on the computerisation of ICPs progresses, there is still little knowledge of the requirements for designing user-friendly and usable electronic care pathways, or of how users (normally health care professionals) interact with interfaces that support the design, analysis and visualisation of ICPs. The purpose of the study reported in this paper was to address this gap by evaluating the usability of a novel web-based tool called COCPIT (Collaborative Online Care Pathway Investigation Tool). COCPIT supports the design, analysis and visualisation of ICPs at the population level. In order to address the aim of this study, an evaluation methodology was designed based on heuristic evaluations and a mixed-method usability test. The results showed that modular visualisation and direct manipulation of information related to the design and analysis of ICPs is useful for engaging and stimulating users. However, designers should pay attention to issues related to the visibility of the system status and the match between the system and the real world, especially in relation to the display of statistical information about care pathways and the editing of clinical information within a care pathway. The paper concludes with recommendations for interface design. PMID:26446014
Based on a comparative study of Cambridge Scientific Abstracts' Internet Database Service and OCLC's FirstSearch, this paper discusses the user-friendly interfaces of Web-based databases according to their characteristics such as database selection, search strategy formulation and reformulation, online help and result output.
茅琴娇; 冯博琴; 潘善亮
To further enhance the efficiency of search engines and achieve the capability of searching, indexing and locating the large amount of useful information contained in the deep web, introducing latent semantic analysis is a simple and effective approach. By applying latent semantic analysis to the form attributes of the query interfaces that serve as entry points to deep web sites, a latent semantic structure can be mined from those attributes, and dimension reduction can be achieved to a certain extent. Using this latent semantic structure, the data content of the corresponding site can be inferred and the similarity computation between different sites improved. Experimental results show that latent semantic analysis revises and improves the semantic understanding of the form attributes of deep web sites, overcoming some of the shortcomings of plain keyword matching. The approach can be used to find the sites on the web most similar to a given site, and, given a set of form attributes, to return a list of sites with similar forms.
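The latent semantic analysis step this abstract describes (projecting query-form attributes into a lower-dimensional semantic space and comparing sites there) can be sketched with a truncated SVD over a term-by-form matrix. The toy attribute terms, forms, and dimensionality below are illustrative assumptions, not data from the paper:

```python
import numpy as np

# Hypothetical toy term-by-form matrix: rows are form attribute terms,
# columns are deep-web query forms (1 = term appears on the form).
# Forms 0 and 1 are book-search forms; form 2 is a flight-search form.
X = np.array([
    [1, 1, 0],  # title
    [1, 1, 0],  # author
    [1, 0, 0],  # isbn
    [0, 1, 0],  # price
    [0, 0, 1],  # departure
    [0, 0, 1],  # arrival
], dtype=float)

def lsa_form_vectors(X, k=2):
    """Project each form (column) into a k-dimensional latent space via
    truncated SVD -- the dimension reduction described in the abstract."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T  # one k-dimensional row vector per form

def cosine(u, v):
    """Cosine similarity between two latent form vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

forms = lsa_form_vectors(X)
# In the latent space the two book forms are nearly identical, while the
# flight form is orthogonal to them, so site similarity can be inferred.
```

On this toy matrix the two book forms share a latent dimension and score near 1 in cosine similarity, while the flight form scores near 0 against both, which is the site-similarity effect the paper exploits.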
Byun, Yanga; Han, Kyungsook
Visualizing RNA secondary structures and pseudoknot structures is essential to bioinformatics systems that deal with RNA structures. However, many bioinformatics systems use heterogeneous data structures and incompatible software components, so integration of software components (including a visualization component) into a system can be hindered by incompatibilities between the components of the system. This paper presents an XML web service and web application program for visualizing RNA sec...
With the decreasing cost of DNA sequencing technology and the vast diversity of biological resources, researchers increasingly face the basic challenge of annotating a larger number of expressed sequence tags (ESTs) from a variety of species. This typically consists of a series of repetitive tasks, which should be automated and easy to use. The results of these annotation tasks need to be stored and organized in a consistent way. All these operations should be self-installing, platform independent, easy to customize and amenable to using distributed bioinformatics resources available on the Internet. In order to address these issues, we present EST-PAC, a web-oriented multi-platform software package for expressed sequence tag (EST) annotation. EST-PAC provides a solution for the administration of EST and protein sequence annotations accessible through a web interface. Three aspects of EST annotation are automated: (1) searching local or remote biological databases for sequence similarities using Blast services, (2) predicting protein coding sequence from EST data and (3) annotating predicted protein sequences with functional domain predictions. In practice, EST-PAC integrates the BLASTALL suite, EST-Scan2 and HMMER in a relational database system accessible through a simple web interface. EST-PAC also takes advantage of the relational database to allow consistent storage, powerful queries of results and management of the annotation process. The system allows users to customize annotation strategies and provides an open-source data-management environment for research and education in bioinformatics.
Chau, M; H Chen; Li, X; Ho, YJ; Tseng, C
With the advent of the World Wide Web, many business applications that utilize data mining and text mining techniques to extract useful business information on the Web have evolved from Web searching to Web mining. It is important for students to acquire knowledge and hands-on experience in Web mining during their education in information systems curricula. This paper reports on an experience using open Web Application Programming Interfaces (APIs) that have been made available by major Inter...
Background The analysis of high-throughput screening data sets is an expanding field in bioinformatics. High-throughput screens by RNAi generate large primary data sets which need to be analyzed and annotated to identify relevant phenotypic hits. Large-scale RNAi screens are frequently used to identify novel factors that influence a broad range of cellular processes, including signaling pathway activity, cell proliferation, and host cell infection. Here, we present a web-based application utility for the end-to-end analysis of large cell-based screening experiments by cellHTS2. Results The software guides the user through the configuration steps that are required for the analysis of single- or multi-channel experiments. The web application provides options for various standardization and normalization methods, annotation of data sets and a comprehensive HTML report of the screening data analysis, including a ranked hit list. Sessions can be saved and restored for later re-analysis. The web frontend for the cellHTS2 R/Bioconductor package interacts with it through an R-server implementation that enables highly parallel analysis of screening data sets. web cellHTS2 further provides a file import and configuration module for common file formats. Conclusions The implemented web application facilitates the analysis of high-throughput data sets and provides a user-friendly interface. web cellHTS2 is accessible online at http://web-cellHTS2.dkfz.de. A standalone version as a virtual appliance and source code for platforms supporting Java 1.5.0 can be downloaded from the web cellHTS2 page. web cellHTS2 is freely distributed under GPL.
Since remote participation in ITER experiments is planned, it is expected to demonstrate that the JT-60SA experiment can be controlled from a Japanese remote experiment center located in Rokkasho-mura, Aomori-ken, Japan as a part of the ITER-BA project. Functions required for this experiment are monitoring of the discharge sequence status, handling of the discharge parameters, checking of experiment data, and monitoring of plant data, all of which are included in the existing JT-60 Man-Machine Interfacing System (MMIF). The MMIF is currently available only to on-site users at the Naka site due to network safety. The main challenge for a remote MMIF is therefore to achieve compatibility with network safety requirements. The Java language has been chosen to implement this task. This paper deals with details of the JT-60 MMIF for the remote experiment that has evolved using the Java language.
Holtzclaw, J. David; Eisen, Arri; Whitney, Erika M.; Penumetcha, Meera; Hoey, J. Joseph; Kimbro, K. Sean
Many students at minority-serving institutions are underexposed to Internet resources such as the human genome project, PubMed, NCBI databases, and other Web-based technologies because of a lack of financial resources. To change this, we designed and implemented a new bioinformatics component to supplement the undergraduate Genetics course at…
Tusch, Guenter; Bretl, Chris; O'Connor, Martin; Das, Amar
Mining large clinical and bioinformatics databases often includes exploration of temporal data. For example, in liver transplantation, researchers might look for patients with an unusual time pattern of potential complications of the liver. In Knowledge-Based Temporal Abstraction, time-stamped data points are transformed into an interval-based representation. We extended this framework by creating an open-source platform, SPOT. It supports the R statistical package and knowledge representation standards (OWL, SWRL) using the open-source Semantic Web tool Protégé-OWL. PMID:18999225
Lopez, Rodrigo; Silventoinen, Ville; Robinson, Stephen; Kibria, Asif; Gish, Warren
Since 1995, the WU-BLAST programs (http://blast.wustl.edu) have provided a fast, flexible and reliable method for similarity searching of biological sequence databases. The software is in use at many locales and web sites. The European Bioinformatics Institute's WU-Blast2 (http://www.ebi.ac.uk/blast2/) server has been providing free access to these search services since 1997 and today supports many features that both enhance the usability and expand on the scope of the software.
Rinaldelli, Mauro; Carlon, Azzurra; Ravera, Enrico; Parigi, Giacomo, E-mail: email@example.com; Luchinat, Claudio, E-mail: firstname.lastname@example.org [University of Florence, CERM and Department of Chemistry “Ugo Schiff” (Italy)
Pseudocontact shifts (PCSs) and residual dipolar couplings (RDCs) arising from the presence of paramagnetic metal ions in proteins as well as RDCs due to partial orientation induced by external orienting media are nowadays routinely measured as a part of the NMR characterization of biologically relevant systems. PCSs and RDCs are becoming more and more popular as restraints (1) to determine and/or refine protein structures in solution, (2) to monitor the extent of conformational heterogeneity in systems composed of rigid domains which can reorient with respect to one another, and (3) to obtain structural information in protein–protein complexes. The use of both PCSs and RDCs proceeds through the determination of the anisotropy tensors which are at the origin of these NMR observables. A new user-friendly web tool, called FANTEN (Finding ANisotropy TENsors), has been developed for the determination of the anisotropy tensors related to PCSs and RDCs and has been made freely available through the WeNMR (http://fanten-enmr.cerm.unifi.it:8080) gateway. The program has many new features not available in other existing programs, among which the possibility of a joint analysis of several sets of PCS and RDC data and the possibility to perform rigid body minimizations.
Antonio d'Acierno; Angelo Facchiano; Anna Marabotti
We describe the GALT-Prot database and its related web-based application, developed to collect information about the structural and functional effects of mutations on the human enzyme galactose-1-phosphate uridyltransferase (GALT), which is involved in the genetic disease named galactosemia type I. Besides a list of missense mutations at the gene and protein sequence levels, GALT-Prot reports the analysis results of mutant GALT structures. In addition to the structural information about the wild-type enzyme, the database also includes structures of over 100 single-point mutants simulated by means of a computational procedure, and each mutant was analyzed with several bioinformatics programs in order to investigate the effect of the mutations. The web-based interface allows querying of the database, and several links are also provided in order to guarantee high integration with other resources already present on the web. Moreover, the architecture of the database and the web application is flexible and can be easily adapted to store data related to other proteins with point mutations. GALT-Prot is freely available at http://bioinformatica.isa.cnr.it/GALT/.
Kuhn, Gerhard; Krammes, Gary S.; Beal, Vivian J.
The U.S. Geological Survey, in cooperation with Colorado Springs Utilities, the Colorado Water Conservation Board, and the El Paso County Water Authority, began a study in 2004 with the following objectives: (1) Apply a stream-aquifer model to Monument Creek, (2) use the results of the modeling to develop a transit-loss accounting program for Monument Creek, (3) revise an existing accounting program for Fountain Creek to easily incorporate ongoing and future changes in management of return flows of reusable water, and (4) integrate the two accounting programs into a single program and develop a Web-based interface to the integrated program that incorporates simple and reliable data entry that is automated to the fullest extent possible. This report describes the results of completing objectives (2), (3), and (4) of that study. The accounting program for Monument Creek was developed first by (1) using the existing accounting program for Fountain Creek as a prototype, (2) incorporating the transit-loss results from a stream-aquifer modeling analysis of Monument Creek, and (3) developing new output reports. The capabilities of the existing accounting program for Fountain Creek then were incorporated into the program for Monument Creek and the output reports were expanded to include Fountain Creek. A Web-based interface to the new transit-loss accounting program then was developed that provided automated data entry. An integrated system of 34 nodes and 33 subreaches was created by combining the independent node and subreach systems used in the previously completed stream-aquifer modeling studies for the Monument and Fountain Creek reaches. Important operational criteria that were implemented in the new transit-loss accounting program for Monument and Fountain Creeks included the following: (1) Retain all the reusable water-management capabilities incorporated into the existing accounting program for Fountain Creek; (2) enable daily accounting and transit
Thomas K Karikari
Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.
Smita, Shuchi; Lenka, Sangram Keshari; Katiyar, Amit; Jaiswal, Pankaj; Preece, Justin; Bansal, Kailash Chander
The QlicRice database is designed to host publicly accessible, abiotic stress responsive quantitative trait loci (QTLs) in rice (Oryza sativa) and their corresponding sequenced gene loci. It provides a platform for the data mining of abiotic stress responsive QTLs, as well as browsing and annotating associated traits, their location on a sequenced genome, mapped expressed sequence tags (ESTs) and tissue and growth stage-specific expressions on the whole genome. Information on QTLs related to abiotic stresses and their corresponding loci from a genomic perspective has not yet been integrated on an accessible, user-friendly platform. QlicRice offers client-responsive architecture to retrieve meaningful biological information--integrated and named 'Qlic Search'--embedded in a query phrase autocomplete feature, coupled with multiple search options that include trait names, genes and QTL IDs. A comprehensive physical and genetic map and vital statistics have been provided in a graphical manner for deciphering the position of QTLs on different chromosomes. A convenient and intuitive user interface has been designed to help users retrieve associations to agronomically important QTLs on abiotic stress response in rice. Database URL: http://nabg.iasri.res.in:8080/qlic-rice/. PMID:21965557
李雪玲; 施化吉; 兰均; 李星毅
To address the shortcomings of existing methods for identifying Deep Web query interfaces, which produce many false positives and cannot effectively distinguish search-engine interfaces, this paper proposes a Deep Web query interface identification method based on decision trees and link similarity. The method selects important attributes by information gain ratio and builds a decision tree to pre-classify interface forms, identifying the interfaces with distinct features; a link-similarity-based method then re-examines the interfaces left unidentified, accurately recognising true query interfaces and excluding search-engine interfaces. Experimental results show that the method effectively distinguishes search-engine interfaces and improves classification precision and recall.
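The attribute-selection step of the decision-tree stage, using information gain ratio as in C4.5-style trees, can be sketched as follows. This is a minimal illustration under invented labels and attribute values, not the authors' code or data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Information gain from splitting `labels` by attribute `values`,
    normalised by the split entropy (penalising many-valued attributes)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        gain -= len(subset) / n * entropy(subset)
    split_info = entropy(values)
    return gain / split_info if split_info else 0.0

# Six forms labelled true query interface ("q") or search engine ("s"),
# split on a hypothetical boolean attribute, e.g. "has multiple typed fields".
labels = ["q", "q", "q", "s", "s", "s"]
informative = gain_ratio([1, 1, 1, 0, 0, 0], labels)  # perfectly separates
useless = gain_ratio([1, 1, 1, 1, 1, 1], labels)      # constant attribute
```

Attributes ranked highest by this ratio would be chosen as decision-tree split points in the pre-classification stage.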
In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.
With the development of the power industry, it is hoped that power marketing and acquisition systems can be integrated into a single information platform that provides unified management and realises data sharing among systems. Based on Web Services, this paper introduces a method for exchanging power marketing and acquisition data in a heterogeneous environment. Taking the upload of a county's power acquisition data to the provincial center as a case study, it is shown that this scheme can realise the interfacing of power marketing and acquisition services.
A huge portion of the Web, known as the deep Web, is accessible via search interfaces to myriad databases on the Web. While relatively good approaches for querying the contents of web databases have recently been proposed, one cannot fully utilize them while most search interfaces remain unlocated. Thus, the automatic recognition of search interfaces to online databases is crucial for any application accessing the deep Web. This paper describes the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization surveys and for constructing directories of deep web resources.
Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C. Victor
EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a ‘node’, a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets bio...
Background Many cutting-edge microarray analysis tools and algorithms, including the commonly used limma and affy packages in Bioconductor, need sophisticated knowledge of mathematics, statistics and computer skills for implementation. Commercially available software can provide a user-friendly interface at considerable cost. To facilitate the use of these tools for microarray data analysis on an open platform, we developed an online microarray data analysis platform, WebArray, for bench biologists to utilize these tools to explore data from single/dual color microarray experiments. Results The currently implemented functions were based on the limma and affy packages from Bioconductor, the spacings LOESS histogram (SPLOSH) method, a PCA-assisted normalization method and a genome mapping method. WebArray incorporates these packages and provides a user-friendly interface for accessing a wide range of key functions of limma and others, such as spot quality weighting, background correction, graphical plotting, normalization, linear modeling, empirical Bayes statistical analysis, false discovery rate (FDR) estimation, and chromosomal mapping for genome comparison. Conclusion WebArray offers a convenient platform for bench biologists to access several cutting-edge microarray data analysis tools. The website is freely available at http://bioinformatics.skcc.org/webarray/. It runs on a Linux server with Apache and MySQL.
Explains the Human Genome Project (HGP) and efforts to sequence the human genome. Describes the role of bioinformatics in the project and considers it the genetics Swiss Army Knife, with many different uses in forensic science, medicine, agriculture, and environmental sciences. Discusses the use of bioinformatics in the high school…
Torres, Angela; Nieto, Juan J.
The purpose of this paper is to present a general view of the current applications of fuzzy logic in medicine and bioinformatics. We particularly review the medical literature using fuzzy logic. We then recall the geometrical interpretation of fuzzy sets as points in a fuzzy hypercube and present two concrete illustrations in medicine (drug addictions) and in bioinformatics (comparison of genomes).
This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR RLK) genetic…
Heyer, Laurie J.
This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
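The dynamic-programming computation of an optimal global alignment score that this abstract describes can be sketched as a minimal Needleman-Wunsch scorer. The scoring parameters below are illustrative defaults, not taken from the article:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score of strings a and b via dynamic programming."""
    n, m = len(a), len(b)
    # F[i][j] = best score for aligning the prefix a[:i] with the prefix b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap          # a[:i] aligned against all gaps
    for j in range(1, m + 1):
        F[0][j] = j * gap          # b[:j] aligned against all gaps
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,  # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,    # gap in b
                          F[i][j - 1] + gap)    # gap in a
    return F[n][m]

score = needleman_wunsch("GATT", "GATT")  # identical sequences: 4 matches -> 4
```

Tracing back through the choices made at each cell (not shown) would recover the optimal alignment itself, the standard extension covered in such a course.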
Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
Zhong, Yang; Zhang, Xiaoyan; Ma, Jian; Zhang, Liang
As the Human Genome Project experiences remarkable success and a flood of biological data is produced, bioinformatics becomes a very "hot" cross-disciplinary field, yet experienced bioinformaticians are urgently needed worldwide. This paper summarises the rapid development of bioinformatics education in China, especially related undergraduate…
Ménager, Hervé; Kalaš, Matúš; Rapacki, Kristoffer;
The diversity and complexity of bioinformatics resources presents significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially within convenient, integrated “workbench” environments. Resource descriptions are the core element of registry and workbench systems, which are used both to help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them. This work describes a software component that will ease the integration of bioinformatics resources in a workbench environment, using their description provided by the existing ELIXIR Tools and Data Services Registry.
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics.This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. 2012 Dai et al.; licensee BioMed Central Ltd.
Despite all of the UI toolkits available today, it's still not easy to design good application interfaces. This bestselling book is one of the few reliable sources to help you navigate through the maze of design options. By capturing UI best practices and reusable ideas as design patterns, Designing Interfaces provides solutions to common design problems that you can tailor to the situation at hand. This updated edition includes patterns for mobile apps and social media, as well as web applications and desktop software. Each pattern contains full-color examples and practical design advice th
Schneider, Bohdan; Černý, Jiří; Svozil, D.; Čech, P.; Gelly, J.-Ch.; de Brevern, A.G.
Vol. 42, No. 5 (2014), pp. 3381-3394. ISSN 0305-1048. R&D Projects: GA MŠk(CZ) MEB021032; GA ČR GAP305/12/1801; GA MŠk(CZ) ED1.1.00/02.0109. Institutional research plan: CEZ:AV0Z50520701. Keywords: DNA-BINDING-SITES * STRUCTURE-BASED PREDICTION * NUCLEIC ACID RECOGNITION. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 9.112, year: 2014
S. Oswalt Manoj
The World Wide Web hosts many online Web databases that can be searched through their Web query interfaces. Deep Web contents are accessed by queries submitted to Web databases, and the returned data records are enwrapped in dynamically generated Web pages. Extracting structured data from deep Web pages is a challenging task due to the underlying complicated structures of such pages. Until now, a large number of techniques have been proposed to address this problem, but all of them have inherent l...
Segun A Fatumo
Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. In this developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.
In this era of a digital tsunami of information on the web, everyone is completely dependent on the WWW for information retrieval. This has posed a challenging problem in extracting relevant data. Traditional web crawlers focus only on the surface web, while the deep web keeps expanding behind the scenes. The web databases are hidden behind the query interfaces. In this paper, we propose a Hidden Web Extractor (HWE) that can automatically discover and download data from Hidden Web databases. Since the only “entry point” to a Hidden Web site is a query interface, the main challenge that a Hidden Web Extractor has to face is how to automatically generate meaningful queries for the unlimited number of website pages.
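The query-generation challenge this abstract describes can be sketched concretely. The snippet below is a simplified illustration, not the HWE system itself: it uses Python's standard html.parser to collect the named fields of a query form and pairs each field with candidate seed terms (the form markup and the seed terms are hypothetical):

```python
from html.parser import HTMLParser
from itertools import product

class FormFieldParser(HTMLParser):
    """Collect the names of <input> and <select> fields in a query form."""
    def __init__(self):
        super().__init__()
        self.fields = []

    def handle_starttag(self, tag, attrs):
        if tag in ("input", "select"):
            attrs = dict(attrs)
            # Skip the submit button; keep only named query fields.
            if attrs.get("type") != "submit" and "name" in attrs:
                self.fields.append(attrs["name"])

def generate_queries(form_html, seed_terms):
    """Pair every form field with every seed term to produce candidate queries."""
    parser = FormFieldParser()
    parser.feed(form_html)
    return [{field: term} for field, term in product(parser.fields, seed_terms)]

form = '<form><input name="title" type="text"><input type="submit"></form>'
queries = generate_queries(form, ["genome", "protein"])
# Each query is a field->term assignment ready to submit to the interface.
```

A real extractor would submit each candidate query, inspect the result pages, and feed newly discovered terms back into the seed list to cover more of the hidden database.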
Sirintrapun, S Joseph; Zehir, Ahmet; Syed, Aijazuddin; Gao, JianJiong; Schultz, Nikolaus; Cheng, Donavan T
Translational bioinformatics and clinical research (biomedical) informatics are the primary domains related to informatics activities that support translational research. Translational bioinformatics focuses on computational techniques in genetics, molecular biology, and systems biology. Clinical research (biomedical) informatics involves the use of informatics in discovery and management of new knowledge relating to health and disease. This article details 3 projects that are hybrid applications of translational bioinformatics and clinical research (biomedical) informatics: The Cancer Genome Atlas, the cBioPortal for Cancer Genomics, and the Memorial Sloan Kettering Cancer Center clinical variants and results database, all designed to facilitate insights into cancer biology and clinical/therapeutic correlations. PMID:26851671
Web 2.0 technologies enable users to produce and distribute their own content. The variety of motives for taking part in these communication processes leads to considerable differences in levels of quality. While social media contexts have developed features for evaluating contributions, user-generated maps frequently do not offer tools to question or examine the origin and elements of user-generated content. This paper discusses the effects of the integration of Web 2.0 features with web map...
US Agency for International Development — QWICR is a secure, online Title II commodity reporting system accessible to USAID Missions, PVO Cooperating Sponsors and Food for Peace Officers. QWICR provides PVO...
Carlisle, W. H.
This report describes a project to extend the capabilities of a Virtual Research Center (VRC) for NASA's Advanced Concepts Office. The work was performed as part of NASA's 1995 Summer Faculty Fellowship program and involved the development of a prototype component of the VRC - a database system that provides data creation and access services within a room of the VRC. In support of VRC development, NASA has assembled a laboratory containing the variety of equipment expected to be used by scientists within the VRC. This laboratory consists of the major hardware platforms, SUN, Intel, and Motorola processors, and their most common operating systems: UNIX, Windows NT, Windows for Workgroups, and Macintosh. The SPARC 20 runs SUN Solaris 2.4, an Intel Pentium runs Windows NT and is installed on a different network from the other machines in the laboratory, a Pentium PC runs Windows for Workgroups, two Intel 386 machines run Windows 3.1, and finally, a PowerMacintosh and a Macintosh IIsi run MacOS.
Ku, David Tawei; Chang, Chia-Chi
By conducting usability testing on a multilanguage Web site, this study analyzed the cultural differences between Taiwanese and American users in the performance of assigned tasks. To provide feasible insight into cross-cultural Web site design, Microsoft Office Online (MOO) that supports both traditional Chinese and English and contains an almost…
Gorodkin, Jan; Hofacker, Ivo L.; Ruzzo, Walter L.
RNA bioinformatics and computational RNA biology have emerged from implementing methods for predicting the secondary structure of single sequences. The field has evolved to exploit multiple sequences to take evolutionary information into account, such as compensating (and structure preserving) base...
Mudasser Fraz Wyne
Bioinformatics is a new field that is poorly served by any of the traditional science programs in Biology, Computer science or Biochemistry. Known to be a rapidly evolving discipline, Bioinformatics has emerged from experimental molecular biology and biochemistry as well as from the artificial intelligence, database, pattern recognition, and algorithms disciplines of computer science. While institutions are responding to this increased demand by establishing graduate programs in bioinformatics, entrance barriers for these programs are high, largely due to the significant prerequisite knowledge which is required, both in the fields of biochemistry and computer science. Although many schools currently have or are proposing graduate programs in bioinformatics, few are actually developing new undergraduate programs. In this paper I explore the blend of a multidisciplinary approach, discuss the response of academia and highlight challenges faced by this emerging field.
Systems integration is becoming the driving force for 21st century biology. Researchers are systematically tackling gene functions and complex regulatory processes by studying organisms at different levels of organization, from genomes and transcriptomes to proteomes and interactomes. To fully realize the value of such high-throughput data requires advanced bioinformatics for integration, mining, comparative analysis, and functional interpretation. We are developing a bioinformatics research ...
Dai Lin; Gao Xin; Guo Yan; Xiao Jingfa; Zhang Zhang
As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and a...
Huang, Xiuzhen; Bruce, Barry; Buchan, Alison; Congdon, Clare Bates; Cramer, Carole L.; Jennings, Steven F; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Rick; Moore, Jason H.; Nanduri, Bindu; Peckham, Joan; Perkins, Andy; Polson, Shawn W.
Currently there are definitions from many agencies and research societies defining “bioinformatics” as deriving knowledge from computational analysis of large volumes of biological and biomedical data. Should this be the bioinformatics research focus? We will discuss this issue in this review article. We would like to promote the idea of supporting human-infrastructure (HI) with no-boundary thinking (NT) in bioinformatics (HINT).
Ghulam A. PARRAY; Abdul G. Rather; Parvez Sofi; Shafiq A. Wani; Amjad M. Husaini; Asif B. Shikari; Javid I. Mir
Saffron (Crocus sativus L.) is a sterile triploid plant and belongs to the Iridaceae (Liliales, Monocots). Its genome is of relatively large size and is poorly characterized. Bioinformatics can play an enormous technical role in the sequence-level structural characterization of saffron genomic DNA. Bioinformatics tools can also help in appreciating the extent of diversity of various geographic or genetic groups of cultivated saffron to infer relationships between groups and accessions. The ch...
Tanaka Masahiro; Sasaki Kensaku; Mishima Hiroyuki; Tatebe Osamu; Yoshiura Koh-ichiro
Background: In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environm...
Alva, Vikram; Nam, Seung-Zin; Söding, Johannes; Lupas, Andrei N
The MPI Bioinformatics Toolkit (http://toolkit.tuebingen.mpg.de) is an open, interactive web service for comprehensive and collaborative protein bioinformatic analysis. It offers a wide array of interconnected, state-of-the-art bioinformatics tools to experts and non-experts alike, developed both externally (e.g. BLAST+, HMMER3, MUSCLE) and internally (e.g. HHpred, HHblits, PCOILS). While a beta version of the Toolkit was released 10 years ago, the current production-level release has been available since 2008 and has serviced more than 1.6 million external user queries. The usage of the Toolkit has continued to increase linearly over the years, reaching more than 400 000 queries in 2015. In fact, through the breadth of its tools and their tight interconnection, the Toolkit has become an excellent platform for experimental scientists as well as a useful resource for teaching bioinformatic inquiry to students in the life sciences. In this article, we report on the evolution of the Toolkit over the last ten years, focusing on the expansion of the tool repertoire (e.g. CS-BLAST, HHblits) and on infrastructural work needed to remain operative in a changing web environment. PMID:27131380
Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge, while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and language composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context-dependent intentions, and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment…
Simi, Manuele; Campagne, Fabien
Language WorkBenches (LWBs) are software engineering tools that help domain experts develop solutions to various classes of problems. Some of these tools focus on non-technical users and provide languages to help organize knowledge while other workbenches provide means to create new programming languages. A key advantage of language workbenches is that they support the seamless composition of independently developed languages. This capability is useful when developing programs that can benefit from different levels of abstraction. We reasoned that language workbenches could be useful to develop bioinformatics software solutions. In order to evaluate the potential of language workbenches in bioinformatics, we tested a prominent workbench by developing an alternative to shell scripting. To illustrate what LWBs and Language Composition can bring to bioinformatics, we report on our design and development of NYoSh (Not Your ordinary Shell). NYoSh was implemented as a collection of languages that can be composed to write programs as expressive and concise as shell scripts. This manuscript offers a concrete illustration of the advantages and current minor drawbacks of using the MPS LWB. For instance, we found that we could implement an environment-aware editor for NYoSh that can assist the programmers when developing scripts for specific execution environments. This editor further provides semantic error detection and can be compiled interactively with an automatic build and deployment system. In contrast to shell scripts, NYoSh scripts can be written in a modern development environment, supporting context dependent intentions and can be extended seamlessly by end-users with new abstractions and language constructs. We further illustrate language extension and composition with LWBs by presenting a tight integration of NYoSh scripts with the GobyWeb system. The NYoSh Workbench prototype, which implements a fully featured integrated development environment for NYoSh is
The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream of the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera, Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D, exist. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10^-4 to 2×10^-2 substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias; cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.
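Codon usage bias of the kind measured for HCoV-HKU1 is commonly quantified as relative synonymous codon usage (RSCU): the observed count of a codon divided by the count expected if all synonymous codons for that amino acid were used equally. A minimal sketch follows, with a toy two-family codon table and a toy sequence, not the paper's actual analysis:

```python
from collections import Counter

# Toy synonymous-codon table (lysine K and glycine G families only);
# a real analysis would cover the full standard genetic code.
SYNONYMOUS = {"AAA": "K", "AAG": "K",
              "GGT": "G", "GGC": "G", "GGA": "G", "GGG": "G"}

def rscu(sequence):
    """Relative synonymous codon usage: observed codon count divided by the
    count expected if all codons in the same family were used equally."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence) - 2, 3)]
    counts = Counter(c for c in codons if c in SYNONYMOUS)
    totals = Counter()        # total observations per amino acid
    family_size = Counter()   # number of synonymous codons per amino acid
    for codon, aa in SYNONYMOUS.items():
        totals[aa] += counts[codon]
        family_size[aa] += 1
    return {c: counts[c] * family_size[SYNONYMOUS[c]] / totals[SYNONYMOUS[c]]
            for c in counts}

seq = "AAAAAAAAG"  # three codons: AAA, AAA, AAG
print(rscu(seq))   # AAA is overrepresented (~1.33), AAG underrepresented (~0.67)
```

An RSCU above 1 marks a codon used more often than its synonymous alternatives, which is the signal a bias study like the one described would aggregate genome-wide.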
Web Dynpro ABAP, a NetWeaver web application user interface tool from SAP, enables web programming connected to SAP Systems. The authors' main focus was to create a book based on their own practical experience. Each chapter includes examples which lead through the content step-by-step and enable the reader to gradually explore and grasp the Web Dynpro ABAP process. The authors explain in particular how to design Web Dynpro components, the data binding and interface methods, and the view controller methods. They also describe the other SAP NetWeaver Elements (ABAP Dictionary, Authorization) and…
石龙; 强保华; 谌超; 吴春明
With the rapid development of Internet technology, a large number of Web databases have emerged, and their number continues to grow quickly. To effectively organise and utilise the information hidden in Web databases, they must be classified and integrated by domain. Since a page's query interface is the only channel through which users access the underlying Web database, Deep Web data sources can be classified by classifying their query interfaces. This paper proposes a classification method based on a text vector space model (VSM) of the query interface. The basic idea is first to build a vector space model from the query interface's text, and then to train one or more classifiers with a typical data mining classification algorithm, thereby assigning each query interface to its domain. Experimental results show that the proposed approach has excellent classification performance.
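The two-step method described, building a vector space model from query-interface text and then training a classifier over domains, can be illustrated with a minimal sketch. The snippet below substitutes a simple nearest-centroid classifier with cosine similarity for the "typical data mining classification algorithm"; the interface texts and domain labels are hypothetical:

```python
import math
from collections import Counter, defaultdict

def vectorize(text):
    """Term-frequency vector (a toy VSM; a real system would add IDF weighting)."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def train_centroids(labeled_interfaces):
    """One centroid vector per domain, summed over its training interfaces."""
    centroids = defaultdict(Counter)
    for text, domain in labeled_interfaces:
        centroids[domain].update(vectorize(text))
    return centroids

def classify(text, centroids):
    """Assign a query interface to the domain with the most similar centroid."""
    vec = vectorize(text)
    return max(centroids, key=lambda d: cosine(vec, centroids[d]))

# Hypothetical training data: query-interface text labeled by domain.
train = [
    ("title author isbn publisher", "books"),
    ("departure arrival airline passengers", "flights"),
]
centroids = train_centroids(train)
print(classify("search by author and title", centroids))  # → books
```

A production system would train on many labeled interfaces and could swap in a stronger learner (e.g. naive Bayes or an SVM) over the same VSM features.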
Schultheiss, Sebastian J.; Münch, Marc-Christian; Andreeva, Gergana D.; Rätsch, Gunnar
We have conducted a study on the long-term availability of bioinformatics Web services: an observation of 927 Web services published in the annual Nucleic Acids Research Web Server Issues between 2003 and 2009. We found that 72% of Web sites are still available at the published addresses, only 9% of services are completely unavailable. Older addresses often redirect to new pages. We checked the functionality of all available services: for 33%, we could not test functionality because there was...
This undergraduate thesis presents an analysis of using the GWT framework for the development of web applications. GWT offers functionality for developing complete web applications, covering both the user interface and the implementation of the application logic on the server. The thesis shows the architectural concepts and the operating model of the framework. Great emphasis is placed on the compiler, which uses the concept of deferred binding, providing consistent rendering on various browsers witho...
Janssens, Frizo; Glänzel, Wolfgang; De Moor, Bart
To unravel the concept structure and dynamics of the bioinformatics field, we analyze a set of 7401 publications from the Web of Science and MEDLINE databases, publication years 1981–2004. For delineating this complex, interdisciplinary field, a novel bibliometric retrieval strategy is used. Given that the performance of unsupervised clustering and classification of scientific publications is significantly improved by deeply merging textual contents with the structure of the citation graph, w...
Pettifer, S.; Ison, J.; Kalas, M.;
The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection.
Recent studies have demonstrated equal quality of targeted next-generation sequencing (NGS) compared to Sanger sequencing. Whereas these novel sequencing processes have validated, robust performance, the choice of enrichment method and of the different available bioinformatic software as a reliable analysis tool needs to be further investigated in a diagnostic setting. DNA from 21 patients with genetic variants in SDHB, VHL, EPAS1, RET (n=17) or clinical criteria of NF1 syndrome (n=4) was included. Targeted NGS was performed using TruSeq custom amplicon enrichment sequenced on an Illumina MiSeq instrument. Results were analysed in parallel using three different bioinformatics pipelines: (1) the commercially available MiSeq Reporter, a fully automated and integrated software; (2) CLC Genomics Workbench, a graphical-interface-based software, also commercially available; and (3) ICP, an in-house scripted custom bioinformatic tool. A tenfold read coverage was achieved in 95-98% of targeted bases. All workflows had alignment of reads to SDHA and NF1 pseudogenes. Compared to Sanger sequencing, variant calling revealed a sensitivity ranging from 83 to 100% and a specificity of 99.9-100%. Only MiSeq Reporter identified all pathogenic variants in both sequencing runs. We conclude that targeted next-generation sequencing has equal quality compared to Sanger sequencing. Enrichment specificity and bioinformatic performance need to be carefully assessed in a diagnostic setting. As acceptable accuracy was noted for a fully automated bioinformatic workflow, we suggest that processing of NGS data could be performed without expert bioinformatics skills utilizing already existing commercially available bioinformatics tools.
Neculae, Alina Georgiana
The aim of this project is to develop a web interface that would be used by the Icinga monitoring system to manage the CMS online cluster, in the experimental site. The interface would allow users to visualize the information in a compressed and intuitive way, as well as modify the information of each individual object and edit the relationships between classes.
AMI was chosen as the ATLAS dataset selection interface in July 2006. It is the main interface for searching for ATLAS data using physics metadata criteria. AMI has been implemented as a generic database management framework which allows parallel searching over many catalogues, which may have differing schema. The main features of the web interface will be described; in particular the powerful graphic query builder. The use of XML/XSLT technology ensures that all commands can be used either on the web or from a command line interface via a web service. We also describe the overall architecture of ATLAS metadata and the different actors and granularity involved, and the place of AMI within this architecture. We discuss the problems involved in the correlation of metadata of differing granularity, and propose a solution for information mediation.
The thesis focuses on two bioinformatics research topics: the development of tools for an efficient and reliable identification of single nucleotide polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapters 2, 3 and 4), and the subsequent imple…
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...
Kelley, Scott; Alger, Christianna; Deutschman, Douglas
The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP). The…
Ondrej, Vladan; Dvorak, Petr
Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…
In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…
Boyle, John A.
Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…
Wang, Bing; Zhang, Xiang
This chapter provides an overview of some bioinformatics tasks and the relevance of the evolutionary computation methods, especially GAs. There are two advantages of GA-based approaches. One is that GAs are easier to run in parallel than single trajectory search procedures, and therefore allow groups of processors to be utilized for a search. The other is
Russell J. KOHEL; John Z. YU; Piyush GUPTA; Rajeev AGRAWAL
There are several web sites for which information is available to the cotton research community. Most of these sites relate to resources developed or available to the research community. Few provide bioinformatic tools, which usually relate to the specific data sets and materials presented in the database. Just as the bioinformatics area is evolving, the available resources reflect this evolution.
Zhao, Shanrong; Lu, Jin
… including de novo library design in the selection of favorable germline V gene scaffolds and CDR lengths. In addition, we have also developed a web application framework to present our knowledge database, and the web interface can help people to easily retrieve a variety of information from the knowledge database. PMID:21310488
Bogliolo, M. P.; Contino, G.
A GIS-based web-mapping system is presented, aimed at providing specialists, stakeholders and the population with a simple, while scientifically rigorous, way to obtain information about people's exposure to air pollution in the city of Rome (Italy). It combines geo-spatial visualization with easy access to the time dimension and to quantitative information. The study is part of the EXPAH (Population Exposure to PAHs) LIFE+ EC Project, whose goal is to identify and quantify children's and elderly people's exposure to PM2.5-bound Polycyclic Aromatic Hydrocarbons (PAHs) in the atmosphere of Rome, and to assess the impact on human health. The core of the system is a GIS whose database contains data and results of the project's research activity. They include daily indoor and outdoor ground measurements and daily maps from simulation modeling of atmospheric PAHs and PM2.5 concentration for the period June 2011-May 2012, and daily and average exposure maps. Datasets have been published as time-enabled standard OGC Web Map Services (WMS). A set of web mapping applications query the web services to produce a set of interactive and time-aware thematic maps. Finding effective ways to communicate risk for human health, and environmental determinants for it, is a topical and challenging task: the web mapping system presented is a prototype of a possible model to disseminate scientific results on these topics, providing insight into the impacts of air pollution on people living and working in a big city, and conveying information about the overall exposure, its spatial pattern and levels at specific locations.
Background: In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method through iterative development phases after trial and error. Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. Therefore, we hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings: We implemented Pwrake workflows to process next-generation sequencing data using the Genome Analysis Toolkit (GATK) and Dindel. The GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that in practice, actual scientific workflow development iterates over two phases, the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate the modularity of the GATK and Dindel workflows. Conclusions: Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain-specific language design built on Ruby gives rakefiles the flexibility for writing scientific workflows.
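The Rake-style workflow definition that Pwrake builds on, tasks declared with their prerequisites and executed in dependency order, can be illustrated with a minimal Python stand-in. This is not Pwrake or Rake itself, and the task names are hypothetical placeholders for pipeline steps:

```python
TASKS = {}

def task(name, deps=()):
    """Register a workflow task with its prerequisites (Rake-style)."""
    def register(fn):
        TASKS[name] = (list(deps), fn)
        return fn
    return register

def run(name, done=None):
    """Execute a task after its prerequisites, each task at most once."""
    done = set() if done is None else done
    if name in done:
        return
    deps, fn = TASKS[name]
    for dep in deps:
        run(dep, done)
    fn()
    done.add(name)

order = []  # records execution order for illustration

@task("align")
def align():
    order.append("align")

@task("call_variants", deps=["align"])
def call_variants():
    order.append("call_variants")

run("call_variants")
print(order)  # → ['align', 'call_variants']
```

Pwrake's contribution is running independent prerequisites in parallel on top of this dependency model; the sketch above executes everything serially.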
邓中亮; 林清; 李来新
To meet the current demand for a unified, service-oriented 3G service platform, this paper proposes an implementation approach for a value-added service portal based on two technologies, Portal and Web Services. It describes the concrete application of Web Services as the interface technology for interoperation between the value-added service portal and the service management platform and, for the requirement of ordering service products/product packages, presents the interface design and implementation process based on Apache Axis. Its unified, customizable management approach provides a new design idea and a practical foundation for operators' future portals.
Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.
There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…
Radhakrishnan, Sabarinathan; Tafer, Hakim; Seemann, Ernst Stefan;
… are derived from extensive pre-computed tables of distributions of substitution effects as a function of gene length and GC content. Here, we present a web service that not only provides an interface for RNAsnp but also features a graphical output representation. In addition, the web server is connected to a local mirror of the UCSC genome browser database that enables the users to select the genomic sequences for analysis and visualize the results directly in the UCSC genome browser. The RNAsnp web server is freely available at: http://rth.dk/resources/rnasnp/.
Klatt, Edward C.
Reddy, Ch Ram Mohan; Geetha, D. Evangelin; Srinivasa, K. G.; Kumar, T. V. Suresh; Kanth, K. Rajani
A Web service is an interface that implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of Web services during the early stages of software development is significant. In this paper we model a Web service using the Unified Modeling Language: use case diagrams, sequence diagrams and deployment diagrams. We obtain the performance metrics by simulating the Web services model using a simulation tool Simulation of Mult...
Ch Ram Mohan Reddy; D Evangelin Geetha; KG Srinivasa; Suresh Kumar, T V.; K Rajani Kanth
A Web service is an interface that implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of Web services during the early stages of software development is significant. In this paper we model a Web service using the Unified Modeling Language: use case diagrams and sequence diagrams. We obtain the performance metrics by simulating the Web services model using a simulation tool Simulation of Multi-Tie...
Abstract Background The SEED integrates many publicly available genome sequences into a single resource. The database contains accurate and up-to-date annotations based on the subsystems concept, which leverages clustering between genomes and other clues to accurately and efficiently annotate microbial genomes. The backend is used as the foundation for many genome annotation tools, such as the Rapid Annotation using Subsystems Technology (RAST) server for whole genome annotation, the metagenomics RAST server for random community genome annotations, and the annotation clearinghouse for exchanging annotations from different resources. In addition to a web user interface, the SEED also provides a Web services-based API for programmatic access to the data in the SEED, allowing the development of third-party tools and mash-ups. Results The currently exposed Web services encompass over forty different methods for accessing data related to microbial genome annotations. The Web services provide comprehensive access to the database back end, allowing any programmer access to the most consistent and accurate genome annotations available. The Web services are deployed using a platform-independent, service-oriented approach that allows the user to choose the most suitable programming platform for their application. Example code demonstrates that the Web services can be used to access the SEED using common bioinformatics programming languages such as Perl, Python, and Java. Conclusions We present a novel approach to accessing the SEED database. Using Web services, a robust API for access to genomics data is provided, without requiring large-volume downloads all at once. The API ensures timely access to the most current datasets available, including new genomes as soon as they come online.
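The programmatic-access pattern described in the SEED abstract above can be sketched in Python. The endpoint URL, method name and parameters below are hypothetical placeholders, not the SEED's actual interface; consult the SEED Web services documentation for the real method names and protocol.

```python
import json
from urllib import request

# Hypothetical endpoint -- a placeholder for illustration only.
SEED_URL = "https://example.org/seed/api"

def build_request(method, params):
    """Build a JSON-RPC-style POST request for a web-service call.
    The request body carries the method name and its parameters."""
    body = json.dumps({"method": method, "params": params}).encode("utf-8")
    return request.Request(SEED_URL, data=body,
                           headers={"Content-Type": "application/json"})

def call_service(method, params):
    """Send the request and decode the JSON reply (requires network access)."""
    with request.urlopen(build_request(method, params)) as resp:
        return json.load(resp)

# e.g. call_service("annotations", {"genome": "example-id"})  # hypothetical call
```

Separating request construction from transport, as above, keeps the service logic testable without network access.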
We present SMPDF Web, a web interface for the construction of parton distribution functions (PDFs) with a minimal number of error sets needed to represent the PDF uncertainty of specific processes (SMPDF).
Liang, Chun; Liu, Lin; Ji, Guoli
The genomes of thousands of organisms are being sequenced, often with accompanying sequences of cDNAs or ESTs. One of the great challenges in bioinformatics is to make these genomic sequences and genome annotations accessible in a user-friendly manner to general biologists to address interesting biological questions. We have created an open-access web service called WebGMAP (http://www.bioinfolab.org/software/webgmap) that seamlessly integrates cDNA-genome alignment tools, such as GMAP, with ...
Ketterl, Markus; Mertens, Robert; Vornberger, Oliver
Purpose: At many universities, web lectures have become an integral part of the e-learning portfolio over the last few years. While many aspects of the technology involved, like automatic recording techniques or innovative interfaces for replay, have evolved at a rapid pace, web lecturing has remained independent of other important developments…
Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.
The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…
Lelieveld, Stefan H; Veltman, Joris A; Gilissen, Christian
With the widespread adoption of next generation sequencing technologies by the genetics community and the rapid decrease in costs per base, exome sequencing has become a standard within the repertoire of genetic experiments for both research and diagnostics. Although bioinformatics now offers standard solutions for the analysis of exome sequencing data, many challenges still remain; in particular, the increasing scale at which exome data are now being generated has given rise to novel challenges in how to efficiently store, analyze and interpret exome data of this magnitude. In this review we discuss some of the recent developments in bioinformatics for exome sequencing and the directions in which they are taking us. With these developments, exome sequencing is paving the way for the next big challenge: the application of whole genome sequencing. PMID:27075447
Ghulam A. Parray
Saffron (Crocus sativus L.) is a sterile triploid plant and belongs to the Iridaceae (Liliales, Monocots). Its genome is relatively large and poorly characterized. Bioinformatics can play an enormous technical role in the sequence-level structural characterization of saffron genomic DNA. Bioinformatics tools can also help in appreciating the extent of diversity among various geographic or genetic groups of cultivated saffron, to infer relationships between groups and accessions. Characterization of the transcriptome of saffron stigmas is the most vital for shedding light on the molecular basis of flavor, color biogenesis, genomic organization and the biology of the gynoecium of saffron. The information derived can be utilized for constructing the biological pathways involved in the biosynthesis of the principal components of saffron, i.e., crocin, crocetin, safranal, picrocrocin and safchiA
LIU Wei; LI Dong; ZHU YunPing; HE FuChu
Research on signaling networks contributes to a deeper understanding of the living activities of organisms. With the development of experimental methods in the signal transduction field, more and more mechanisms of signaling pathways have been discovered. This paper introduces popular bioinformatics methods for analyzing signaling networks, such as the common mechanisms of signaling pathways and database resources on the Internet; summarizes methods for analyzing the structural properties of networks, including structural motif finding and automated pathway generation; and discusses the modeling and simulation of signaling networks in detail, as well as the research situation and trends in this area. The investigation of signal transduction is now developing from small-scale experiments to large-scale network analysis, and dynamic simulation of networks is coming closer to the real system. As the investigation goes deeper than ever, the bioinformatics analysis of signal transduction will have immense scope for development and application.
We describe a relatively new effort within CMS to converge on a set of web-based tools, using state-of-the-art industry techniques, to engage with the CMS offline computing system. CMS collaborators require tools to monitor various components of the computing system and to interact with the system itself. The current state of the various CMS web tools is described alongside currently planned developments. The CMS collaboration comprises nearly 3000 people from all over the world. As well as its collaborators, its computing resources are spread all over the globe and are accessed via the LHC grid to run analysis, large-scale production and data transfer tasks. Given the distributed nature of the collaboration, effective provision of collaborative tools is essential to maximise the physics exploitation of the CMS experiment, especially when the size of the CMS data set is considered. CMS has chosen to provide such tools over the world wide web as a top-level service, enabling all members of the collaboration to interact with the various offline computing components. Traditionally, web interfaces have been added to HEP experiments as an afterthought. In the CMS offline project we have decided to put web interfaces, and the development of a common CMS web framework, on an equal footing with the rest of the offline development. Tools exist within CMS to transfer and catalogue data (PhEDEx and DBS/DLS), run Monte Carlo production (ProdAgent) and submit analysis (CRAB). Effective human interfaces to these systems are required for users with different agendas and different practical knowledge of the systems to effectively use the CMS computing system. The CMS web tools project aims to provide a consistent interface to all these tools.
Francisco O. Martínez P.
Restrictions regarding navigability and user-friendliness are the main challenges the Mobile Web faces to be accepted worldwide. The W3C has recently developed the Mobile Web Initiative (MWI), a set of directives for the suitable design and presentation of mobile Web interfaces. This article presents the main features and functional modules of OneWeb, an MWI-based Web content adaptation platform developed through the research activities of the Mobile Devices Applications Development Interest Group (W@PColombia), part of the Universidad del Cauca's Telematics Engineering Group. Some performance measurement results and a comparison with other Web content adaptation platforms are presented. Tests have shown suitable response times for Mobile Web environments; MWI guidelines were applied to over twenty Web pages selected for testing purposes.
Bretonnel Cohen, K; Hunter, Lawrence E.
Text mining for translational bioinformatics is a new field with tremendous research potential. It is a subfield of biomedical natural language processing that concerns itself directly with the problem of relating basic biomedical research to clinical practice, and vice versa. Applications of text mining fall both into the category of T1 translational research-translating basic science results into new interventions-and T2 translational research, or translational research for public health. P...
In the past two decades genome sequencing has developed from a laborious and costly technology employed by large international consortia to a widely used, automated and affordable tool used worldwide by many individual research groups. Genome sequences of many food animals and crop plants have been deciphered and are being exploited for fundamental research and applied to improve their breeding programs. The developments in sequencing technologies have also impacted the associated bioinformat...
Biological sequence alignment is an important and challenging task in bioinformatics. Alignment may be defined as an arrangement of two or more DNA or protein sequences to highlight the regions of their similarity. Sequence alignment is used to infer the evolutionary relationship between a set of protein or DNA sequences. An accurate alignment can provide valuable information for experimentation on the newly found sequences. It is indispensable in basic research as well as in practical applic...
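One standard instance of the global pairwise alignment described above is the Needleman-Wunsch dynamic program. A minimal sketch follows; the match, mismatch and gap scores are illustrative assumptions, not values taken from the text.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment of strings a and b.
    Returns (score, aligned_a, aligned_b); scoring defaults are illustrative."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Traceback from the bottom-right corner to recover one optimal alignment
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and a[i - 1] == b[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return score[n][m], ''.join(reversed(out_a)), ''.join(reversed(out_b))

# needleman_wunsch("ACGT", "AGT") aligns "ACGT" against "A-GT"
```

Real tools replace this quadratic table with banded or heuristic variants (as in BLAST), but the recurrence is the same.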
Casari, G; Daruvar, Dea; Sander, C.; Schneider, Reinhard
Scientific history was made in completing the yeast genome sequence, yet its 13 Mb are a mere starting point. Two challenges loom large: to decipher the function of all genes and to describe the workings of the eukaryotic cell in full molecular detail. A combination of experimental and theoretical approaches will be brought to bear on these challenges. What will be next in yeast genome analysis from the point of view of bioinformatics?
Fang, Wai-Chi; Lue, Jaw-Chyng
A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).
Sudhakar, R.; Gupta, Kuhu; Kumar, Sushant
Bioinformatics can be defined as the conceptualization of biology, specifically molecular biology, and the application of techniques from multiple disciplines such as statistics, computer science and applied mathematics to analyze and understand the vast information related to molecular structures. Hence, its management becomes difficult. The manager must be thorough with the concepts of biology, genetic studies in particular, as well as information technology. Discussed below is the mana...
Napolitano, Francesco; Mariani-Costantini, Renato; Tagliaferri, Roberto
Background An incremental, loosely planned development approach is often used in bioinformatic studies when dealing with custom data analysis in a rapidly changing environment. Unfortunately, the lack of a rigorous software structuring can undermine the maintainability, communicability and replicability of the process. To ameliorate this problem we propose the Leaf system, the aim of which is to seamlessly introduce the pipeline formality on top of a dynamical development process with minimum...
Shanahan, Hugh P; Owen, Anne M; Harrison, Andrew P
We discuss the applicability of the Microsoft cloud computing platform, Azure, for bioinformatics. We focus on the usability of the resource rather than its performance. We provide an example of how R can be used on Azure to analyse a large amount of microarray expression data deposited at the public database ArrayExpress. We provide a walk-through in Appendix S1 to demonstrate explicitly how Azure can be used to perform these analyses, and we offer a comparison with a local computation. We note that the use of the Platform as a Service (PaaS) offering of Azure can represent a steep learning curve for bioinformatics developers, who will usually have a Linux and scripting-language background. On the other hand, the presence of an additional set of libraries makes it easier to deploy software in a parallel (scalable) fashion and explicitly manage such a production run with only a few hundred lines of code, most of which can be incorporated from a template. We propose that this environment is best suited for running stable bioinformatics software by users not involved with its development. PMID:25050811
Translational bioinformatics plays an indispensable role in transforming psychoneuroimmunology (PNI) into personalized medicine. It provides a powerful method to bridge the gaps between various knowledge domains in PNI and systems biology. Translational bioinformatics methods at various systems levels can facilitate pattern recognition, and expedite and validate the discovery of systemic biomarkers to allow their incorporation into clinical trials and outcome assessments. Analysis of the correlations between genotypes and phenotypes including the behavioral-based profiles will contribute to the transition from the disease-based medicine to human-centered medicine. Translational bioinformatics would also enable the establishment of predictive models for patient responses to diseases, vaccines, and drugs. In PNI research, the development of systems biology models such as those of the neurons would play a critical role. Methods based on data integration, data mining, and knowledge representation are essential elements in building health information systems such as electronic health records and computerized decision support systems. Data integration of genes, pathophysiology, and behaviors are needed for a broad range of PNI studies. Knowledge discovery approaches such as network-based systems biology methods are valuable in studying the cross-talks among pathways in various brain regions involved in disorders such as Alzheimer's disease. PMID:22933157
Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H
Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs. PMID:25829469
Ong, Quang; Nguyen, Phuc; Thao, Nguyen Phuong; Le, Ly
The advance in genomics technology has led to a dramatic change in plant biology research. Plant biologists now have easy access to enormous genomic data to study plant high-density genetic variation in depth at the molecular level. Therefore, fully understanding and skilfully manipulating bioinformatics tools to manage and analyze these data is essential in current plant genome research. Many plant genome databases have been established and have continued to expand recently. Meanwhile, analytical methods based on bioinformatics are also well developed in many aspects of plant genomic research, including comparative genomic analysis, phylogenomics and evolutionary analysis, and genome-wide association studies. However, the constant upgrading of computational infrastructures, such as high-capacity data storage and high-performing analysis software, is the real challenge for plant genome research. This review focuses on the challenges and opportunities that knowledge and skills in bioinformatics bring to plant scientists in the present plant genomics era, as well as future aspects in critical need of effective tools to facilitate the translation of knowledge from new sequencing data into enhancement of plant productivity. PMID:27499685
Yuen Macaire MS
Abstract Background We present a biological data warehouse called Atlas that locally stores and integrates biological sequences, molecular interactions, homology information, functional annotations of genes, and biological ontologies. The goal of the system is to provide data, as well as a software infrastructure, for bioinformatics research and development. Description The Atlas system is based on relational data models that we developed for each of the source data types. Data stored within these relational models are managed through Structured Query Language (SQL) calls that are implemented in a set of Application Programming Interfaces (APIs). The APIs include three languages: C++, Java, and Perl. The methods in these API libraries are used to construct a set of loader applications, which parse and load the source datasets into the Atlas database, and a set of toolbox applications which facilitate data retrieval. Atlas stores and integrates local instances of GenBank, RefSeq, UniProt, the Human Protein Reference Database (HPRD), the Biomolecular Interaction Network Database (BIND), the Database of Interacting Proteins (DIP), the Molecular Interactions Database (MINT), IntAct, NCBI Taxonomy, Gene Ontology (GO), Online Mendelian Inheritance in Man (OMIM), LocusLink, Entrez Gene and HomoloGene. The retrieval APIs and toolbox applications are critical components that offer end-users flexible, easy, integrated access to this data. We present use cases that use Atlas to integrate these sources for genome annotation, inference of molecular interactions across species, and gene-disease associations. Conclusion The Atlas biological data warehouse serves as data infrastructure for bioinformatics research and development. It forms the backbone of the research activities in our laboratory and facilitates the integration of disparate, heterogeneous biological sources of data, enabling new scientific inferences. Atlas achieves integration of diverse data sets at two levels. First
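The warehouse pattern the Atlas abstract describes, relational storage behind small loader and retrieval APIs, can be sketched as follows. The table schema and function names are hypothetical illustrations, not Atlas's actual API; Atlas itself provides C++, Java and Perl libraries over its own models.

```python
import sqlite3

def make_store():
    """Create an in-memory relational store with a toy interaction table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE interactions "
               "(protein_a TEXT, protein_b TEXT, source TEXT)")
    return db

def load_interaction(db, a, b, source):
    """Loader API: insert one molecular interaction from a named source."""
    db.execute("INSERT INTO interactions VALUES (?, ?, ?)", (a, b, source))

def partners_of(db, protein):
    """Retrieval API: all interaction partners of a protein, across sources."""
    rows = db.execute(
        "SELECT protein_b FROM interactions WHERE protein_a = ?", (protein,))
    return sorted(r[0] for r in rows)
```

The point of the pattern is that callers never touch SQL directly; integration across sources such as BIND and DIP happens behind the retrieval functions.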
Reviews. Equipment: BioLite Camp Stove; Game: Burnout Paradise; Equipment: 850 Universal interface and Capstone software; Equipment: xllogger; Book: Science Magic Tricks and Puzzles; Equipment: Spinthariscope; Equipment: DC Power Supply HY5002; Web Watch.
WE RECOMMEND: BioLite CampStove, a robust and multifaceted stove that illuminates physics concepts; 850 Universal interface and Capstone software, a powerful data-acquisition system offering many options for student experiments and demonstrations; xllogger, an easy-to-use datalogger with which obtaining results is far from an uphill struggle; Science Magic Tricks and Puzzles, a small but perfectly formed and inexpensive book packed with 'magic-of-science' demonstrations; Spinthariscope, a kit for older students to have the memorable experience of 'seeing' radioactivity. WORTH A LOOK: DC Power Supply HY5002, solid and effective, but noisy and lacking portability. HANDLE WITH CARE: Burnout Paradise, a car computer game that may be quick off the mark, but goes nowhere fast when it comes to lab use. WEB WATCH: a 'live' tube map and free apps would be a useful addition to school physics, but a maths-questions website is of no more use than a textbook.
Code injection is the most critical threat to web applications. Security vulnerabilities in web applications have been growing. With the growth in the importance of web applications, preventing them from unauthorized usage and maintaining data integrity have become challenging. In particular, applications that interface with back-end database components such as mainframes and product databases containing sensitive data are the attacker's main target. ...
Li, Jun; Roebuck, Paul; Grünewald, Stefan; Liang, Han
An important task in biomedical research is identifying biomarkers that correlate with patient clinical data, and these biomarkers then provide a critical foundation for the diagnosis and treatment of disease. Conventionally, such an analysis is based on individual genes, but the results are often noisy and difficult to interpret. Using a biological network as the searching platform, network-based biomarkers are expected to be more robust and provide deep insights into the molecular mechanisms of disease. We have developed a novel bioinformatics web server for identifying network-based biomarkers that most correlate with patient survival data, SurvNet. The web server takes three input files: one biological network file, representing a gene regulatory or protein interaction network; one molecular profiling file, containing any type of gene- or protein-centred high-throughput biological data (e.g. microarray expression data or DNA methylation data); and one patient survival data file (e.g. patients' progression-free survival data). Given user-defined parameters, SurvNet will automatically search for subnetworks that most correlate with the observed patient survival data. As the output, SurvNet will generate a list of network biomarkers and display them through a user-friendly interface. SurvNet can be accessed at http://bioinformatics.mdanderson.org/main/SurvNet. PMID:22570412
Constant improvements in the fields of surveying, computing and distribution of digital content are reshaping the way Cultural Heritage can be digitised and virtually accessed, even remotely via the web. A traditional 2D approach to data access, exploration and retrieval may generally suffice; however, more complex analyses concerning spatial and temporal features require 3D tools, which, in some cases, have not yet been implemented or are not yet generally commercially available. Efficient organisation and integration strategies applicable to the wide array of heterogeneous data in the field of Cultural Heritage represent a hot research topic nowadays. This article presents a visualisation and query tool (QueryArch3D) conceived to deal with multi-resolution 3D models. Geometric data are organised in successive levels of detail (LoD), provided with geometric and semantic hierarchies and enriched with attributes coming from external data sources. The visualisation and query front-end enables 3D navigation of the models in a virtual environment, as well as interaction with the objects by means of queries based on attributes or on geometries. The tool can be used as a standalone application, or served through the web. The characteristics of the research work, along with some implementation issues and the developed QueryArch3D tool, will be discussed and presented.
Rao, Zhimin; Yu, Yang; Li, Changsen
Tjin-Kam-Jet, Kien-Tsoi T.E.
This proposal identifies two main problems related to deep web search, and proposes a step-by-step solution for each of them. The first problem concerns searching deep web content by means of a simple free-text interface (with just one input field, instead of a complex interface with many input fields).
Head, Alison J.
Discusses usability as an interface design concept to improve information retrieval on the World Wide Web. Highlights include criteria for a usable Web site; usability testing; usability resources on the Web; and a sidebar that gives an example of usability testing by Hewlett-Packard. (LRW)
Roger Andrew J
Background An increasing number of bioinformatics methods are considering the phylogenetic relationships between biological sequences. Implementing new methodologies using the maximum likelihood phylogenetic framework can be a time-consuming task. Results The bioinformatics library libcov is a collection of C++ classes that provides a high- and low-level interface to maximum likelihood phylogenetics, sequence analysis and a data structure for structural biological methods. libcov can be used to compute likelihoods, search tree topologies, estimate site rates, cluster sequences, manipulate tree structures and compare phylogenies for a broad selection of applications. Conclusion Using this library, it is possible to rapidly prototype applications that use the sophistication of phylogenetic likelihoods without getting involved in a major software engineering project. libcov is thus a potentially valuable building block for developing in-house methodologies in the field of protein phylogenetics.
Roman, J. H. (Jorge H.)
As programmers we have worked with many Application Programming Interface (API) development kits. They are well suited for interaction with a particular system. A vast source of information can be made accessible by using the HTTP protocol through the web as an API. This setup has many advantages, including the vast knowledge available on setting up web servers and services. Also, these tools are available on most hardware and operating system combinations. In this paper I will cover the various types of systems that can be developed this way, their advantages and some drawbacks of this approach. Index Terms--Application Programmer Interface, Distributed applications, Hyper Text Transfer Protocol, Web.
Yang Woo Ick
Abstract Background For the past few years, scientific controversy has surrounded the large number of errors in forensic and literature mitochondrial DNA (mtDNA) data. However, recent research has shown that using mtDNA phylogeny and referring to known mtDNA haplotypes can be useful for checking the quality of sequence data. Results We developed a Web-based bioinformatics resource, "mtDNAmanager", that offers a convenient interface supporting the management and quality analysis of mtDNA sequence data. mtDNAmanager performs computations on mtDNA control-region sequences to estimate the most-probable mtDNA haplogroups and retrieves similar sequences from a selected database. By the phased designation of the most-probable haplogroups (both expected and estimated), mtDNAmanager enables users to systematically detect errors whilst allowing for confirmation of the presence of clear key diagnostic mutations and accompanying mutations. The query tools of mtDNAmanager also facilitate database screening with the two options "match" and "include the queried nucleotide polymorphism". In addition, mtDNAmanager provides Web interfaces for users to manage and analyse their own data in batch mode. Conclusion mtDNAmanager provides systematic routines for mtDNA sequence data management and analysis via easily accessible Web interfaces, and thus should be very useful for population, medical and forensic studies that employ mtDNA analysis. mtDNAmanager can be accessed at http://mtmanager.yonsei.ac.kr.
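The haplogroup-estimation step described above can be sketched as matching observed control-region variants against a table of diagnostic mutations. The toy table and scoring rule below are hypothetical illustrations, not mtDNAmanager's actual reference data; real assignment uses curated phylogenies.

```python
# Toy diagnostic-mutation table (hypothetical values for illustration only).
DIAGNOSTIC = {
    "H": {"263G", "315.1C"},
    "L3": {"263G", "16223T"},
    "M": {"263G", "16223T", "16362C"},
}

def estimate_haplogroup(observed):
    """Rank haplogroups by the fraction of their diagnostic variants observed;
    return the best-scoring (haplogroup, score) pair."""
    observed = set(observed)
    scored = [(len(muts & observed) / len(muts), hg)
              for hg, muts in DIAGNOSTIC.items()]
    best_score, best_hg = max(scored)
    return best_hg, best_score
```

A real system would also report the runner-up haplogroups, since a gap between expected and estimated assignments is exactly the error signal the abstract describes.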
Martins, Wellington Santos; Soares Lucas, Divino César; de Souza Neves, Kelligton Fabricio; Bertioli, David John
Simple sequence repeats (SSR), also known as microsatellites, have been extensively used as molecular markers due to their abundance and high degree of polymorphism. We have developed simple-to-use web software, called WebSat, for microsatellite molecular marker prediction and development. WebSat is accessible through the Internet, requiring no program installation. Although a web solution, it makes use of Ajax techniques, providing a rich, responsive user interface. WebSat allows the submi...
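Microsatellite (SSR) detection of the kind WebSat performs can be sketched with a regular-expression scan for perfect tandem repeats. The motif-length range and minimum-repeat threshold below are illustrative assumptions, not WebSat's actual parameters.

```python
import re

def find_ssrs(seq, min_motif=1, max_motif=6, min_repeats=4):
    """Scan a DNA sequence for perfect tandem repeats (SSRs).
    Returns (start, motif, repeat_count) tuples; thresholds are illustrative."""
    seq = seq.upper()
    hits = []
    for k in range(min_motif, max_motif + 1):
        # (\w{k}) captures a candidate motif; \1{n,} requires it to repeat
        pattern = re.compile(r"(\w{%d})\1{%d,}" % (k, min_repeats - 1))
        for m in pattern.finditer(seq):
            motif = m.group(1)
            if len(set(motif)) == 1 and k > 1:
                continue  # skip homopolymers re-found as longer motifs
            hits.append((m.start(), motif, len(m.group(0)) // k))
    return hits
```

Production tools additionally handle imperfect (interrupted) repeats and design PCR primers around each hit, which is where WebSat's integration with primer design comes in.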
Applying web-based distance education to teaching can increase teacher-student interaction and opportunities for interaction among learners, enriching teaching activities. In a networked environment, students are expected to be not merely recipients of instruction but active participants, team members who help build the learning system. Note, however, that inappropriate application of web-based instruction can instead make learning more confusing. In a web-based learning environment, the interface is the part users must work with when operating the computer, and it is a key factor in whether learning outcomes are achieved; the quality of interface design thus bears directly on the efficiency and effectiveness of learning. Because images are less constrained by text and language, can present spatial information more concisely, and are more natural and easier to understand, this study develops graphical user interfaces for web-based teaching methods to enhance interaction among teachers, learners and teaching content. Three highly interactive teaching methods, panel discussion, group discussion and role play, were selected, and through four stages of analysis, design, development and evaluation, the interface design and development for interactive web-based discussion methods was completed. Distance education is predicted to be a major growth area for education in the future. With this growth come challenges in instructional design in terms of new skill acquisition for instructors. The focus of this study is to design interactive visual interfaces for instructors and students to interact successfully in a web-based instructional environment. Interfaces for commonly used instructional methods such as panel discussion, group discussion and role play were developed for a web-based system. Using usability testing methods, interviews and observation were performed. Three groups of undergraduate students were interviewed to examine which interface design elements were used and how they were implemented in relation to the current web-based environment. Design guidelines for the online panel discussion interface were further examined after the evaluation.
The general audience for these lectures is mainly physicists, computer scientists, engineers or members of the general public wanting to know more about what's going on in the biosciences. What is bioinformatics and why is all this fuss being made about it? What is this revolution triggered by the human genome project? Are there any results yet? What are the problems? What new avenues of research have been opened up? What about the technology? These new developments will be compared with what happened at CERN earlier in its evolution, and it is hoped that the similarities and contrasts will stimulate new curiosity and provoke new thoughts.
Lue, Jaw-Chyng L.; Fang, Wai-Chi
A microsystem architecture for real-time, on-site, robust bioinformatic pattern recognition and analysis has been proposed. This system is compatible with on-chip DNA analysis methods such as polymerase chain reaction (PCR) amplification. A corresponding novel artificial neural network (ANN) learning algorithm using a new sigmoid-logarithmic transfer function, based on the error backpropagation (EBP) algorithm, has been developed. Our results show the trained new ANN can recognize low fluorescence patterns better than the conventional sigmoidal ANN does. A differential logarithmic imaging chip is designed for calculating the logarithm of the relative intensities of fluorescence signals. The single-rail logarithmic circuit and a prototype ANN chip were designed, fabricated and characterized.
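The intuition behind a logarithmic transfer function can be sketched numerically. The paper's exact sigmoid-logarithmic function is not specified in the abstract; the `log_sigmoid` below is a purely illustrative log-compressed sigmoid showing why a log stage separates dim fluorescence intensities better than a plain sigmoid does.

```python
import math

def sigmoid(x):
    """Conventional sigmoidal transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def log_sigmoid(x, eps=1e-9):
    """Illustrative log-domain transfer: compress intensity before squashing."""
    return sigmoid(math.log(x + eps))

# Two dim fluorescence intensities that a plain sigmoid barely separates:
lo, hi = 0.01, 0.02
plain_gap = sigmoid(hi) - sigmoid(lo)      # ~0.0025
log_gap = log_sigmoid(hi) - log_sigmoid(lo)  # ~0.0097, roughly 4x larger
```

The log stage expands the low end of the dynamic range, which is the regime where weak fluorescence patterns live.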
Magana, Alejandra J.; Taleyarkhan, Manaz; Alvarado, Daniela Rivera; Kane, Michael; Springer, John; Clase, Kari
Bioinformatics education can be broadly defined as the teaching and learning of the use of computer and information technology, along with mathematical and statistical analysis for gathering, storing, analyzing, interpreting, and integrating data to solve biological problems. The recent surge of genomics, proteomics, and structural biology in the…
Deshpande, Yogesh; Murugesan, San; Ginige, Athula; Hansen, Steve; Schwabe, Daniel; Gaedke, Martin; White, Bebo
Web Engineering is the application of systematic, disciplined and quantifiable approaches to development, operation, and maintenance of Web-based applications. It is both a pro-active approach and a growing collection of theoretical and empirical research in Web application development. This paper gives an overview of Web Engineering by addressing the questions: a) why is it needed? b) what is its domain of operation? c) how does it help and what should it do to improve Web application develo...
Friedrich, Andreas; Kenar, Erhan; Kohlbacher, Oliver; Nahnsen, Sven
Big data bioinformatics aims at drawing biological conclusions from huge and complex biological datasets. Added value from the analysis of big data, however, is only possible if the data is accompanied by accurate metadata annotation. Particularly in high-throughput experiments intelligent approaches are needed to keep track of the experimental design, including the conditions that are studied as well as information that might be interesting for failure analysis or further experiments in the future. In addition to the management of this information, means for an integrated design and interfaces for structured data annotation are urgently needed by researchers. Here, we propose a factor-based experimental design approach that enables scientists to easily create large-scale experiments with the help of a web-based system. We present a novel implementation of a web-based interface allowing the collection of arbitrary metadata. To exchange and edit information we provide a spreadsheet-based, humanly readable format. Subsequently, sample sheets with identifiers and metainformation for data generation facilities can be created. Data files created after measurement of the samples can be uploaded to a datastore, where they are automatically linked to the previously created experimental design model. PMID:25954760
Gurjar, Anoop Kishor Singh; Panwar, Abhijeet Singh; Gupta, Rajinder; Mantri, Shrikant S
High-throughput small RNA (sRNA) sequencing technology enables an entirely new perspective for plant microRNA (miRNA) research and has immense potential to unravel regulatory networks. Novel insights gained through data mining in the publicly available rich resource of sRNA data will help in designing biotechnology-based approaches for crop improvement to enhance plant yield and nutritional value. Bioinformatics resources enabling meta-analysis of miRNA expression across multiple plant species are still evolving. Here, we report PmiRExAt, a new online database resource that provides a plant miRNA expression atlas. The web-based repository comprises miRNA expression profiles and a query tool for 1859 wheat, 2330 rice and 283 maize miRNAs. The database interface offers open and easy access to miRNA expression profiles and helps in identifying tissue-preferential, differential and constitutively expressing miRNAs. A feature enabling expression study of conserved miRNAs across multiple species is also implemented. A custom expression analysis feature enables expression analysis of novel miRNAs in a total of 117 datasets. New sRNA datasets can also be uploaded for analysing miRNA expression profiles for 73 plant species. The PmiRExAt application program interface, a simple object access protocol (SOAP) web service, allows other programmers to remotely invoke the methods written for performing programmatic search operations on the PmiRExAt database. Database URL: http://pmirexat.nabi.res.in. PMID:27081157
Liva, Stéphane; Hupé, Philippe; Neuvial, Pierre; Brito, Isabel; Viara, Eric; La Rosa, Philippe; Barillot, Emmanuel
Assessing variations in DNA copy number is crucial for understanding constitutional or somatic diseases, particularly cancers. The recently developed array-CGH (comparative genomic hybridization) technology allows this to be investigated at the genomic level. We report the availability of a web tool for analysing array-CGH data. CAPweb (CGH array Analysis Platform on the Web) is intended as a user-friendly tool enabling biologists to completely analyse CGH arrays from the raw data to the visu...
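The core computation behind array-CGH analysis, per-probe log2 test/reference ratios with gain/loss calls, can be sketched briefly. This is a generic illustration, not CAPweb's pipeline; real analyses segment neighboring probes rather than thresholding each probe, and the thresholds here are illustrative.

```python
import math

def call_copy_number(test_counts, ref_counts, gain=0.3, loss=-0.3):
    """Compute per-probe log2(test/ref) ratios and call gains/losses.

    test_counts/ref_counts are paired hybridization intensities for
    the tumor (test) and normal (reference) samples at each probe.
    """
    calls = []
    for t, r in zip(test_counts, ref_counts):
        ratio = math.log2(t / r)  # 0 = balanced, >0 = gained, <0 = lost
        status = "gain" if ratio > gain else "loss" if ratio < loss else "normal"
        calls.append((round(ratio, 3), status))
    return calls
```

A single-copy gain in a diploid genome gives log2(3/2) ≈ 0.58, and a single-copy loss gives log2(1/2) = -1, which is why thresholds near ±0.3 are common starting points.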
Weniger, Thomas; Krawczyk, Justina; Supply, Philip; Niemann, Stefan; Harmsen, Dag
Harmonized typing of bacteria and easy identification of locally or internationally circulating clones are essential for epidemiological surveillance and disease control. For Mycobacterium tuberculosis complex (MTBC) species, multi-locus variable number tandem repeat analysis (MLVA) targeting mycobacterial interspersed repetitive units (MIRU) has been internationally adopted as the new standard, portable, reproducible and discriminatory typing method. However, no specialized bioinformatics web tools are available for analysing MLVA data in combination with other, complementary typing data. Therefore, we have developed the web application MIRU-VNTRplus (http://www.miru-vntrplus.org). This freely accessible service allows users to analyse genotyping data of their strains alone or in comparison with a reference database of strains representing the major MTBC lineages. Analysis and comparisons of genotypes can be based on MLVA-, spoligotype-, large sequence polymorphism and single nucleotide polymorphism data, or on a weighted combination of these markers. Tools for data exploration include search for similar strains, creation of phylogenetic and minimum spanning trees and mapping of geographic information. To facilitate scientific communication, an expanding genotype nomenclature (MLVA MtbC15-9 type) that can be queried via a web- or a SOAP-interface has been implemented. An extensive documentation guides users through all application functions. PMID:20457747
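Comparing genotyping profiles of the kind MIRU-VNTRplus handles reduces to a categorical distance between repeat-copy-number vectors. A minimal sketch, assuming a plain unweighted distance (the site also supports weighted combinations of markers, which this does not model):

```python
def mlva_distance(profile_a, profile_b):
    """Number of VNTR loci at which two MLVA profiles differ.

    Profiles are equal-length tuples of repeat copy numbers, one per
    MIRU-VNTR locus (24 loci in the standard M. tuberculosis scheme).
    """
    if len(profile_a) != len(profile_b):
        raise ValueError("profiles must cover the same loci")
    return sum(a != b for a, b in zip(profile_a, profile_b))

def nearest_strains(query, database, k=3):
    """Rank reference strains (name, profile) by distance to the query."""
    return sorted(database, key=lambda item: mlva_distance(query, item[1]))[:k]
```

Searching for similar strains, and building minimum spanning trees as the site does, both start from a pairwise distance matrix computed this way.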
Robbins Kay A
Background: Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings: SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology, where new information is being continuously generated and the latest information is important. SideCache provides several types of services, including proxy access and rate control, local caching, and automatic web service updating. Conclusions: We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework has also been used to share research results through the use of a SideCache-derived web service.
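The proxy-access, rate-control, and local-caching services the abstract names can be combined in a few lines. This is an illustrative sketch of the pattern, not SideCache's actual API; the class name, TTL policy, and `fetch` callable are all assumptions.

```python
import time

class CachingRateLimitedProxy:
    """Cache plus rate control in front of an upstream web source.

    `fetch(key)` is any callable that hits the upstream service.
    """
    def __init__(self, fetch, ttl=3600.0, min_interval=1.0):
        self.fetch = fetch
        self.ttl = ttl                    # seconds a cached entry stays fresh
        self.min_interval = min_interval  # minimum spacing between upstream calls
        self.cache = {}                   # key -> (timestamp, value)
        self.last_call = 0.0

    def get(self, key):
        now = time.monotonic()
        hit = self.cache.get(key)
        if hit and now - hit[0] < self.ttl:
            return hit[1]                 # serve locally, no upstream traffic
        wait = self.min_interval - (now - self.last_call)
        if wait > 0:
            time.sleep(wait)              # honor upstream rate restrictions
        value = self.fetch(key)
        self.last_call = time.monotonic()
        self.cache[key] = (self.last_call, value)
        return value
```

Repeated requests for the same key within the TTL never reach the upstream source, which is exactly what shields rate-restricted repositories like NCBI from redundant traffic.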
Vila, Joaquin; Lim, Billy; Anajpure, Archana
The most common interaction with the Web is through the ubiquitous browser using visual interfaces. Most search engines use traditional visual interfaces to interact with their users. However, for certain applications and users it is desirable to have other modes of interaction with the Web. Current interfaces limit the mobility of the user and…
Burch, Randall O.
Discussion of Web-based distance education focuses on communication issues. Highlights include Internet communications; components of a Web site, including site architecture, user interface, information delivery method, and mode of feedback; elements of Web design, including conceptual design, sensory design, and reactive design; and a Web…
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
Nomi L Harris
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science", "Standards and Interoperability", "Open Science and Reproducibility", "Translational Bioinformatics", "Visualization", and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community", that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
The Studio of Computational Biology & Bioinformatics (SCBB), IHBT, CSIR, Palampur, India organized one of the very first national workshops, funded by DBT, Govt. of India, on the bioinformatics issues associated with next generation sequencing approaches. The course structure was designed by SCBB, IHBT. The workshop took place in the IHBT premises on 17 and 18 June 2010.
Probabilistic topic models have been developed for applications in various domains such as text mining, information retrieval, computer vision and bioinformatics. In this thesis, we focus on developing novel probabilistic topic models for image mining and bioinformatics studies. Specifically, a probabilistic topic-connection (PTC) model…
Ramlo, Susan E.; McConnell, David; Duan, Zhong-Hui; Moore, Francisco B.
Faculty at a Midwestern metropolitan public university recently developed a course on bioinformatics that emphasized collaboration and inquiry. Bioinformatics, essentially the application of computational tools to biological data, is inherently interdisciplinary. Thus part of the challenge of creating this course was serving the needs and…
Howard, David R.; Miskowski, Jennifer A.; Grunwald, Sandra K.; Abler, Michael L.
At the University of Wisconsin-La Crosse, we have undertaken a program to integrate the study of bioinformatics across the undergraduate life science curricula. Our efforts have included incorporating bioinformatics exercises into courses in the biology, microbiology, and chemistry departments, as well as coordinating the efforts of faculty within…
Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein;
The mechanisms of immune response to cancer have been studied extensively, and great effort has been invested into harnessing the therapeutic potential of the immune system. Immunotherapies have seen significant advances in the past 20 years, but the full potential of protective and therapeutic cancer immunotherapies has yet to be fulfilled. The insufficient efficacy of existing treatments can be attributed to a number of biological and technical issues. In this review, we detail the current limitations of immunotherapy target selection and design, and review computational methods to streamline the selection of targets and co-targets for single-epitope and multi-epitope strategies. We provide examples of application to the well-known tumor antigen HER2 and suggest bioinformatics methods to ameliorate therapy resistance and ensure efficient and lasting control of tumors.
Surangi W. Punyasena
Full Text Available Recent advances in microscopy, imaging, and data analyses have permitted both the greater application of quantitative methods and the collection of large data sets that can be used to investigate plant morphology. This special issue, the first for Applications in Plant Sciences, presents a collection of papers highlighting recent methods in the quantitative study of plant form. These emerging biometric and bioinformatic approaches to plant sciences are critical for better understanding how morphology relates to ecology, physiology, genotype, and evolutionary and phylogenetic history. From microscopic pollen grains and charcoal particles, to macroscopic leaves and whole root systems, the methods presented include automated classification and identification, geometric morphometrics, and skeleton networks, as well as tests of the limits of human assessment. All demonstrate a clear need for these computational and morphometric approaches in order to increase the consistency, objectivity, and throughput of plant morphological studies.
Multi-class classification is one of the fundamental tasks in bioinformatics and typically arises in cancer diagnosis studies by gene expression profiling. This article reviews two recent approaches to multi-class classification by combining multiple binary classifiers, which are formulated based on a unified framework of error-correcting output coding (ECOC). The first approach is to construct a multi-class classifier in which each binary classifier to be aggregated has a weight value to be optimally tuned based on the observed data. In the second approach, misclassification of each binary classifier is formulated as a bit inversion error with a probabilistic model by making an analogy to the context of information transmission theory. Experimental studies using various real-world datasets including cancer classification problems reveal that both of the new methods are superior or comparable to other multi-class classification methods.
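The shared ECOC framework behind both reviewed approaches can be sketched in its simplest, unweighted form: each class gets a ±1 codeword, each binary classifier predicts one codeword bit, and decoding picks the class whose codeword is nearest in Hamming distance. The reviewed methods extend this with per-classifier weights or a probabilistic channel model, which this sketch deliberately omits.

```python
def ecoc_predict(binary_preds, code_matrix, classes):
    """Decode one multi-class prediction from binary classifier outputs.

    code_matrix[i] is the +/-1 codeword assigned to classes[i]; each
    binary classifier was trained to reproduce one codeword column.
    Returns the class whose codeword has minimum Hamming distance to
    the observed vector of binary outputs.
    """
    def hamming(codeword):
        return sum(b != c for b, c in zip(binary_preds, codeword))
    best = min(range(len(classes)), key=lambda i: hamming(code_matrix[i]))
    return classes[best]
```

With one-vs-all codewords for three classes, a prediction vector of `[-1, 1, -1]` decodes to the second class; longer codewords with larger minimum distance can absorb individual classifier errors, which is the information-transmission analogy the second approach formalizes.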
Handel, Adam E
Estrogen is a steroid hormone that plays critical roles in a myriad of intracellular pathways. The expression of many genes is regulated through the steroid hormone receptors ESR1 and ESR2. These bind to DNA and modulate the expression of target genes. Identification of estrogen target genes is greatly facilitated by the use of transcriptomic methods, such as RNA-seq and expression microarrays, and chromatin immunoprecipitation with massively parallel sequencing (ChIP-seq). Combining transcriptomic and ChIP-seq data enables a distinction to be drawn between direct and indirect estrogen target genes. This chapter discusses some methods of identifying estrogen target genes that do not require any expertise in programming languages or complex bioinformatics. PMID:26585125
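The distinction the chapter draws, combining transcriptomic and ChIP-seq evidence to separate direct from indirect targets, is at heart a set intersection. A minimal sketch, with illustrative gene names (GREB1 and TFF1 are well-known estrogen-responsive genes; the input lists here are invented):

```python
def classify_targets(de_genes, chip_bound_genes):
    """Split differentially expressed genes into direct vs. indirect targets.

    A gene that is estrogen-responsive in the transcriptome AND has an
    ESR1/ESR2 ChIP-seq peak assigned to it is a candidate direct target;
    responsive but unbound genes are likely indirect (downstream) targets.
    """
    de = set(de_genes)
    bound = set(chip_bound_genes)
    return sorted(de & bound), sorted(de - bound)
```

In practice the hard part is assigning ChIP-seq peaks to genes (e.g. by promoter proximity), which web tools can do without programming, as the chapter emphasizes.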
ACADEMIC TRAINING LECTURE SERIES, 27-28 February and 1-3 March 2006, from 11:00 to 12:00 - Auditorium, bldg. 500. Decoding the Genome. A special series of 5 lectures on: Recent extraordinary advances in the life sciences arising through new detection technologies and bioinformatics. The past five years have seen an extraordinary change in the information and tools available in the life sciences. The sequencing of the human genome, the discovery that we possess far fewer genes than foreseen, the measurement of the tiny changes in the genomes that differentiate us, the sequencing of the genomes of many pathogens that lead to diseases such as malaria are all examples of completely new information that is now available in the quest for improved healthcare. New tools have allowed similar strides in the discovery of the associated protein structures, providing invaluable information for those searching for new drugs. New DNA microarray chips permit simultaneous measurement of the state of expression of tens...
Next generation sequencing (NGS) has revolutionized the field of genomics, and its wide range of applications has resulted in the genome-wide analysis of hundreds of species and the development of thousands of computational tools. This thesis represents my work on NGS analysis of four species: Lotus japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts: genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant agricultural and biological importance. Its capacity to form symbiotic relationships with rhizobia and mycorrhizal fungi has fascinated researchers for years. Lotus has a small genome of approximately 470 Mb and a short life cycle of 2 to 3 months, which has made Lotus a model legume plant for many molecular...
Pravin Dudhagara; Sunil Bhavsar; Chintan Bhagat; Anjana Ghelani; Shreyas Bhatt; Rajesh Patel
The development of next-generation sequencing (NGS) platforms spawned an enormous volume of data. This explosion in data has unearthed new scalability challenges for existing bioinformatics tools. The analysis of metagenomic sequences using bioinformatics pipelines is complicated by the substantial complexity of these data. In this article, we review several commonly used online tools for metagenomics data analysis with respect to their quality and detail of analysis, using simulated metagenomics data. There are at least a dozen such software tools presently available in the public domain. Among them, MG-RAST, IMG/M, and METAVIR are the most well-known tools according to the number of citations by peer-reviewed scientific media up to mid-2015. Here, we describe 12 online tools with respect to their web link, annotation pipelines, clustering methods, online user support, and availability of data storage. We have also rated each tool in order to identify the most promising ones, and evaluated the five best tools using a synthetic metagenome. The article comprehensively deals with the contemporary problems and prospects of metagenomics from a bioinformatics viewpoint.
Kristina M. Obom
The completely online Master of Science in Bioinformatics program differs from the onsite program only in the mode of content delivery. Analysis of student satisfaction indicates no statistically significant difference between most online and onsite student responses, however, online and onsite students do differ significantly in their responses to a few questions on the course evaluation queries. Analysis of student exam performance using three assessments indicates that there was no significant difference in grades earned by students in online and onsite courses. These results suggest that our model for online bioinformatics education provides students with a rigorous course of study that is comparable to onsite course instruction and possibly provides a more rigorous course load and more opportunities for participation.
Obom, Kristina. M.; Cummings, Patrick J.
The completely online Master of Science in Bioinformatics program differs from the onsite program only in the mode of content delivery. Analysis of student satisfaction indicates no statistically significant difference between most online and onsite student responses, however, online and onsite students do differ significantly in their responses to a few questions on the course evaluation queries. Analysis of student exam performance using three assessments indicates that there was no significant difference in grades earned by students in online and onsite courses.
Asem KASEM; Tetsuo IDA
We present a computing environment for origami on the web. The environment consists of the computational origami engine Eos for origami construction, visualization, and geometrical reasoning, WEBEOS for providing a web interface to the functionalities of Eos, and the web service system SCORUM for symbolic computing web services. WEBEOS is developed using Web 2.0 technologies, and provides a graphical interactive web interface for origami construction and proving. In SCORUM, we are preparing web services for a wide range of symbolic computing systems, and are using these services in our origami environment. We explain the functionalities of this environment, and discuss its architectural and technological features.
Fuertes Castro, José Luis; Pérez Pérez, Aurora
Many websites have a serious accessibility problem, since their design has not taken into account the great functional diversity of their potential users. The Web Content Accessibility Guidelines, developed by the Web Consortium, consist of a series of recommendations so that a web page can be used by anyone. One of the main problems arises when checking the accessibility of a web page, given that,...
The University of Arizona Artificial Intelligence Lab (AI Lab) Dark Web project is a long-term scientific research program that aims to study and understand the international terrorism (Jihadist) phenomena via a computational, data-centric approach. We aim to collect "ALL" web content generated by international terrorist groups, including web sites, forums, chat rooms, blogs, social networking sites, videos, virtual world, etc. We have developed various multilingual data mining, text mining, and web mining techniques to perform link analysis, content analysis, web metrics (technical
Hall, Wendy; Tiropanis, Thanassis
This paper examines the evolution of the World Wide Web as a network of networks and discusses the emergence of Web Science as an interdisciplinary area that can provide us with insights on how the Web developed, and how it has affected and is affected by society. Through its different stages of evolution, the Web has gradually changed from a technological network of documents to a network where documents, data, people and organisations are interlinked in various and often unexpected ways. It...
Sprimont, P.-G.; Ricci, D.; Nicastro, L.
List, Markus; Alcaraz, Nicolas; Dissing-Hansen, Martin;
We present KeyPathwayMinerWeb, the first online platform for de novo pathway enrichment analysis directly in the browser. Given a biological interaction network (e.g. protein-protein interactions) and a series of molecular profiles derived from one or multiple OMICS studies (gene expression, for instance), KeyPathwayMiner extracts connected sub-networks containing a high number of active or differentially regulated genes (proteins, metabolites) in the molecular profiles. The web interface at http://keypathwayminer.compbio.sdu.dk implements all core functionalities of the KeyPathwayMiner tool set, such as data integration, input of background knowledge, batch runs for parameter optimization and visualization of extracted pathways. In addition to an intuitive web interface, we also implemented a RESTful API that now enables other online developers to integrate network enrichment as a web service...
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion. Web portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing a single point of access for multiple computational services and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite the high volume of tools and information that it presents. Another challenge is that the visual output on the web portal is often overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and a list of services for users to select. Customizable services are: real-time experiment status monitoring, diagnostic data access, and interactive data visualization. The web portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as Twitter and other social networks. In this series of slides, we describe the software architecture of this scientific web portal and our experiences in utilizing web 2.0 technologies.
Hripcsak, G.; Cimino, J. J.; Sengupta, S.
WebCIS is a Web-based clinical information system. It sits atop the existing Columbia University clinical information system architecture, which includes a clinical repository, the Medical Entities Dictionary, an HL7 interface engine, and an Arden Syntax based clinical event monitor. WebCIS security features include authentication with secure tokens, authorization maintained in an LDAP server, SSL encryption, permanent audit logs, and application time outs. WebCIS is currently used by 810 phy...
The traditional methods for mining foods for bioactive peptides are tedious and long. As in the drug industry, the time needed to identify and deliver a commercial health ingredient that reduces disease symptoms can be anywhere from 5 to 10 years. Reducing this time and effort is crucial in order to create new commercially viable products with clear and important health benefits. In the past few years, bioinformatics, the science that brings together fast computational biology and efficient genome mining, has been emerging as the long-awaited solution to this problem. By quickly mining food genomes for the characteristics of certain therapeutic food ingredients, researchers can potentially find new ones in a matter of a few weeks. Yet, surprisingly, very little success has been achieved so far using bioinformatics to mine for food bioactives. The absence of food-specific bioinformatic mining tools, the slow integration of experimental mining and bioinformatics, and the important differences between experimental platforms are some of the reasons for the slow progress of bioinformatics in the field of functional food, and more specifically in bioactive peptide discovery. In this paper I discuss some methods that could be easily translated, using a rational peptide bioinformatics design, to food bioactive peptide mining. I highlight the need for an integrated food peptide database. I also discuss how to better integrate experimental work with bioinformatics in order to improve the mining of food for bioactive peptides, thereby achieving a higher success rate.
Brazas, Michelle D; Ouellette, B F Francis
Bioinformatics.ca has been hosting continuing education programs in introductory and advanced bioinformatics topics in Canada since 1999 and has trained more than 2,000 participants to date. These workshops have been adapted over the years to keep pace with advances in both science and technology as well as the changing landscape in available learning modalities and the bioinformatics training needs of our audience. Post-workshop surveys have been a mandatory component of each workshop and are used to ensure appropriate adjustments are made to workshops to maximize learning. However, neither bioinformatics.ca nor others offering similar training programs have explored the long-term impact of bioinformatics continuing education training. Bioinformatics.ca recently initiated a look back on the impact its workshops have had on the career trajectories, research outcomes, publications, and collaborations of its participants. Using an anonymous online survey, bioinformatics.ca analyzed responses from those surveyed and discovered its workshops have had a positive impact on collaborations, research, publications, and career progression. PMID:27281025
A kinetic interface for orientation detection in a video training system is disclosed. The interface includes a balance platform instrumented with inertial motion sensors. The interface engages a participant's sense of balance in training exercises.
Perhaps the most significant tool of our internet age is the web search engine, providing a powerful interface for accessing the vast amount of information available on the world wide web and beyond. While still in its infancy compared to the knowledge tools that precede it - such as the dictionary or encyclopedia - the impact of web search engines on society and culture has already received considerable attention from a variety of academic disciplines and perspectives. This article aims to organize a meta-discipline of “web search studies,” centered around a nucleus of major research on web search engines from five key perspectives: technical foundations and evaluations; transaction log analyses; user studies; political, ethical, and cultural critiques; and legal and policy analyses.
Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke
Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present applications built on the MapReduce framework that can be employed in next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as future work on parallel computing in bioinformatics. PMID:23396756
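The MapReduce pattern the article surveys can be shown with an in-process sketch: a mapper emits (key, value) pairs and a reducer aggregates them, here counting k-mers across sequencing reads. This is illustrative only; a real Hadoop job would distribute the same two functions across a cluster, e.g. via Hadoop Streaming.

```python
from collections import defaultdict
from itertools import chain

def mapper(read, k=3):
    """Emit (k-mer, 1) pairs for one sequencing read."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def reducer(pairs):
    """Aggregate counts per k-mer (shuffle and reduce collapsed into one step)."""
    counts = defaultdict(int)
    for kmer, n in pairs:
        counts[kmer] += n
    return dict(counts)

reads = ["GATTACA", "TTACAGA"]
kmer_counts = reducer(chain.from_iterable(mapper(r) for r in reads))
# e.g. "TTA" occurs once in each read, so kmer_counts["TTA"] == 2
```

Because the mapper is stateless and the reducer only sees grouped pairs, the same logic parallelizes across nodes without change, which is exactly the appeal of MapReduce for sequencing-scale data.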
Gentleman, R.C.; Carey, V.J.; Bates, D.M.;
The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples.
LIU Di; LIU Quan-He; WU Lin-Huan; LIU Bin; WU Jun; LAO Yi-Mei; LI Xiao-Jing; GAO George Fu; MA Jun-Cai
Highly pathogenic influenza A virus H5N1 has spread worldwide and raised public concern. This has increased the output of influenza virus sequence data as well as research publications and other reports. In order to fight H5N1 avian flu in a comprehensive way, we designed and began to set up the Website for Avian Flu Information (http://www.avian-flu.info) in 2004. Beyond the influenza virus database already available, the website aims to integrate diversified information for both researchers and the public. From 2004 to 2009, we collected information covering all aspects: reports of outbreaks, scientific publications and editorials, policies for prevention, medicines and vaccines, and clinics and diagnosis. Except for publications, all information is in Chinese. As of April 15, 2009, cumulative news entries numbered over 2000 and research papers were approaching 5000. Using the curated data from the Influenza Virus Resource, we have set up an influenza virus sequence database and a bioinformatic platform providing the basic functions for sequence analysis of influenza virus. We will focus on the collection of experimental data and results as well as the integration of data from the geographic information system and avian influenza epidemiology.
One of the mechanisms of external signal transduction (ionizing radiation, toxicants, stress) to the target cell is the existence of membrane and intracellular proteins with intrinsic tyrosine kinase activity. No wonder that etiology of malignant growth links to abnormalities in signal transduction through tyrosine kinases. The epidermal growth factor receptor (EGFR) tyrosine kinases play fundamental roles in development, proliferation and differentiation of tissues of epithelial, mesenchymal and neuronal origin. There are four types of EGFR: EGF receptor (ErbB1/HER1), ErbB2/Neu/HER2, ErbB3/HER3 and ErbB4/HER4. Abnormal expression of EGFR, appearance of receptor mutants with changed ability to protein-protein interactions or increased tyrosine kinase activity have been implicated in the malignancy of different types of human tumors. Bioinformatics is currently using in investigation on design and selection of drugs that can make alterations in structure or competitively bind with receptors and so display antagonistic characteristics. (authors)
Web browsers have become a major application interface for participating in scientific experiments such as those in magnetic fusion. The recent advances in web technologies motivated the deployment of interactive web applications with rich features. In the scientific world, web applications have been deployed in portal environments. When used in a scientific research environment, such as fusion experiments, web portals can present diverse sources of information in a unified interface. However, the design and development of a scientific web portal has its own challenges. One such challenge is that a web portal needs to be fast and interactive despite the high volume of information and number of tools it presents. Another challenge is that the visual output of the web portal must not be overwhelming to the end users, despite the high volume of data generated by fusion experiments. Therefore, the applications and information should be customizable depending on the needs of end users. In order to meet these challenges, the design and implementation of a web portal needs to support high interactivity and user customization. A web portal has been designed to support the experimental activities of DIII-D researchers worldwide by providing multiple services, such as real-time experiment status monitoring, diagnostic data access and interactive data visualization. The web portal also supports interactive collaborations by providing a collaborative logbook, shared visualization and online instant messaging services. The portal's design utilizes the multi-tier software architecture and has been implemented utilizing web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services, which allows users to create a unique, personalized working environment to fit their own needs and interests. This paper describes the software
Reddy, Ch Ram Mohan; Srinivasa, K G; Kumar, T V Suresh; Kanth, K Rajani
Web Service is an interface which implements business logic. Performance is an important quality aspect of Web services because of their distributed nature. Predicting the performance of web services during early stages of software development is significant. In this paper we model a web service using the Unified Modeling Language: Use Case Diagram, Sequence Diagram, Deployment Diagram. We obtain performance metrics by simulating the web services model using the simulation tool Simulation of Multi-Tier Queuing Architecture. We have identified the bottleneck resources.
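The queuing-based performance prediction described above can be illustrated with a toy single-server queue simulation. The arrival and service rates here are invented, and this sketch stands in for (not reproduces) the multi-tier simulation tool named in the abstract.

```python
import random

def simulate_queue(arrival_rate, service_rate, n_requests, seed=42):
    """Mean waiting time in a single-server FIFO queue with exponential
    interarrival and service times (an M/M/1 model)."""
    rng = random.Random(seed)           # fixed seed for reproducibility
    clock = server_free_at = 0.0
    waits = []
    for _ in range(n_requests):
        clock += rng.expovariate(arrival_rate)   # next request arrives
        start = max(clock, server_free_at)       # queue if server is busy
        waits.append(start - clock)
        server_free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

# Invented rates: 5 requests/s arriving, server handles 8/s.
mean_wait = simulate_queue(arrival_rate=5.0, service_rate=8.0, n_requests=10000)
```

Sweeping the rates in such a simulation is how a bottleneck tier is identified: the tier whose queue's mean wait grows fastest as load increases dominates end-to-end response time.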
This thesis aims to identify and solve website and web interface design issues for all kinds of audiences, whether experienced on the web or not: a simple way to create a website which everyone can use and interact with, without any problems. Responsive web design is now a big priority; nowadays people do not usually visit a website on their desktop, and 95% of the time a site is visited via a smartphone or a tablet, which makes it a priority for every web designer to keep respo...
O'Hara, Kieron; Hall, Wendy
The Semantic Web is a vision of a web of linked data, allowing querying, integration and sharing of data from distributed sources in heterogeneous formats, using ontologies to provide an associated and explicit semantic interpretation. The article describes the series of layered formalisms and standards that underlie this vision, and chronicles their historical and ongoing development. A number of applications, scientific and otherwise, academic and commercial, are reviewed. The Semantic Web ...
Full Text Available Web Services are emerging as an inventive mechanism for rendering services to arbitrary devices over the WWW. As a consequence of the rapid growth of Web Services applications and the abundance of Service Providers, the consumer is faced with the necessity of selecting the "right" Service Provider. In such a scenario the Quality of Service (QoS) serves as a basis for differentiating Service Providers. To select the best Web Services and Service Providers, ranking and optimization of Web Service compositions are challenging areas of research with significant implications for the realization of the "Web of Services" vision. "Semantic Web Services" use formal semantic descriptions of Web Service functionality and interfaces to enable automated reasoning over Web Service compositions. The experimental results of this study revealed that existing Semantic Web Services face a few challenging issues, such as poor prediction of the best Web Services and optimized Service Providers, which leads to QoS degradation of the Semantic Web. To address and overcome these identified issues, this research work calculates the semantic similarities and the utilization of the various Web Services and Service Providers. After measuring these parameters, all the Web Services are ranked based on their utilization. Finally, the proposed technique selects the best Web Services based on their ranking and places them in the Web Service composition. The experimental results establish that the proposed mechanism improves the performance of the Semantic Web in terms of execution time, processor utilization and memory management.
Zeng, Zhiqiang; Shi, Hua; Wu, Yun; Hong, Zhiling
Informatics methods, such as text mining and natural language processing, are widely involved in bioinformatics research. In this study, we discuss text mining and natural language processing methods in bioinformatics from two perspectives. First, we aim to search for knowledge on biology, retrieve references using text mining methods, and reconstruct databases. For example, protein-protein interactions and gene-disease relationships can be mined from PubMed. Then, we analyze the applications of text mining and natural language processing techniques in bioinformatics, including predicting protein structure and function and detecting noncoding RNAs. Finally, numerous methods and applications, as well as their contributions to bioinformatics, are discussed for future use by text mining and natural language processing researchers. PMID:26525745
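The simplest form of the literature mining the abstract mentions is sentence-level co-occurrence: proteins named in the same sentence become candidate interactors. The lexicon and text below are invented for illustration; real pipelines use named-entity recognition and relation extraction rather than exact string matching.

```python
import itertools
import re

# Assumed toy protein lexicon; a real system would use a full gene/protein
# dictionary plus NER to handle synonyms and ambiguity.
PROTEINS = {"TP53", "MDM2", "BRCA1"}

def cooccurrence_pairs(abstract_text):
    """Return the set of protein pairs co-mentioned in one sentence."""
    pairs = set()
    for sentence in re.split(r"[.!?]", abstract_text):
        found = sorted(p for p in PROTEINS if p in sentence)
        pairs.update(itertools.combinations(found, 2))
    return pairs

text = "MDM2 binds TP53 and inhibits it. BRCA1 acts elsewhere."
pairs = cooccurrence_pairs(text)
# only MDM2 and TP53 share a sentence, so one candidate pair is found
```

Co-occurrence has high recall and low precision, which is why the abstract's second perspective (downstream analysis and curation) matters: raw pairs are candidates for a database, not confirmed interactions.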
Hiraoka, Satoshi; Yang, Ching-chia; Iwasaki, Wataru
Metagenomic approaches are now commonly used in microbial ecology to study microbial communities in more detail, including many strains that cannot be cultivated in the laboratory. Bioinformatic analyses make it possible to mine huge metagenomic datasets and discover general patterns that govern microbial ecosystems. However, the findings of typical metagenomic and bioinformatic analyses still do not completely describe the ecology and evolution of microbes in their environments. Most analyses still depend on straightforward sequence similarity searches against reference databases. We herein review the current state of metagenomics and bioinformatics in microbial ecology and discuss future directions for the field. New techniques will allow us to go beyond routine analyses and broaden our knowledge of microbial ecosystems. We need to enrich reference databases, promote platforms that enable meta- or comprehensive analyses of diverse metagenomic datasets, devise methods that utilize long-read sequence information, and develop more powerful bioinformatic methods to analyze data from diverse perspectives. PMID:27383682
Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows generally require access to multiple, distributed data sources and analytic tools. The requisite data sources may include large public data repositories, community...
Li, Xin; Ma, Li; Wang, Jinjia; Zhao, Chun
Feature selection (FS) techniques have become an important tool in the bioinformatics field. The core task is to select the hidden, significant, low-dimensional structure from a high-dimensional data space, and thus to uncover the basic built-in rules of the data. Data in bioinformatics are typically high-dimensional with small sample sizes, so research on FS algorithms in bioinformatics has great prospects. In this article, we make the interested reader aware of the possibilities of feature selection, provide basic properties of feature selection techniques, and discuss their uses in sequence analysis, microarray analysis, mass spectra analysis, etc. Finally, the current problems and prospects of feature selection algorithms in bioinformatics applications are also discussed. PMID:21604512
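A minimal example of the filter family of FS methods the survey covers: rank features by the absolute Pearson correlation between each feature and the class label, then keep the top k. The data matrix below is invented; real microarray matrices have thousands of features and tens of samples.

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length value lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_top_k(X, y, k):
    """X: list of samples (each a list of feature values); y: class labels.
    Returns indices of the k features most correlated with the label."""
    n_features = len(X[0])
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: -scores[j])[:k]

# Toy data: feature 0 tracks the label, feature 1 is noise.
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [0.8, 2.0]]
y = [0, 0, 1, 1]
top = select_top_k(X, y, k=1)   # feature 0 is selected
```

Filter methods like this are fast and classifier-independent, which suits the high-dimension/small-sample regime the abstract describes, at the cost of ignoring feature interactions that wrapper and embedded methods can capture.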
Reviews the development of scalable pattern recognition algorithms for computational biology and bioinformatics Includes numerous examples and experimental results to support the theoretical concepts described Concludes each chapter with directions for future research and a comprehensive bibliography
Kashyap, Hirak; Ahmed, Hasin Afzal; Hoque, Nazrul; Roy, Swarup; Bhattacharyya, Dhruba Kumar
Bioinformatics research is characterized by voluminous and incremental datasets and complex data analytics methods. The machine learning methods used in bioinformatics are iterative and parallel. These methods can be scaled to handle big data using the distributed and parallel computing technologies. Usually big data tools perform computation in batch-mode and are not optimized for iterative processing and high data dependency among operations. In the recent years, parallel, incremental, and ...
Schneider, M.V.; Watson, J.; Attwood, T.;
services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first Trainer Networking Session held under the auspices of the EU-funded SLING Integrating Activity, which took place in November 2009.
Keane, Thomas M; Page, Andrew J.; McInerney, James O.; Naughton, Thomas J
In the past number of years the demand for high performance computing has greatly increased in the area of bioinformatics. The huge increase in size of many genomic databases has meant that many common tasks in bioinformatics are not possible to complete in a reasonable amount of time on a single processor. Recently distributed computing has emerged as an inexpensive alternative to dedicated parallel computing. We have developed a general-purpose distributed computing platform ...
Full Text Available Abstract Background Recent advances in experimental and computational technologies have fueled the development of many sophisticated bioinformatics programs. The correctness of such programs is crucial, as incorrectly computed results may lead to wrong biological conclusions or misguide downstream experimentation. Common software testing procedures involve executing the target program with a set of test inputs and then verifying the correctness of the test outputs. However, due to the complexity of many bioinformatics programs, it is often difficult to verify the correctness of the test outputs. Therefore our ability to perform systematic software testing is greatly hindered. Results We propose to use a novel software testing technique, metamorphic testing (MT), to test a range of bioinformatics programs. Instead of requiring a mechanism to verify whether an individual test output is correct, the MT technique verifies whether a pair of test outputs conform to a set of domain-specific properties, called metamorphic relations (MRs), thus greatly increasing the number and variety of test cases that can be applied. To demonstrate how MT is used in practice, we applied MT to test two open-source bioinformatics programs, namely GNLab and SeqMap. In particular we show that MT is simple to implement, and is effective in detecting faults in a real-life program and some artificially fault-seeded programs. Further, we discuss how MT can be applied to test programs from various domains of bioinformatics. Conclusion This paper describes the application of a simple, effective and automated technique to systematically test a range of bioinformatics programs. We show how MT can be implemented in practice through two real-life case studies. Since many bioinformatics programs, particularly those for large scale simulation and data analysis, are hard to test systematically, their developers may benefit from using MT as part of the testing strategy. Therefore our work
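The metamorphic testing idea can be demonstrated in a few lines. Instead of checking an output against a known correct answer, we check that related inputs yield consistently related outputs. The target function here (GC content of a DNA sequence) is a stand-in chosen for brevity, not one of the paper's GNLab/SeqMap case studies.

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence (the program under test)."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def check_metamorphic_relations(seq):
    """Verify domain-specific MRs without knowing the 'correct' output."""
    base = gc_content(seq)
    # MR1: reversing the sequence must not change GC content
    assert abs(gc_content(seq[::-1]) - base) < 1e-12
    # MR2: duplicating the sequence must not change GC content
    assert abs(gc_content(seq * 2) - base) < 1e-12
    # MR3: appending a G must not lower GC content
    assert gc_content(seq + "G") >= base
    return True

ok = check_metamorphic_relations("GATTACA")
```

Note that none of the three relations requires knowing that the GC content of "GATTACA" is 2/7; that independence from an oracle is what makes MT applicable to programs whose outputs are too complex to verify directly.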
Stringer-Calvert David WJ; Gupta Priyanka; Wagner Valerie; Pouliot Yannick; Lee Thomas J; Tenenbaum Jessica D; Karp Peter D
Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) bu...
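The payoff of warehousing heterogeneous databases under one representational framework is that a single SQL statement can span what were separate sources. The sketch below uses an in-memory SQLite database purely so it is self-contained (BioWarehouse itself targets MySQL and Oracle), and the two-table schema and its contents are invented for illustration.

```python
import sqlite3

# Toy "warehouse": two component datasets loaded into one relational schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE gene    (id TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE pathway (gene_id TEXT, pathway TEXT);
    INSERT INTO gene VALUES ('g1', 'trpA'), ('g2', 'lacZ');
    INSERT INTO pathway VALUES ('g1', 'tryptophan biosynthesis');
""")

# One query joins across both source datasets.
rows = con.execute("""
    SELECT g.name, p.pathway
    FROM gene g JOIN pathway p ON p.gene_id = g.id
""").fetchall()
# rows pairs each gene with its pathway annotation
```

Without the shared schema, the same question would require querying each source through its own interface and stitching results together in application code.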
Ilzins, Olaf; Isea, Raul; Hoebeke, Johan
The objective of this short report is to reconsider the view of bioinformatics as merely a tool of experimental biological science. To do that, we introduce three examples to show how bioinformatics could itself be considered an experimental science. These examples show how the development of theoretical biological models generates experimentally verifiable computational hypotheses, which must then be validated by experiments in vitro or in vivo.
Modraj Bhavsar; Mrs. P. M. Chavan
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications in order to give relevant results to users. On the web, different kinds of recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. 1) First we describe the basics of web mining and the types of web mining. 2) Details of each...
Shoemaker Jason E
Full Text Available Abstract Background Interpreting in vivo sampled microarray data is often complicated by changes in the cell population demographics. To put gene expression into its proper biological context, it is necessary to distinguish differential gene transcription from artificial gene expression induced by changes in the cellular demographics. Results CTen (cell type enrichment) is a web-based analytical tool which uses our highly expressed, cell specific (HECS) gene database to identify enriched cell types in heterogeneous microarray data. The web interface is designed for differential expression and gene clustering studies, and the enrichment results are presented as heatmaps or downloadable text files. Conclusions In this work, we use an independent, cell-specific gene expression data set to assess CTen's performance in accurately identifying the appropriate cell type and provide insight into the suggested level of enrichment to optimally minimize the number of false discoveries. We show that CTen, when applied to microarray data developed from infected lung tissue, can correctly identify the cell signatures of key lymphocytes in a highly heterogeneous environment and compare its performance to another popular bioinformatics tool. Furthermore, we discuss the strong implications cell type enrichment has in the design of effective microarray workflow strategies and show that, by combining CTen with gene expression clustering, we may be able to determine the relative changes in the number of key cell types. CTen is available at http://www.influenza-x.org/~jshoemaker/cten/
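The abstract does not state CTen's exact statistic, but set-enrichment tools of this kind conventionally use a hypergeometric tail probability: given N genes total of which K mark a cell type, how surprising is it that a gene list of size n contains at least k markers? A sketch under that assumption:

```python
from math import comb

def enrichment_pvalue(N, K, n, k):
    """P(overlap >= k) under the hypergeometric null: the n-gene list is a
    random draw from N genes of which K are cell-type markers."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Invented numbers: 1000 genes, 50 markers, a 20-gene cluster with 5 markers.
p = enrichment_pvalue(N=1000, K=50, n=20, k=5)
# expected overlap is only 1, so an overlap of 5 is highly enriched (small p)
```

Running this score for every cell type in a marker database against every gene cluster yields exactly the kind of cell-type-by-cluster matrix that CTen renders as a heatmap.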
Jose M. Ferreira
Full Text Available Remote Experimentation is an educational resource that allows teachers to strengthen the practical contents of science & engineering courses. However, building up the interfaces to remote experiments is not a trivial task. Although teachers normally master the practical contents addressed by a particular remote experiment they usually lack the programming skills required to quickly build up the corresponding web interface. This paper describes the automatic generation of experiment interfaces through a web-accessible Java application. The application displays a list of existent modules and once the requested modules have been selected, it generates the code that enables the browser to display the experiment interface. The tools' main advantage is enabling non-tech teachers to create their own remote experiments.
Herman, I.; Gylling, M.
Although using advanced Web technologies at their core, e-books represent a parallel universe to everyday Web documents. Their production workflows, user interfaces, their security, access, or privacy models, etc, are all distinct. There is a lack of a vision on how to unify Digital Publishing and t
Huurdeman, H.C.; Ben David, A.; Samar, T.
Web archives provide access to snapshots of the Web of the past, and could be valuable for research purposes. However, access to these archives is often limited, both in terms of data availability, and interfaces to this data. This paper explores new methods to overcome these limitations. It present
Camacho Castro, Juan; Guimerà, Roger; Nunes Amaral, Luís A.
We analyze the properties of seven community food webs from a variety of environments, including freshwater, marine-freshwater interfaces, and terrestrial environments. We uncover quantitative unifying patterns that describe the properties of the diverse trophic webs considered and suggest that statistical physics concepts such as scaling and universality may be useful in the description of ecosystems. Specifically, we find that several quantities characterizing these diverse food webs obey f...
Filipe Silva; Gabriel David
Currently, information systems are usually supported by databases (DB) and accessed through a Web interface. Pages in such Web sites are not drawn from HTML files but are generated on the fly upon request. Indexing and searching such dynamic pages raises several extra difficulties not solved by most search engines, which were designed for static contents. In this paper we describe the development of a search engine that overcomes most of the problems for a specific Web site, how the limitatio...
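The core structure a site-specific search engine builds over crawled pages, dynamic or static, is an inverted index mapping each term to the set of pages containing it. A minimal sketch, with invented URLs (note the query strings typical of DB-generated pages) and whitespace tokenization standing in for real text processing:

```python
from collections import defaultdict

def build_index(pages):
    """Map each lowercased token to the set of page URLs containing it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for token in text.lower().split():
            index[token].add(url)
    return index

# Dynamic pages are identified by URL + query string, since the content
# is generated from the database on request rather than stored as files.
pages = {
    "/item?id=1": "fast web search",
    "/item?id=2": "dynamic web pages",
}
index = build_index(pages)
hits = index["web"]   # both pages mention "web"
```

The extra difficulty the abstract mentions lies upstream of this structure: the crawler must first enumerate the parameterized URLs (e.g. by walking the site's links or the database's keys) before there is any text to index.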
The Web browser has become one of the major application interfaces for remotely participating in magnetic fusion experiments. Recently in other areas, web portals have begun to be deployed. These portals are used to present very diverse sources of information in a unified way. While a web portal has several benefits over other software interfaces, such as providing single point of access for multiple computational services, and eliminating the need for client software installation, the design and development of a web portal has unique challenges. One of the challenges is that a web portal needs to be fast and interactive despite a high volume of tools and information that it presents. Another challenge is the visual output on the web portal often is overwhelming due to the high volume of data generated by complex scientific instruments and experiments; therefore the applications and information should be customizable depending on the needs of users. An appropriate software architecture and web technologies can meet these problems. A web-portal has been designed to support the experimental activities of DIII-D researchers worldwide. It utilizes a multi-tier software architecture, and web 2.0 technologies, such as AJAX, Django, and Memcached, to develop a highly interactive and customizable user interface. It offers a customizable interface with personalized page layouts and list of services for users to select. The users can create a unique personalized working environment to fit their own needs and interests. Customizable services are: real-time experiment status monitoring, diagnostic data access, interactive data visualization. The web-portal also supports interactive collaborations by providing collaborative logbook, shared visualization and online instant message services. Furthermore, the web portal will provide a mechanism to allow users to create their own applications on the web portal as well as bridging capabilities to external applications such as
Golbeck, Jennifer; Mutton, Paul
Internet Relay Chat (IRC) is a chat system that has millions of users. IRC robots (bots) are programs that sit in chat rooms and provide different services to users. The IRC bot as a mechanism for human interaction with the Semantic Web specifically with web services and knowledge bases is simple to program, has an intuitive, conversational interface for human users, and fits well with the inputs and outputs of Semantic Web queries. This paper presents implementations of bots for interacting ...
McKenna, Neil J
Nuclear receptors (NRs) are a superfamily of ligand-regulated transcription factors that interact with coregulators and other transcription factors to direct tissue-specific programs of gene expression. Recent years have witnessed a rapid acceleration of the output of high-content data platforms in this field, generating discovery-driven datasets that have collectively described: the organization of the NR superfamily (phylogenomics); the expression patterns of NRs, coregulators and their target genes (transcriptomics); ligand- and tissue-specific functional NR and coregulator sites in DNA (cistromics); the organization of nuclear receptors and coregulators into higher order complexes (proteomics); and their downstream effects on homeostasis and metabolism (metabolomics). Significant bioinformatics challenges lie ahead both in the integration of this information into meaningful models of NR and coregulator biology, as well as in the archiving and communication of datasets to the global nuclear receptor signaling community. While holding great promise for the field, the ascendancy of discovery-driven research in this field brings with it a collective responsibility for researchers, publishers and funding agencies alike to ensure the effective archiving and management of these data. This review will discuss factors lying behind the increasing impact of discovery-driven research, examples of high-content datasets and their bioinformatic analysis, as well as a summary of currently curated web resources in this field. This article is part of a Special Issue entitled: Translating nuclear receptors from health to disease. PMID:21029773
Castillo, Luis F; López-Gartner, Germán; Isaza, Gustavo A; Sánchez, Mariana; Arango, Jeferson; Agudelo-Valencia, Daniel; Castaño, Sergio
The need to process large quantities of data generated from genomic sequencing has resulted in a difficult task for life scientists who are not familiar with the use of command-line operations or developments in high performance computing and parallelization. This knowledge gap, along with unfamiliarity with necessary processes, can hinder the execution of data processing tasks. Furthermore, many of the commonly used bioinformatics tools for the scientific community are presented as isolated, unrelated entities that do not provide an integrated, guided, and assisted interaction with the scheduling facilities of computational resources or distribution, processing and mapping with runtime analysis. This paper presents the first approximation of a Web Services platform-based architecture (GITIRBio) that acts as a distributed front-end system for autonomous and assisted processing of parallel bioinformatics pipelines that has been validated using multiple sequences. Additionally, this platform allows integration with semantic repositories of genes for search annotations. GITIRBio is available at: http://c-head.ucaldas.edu.co:8080/gitirbio. PMID:26527189
Ison, Jon; Rapacki, Kristoffer; Ménager, Hervé; Kalaš, Matúš; Rydza, Emil; Chmura, Piotr; Anthon, Christian; Beard, Niall; Berka, Karel; Bolser, Dan; Booth, Tim; Bretaudeau, Anthony; Brezovsky, Jan; Casadio, Rita; Cesareni, Gianni; Coppens, Frederik; Cornell, Michael; Cuccuru, Gianmauro; Davidsen, Kristian; Vedova, Gianluca Della; Dogan, Tunca; Doppelt-Azeroual, Olivia; Emery, Laura; Gasteiger, Elisabeth; Gatter, Thomas; Goldberg, Tatyana; Grosjean, Marie; Grüning, Björn; Helmer-Citterich, Manuela; Ienasescu, Hans; Ioannidis, Vassilios; Jespersen, Martin Closter; Jimenez, Rafael; Juty, Nick; Juvan, Peter; Koch, Maximilian; Laibe, Camille; Li, Jing-Woei; Licata, Luana; Mareuil, Fabien; Mičetić, Ivan; Friborg, Rune Møllegaard; Moretti, Sebastien; Morris, Chris; Möller, Steffen; Nenadic, Aleksandra; Peterson, Hedi; Profiti, Giuseppe; Rice, Peter; Romano, Paolo; Roncaglia, Paola; Saidi, Rabie; Schafferhans, Andrea; Schwämmle, Veit; Smith, Callum; Sperotto, Maria Maddalena; Stockinger, Heinz; Vařeková, Radka Svobodová; Tosatto, Silvio C E; de la Torre, Victor; Uva, Paolo; Via, Allegra; Yachdav, Guy; Zambelli, Federico; Vriend, Gert; Rost, Burkhard; Parkinson, Helen; Løngreen, Peter; Brunak, Søren
Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR (the European infrastructure for biological information) that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools. PMID:26538599
1. Presentations and status reports. T. Fukahori (JAEA) reported on the plans for the www interface layout. Discussions included which functions were needed for the new RIPL-3 web pages; the results are summarized in the next section. 2. Layout of the interfaces, retrieval tools and web. The RIPL-3 home page will include a description of RIPL-3 and a link to the Technical Report in PDF format. The web page for the 'mass' segment contains the same contents as RIPL-2, except for the removal of the information about ground-state deformation. The abundance data will be replaced by data from the new BNL wallet card (2005 version). The Q-value calculation tool will also be improved. 'Nuclear Matter Density' will be renamed 'Nucleon Density Distribution'. The 'Levels' segment will be the same as before, and the deformation parameters for excited levels will be moved from the 'optical' segment and given the name 'deformation'. The 'Resonances' segment will be the same as before, though it may be replaced with the new Mughabghab tables. The 'Optical' segment will be the same as before, and the deformation parameters for excited levels will be moved to the 'optical' segment and given the name 'deformation'. Optical model calculations with ECIS and OPTMAN will be considered, and a double-folding calculation tool will possibly be provided. The 'Densities' segment will be the same as before, and the plotting programs will be checked. The 3-7 sets of combinations of GC, BSFG and GSFM with/without enhancement factors will be given. The 'Gamma' segment will be the same as before, with the addition of MLO and theoretical GDR calculations. The 'Fission' segment will be the same as before, and 'Exp.' will be renamed. New barrier evaluations will be added, for example, transition (2+) states. The fission spectrum calculation tool (codes and inputs) may be added. The fundamental format will be kept as before. For new items such as deformed 'nucleon density distribution', double-folding potential, evaluated fission barrier (extension into 3 or more) and fission
Kulvatunyou, Boonserm [ORNL]; Ivezic, Nenad [ORNL]
As markets become unexpectedly turbulent, with shortened product life cycles and a power shift towards buyers, the need for methods to rapidly and cost-effectively develop products, production facilities and supporting software is becoming urgent. The use of a virtual enterprise plays a vital role in surviving turbulent markets. However, its success requires reliable and large-scale interoperation among trading partners via a semantic web of trading partners' services whose properties, capabilities, and interfaces are encoded in an unambiguous as well as computer-understandable form. This paper demonstrates a promising approach to integration and interoperation between a design house and a manufacturer by developing semantic web services for business and engineering transactions. To this end, detailed activity and information flow diagrams are developed, in which the two trading partners exchange messages and documents. The properties and capabilities of the manufacturer sites are defined using the DARPA Agent Markup Language (DAML) ontology definition language. The prototype development of semantic webs shows that enterprises can widely interoperate in an unambiguous and autonomous manner; hence, a virtual enterprise is realizable at low cost.
Hallin, Peter Fischer; Ussery, David
Currently, new bacterial genomes are being published on a monthly basis. With the growing amount of genome sequence data, there is a demand for a flexible and easy-to-maintain structure for storing sequence data and results from bioinformatic analysis. More than 150 sequenced bacterial genomes are… detailed comparative genomics. DNA structural calculations like curvature and stacking energy, and DNA compositions like base skews, oligo skews and repeats at the local and global level, are just a few of the analyses presented on the CBS Genome Atlas web page. Complex analyses, changing methods and… frequent addition of new models are factors that require a dynamic database layout. Using basic tools like the GNU Make system, csh, Perl and MySQL, we have created a flexible database environment for storing and maintaining such results for a collection of complete microbial genomes. Currently, these…
Ponyik, Joseph G.; York, David W.
Embedded systems have traditionally been developed in a highly customized manner. The user interface hardware and software, along with the interface to the embedded system, are typically unique to the system for which they are built, resulting in extra cost to the system in terms of development time and maintenance effort. World Wide Web standards have been developed in the past ten years with the goal of allowing servers and clients to interoperate seamlessly. The client and server systems can consist of differing hardware and software platforms, but the World Wide Web standards allow them to interface without knowing the details of the system at the other end of the interface. Embedded Web Technology is the merging of embedded systems with the World Wide Web. Embedded Web Technology decreases the cost of developing and maintaining the user interface by allowing the user to interface to the embedded system through a web browser running on a standard personal computer. Embedded Web Technology can also be used to simplify an embedded system's internal network.
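The browser-as-front-panel idea can be sketched in a few lines of standard-library Python. This is an illustrative sketch only, not the NASA implementation; the telemetry keys and port number are invented for the example, and a real embedded system would read hardware registers instead of a static dictionary.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_status(telemetry):
    # Render the embedded system's state as plain HTML for any browser.
    items = "".join(f"<li>{k}: {v}</li>" for k, v in telemetry.items())
    return f"<html><body><h1>Device status</h1><ul>{items}</ul></body></html>"

class StatusHandler(BaseHTTPRequestHandler):
    # Hypothetical telemetry; a real embedded system would poll hardware here.
    telemetry = {"temperature_c": 21.5, "fan_rpm": 1200}

    def do_GET(self):
        data = render_status(self.telemetry).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To expose the device: HTTPServer(("", 8080), StatusHandler).serve_forever()
```

Any standard browser pointed at the device's address then serves as the user interface, which is the cost saving the abstract describes: no custom front-panel hardware or client software is needed.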
Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D
Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries. PMID:19151099
On the World Wide Web, various kinds of content are generated in huge amounts, so web recommendation has become an important part of web applications for delivering relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim to provide a framework for web page recommendation. (1) First we describe the basics of web mining and the types of web mining; (2) we give details of each web mining technique; (3) we propose an architecture for personalized web page recommendation.
Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.
Delin, Kevin A. (Inventor); Jackson, Shannon P. (Inventor)
A Sensor Web formed of a number of different sensor pods. Each of the sensor pods includes a clock which is synchronized with a master clock, so that all of the sensor pods in the Web have a synchronized clock. The synchronization is carried out by first using a coarse synchronization, which takes less power, and subsequently carrying out a fine synchronization to achieve a fine sync of all the pods on the Web. After the synchronization, the pods ping their neighbors to determine which pods are listening and responding, and then listen only during time slots corresponding to those pods which respond.
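The two-phase synchronization and neighbor-ping behavior can be sketched as a toy simulation. The class and method names below are hypothetical, and the model is illustrative only, not the patented design:

```python
class SensorPod:
    """Toy model of one pod in a Sensor Web (illustrative, not the patented design)."""

    def __init__(self, pod_id, clock_offset_s):
        self.pod_id = pod_id
        self.offset = clock_offset_s  # seconds of drift from the master clock
        self.neighbors = []           # pod ids that answered a ping

    def coarse_sync(self, resolution_s=1.0):
        # Cheap, low-power pass: remove whole multiples of the coarse resolution,
        # leaving at most half a resolution step of residual drift.
        self.offset -= round(self.offset / resolution_s) * resolution_s

    def fine_sync(self):
        # Follow-up high-precision pass: trim the residual drift to zero.
        self.offset = 0.0

    def ping(self, pods, awake_ids):
        # Ping the other pods and remember which ones are listening, so this
        # pod later wakes up only during those pods' time slots.
        self.neighbors = [p.pod_id for p in pods
                          if p.pod_id != self.pod_id and p.pod_id in awake_ids]
```

Running `coarse_sync` before `fine_sync` mirrors the power-saving order in the abstract: the expensive fine pass only has to correct a sub-resolution residue.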
Dragut, Eduard Constantin
An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…
de Bruijne, M.A.
The rise of the mobile internet has rapidly changed the landscape for fielding web surveys. The devices that respondents use to take a web survey vary greatly in size and user interface. This diversity in the interaction between survey and respondent makes it challenging to design a web survey for t
Ravn, Anders P.; Staunstrup, Jørgen
This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two… The model describes both functional and timing properties of an interface…
Mayr, Philipp; Tosques, Fabio
This report describes possibilities and restrictions of the Google Web APIs (Google API). The implementation of the Google API in the context of information science studies from the webometrics field shows that the Google API can be used, with restrictions, for internet-based studies. The comparison of hit results from the two Google interfaces, the Google API and the standard web interface Google.com (Google Web), shows differences concerning range, structure and availability. The study is based on si...
This thesis studies next-generation web user interaction definition languages, as well as browser software architectures. The motivation comes from new end-user requirements for web applications: demand for higher interaction, adaptation for mobile and multimodal usage, and rich multimedia content. At the same time, there is a requirement for non-programmers to be able to author, customize, and maintain web user interfaces. Current user interface tools do not support well these new kinds ...
Laranjeiro, Nuno; Vieira, Marco
Web services represent a powerful interface for back-end systems that must provide a robust interface to client applications, even in the presence of invalid inputs. However, developing robust services is a difficult task. In this paper we demonstrate wsrbench, an online tool that facilitates web services robustness testing. Additionally, we present two scenarios to motivate robustness testing and to demonstrate the power of robustness testing in web services environments.
Cohen, Andrew; Vitányi, Paul
Normalized web distance (NWD) is a similarity or normalized semantic distance based on the World Wide Web or any other large electronic database, for instance Wikipedia, and a search engine that returns reliable aggregate page counts. For sets of search terms the NWD gives a similarity on a scale from 0 (identical) to 1 (completely different). The NWD approximates the similarity according to all (upper semi)computable properties. We develop the theory and give applications. The derivation of ...
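The NWD of two search terms can be computed directly from aggregate page counts with the usual normalized-distance formula, NWD(x, y) = (max{log f(x), log f(y)} − log f(x, y)) / (log N − min{log f(x), log f(y)}). A minimal sketch follows; the page counts in the example are invented purely for illustration:

```python
import math

def nwd(fx, fy, fxy, n):
    """Normalized web distance from aggregate page counts.

    fx, fy -- number of pages containing each search term alone
    fxy    -- number of pages containing both terms together
    n      -- total number of pages indexed by the engine or database
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Invented page counts for illustration: terms that co-occur frequently
# come out near 0 (similar); terms that rarely co-occur come out near 1.
close = nwd(fx=1_000_000, fy=800_000, fxy=600_000, n=10_000_000_000)
far = nwd(fx=1_000_000, fy=800_000, fxy=2_000, n=10_000_000_000)
```

Note that two identical terms (fx = fy = fxy) give a distance of exactly 0, matching the scale described in the abstract.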
Dalpé, Gratien; Joly, Yann
Healthcare-related bioinformatics databases are increasingly offering the possibility to maintain, organize, and distribute DNA sequencing data. Different national and international institutions are currently hosting such databases that offer researchers website platforms where they can obtain sequencing data on which they can perform different types of analysis. Until recently, this process remained mostly one-dimensional, with most analysis concentrated on a limited amount of data. However, newer genome sequencing technology is producing a huge amount of data that current computer facilities are unable to handle. An alternative approach has been to start adopting cloud computing services for combining the information embedded in genomic and model system biology data, patient healthcare records, and clinical trials' data. In this new technological paradigm, researchers use virtual space and computing power from existing commercial or not-for-profit cloud service providers to access, store, and analyze data via different application programming interfaces. Cloud services are an alternative to the need of larger data storage; however, they raise different ethical, legal, and social issues. The purpose of this Commentary is to summarize how cloud computing can contribute to bioinformatics-based drug discovery and to highlight some of the outstanding legal, ethical, and social issues that are inherent in the use of cloud services. PMID:25195583
Mathe, Z.; Casajus Ramo, A.; Lazovsky, N.; Stagni, F.
For many years the DIRAC interware (Distributed Infrastructure with Remote Agent Control) has had a web interface, allowing users to monitor DIRAC activities and also interact with the system. Since then many new web technologies have emerged, so a redesign and a new implementation of the DIRAC Web portal were necessary, taking into account the lessons learnt using the old portal. These new technologies allowed us to build a more compact, robust and responsive web interface that enables users to have better control over the whole system while keeping a simple interface. The web framework provides a large set of “applications”, each of which can be used for interacting with various parts of the system. Communities can also create their own sets of personalised web applications, and can easily extend already existing ones with minimal effort. Each user can configure and personalise the view for each application and save it using the DIRAC User Profile service as a RESTful state provider, instead of using cookies. The owner of a view can share it with other users or within a user community. Compatibility between different browsers is assured, as well as with mobile versions. In this paper, we present the new DIRAC Web framework as well as the LHCb extension of the DIRAC Web portal.
Bülow, Lorenz; Hehl, Reinhard
Bioinformatics tools can be employed to identify conserved cis-sequences in sets of coregulated plant genes, as more and more gene expression and genomic sequence data become available. Knowledge of the specific cis-sequences, their enrichment and their arrangement within promoters facilitates the design of functional synthetic plant promoters that are responsive to specific stresses. The present chapter illustrates an example of the bioinformatic identification of conserved Arabidopsis thaliana cis-sequences enriched in drought stress-responsive genes. This workflow can be applied for the identification of cis-sequences in any set of coregulated genes. The workflow includes detailed protocols to determine sets of coregulated genes, to extract the corresponding promoter sequences, and to install and run a software package to identify overrepresented motifs. Further bioinformatic analyses that can be performed with the results are discussed. PMID:27557771
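The core of the motif-overrepresentation step can be sketched as fold enrichment of k-mers in coregulated promoters relative to a background set. This is a drastically simplified illustration, not the dedicated software package the protocol installs; all function names are hypothetical, and a real analysis would add statistical testing and position weight matrices:

```python
from collections import Counter

def kmer_counts(seqs, k):
    # Count every k-mer occurrence across a set of promoter sequences.
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

def enrichment(coregulated, background, k=6):
    # Fold enrichment of each k-mer: its frequency in the coregulated
    # promoter set over its frequency in the background (pseudocount of 1).
    fg = kmer_counts(coregulated, k)
    bg = kmer_counts(background, k)
    fg_total = sum(fg.values()) or 1
    bg_total = sum(bg.values()) or 1
    return {m: (fg[m] / fg_total) / ((bg.get(m, 0) + 1) / bg_total)
            for m in fg}
```

Sorting the returned dictionary by value surfaces candidate cis-sequences for the follow-up analyses the chapter discusses.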
Zales, Charlotte Rappe; Cronin, Susan J.
Sixteen high school women participated in a 5-week residential summer program designed to encourage female and minority students to choose careers in scientific fields. Students gained expertise in bioinformatics through problem-based learning in a complex learning environment of content instruction, speakers, labs, and trips. Innovative hands-on activities filled the program. Students learned biological principles in context and sophisticated bioinformatics tools for processing data. Students additionally mastered a variety of information-searching techniques. Students completed creative individual and group projects, demonstrating the successful integration of biology, information technology, and bioinformatics. Discussions with female scientists allowed students to see themselves in similar roles. Summer residential aspects fostered an atmosphere in which students matured in interacting with others and in their views of diversity.
DENG You-ping; AI Jun-mei; XIAO Pei-gen
One important purpose of investigating medicinal plants is to understand the genes and enzymes that govern the biological metabolic processes that produce bioactive compounds. Genome-wide high-throughput technologies such as genomics, transcriptomics, proteomics and metabolomics can help reach that goal. Such technologies can produce a vast amount of data which desperately needs bioinformatics and systems biology to process, manage, distribute and understand. By dealing with the "omics" data, bioinformatics and systems biology can also help improve the quality of traditional medicinal materials, develop new approaches for the classification and authentication of medicinal plants, identify new active compounds, and cultivate medicinal plant species that tolerate harsh environmental conditions. In this review, the application of bioinformatics and systems biology to medicinal plants is briefly introduced.
Structural bioinformatics is concerned with the molecular structure of biomacromolecules on a genomic scale, using computational methods. Classic problems in structural bioinformatics include the prediction of protein and RNA structure from sequence, the design of artificial proteins or enzymes… and experimental determination of macromolecular structure that are based on such methods. These developments include generative models of protein structure, the estimation of the parameters of energy functions that are used in structure prediction, the superposition of macromolecules and structure… bioinformatics. Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis…
de Miranda, Antonio B
Background: BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computing-intensive repetitive tasks can be easily accomplished in the open-source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Results: Most distributed computing / grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large-scale applications. Squid also has an efficient fault-tolerance and crash-recovery system against data loss, being able to re-route jobs upon node failure and to recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Conclusion: Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
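The near-N-fold speedup comes from splitting a large query set across worker nodes and re-routing jobs when a node fails. A minimal sketch of that scheduling idea follows; it is illustrative only, not Squid's actual scheduler, and the function names are invented:

```python
def partition(queries, n_nodes):
    # Deal the BLAST queries round-robin across the worker nodes, so each
    # node receives roughly len(queries) / n_nodes jobs.
    chunks = [[] for _ in range(n_nodes)]
    for i, q in enumerate(queries):
        chunks[i % n_nodes].append(q)
    return chunks

def reroute(chunks, failed_node):
    # Fault-tolerance sketch: redistribute a failed node's pending jobs
    # round-robin over the surviving nodes, so no query is lost.
    survivors = [c for i, c in enumerate(chunks) if i != failed_node]
    for i, q in enumerate(chunks[failed_node]):
        survivors[i % len(survivors)].append(q)
    return survivors
```

With balanced chunks and a network that is not the bottleneck, N nodes each handle about 1/N of the queries, which is the "almost N times faster" behavior the paper reports.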