WorldWideScience

Sample records for web bioinformatics framework

  1. The EMBL-EBI bioinformatics web and programmatic tools framework.

    Science.gov (United States)

    Li, Weizhong; Cowley, Andrew; Uludag, Mahmut; Gur, Tamer; McWilliam, Hamish; Squizzato, Silvano; Park, Young Mi; Buso, Nicola; Lopez, Rodrigo

    2015-07-01

    Since 2009 the EMBL-EBI Job Dispatcher framework has provided free access to a range of mainstream sequence analysis applications. These include sequence similarity search services (https://www.ebi.ac.uk/Tools/sss/) such as BLAST, FASTA and PSI-Search, multiple sequence alignment tools (https://www.ebi.ac.uk/Tools/msa/) such as Clustal Omega, MAFFT and T-Coffee, and other sequence analysis tools (https://www.ebi.ac.uk/Tools/pfa/) such as InterProScan. Through these services users can search mainstream sequence databases such as ENA, UniProt and Ensembl Genomes, utilising a uniform web interface or systematically through Web Services interfaces (https://www.ebi.ac.uk/Tools/webservices/) using common programming languages, and obtain enriched results with novel visualisations. Integration with EBI Search (https://www.ebi.ac.uk/ebisearch/) and the dbfetch retrieval service (https://www.ebi.ac.uk/Tools/dbfetch/) further expands the usefulness of the framework. New tools and updates such as NCBI BLAST+, InterProScan 5 and PfamScan, new categories such as RNA analysis tools (https://www.ebi.ac.uk/Tools/rna/), new databases such as ENA non-coding, WormBase ParaSite, Pfam and Rfam, and new workflow methods, together with the retirement of deprecated services, ensure that the framework remains relevant to today's biological community.
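The programmatic access described above can be sketched as follows. This is a minimal illustration only: the tool name, endpoint path and parameter names (`ncbiblast`, `run`, `email`, `sequence`) are assumptions modelled on the Job Dispatcher REST URL scheme, not verified against the current EBI documentation, and the request is built but never sent.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed base URL layout of the Job Dispatcher REST interface.
BASE = "https://www.ebi.ac.uk/Tools/services/rest"

def build_submit_request(tool: str, email: str, sequence: str) -> Request:
    """Build, but do not send, a job-submission POST request."""
    payload = urlencode({"email": email, "sequence": sequence}).encode()
    return Request(f"{BASE}/{tool}/run", data=payload, method="POST")

req = build_submit_request("ncbiblast", "user@example.org", "MKTAYIAKQR")
print(req.full_url)
```

A real client would then poll a status endpoint with the returned job identifier before fetching results; consult the service documentation for the actual parameters each tool accepts.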

  2. A Generic and Dynamic Framework for Web Publishing of Bioinformatics Databases

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The present paper covers a generic and dynamic framework for the web publishing of bioinformatics databases based upon a metadata design, JavaBeans, JavaServer Pages (JSP), the Extensible Markup Language (XML), the Extensible Stylesheet Language (XSL) and Extensible Stylesheet Language Transformations (XSLT). In this framework, the content is stored in a configurable and structured XML format, dynamically generated from an Oracle Relational Database Management System (RDBMS). The presentation is dynamically generated by transforming the XML document into HTML through XSLT. This clean separation between content and presentation makes the web publishing more flexible; changing the presentation only needs a modification of the stylesheet (XSL).
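A minimal illustration of the content/presentation split the paper describes: a toy XML record standing in for content generated from the database, and an XSLT template that renders it as HTML. The element names (`gene`, `name`) are invented for illustration, not taken from the paper.

```xml
<!-- content.xml: structured content, as generated from the RDBMS -->
<gene><name>BRCA1</name></gene>

<!-- style.xsl: presentation only; restyling the page touches this
     file alone and leaves the content untouched -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/gene">
    <html><body><h1><xsl:value-of select="name"/></h1></body></html>
  </xsl:template>
</xsl:stylesheet>
```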

  3. Discovery and Classification of Bioinformatics Web Services

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Critchlow, T

    2002-09-02

    The transition of the World Wide Web from a paradigm of static Web pages to one of dynamic Web services provides new and exciting opportunities for bioinformatics with respect to data dissemination, transformation, and integration. However, the rapid growth of bioinformatics services, coupled with non-standardized interfaces, diminishes the potential that these Web services offer. To face this challenge, we examine the notion of a Web service class that defines the functionality provided by a collection of interfaces. These descriptions are an integral part of a larger framework that can be used to discover, classify, and wrap Web services automatically. We discuss how this framework can be used in the context of the proliferation of sites offering BLAST sequence alignment services for specialized data sets.
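The idea of a Web service class can be sketched in a few lines; the names and the containment check below are invented for illustration and are not the authors' actual formalism. A class declares the inputs and outputs a family of interfaces must provide, and a concrete interface is checked against it:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceClass:
    """Functionality shared by a family of Web service interfaces."""
    name: str
    inputs: frozenset
    outputs: frozenset

    def matches(self, inputs, outputs) -> bool:
        # A concrete interface belongs to the class if it accepts at
        # least the required inputs and produces the required outputs.
        return self.inputs <= set(inputs) and self.outputs <= set(outputs)

blast_class = ServiceClass(
    name="sequence-similarity-search",
    inputs=frozenset({"query_sequence", "database"}),
    outputs=frozenset({"alignments"}),
)

# A specialized BLAST site with an extra parameter still fits the class.
print(blast_class.matches(
    ["query_sequence", "database", "e_value_cutoff"], ["alignments"]))
```

Under a scheme like this, the many specialized BLAST sites the abstract mentions would all match the same class and could share one automatically generated wrapper.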

  4. ballaxy: web services for structural bioinformatics.

    Science.gov (United States)

    Hildebrandt, Anna Katharina; Stöckel, Daniel; Fischer, Nina M; de la Garza, Luis; Krüger, Jens; Nickels, Stefan; Röttig, Marc; Schärfe, Charlotta; Schumann, Marcel; Thiel, Philipp; Lenhof, Hans-Peter; Kohlbacher, Oliver; Hildebrandt, Andreas

    2015-01-01

    Web-based workflow systems have gained considerable momentum in sequence-oriented bioinformatics. In structural bioinformatics, however, such systems are still relatively rare; while commercial stand-alone workflow applications are common in the pharmaceutical industry, academic researchers often still rely on command-line scripting to glue individual tools together. In this work, we address the problem of building a web-based system for workflows in structural bioinformatics. For the underlying molecular modelling engine, we opted for the BALL framework because of its extensive and well-tested functionality in the field of structural bioinformatics. The large number of molecular data structures and algorithms implemented in BALL allows for elegant and sophisticated development of new approaches in the field. We hence connected the versatile BALL library and its visualization and editing front end BALLView with the Galaxy workflow framework. The result, which we call ballaxy, enables the user to simply and intuitively create sophisticated pipelines for applications in structure-based computational biology, integrated into a standard tool for molecular modelling. ballaxy consists of three parts: some minor modifications to the Galaxy system, a collection of tools, and an integration with the BALL framework and the BALLView application for molecular modelling. Modifications to Galaxy will be submitted to the Galaxy project, and the BALL and BALLView integrations will be included in the next major BALL release. After acceptance of the modifications into the Galaxy project, we will publish all ballaxy tools via the Galaxy toolshed. In the meantime, all three components are available from http://www.ball-project.org/ballaxy. Also, docker images for ballaxy are available at https://registry.hub.docker.com/u/anhi/ballaxy/dockerfile/. ballaxy is licensed under the terms of the GPL. © The Author 2014. Published by Oxford University Press. All rights reserved.

  5. The GMOD Drupal Bioinformatic Server Framework

    Science.gov (United States)

    Papanicolaou, Alexie; Heckel, David G.

    2010-01-01

    Motivation: Next-generation sequencing technologies have led to the widespread use of -omic applications. As a result, there is now a pronounced bioinformatic bottleneck. The general model organism database (GMOD) tool kit (http://gmod.org) has produced a number of resources aimed at addressing this issue. It lacks, however, a robust online solution that can deploy heterogeneous data and software within a Web content management system (CMS). Results: We present a bioinformatic framework for the Drupal CMS. It consists of three modules. First, GMOD-DBSF is an application programming interface module for the Drupal CMS that simplifies the programming of bioinformatic Drupal modules. Second, the Drupal Bioinformatic Software Bench (biosoftware_bench) allows for a rapid and secure deployment of bioinformatic software. An innovative graphical user interface (GUI) guides both use and administration of the software, including the secure provision of pre-publication datasets. Third, we present genes4all_experiment, which exemplifies how our work supports the wider research community. Conclusion: Given the infrastructure presented here, the Drupal CMS may become a powerful new tool set for bioinformaticians. The GMOD-DBSF base module is an expandable community resource that decreases development time of Drupal modules for bioinformatics. The biosoftware_bench module can already enhance biologists' ability to mine their own data. The genes4all_experiment module has already been responsible for archiving of more than 150 studies of RNAi from Lepidoptera, which were previously unpublished. Availability and implementation: Implemented in PHP and Perl. Freely available under the GNU Public License 2 or later from http://gmod-dbsf.googlecode.com Contact: alexie@butterflybase.org PMID:20971988

  6. Bringing Web 2.0 to bioinformatics.

    Science.gov (United States)

    Zhang, Zhang; Cheung, Kei-Hoi; Townsend, Jeffrey P

    2009-01-01

    Enabling deft data integration from numerous, voluminous and heterogeneous data sources is a major bioinformatic challenge. Several approaches have been proposed to address this challenge, including data warehousing and federated databasing. Yet despite the rise of these approaches, integration of data from multiple sources remains problematic and toilsome. These two approaches follow a user-to-computer communication model for data exchange, and do not facilitate a broader concept of data sharing or collaboration among users. In this report, we discuss the potential of Web 2.0 technologies to transcend this model and enhance bioinformatics research. We propose a Web 2.0-based Scientific Social Community (SSC) model for the implementation of these technologies. By establishing a social, collective and collaborative platform for data creation, sharing and integration, we promote a web services-based pipeline featuring web services for computer-to-computer data exchange as users add value. This pipeline aims to simplify data integration and creation, to realize automatic analysis, and to facilitate reuse and sharing of data. SSC can foster collaboration and harness collective intelligence to create and discover new knowledge. In addition to its research potential, we also describe its potential role as an e-learning platform in education. We discuss lessons from information technology, predict the next generation of Web (Web 3.0), and describe its potential impact on the future of bioinformatics studies.

  7. Evolution of web services in bioinformatics.

    Science.gov (United States)

    Neerincx, Pieter B T; Leunissen, Jack A M

    2005-06-01

    Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformaticians have experimented with several strategies to try to integrate data sets and tools. Owing to the lack of standards for data sets and the interfaces of the tools this is not a trivial task. Over the past few years building services with web-based interfaces has become a popular way of sharing the data and tools that have resulted from many bioinformatics projects. This paper discusses the interoperability problem and how web services are being used to try to solve it, resulting in the evolution of tools with web interfaces from HTML/web form-based tools not suited for automatic workflow generation to a dynamic network of XML-based web services that can easily be used to create pipelines.

  8. BOWS (bioinformatics open web services) to centralize bioinformatics tools in web services.

    Science.gov (United States)

    Velloso, Henrique; Vialle, Ricardo A; Ortega, J Miguel

    2015-06-02

    Bioinformaticians face a range of difficulties in getting locally-installed tools running and producing results; they would greatly benefit from a system that could centralize most of the tools behind an easy interface for input and output. Web services, due to their universal nature and widely known interface, are a very good option to achieve this goal. Bioinformatics open web services (BOWS) is a system based on generic web services produced to allow programmatic access to applications running on high-performance computing (HPC) clusters. BOWS mediates access to registered tools by providing front-end and back-end web services. Programmers can install applications on HPC clusters in any programming language and use the back-end service to check for new jobs and their parameters, and then to send the results to BOWS. Programs running on ordinary computers consume the BOWS front-end service to submit new processes and read results. BOWS compiles Java clients, which encapsulate the front-end web service requests, and automatically creates a web page that lists the registered applications and clients. BOWS-registered applications can be accessed from virtually any programming language through web services, or using the standard Java clients. The back-end can run on HPC clusters, allowing bioinformaticians to remotely run high-processing-demand applications directly from their machines.
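The front-end/back-end division can be sketched as a polling loop over a shared job registry. The dictionaries below are an in-memory stand-in for the real BOWS services, and all names are illustrative:

```python
# In-memory stand-in for the BOWS registry; the real system exposes
# these operations as front-end and back-end web services.
jobs: dict[int, dict] = {}
results: dict[int, str] = {}

def submit(job_id: int, tool: str, params: dict) -> None:
    """Front-end: a client registers a new job."""
    jobs[job_id] = {"tool": tool, "params": params, "done": False}

def poll_and_run() -> None:
    """Back-end: the HPC side checks for new jobs and posts results."""
    for job_id, job in jobs.items():
        if not job["done"]:
            # Here the cluster would actually launch the registered tool.
            results[job_id] = f"{job['tool']} finished with {job['params']}"
            job["done"] = True

submit(1, "blast", {"evalue": 0.001})
poll_and_run()
print(results[1])
```

The decoupling is the point: the submitting machine never talks to the cluster directly, only to the shared registry, which is why the back-end can live behind an HPC firewall.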

  9. MAPI: towards the integrated exploitation of bioinformatics Web Services

    Directory of Open Access Journals (Sweden)

    Karlsson Johan

    2011-10-01

    Full Text Available Abstract Background Bioinformatics is commonly featured as a well-assorted list of available web resources. Although diversity of services is positive in general, the proliferation of tools, their dispersion and heterogeneity complicate the integrated exploitation of such data-processing capacity. Results To facilitate the construction of software clients and make integrated use of this variety of tools, we present a modular programmatic application interface (MAPI) that provides the necessary functionality for the uniform representation of Web Services metadata descriptors, including the management and invocation protocols of the services they represent. This document describes the main functionality of the framework and how it can be used to facilitate the deployment of new software under a unified structure of bioinformatics Web Services. A notable feature of MAPI is the modular organization of the functionality into different modules associated with specific tasks. This means that only the modules needed for the client have to be installed, and that the module functionality can be extended without the need for re-writing the software client. Conclusions The potential utility and versatility of the software library has been demonstrated by the implementation of several currently available clients that cover different aspects of integrated data processing, ranging from service discovery to service invocation with advanced features such as workflow composition and asynchronous service calls to multiple types of Web Services, including those registered in repositories (e.g. GRID-based, SOAP, BioMOBY, R-bioconductor and others).

  10. Evolution of web services in bioinformatics

    NARCIS (Netherlands)

    Neerincx, P.B.T.; Leunissen, J.A.M.

    2005-01-01

    Bioinformaticians have developed large collections of tools to make sense of the rapidly growing pool of molecular biological data. Biological systems tend to be complex and in order to understand them, it is often necessary to link many data sets and use more than one tool. Therefore, bioinformatic

  11. Web Application Frameworks

    Directory of Open Access Journals (Sweden)

    Maria Cristina ENACHE

    2015-12-01

    Full Text Available As modern browsers become more powerful with rich features, building full-blown web applications in JavaScript is not only feasible, but increasingly popular. Based on trends in the HTTP Archive, deployed JavaScript code size has grown 45% over the course of the year. MVC offers architectural benefits over standard JavaScript: it helps you write better organized, and therefore more maintainable, code. This pattern has been used and extensively tested over multiple languages and generations of programmers. It is no coincidence that many of the most popular web programming frameworks also encapsulate MVC principles: Django, Ruby on Rails, CakePHP, Struts and Laravel.
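The MVC split the article refers to can be shown in a few lines. This is a toy sketch of the pattern itself, not tied to any of the frameworks named above:

```python
class Model:
    """Holds application state; knows nothing about rendering."""
    def __init__(self):
        self.items = []

class View:
    """Renders model state; holds no state of its own."""
    @staticmethod
    def render(model):
        return "Todo: " + ", ".join(model.items)

class Controller:
    """Translates user actions into model updates."""
    def __init__(self, model):
        self.model = model
    def add_item(self, text):
        self.model.items.append(text)

model = Model()
Controller(model).add_item("write thesis")
print(View.render(model))
```

Because the view reads the model but never mutates it, the same model can back several views, which is the maintainability benefit the frameworks package up.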

  12. WIWS: a protein structure bioinformatics Web service collection.

    Science.gov (United States)

    Hekkelman, M L; Te Beek, T A H; Pettifer, S R; Thorne, D; Attwood, T K; Vriend, G

    2010-07-01

    The WHAT IF molecular-modelling and drug design program is widely distributed in the world of protein structure bioinformatics. Although originally designed as an interactive application, its highly modular design and inbuilt control language have recently enabled its deployment as a collection of programmatically accessible web services. We report here a collection of WHAT IF-based protein structure bioinformatics web services: these relate to structure quality, the use of symmetry in crystal structures, structure correction and optimization, adding hydrogens and optimizing hydrogen bonds, and a series of geometric calculations. The freely accessible web services are based on the industry standard WS-I profile and the EMBRACE technical guidelines, and are available via both REST and SOAP paradigms. The web services run on a dedicated computational cluster; their function and availability are monitored daily.

  13. Bioinformatics Data Distribution and Integration via Web Services and XML

    Institute of Scientific and Technical Information of China (English)

    Xiao Li; Yizheng Zhang

    2003-01-01

    It is widely recognized that the exchange, distribution, and integration of biological data are key to improving bioinformatics and genome biology in the post-genomic era. However, the problem of exchanging and integrating biological data has not been solved satisfactorily. The eXtensible Markup Language (XML) is rapidly spreading as an emerging standard for structuring documents to exchange and integrate data on the World Wide Web (WWW). Web services are the next generation of the WWW, founded upon the open standards of the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force). This paper presents XML and Web Services technologies and their use as an appropriate solution to the problem of bioinformatics data exchange and integration.
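The exchange pattern the paper advocates can be sketched as a round trip through XML using only the standard library. The record fields (`sequence`, `organism`, `residues`) are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Serialize a toy sequence record to XML for exchange...
record = ET.Element("sequence", id="AB000001")
ET.SubElement(record, "organism").text = "Oryza sativa"
ET.SubElement(record, "residues").text = "ATGGCG"
wire_format = ET.tostring(record, encoding="unicode")

# ...and parse it back on the receiving side, regardless of what
# platform or language produced it.
parsed = ET.fromstring(wire_format)
print(parsed.get("id"), parsed.findtext("organism"))
```

Because both sides agree only on the document structure, not on any programming language or database schema, this is the integration property the abstract attributes to XML.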

  14. MVC Frameworks in Web Development

    OpenAIRE

    Kolu, Aku

    2012-01-01

    With the increased demand for complex, well-scalable and maintainable web applications, the MVC architecture is increasing in popularity and frameworks (whether they utilize the MVC architecture or not) are quickly becoming the de facto standard in web development. This Bachelor's Thesis introduces the use of the MVC architecture in web development and how several web application frameworks make use of it. This research introduces the concepts of both the MVC architecture and web ap...

  15. A web services choreography scenario for interoperating bioinformatics applications

    Directory of Open Access Journals (Sweden)

    Cheung David W

    2004-03-01

    Full Text Available Abstract Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: 1) the platforms on which the applications run are heterogeneous, 2) their web interface is not machine-friendly, 3) they use a non-standard format for data input and output, 4) they do not exploit standards to define application interface and message exchange, and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard coded Java application, Collaxa BPEL Server and Taverna Workbench.
The Java program functions as a web services engine and interoperates

  16. A web services choreography scenario for interoperating bioinformatics applications

    Science.gov (United States)

    de Knikker, Remko; Guo, Youjun; Li, Jin-long; Kwan, Albert KH; Yip, Kevin Y; Cheung, David W; Cheung, Kei-Hoi

    2004-01-01

    Background Very often genome-wide data analysis requires the interoperation of multiple databases and analytic tools. A large number of genome databases and bioinformatics applications are available through the web, but it is difficult to automate interoperation because: 1) the platforms on which the applications run are heterogeneous, 2) their web interface is not machine-friendly, 3) they use a non-standard format for data input and output, 4) they do not exploit standards to define application interface and message exchange, and 5) existing protocols for remote messaging are often not firewall-friendly. To overcome these issues, web services have emerged as a standard XML-based model for message exchange between heterogeneous applications. Web services engines have been developed to manage the configuration and execution of a web services workflow. Results To demonstrate the benefit of using web services over traditional web interfaces, we compare the two implementations of HAPI, a gene expression analysis utility developed by the University of California San Diego (UCSD) that allows visual characterization of groups or clusters of genes based on the biomedical literature. This utility takes a set of microarray spot IDs as input and outputs a hierarchy of MeSH Keywords that correlates to the input and is grouped by Medical Subject Heading (MeSH) category. While the HTML output is easy for humans to visualize, it is difficult for computer applications to interpret semantically. To facilitate the capability of machine processing, we have created a workflow of three web services that replicates the HAPI functionality. These web services use document-style messages, which means that messages are encoded in an XML-based format. We compared three approaches to the implementation of an XML-based workflow: a hard coded Java application, Collaxa BPEL Server and Taverna Workbench. The Java program functions as a web services engine and interoperates with these web
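A document-style message of the kind the workflow exchanges can be sketched as follows. The element names are invented for illustration; the actual HAPI message schema is not given in the abstract:

```python
import xml.etree.ElementTree as ET

def make_request_document(spot_ids):
    """Wrap microarray spot IDs in a document-style XML message."""
    root = ET.Element("hapiRequest")
    for spot_id in spot_ids:
        ET.SubElement(root, "spotId").text = spot_id
    return ET.tostring(root, encoding="unicode")

message = make_request_document(["S001", "S002"])
print(message)
```

Unlike the HTML output the abstract contrasts it with, a message like this carries only structure and data, so the next service in the workflow can consume it without screen-scraping.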

  17. KBWS: an EMBOSS associated package for accessing bioinformatics web services.

    Science.gov (United States)

    Oshita, Kazuki; Arakawa, Kazuharu; Tomita, Masaru

    2011-04-29

    The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).
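An RPC-style SOAP request of the kind the WSDL files describe can be built by hand with the standard library. The operation name below (`runClustalw`) is a hypothetical example for illustration, not taken from the actual kbws.wsdl:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(operation: str, params: dict) -> str:
    """Build a minimal RPC-style SOAP request body."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = value
    return ET.tostring(env, encoding="unicode")

# Hypothetical operation; consult the WSDL for the real operation names.
print(soap_envelope("runClustalw", {"sequences": ">s1\nATG"}))
```

In practice a SOAP toolkit generates these envelopes from the WSDL automatically; the sketch only shows what travels over the wire.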

  18. KBWS: an EMBOSS associated package for accessing bioinformatics web services

    Directory of Open Access Journals (Sweden)

    Tomita Masaru

    2011-04-01

    Full Text Available Abstract The availability of bioinformatics web-based services is rapidly proliferating, for their interoperability and ease of use. The next challenge is in the integration of these services in the form of workflows, and several projects are already underway, standardizing the syntax, semantics, and user interfaces. In order to deploy the advantages of web services with locally installed tools, here we describe a collection of proxy client tools for 42 major bioinformatics web services in the form of European Molecular Biology Open Software Suite (EMBOSS) UNIX command-line tools. EMBOSS provides sophisticated means for discoverability and interoperability for hundreds of tools, and our package, named the Keio Bioinformatics Web Service (KBWS), adds functionalities of local and multiple alignment of sequences, phylogenetic analyses, and prediction of cellular localization of proteins and RNA secondary structures. This software implemented in C is available under GPL from http://www.g-language.org/kbws/ and GitHub repository http://github.com/cory-ko/KBWS. Users can utilize the SOAP services implemented in Perl directly via WSDL file at http://soap.g-language.org/kbws.wsdl (RPC Encoded) and http://soap.g-language.org/kbws_dl.wsdl (Document/literal).

  19. The web server of IBM's Bioinformatics and Pattern Discovery group

    OpenAIRE

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel,; Shibuya, Tetsuo

    2003-01-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic ...

  20. Biopipe: a flexible framework for protocol-based bioinformatics analysis.

    Science.gov (United States)

    Hoon, Shawn; Ratnapu, Kiran Kumar; Chia, Jer-Ming; Kumarasamy, Balamurugan; Juguang, Xiao; Clamp, Michele; Stabenau, Arne; Potter, Simon; Clarke, Laura; Stupka, Elia

    2003-08-01

    We identify several challenges facing bioinformatics analysis today. Firstly, to fulfill the promise of comparative studies, bioinformatics analysis will need to accommodate different sources of data residing in a federation of databases that, in turn, come in different formats and modes of accessibility. Secondly, the tsunami of data to be handled will require robust systems that enable bioinformatics analysis to be carried out in a parallel fashion. Thirdly, the ever-evolving state of bioinformatics presents new algorithms and paradigms in conducting analysis. This means that any bioinformatics framework must be flexible and generic enough to accommodate such changes. In addition, we identify the need for introducing an explicit protocol-based approach to bioinformatics analysis that will lend rigorousness to the analysis. This makes it easier for experimentation and replication of results by external parties. Biopipe is designed in an effort to meet these goals. It aims to allow researchers to focus on protocol design. At the same time, it is designed to work over a compute farm and thus provides high-throughput performance. A common exchange format that encapsulates the entire protocol in terms of the analysis modules, parameters, and data versions has been developed to provide a powerful way in which to distribute and reproduce results. This will enable researchers to discuss and interpret the data better as the once implicit assumptions are now explicitly defined within the Biopipe framework.
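The exchange format idea, capturing the analysis modules, parameters and data versions so that a protocol can be redistributed and rerun, can be sketched like this. The field names are invented for illustration; Biopipe's actual exchange format is richer than this:

```python
import json

# A protocol as a self-describing, shareable document: every implicit
# assumption (tool choice, parameters, data version) is made explicit.
protocol = {
    "name": "genome-annotation-v1",
    "data_versions": {"genome": "release-12"},
    "modules": [
        {"tool": "repeatmasker", "params": {"species": "human"}},
        {"tool": "genscan", "params": {}},
    ],
}

# Serializing the whole protocol is what makes the analysis
# reproducible: a collaborator loads it and reruns the same pipeline.
shared = json.dumps(protocol, indent=2)
restored = json.loads(shared)
print(restored["modules"][0]["tool"])
```

A compute farm can then fan the modules out over many inputs in parallel, which is the high-throughput half of the design.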

  1. Web services at the European Bioinformatics Institute-2009.

    Science.gov (United States)

    McWilliam, Hamish; Valentin, Franck; Goujon, Mickael; Li, Weizhong; Narayanasamy, Menaka; Martin, Jenny; Miyar, Teresa; Lopez, Rodrigo

    2009-07-01

    The European Bioinformatics Institute (EMBL-EBI) has been providing access to mainstream databases and tools in bioinformatics since 1997. In addition to the traditional web form based interfaces, APIs exist for core data resources such as EMBL-Bank, Ensembl, UniProt, InterPro, PDB and ArrayExpress. These APIs are based on Web Services (SOAP/REST) interfaces that allow users to systematically access databases and analytical tools. From the user's point of view, these Web Services provide the same functionality as the browser-based forms. However, using the APIs frees the user from web-page constraints and is ideal for the analysis of large batches of data, performing text-mining tasks and the casual or systematic evaluation of mathematical models in regulatory networks. Furthermore, these services are widespread and easy to use: they require no prior knowledge of the technology and no more than basic experience in programming. In the following we report on new and updated services, and briefly describe planned developments to be made available during the course of 2009-2010.

  2. MOWServ: a web client for integration of bioinformatic resources.

    Science.gov (United States)

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J; Claros, M Gonzalo; Trelles, Oswaldo

    2010-07-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user's tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/.

  3. MOWServ: a web client for integration of bioinformatic resources

    Science.gov (United States)

    Ramírez, Sergio; Muñoz-Mérida, Antonio; Karlsson, Johan; García, Maximiliano; Pérez-Pulido, Antonio J.; Claros, M. Gonzalo; Trelles, Oswaldo

    2010-01-01

    The productivity of any scientist is affected by cumbersome, tedious and time-consuming tasks that try to make the heterogeneous web services compatible so that they can be useful in their research. MOWServ, the bioinformatic platform offered by the Spanish National Institute of Bioinformatics, was released to provide integrated access to databases and analytical tools. Since its release, the number of available services has grown dramatically, and it has become one of the main contributors of registered services in the EMBRACE Biocatalogue. The ontology that enables most of the web-service compatibility has been curated, improved and extended. The service discovery has been greatly enhanced by Magallanes software and biodataSF. User data are securely stored on the main server by an authentication protocol that enables the monitoring of current or already-finished user’s tasks, as well as the pipelining of successive data processing services. The BioMoby standard has been greatly extended with the new features included in the MOWServ, such as management of additional information (metadata such as extended descriptions, keywords and datafile examples), a qualified registry, error handling, asynchronous services and service replication. All of them have increased the MOWServ service quality, usability and robustness. MOWServ is available at http://www.inab.org/MOWServ/ and has a mirror at http://www.bitlab-es.com/MOWServ/. PMID:20525794

  4. Automatic Discovery and Inferencing of Complex Bioinformatics Web Interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ngu, A; Rocco, D; Critchlow, T; Buttler, D

    2003-12-22

The World Wide Web provides a vast resource to genomics researchers in the form of web-based access to distributed data sources--e.g. BLAST sequence homology search interfaces. However, the process for seeking the desired scientific information is still very tedious and frustrating. While there are several well-known servers for genomic data (e.g., GenBank, EMBL, NCBI) that are shared and accessed frequently, new data sources are created each day in laboratories all over the world. The sharing of these newly discovered genomics results is hindered by the lack of a common interface or data exchange mechanism. Moreover, the number of autonomous genomics sources and their rate of change out-pace the speed at which they can be manually identified, meaning that the available data is not being utilized to its full potential. An automated system that can find, classify, describe and wrap new sources without tedious and low-level coding of source-specific wrappers is needed to assist scientists in accessing hundreds of dynamically changing bioinformatics web data sources through a single interface. A correct classification of any kind of Web data source must address both the capability of the source and the conversation/interaction semantics inherent in the design of the Web data source. In this paper, we propose an automatic approach to classify Web data sources that takes into account both the capability and the conversational semantics of the source. The ability to discover the interaction pattern of a Web source leads to increased accuracy in the classification process. At the same time, it facilitates the extraction of process semantics, which is necessary for the automatic generation of wrappers that can interact correctly with the sources.

  5. SIDECACHE: Information access, management and dissemination framework for web services

    Directory of Open Access Journals (Sweden)

    Robbins Kay A

    2011-06-01

Full Text Available Abstract Background Many bioinformatics algorithms and data sets are deployed using web services so that the results can be explored via the Internet and easily integrated into other tools and services. These services often include data from other sites that is accessed either dynamically or through file downloads. Developers of these services face several problems because of the dynamic nature of the information from the upstream services. Many publicly available repositories of bioinformatics data frequently update their information. When such an update occurs, the developers of the downstream service may also need to update. For file downloads, this process is typically performed manually, followed by a web service restart. Requests for information obtained by dynamic access of upstream sources are sometimes subject to rate restrictions. Findings SideCache provides a framework for deploying web services that integrate information extracted from other databases and from web sources that are periodically updated. This situation occurs frequently in biotechnology where new information is being continuously generated and the latest information is important. SideCache provides several types of services including proxy access and rate control, local caching, and automatic web service updating. Conclusions We have used the SideCache framework to automate the deployment and updating of a number of bioinformatics web services and tools that extract information from remote primary sources such as NCBI, NCIBI, and Ensembl. The SideCache framework also has been used to share research results through the use of a SideCache derived web service.
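The combination of local caching and rate control that SideCache provides can be sketched as follows. This is a minimal illustration under stated assumptions, not SideCache's actual implementation; the class and parameter names are invented for the example:

```python
import time

class CachingProxy:
    """Sketch of SideCache-style proxy access: serve locally cached
    copies when fresh, and throttle calls to the upstream source."""

    def __init__(self, fetch, ttl=3600.0, min_interval=1.0):
        self.fetch = fetch                 # upstream fetch, e.g. an HTTP GET
        self.ttl = ttl                     # seconds a cached entry stays fresh
        self.min_interval = min_interval   # minimum gap between upstream calls
        self._cache = {}                   # url -> (timestamp, payload)
        self._last_fetch = 0.0

    def get(self, url):
        now = time.time()
        entry = self._cache.get(url)
        if entry and now - entry[0] < self.ttl:
            return entry[1]                # cache hit: no upstream traffic
        wait = self.min_interval - (now - self._last_fetch)
        if wait > 0:
            time.sleep(wait)               # respect upstream rate restrictions
        payload = self.fetch(url)
        self._last_fetch = time.time()
        self._cache[url] = (self._last_fetch, payload)
        return payload
```

A periodic refresh job that re-fetches expired entries would cover the "automatic updating" part of the framework.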

  6. BioSWR--semantic web services registry for bioinformatics.

    Directory of Open Access Journals (Sweden)

    Dmitry Repchevsky

Full Text Available Despite the variety of available Web services registries specially aimed at Life Sciences, their scope is usually restricted to a limited set of well-defined types of services. While dedicated registries are generally tied to a particular format, general-purpose ones are more adherent to standards and usually rely on the Web Service Definition Language (WSDL). Although WSDL is flexible enough to support common Web service types, its lack of semantic expressiveness led to various initiatives to describe Web services via ontology languages. Nevertheless, WSDL 2.0 descriptions gained a standard representation based on the Web Ontology Language (OWL). BioSWR is a novel Web services registry that provides standard Resource Description Framework (RDF)-based Web services descriptions along with the traditional WSDL-based ones. The registry provides a Web-based interface for Web services registration, querying and annotation, and is also accessible programmatically via a Representational State Transfer (REST) API or using the SPARQL Protocol and RDF Query Language. The BioSWR server is located at http://inb.bsc.es/BioSWR/ and its code is available at https://sourceforge.net/projects/bioswr/ under the LGPL license.
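Because the registry exposes RDF descriptions queryable via SPARQL, a client can discover services with an ordinary SPARQL query. A hedged sketch: the `wsdl:` prefix follows the W3C WSDL/RDF mapping, but the exact graph layout and endpoint path BioSWR uses are assumptions here:

```python
# Assumed endpoint path; BioSWR's actual SPARQL URL may differ.
SPARQL_ENDPOINT = "http://inb.bsc.es/BioSWR/sparql"

def service_query(keyword):
    """Build a SPARQL query over WSDL 2.0 service descriptions in RDF,
    filtering service URIs by a case-insensitive keyword."""
    return f"""
PREFIX wsdl: <http://www.w3.org/ns/wsdl-rdf#>
SELECT ?service
WHERE {{
  ?service a wsdl:Service .
  FILTER regex(str(?service), "{keyword}", "i")
}}
"""
# The query string would then be POSTed to SPARQL_ENDPOINT with any
# HTTP client (e.g. urllib.request), content type application/sparql-query.
```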

  7. WeBIAS: a web server for publishing bioinformatics applications.

    Science.gov (United States)

    Daniluk, Paweł; Wilczyński, Bartek; Lesyng, Bogdan

    2015-11-02

One of the requirements for a successful scientific tool is its availability. Developing a functional web service, however, is usually considered a mundane and ungratifying task, and is quite often neglected. When publishing bioinformatic applications, such an attitude puts an additional burden on reviewers, who have to cope with poorly designed interfaces in order to assess the quality of the presented methods, and it impairs the actual usefulness to the scientific community at large. In this note we present WeBIAS, a simple, self-contained solution to make command-line programs accessible through web forms. It comprises a web portal capable of serving several applications and backend schedulers which carry out computations. The server handles user registration and authentication, stores queries and results, and provides a convenient administrator interface. WeBIAS is implemented in Python and available under the GNU Affero General Public License. It has been developed and tested on GNU/Linux-compatible platforms, which cover a vast majority of operational WWW servers. Since it is written in pure Python, it should also be easy to deploy on other platforms supporting Python (e.g. Windows, Mac OS X). Documentation and source code, as well as a demonstration site, are available at http://bioinfo.imdik.pan.pl/webias . WeBIAS has been designed specifically with ease of installation and deployment of services in mind. Setting up a simple application requires minimal effort, yet it is possible to create visually appealing, feature-rich interfaces for query submission and presentation of results.
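Exposing a command-line tool through a validated web form, as WeBIAS does, boils down to mapping form fields onto a command template. A minimal sketch of that idea; the service definition and field names below are hypothetical, not WeBIAS's configuration format:

```python
import shlex
import subprocess

# Hypothetical service definition: a command template plus a whitelist
# of form fields with their expected types.
SERVICE = {
    "template": "blastp -query {query_file} -evalue {evalue}",
    "fields": {"query_file": str, "evalue": float},
}

def build_command(form):
    """Validate submitted form values and render the command line."""
    values = {}
    for name, cast in SERVICE["fields"].items():
        if name not in form:
            raise ValueError(f"missing field: {name}")
        values[name] = cast(form[name])      # type-check user input
    return shlex.split(SERVICE["template"].format(**values))

def run(form):
    """A backend scheduler would queue this; here we run synchronously."""
    return subprocess.run(build_command(form), capture_output=True, text=True)
```

The whitelist-plus-template approach keeps arbitrary user strings out of the shell, which is the main safety concern when wrapping CLIs.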

  8. The web server of IBM's Bioinformatics and Pattern Discovery group.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore; Parida, Laxmi; Platt, Daniel; Shibuya, Tetsuo

    2003-07-01

    We herein present and discuss the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server is operational around the clock and provides access to a variety of methods that have been published by the group's members and collaborators. The available tools correspond to applications ranging from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences and the interactive annotation of amino acid sequences. Additionally, annotations for more than 70 archaeal, bacterial, eukaryotic and viral genomes are available on-line and can be searched interactively. The tools and code bundles can be accessed beginning at http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  9. Meta-learning framework applied in bioinformatics inference system design.

    Science.gov (United States)

    Arredondo, Tomás; Ormazábal, Wladimir

    2015-01-01

This paper describes a meta-learner inference system development framework which is applied and tested in the implementation of bioinformatic inference systems. These inference systems are used for the systematic classification of the best candidates for inclusion in bacterial metabolic pathway maps. This meta-learner-based approach utilises a workflow where the user provides feedback with final classification decisions, which are stored in conjunction with analysed genetic sequences for periodic inference system training. The inference systems were trained and tested with three different data sets related to the bacterial degradation of aromatic compounds. The analysis of the meta-learner-based framework involved contrasting several different optimisation methods with various different parameters. The obtained inference systems were also contrasted with other standard classification methods and showed accurate prediction capabilities.
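The feedback workflow described above, storing user-confirmed classifications and retraining periodically, can be sketched with a deliberately simple stand-in model. The nearest-centroid classifier here is an illustrative assumption, not the paper's inference system:

```python
class FeedbackClassifier:
    """Sketch of a feedback loop: user-confirmed labels are stored with
    their feature vectors and used for periodic retraining."""

    def __init__(self):
        self.store = []        # (features, label) pairs from user feedback
        self.centroids = {}    # label -> mean feature vector

    def add_feedback(self, features, label):
        self.store.append((features, label))

    def retrain(self):
        """Periodic training pass over all stored feedback."""
        by_label = {}
        for feats, label in self.store:
            by_label.setdefault(label, []).append(feats)
        self.centroids = {
            lab: [sum(col) / len(rows) for col in zip(*rows)]
            for lab, rows in by_label.items()
        }

    def classify(self, features):
        """Assign the label whose centroid is nearest (squared distance)."""
        dist = lambda c: sum((a - b) ** 2 for a, b in zip(features, c))
        return min(self.centroids, key=lambda lab: dist(self.centroids[lab]))
```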

  10. Implementing a web-based introductory bioinformatics course for non-bioinformaticians that incorporates practical exercises.

    Science.gov (United States)

    Vincent, Antony T; Bourbonnais, Yves; Brouard, Jean-Simon; Deveau, Hélène; Droit, Arnaud; Gagné, Stéphane M; Guertin, Michel; Lemieux, Claude; Rathier, Louis; Charette, Steve J; Lagüe, Patrick

    2017-09-13

    A recent scientific discipline, bioinformatics, defined as using informatics for the study of biological problems, is now a requirement for the study of biological sciences. Bioinformatics has become such a powerful and popular discipline that several academic institutions have created programs in this field, allowing students to become specialized. However, biology students who are not involved in a bioinformatics program also need a solid toolbox of bioinformatics software and skills. Therefore, we have developed a completely online bioinformatics course for non-bioinformaticians, entitled "BIF-1901 Introduction à la bio-informatique et à ses outils (Introduction to bioinformatics and bioinformatics tools)," given by the Department of Biochemistry, Microbiology, and Bioinformatics of Université Laval (Quebec City, Canada). This course requires neither a bioinformatics background nor specific skills in informatics. The underlying main goal was to produce a completely online up-to-date bioinformatics course, including practical exercises, with an intuitive pedagogical framework. The course, BIF-1901, was conceived to cover the three fundamental aspects of bioinformatics: (1) informatics, (2) biological sequence analysis, and (3) structural bioinformatics. This article discusses the content of the modules, the evaluations, the pedagogical framework, and the challenges inherent to a multidisciplinary, fully online course. © 2017 by The International Union of Biochemistry and Molecular Biology, 2017. © 2017 The International Union of Biochemistry and Molecular Biology.

  11. ZBIT Bioinformatics Toolbox: A Web-Platform for Systems Biology and Expression Data Analysis.

    Science.gov (United States)

    Römer, Michael; Eichner, Johannes; Dräger, Andreas; Wrzodek, Clemens; Wrzodek, Finja; Zell, Andreas

    2016-01-01

Bioinformatics analysis has become an integral part of research in biology. However, installation and use of scientific software can be difficult and often requires technical expert knowledge. Reasons include dependencies on certain operating systems or required third-party libraries, missing graphical user interfaces and documentation, or nonstandard input and output formats. In order to make bioinformatics software easily accessible to researchers, we here present a web-based platform. The Center for Bioinformatics Tuebingen (ZBIT) Bioinformatics Toolbox provides web-based access to a collection of bioinformatics tools developed for systems biology, protein sequence annotation, and expression data analysis. Currently, the collection encompasses software for conversion and processing of the community standards SBML and BioPAX, transcription factor analysis, and analysis of microarray data from transcriptomics and proteomics studies. All tools are hosted on a customized Galaxy instance and run on a dedicated computation cluster. Users only need a web browser and an active internet connection in order to benefit from this service. The web platform is designed to facilitate the usage of the bioinformatics tools for researchers without an advanced technical background. Users can combine tools for complex analyses or use predefined, customizable workflows. All results are stored persistently and are reproducible. For each tool, we provide documentation, tutorials, and example data to maximize usability. The ZBIT Bioinformatics Toolbox is freely available at https://webservices.cs.uni-tuebingen.de/.

  12. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture...... previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over...... a period of more than one year. Our framework solves three concrete problems with current web scraping and our experimental results indicate that composition of previous and our new techniques achieves a higher degree of accuracy, precision and specificity than existing techniques alone....
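A "web scraping constituent" in the sense above can be as small as a selection function that extracts marked elements from a page. A stdlib-only sketch; the marker-class convention is an illustrative assumption, not part of WebSelF:

```python
from html.parser import HTMLParser

class SelectText(HTMLParser):
    """A tiny selection function: collect the text of elements (and
    their descendants) that carry a given CSS class."""

    def __init__(self, cls):
        super().__init__()
        self.cls, self.depth, self.hits = cls, 0, []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1        # nested tag inside a selected element
        elif self.cls in dict(attrs).get("class", "").split():
            self.depth = 1         # entered a selected element

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth and data.strip():
            self.hits.append(data.strip())

def scrape(html, cls):
    parser = SelectText(cls)
    parser.feed(html)
    return parser.hits
```

A robust constituent would also handle void elements and malformed markup; this sketch only shows the selection idea.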

  13. Design and Analysis of Web Application Frameworks

    DEFF Research Database (Denmark)

    Schwarz, Mathias Romme

-state manipulation vulnerabilities. The hypothesis of this dissertation is that we can design frameworks and static analyses that aid the programmer to avoid such errors. First, we present the JWIG web application framework for writing secure and maintainable web applications. We discuss how this framework solves...... some of the common errors through an API that is designed to be safe by default. Second, we present a novel technique for checking HTML validity for output that is generated by web applications. Through string analysis, we approximate the output of web applications as context-free grammars. We model......Numerous web application frameworks have been developed in recent years. These frameworks enable programmers to reuse common components and to avoid typical pitfalls in web application development. Although such frameworks help the programmer to avoid many common errors, we find...

  14. WIWS: a protein structure bioinformatics Web service collection.

    NARCIS (Netherlands)

    Hekkelman, M.L.; Beek, T.A.H. te; Pettifer, S.R.; Thorne, D.; Attwood, T.K.; Vriend, G.

    2010-01-01

    The WHAT IF molecular-modelling and drug design program is widely distributed in the world of protein structure bioinformatics. Although originally designed as an interactive application, its highly modular design and inbuilt control language have recently enabled its deployment as a collection of p

  15. WebSelF: A Web Scraping Framework

    DEFF Research Database (Denmark)

    Thomsen, Jakob; Ernst, Erik; Brabrand, Claus

    2012-01-01

    previous work on web scraping. We have experimentally evaluated our framework and implementation in an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over...

  16. The Firegoose: two-way integration of diverse data from different bioinformatics web resources with desktop applications

    Directory of Open Access Journals (Sweden)

    Schmid Amy K

    2007-11-01

Full Text Available Abstract Background Information resources on the World Wide Web play an indispensable role in modern biology. But integrating data from multiple sources is often encumbered by the need to reformat data files, convert between naming systems, or perform ongoing maintenance of local copies of public databases. Opportunities for new ways of combining and re-using data are arising as a result of the increasing use of web protocols to transmit structured data. Results The Firegoose, an extension to the Mozilla Firefox web browser, enables data transfer between web sites and desktop tools. As a component of the Gaggle integration framework, Firegoose can also exchange data with Cytoscape, the R statistical package, Multiexperiment Viewer (MeV), and several other popular desktop software tools. Firegoose adds the capability to easily use local data to query KEGG, EMBL STRING, DAVID, and other widely-used bioinformatics web sites. Query results from these web sites can be transferred to desktop tools for further analysis with a few clicks. Firegoose acquires data from the web by screen scraping, microformats, embedded XML, or web services. We define a microformat, which allows structured information compatible with the Gaggle to be embedded in HTML documents. We demonstrate the capabilities of this software by performing an analysis of the genes activated in the microbe Halobacterium salinarum NRC-1 in response to anaerobic environments. Starting with microarray data, we explore functions of differentially expressed genes by combining data from several public web resources and construct an integrated view of the cellular processes involved. Conclusion The Firegoose incorporates Mozilla Firefox into the Gaggle environment and enables interactive sharing of data between diverse web resources and desktop software tools without maintaining local copies. Additional web sites can be incorporated easily into the framework using the scripting platform of the
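The microformat idea, embedding Gaggle-compatible structured data in ordinary HTML so a browser extension can pick it up, can be illustrated as a round trip. The class names below are assumptions made for this example, not the published Firegoose microformat:

```python
import re

def embed_namelist(species, genes):
    """Embed a gene-name list in HTML markup so that a scraper or
    browser extension can recover it as structured data."""
    items = "".join(f'<li class="gaggle-name">{g}</li>' for g in genes)
    return (f'<div class="gaggle-data" data-species="{species}">'
            f'<ol class="gaggle-namelist">{items}</ol></div>')

def extract_namelist(html):
    """Recover the embedded names, as the browser-side consumer might."""
    return re.findall(r'class="gaggle-name">([^<]+)</li>', html)
```

The point of the microformat is exactly this round trip: the page stays human-readable HTML while remaining machine-parseable without a separate API.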

  17. Technosciences in Academia: Rethinking a Conceptual Framework for Bioinformatics Undergraduate Curricula

    Science.gov (United States)

    Symeonidis, Iphigenia Sofia

This paper aims to elucidate guiding concepts for the design of powerful undergraduate bioinformatics degrees which will lead to a conceptual framework for the curriculum. "Powerful" here should be understood as having truly bioinformatics objectives rather than enrichment of existing computer science or life science degrees on which bioinformatics degrees are often based. As such, the conceptual framework will be one which aims to demonstrate intellectual honesty in regards to the field of bioinformatics. A synthesis/conceptual analysis approach was followed as elaborated by Hurd (1983). The approach takes into account the following: bioinformatics educational needs and goals as expressed by different authorities, five undergraduate bioinformatics degree case-studies, educational implications of bioinformatics as a technoscience and approaches to curriculum design promoting interdisciplinarity and integration. Given these considerations, guiding concepts emerged and a conceptual framework was elaborated. The practice of bioinformatics was given a closer look, which led to defining tool-integration skills and tool-thinking capacity as crucial areas of the bioinformatics activities spectrum. It was argued, finally, that a process-based curriculum as a variation of a concept-based curriculum (where the concepts are processes) might be more conducive to the teaching of bioinformatics given a foundational first year of integrated science education as envisioned by Bialek and Botstein (2004). Furthermore, the curriculum design needs to define new avenues of communication and learning which bypass the traditional disciplinary barriers of academic settings as undertaken by Tador and Tidmor (2005) for graduate studies.

  18. Bioinformatics

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren

, and medicine will be particularly affected by the new results and the increased understanding of life at the molecular level. Bioinformatics is the development and application of computer methods for analysis, interpretation, and prediction, as well as for the design of experiments. It has emerged...... as a strategic frontier between biology and computer science. Machine learning approaches (e.g. neural networks, hidden Markov models, and belief networks) are ideally suited for areas in which there is a lot of data but little theory. The goal in machine learning is to extract useful information from a body...... of data by building good probabilistic models. The particular twist behind machine learning, however, is to automate the process as much as possible. In this book, the authors present the key machine learning approaches and apply them to the computational problems encountered in the analysis of biological...

  19. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications.

    Science.gov (United States)

    Katayama, Toshiaki; Wilkinson, Mark D; Vos, Rutger; Kawashima, Takeshi; Kawashima, Shuichi; Nakao, Mitsuteru; Yamamoto, Yasunori; Chun, Hong-Woo; Yamaguchi, Atsuko; Kawano, Shin; Aerts, Jan; Aoki-Kinoshita, Kiyoko F; Arakawa, Kazuharu; Aranda, Bruno; Bonnal, Raoul Jp; Fernández, José M; Fujisawa, Takatomo; Gordon, Paul Mk; Goto, Naohisa; Haider, Syed; Harris, Todd; Hatakeyama, Takashi; Ho, Isaac; Itoh, Masumi; Kasprzyk, Arek; Kido, Nobuhiro; Kim, Young-Joo; Kinjo, Akira R; Konishi, Fumikazu; Kovarskaya, Yulia; von Kuster, Greg; Labarga, Alberto; Limviphuvadh, Vachiranee; McCarthy, Luke; Nakamura, Yasukazu; Nam, Yunsun; Nishida, Kozo; Nishimura, Kunihiro; Nishizawa, Tatsuya; Ogishima, Soichi; Oinn, Tom; Okamoto, Shinobu; Okuda, Shujiro; Ono, Keiichiro; Oshita, Kazuki; Park, Keun-Joon; Putnam, Nicholas; Senger, Martin; Severin, Jessica; Shigemoto, Yasumasa; Sugawara, Hideaki; Taylor, James; Trelles, Oswaldo; Yamasaki, Chisato; Yamashita, Riu; Satoh, Noriyuki; Takagi, Toshihisa

    2011-08-02

    The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP/WSDL specification among and

  20. The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2011-08-01

Full Text Available Abstract Background The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. Results Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of the transcription factor binding sites (TFBSs) enriched based on differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. Conclusions Beyond deriving prototype solutions for each use-case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of

  1. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows

    NARCIS (Netherlands)

    Katayama, T.; Arakawa, K.; Nakao, M.; Prins, J.C.P.

    2010-01-01

Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, vario

  2. A semantic web approach applied to integrative bioinformatics experimentation: a biological use case with genomics data.

    NARCIS (Netherlands)

    Post, L.J.G.; Roos, M.; Marshall, M.S.; van Driel, R.; Breit, T.M.

    2007-01-01

    The numerous public data resources make integrative bioinformatics experimentation increasingly important in life sciences research. However, it is severely hampered by the way the data and information are made available. The semantic web approach enhances data exchange and integration by providing

  3. WEB MINING BASED FRAMEWORK FOR ONTOLOGY LEARNING

    Directory of Open Access Journals (Sweden)

    C.Ramesh

    2015-07-01

    Full Text Available Today, the notion of Semantic Web has emerged as a prominent solution to the problem of organizing the immense information provided by World Wide Web, and its focus on supporting a better co-operation between humans and machines is noteworthy. Ontology forms the major component of Semantic Web in its realization. However, manual method of ontology construction is time-consuming, costly, error-prone and inflexible to change and in addition, it requires a complete participation of knowledge engineer or domain expert. To address this issue, researchers hoped that a semi-automatic or automatic process would result in faster and better ontology construction and enrichment. Ontology learning has become recently a major area of research, whose goal is to facilitate construction of ontologies, which reduces the effort in developing ontology for a new domain. However, there are few research studies that attempt to construct ontology from semi-structured Web pages. In this paper, we present a complete framework for ontology learning that facilitates the semi-automation of constructing and enriching web site ontology from semi structured Web pages. The proposed framework employs Web Content Mining and Web Usage mining in extracting conceptual relationship from Web. The main idea behind this concept was to incorporate the web author's ideas as well as web users’ intentions in the ontology development and its evolution.

  4. A Web Service Framework for Economic Applications

    Directory of Open Access Journals (Sweden)

    Dan BENTA

    2010-01-01

Full Text Available The Internet offers multiple solutions to link companies with their partners, customers or suppliers using IT solutions, including a special focus on Web services. Web services are able to solve problems related to the exchange of data between business partners, markets that can use each other's services, and incompatibility between IT applications. As web services are described, discovered and accessed programmatically based on XML vocabularies and Web protocols, web services represent Web-based technology solutions for small and medium-sized enterprises (SMEs). This paper presents a web service framework for economic applications. A prototype of this IT solution using web services was also presented and implemented in a few companies from the IT, commerce and consulting fields, measuring the impact of the solution on business environment development.

  5. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows

    OpenAIRE

    Katayama, T.; Arakawa, K; Nakao, M.; Prins, J.C.P.

    2010-01-01

Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems without the need to transfer entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain speci...

  6. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update

    OpenAIRE

    Huynh, Tien; Rigoutsos, Isidore

    2004-01-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple s...

  7. PIBAS FedSPARQL: a web-based platform for integration and exploration of bioinformatics datasets.

    Science.gov (United States)

    Djokic-Petrovic, Marija; Cvjetkovic, Vladimir; Yang, Jeremy; Zivanovic, Marko; Wild, David J

    2017-09-20

There are a huge variety of data sources relevant to chemical, biological and pharmacological research, but these data sources are highly siloed and cannot be queried together in a straightforward way. Semantic technologies offer the ability to create links and mappings across datasets and manage them as a single, linked network so that searching can be carried out across datasets, independently of the source. We have developed an application called PIBAS FedSPARQL that uses semantic technologies to allow researchers to carry out such searching across a vast array of data sources. PIBAS FedSPARQL is a web-based query builder and result set visualizer of bioinformatics data. As an advanced feature, our system can detect similar data items identified by different Uniform Resource Identifiers (URIs), using a text-mining algorithm based on the processing of named entities in a vector space model with cosine similarity measures. To our knowledge, PIBAS FedSPARQL was unique among the systems that we found in that it allows detection of similar data items. As a query builder, our system allows researchers to intuitively construct and run Federated SPARQL queries across multiple data sources, including global initiatives, such as Bio2RDF, Chem2Bio2RDF, EMBL-EBI, and one local initiative called CPCTAS, as well as additional user-specified data sources. From the input topic, subtopic, template and keyword, a corresponding initial Federated SPARQL query is created and executed. Based on the data obtained, end users have the ability to choose the most appropriate data sources in their area of interest and exploit their Resource Description Framework (RDF) structure, which allows users to select certain properties of data to enhance query results. The developed system is flexible and allows intuitive creation and execution of queries for an extensive range of bioinformatics topics. Also, the novel "similar data items detection" algorithm can be particularly

  8. BioXSD: the common data-exchange format for everyday bioinformatics web services

    Science.gov (United States)

    Kalaš, Matúš; Puntervoll, Pæl; Joseph, Alexandre; Bartaševičiūtė, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-01-01

    Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. Results: BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. Availability: The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community. Contact: matus.kalas@bccs.uib.no; developers@bioxsd.org; support@bioxsd.org PMID:20823319
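
    To make the "sequences plus annotations in XML" idea concrete, here is a small parsing sketch. The element and attribute names below are illustrative only, not the actual BioXSD 1.0 schema (the real XSD is at bioxsd.org):

```python
import xml.etree.ElementTree as ET

# Hypothetical record in the spirit of BioXSD's sequence-plus-annotation
# data types; element names are invented for illustration.
record = """
<sequenceRecord>
  <sequence type="protein">MKTAYIAKQR</sequence>
  <annotation start="1" end="5" label="signal peptide"/>
</sequenceRecord>
"""

root = ET.fromstring(record)
sequence = root.findtext("sequence")
annotation = root.find("annotation")
region = (int(annotation.get("start")), int(annotation.get("end")))
```

    A schema-defined format of this kind is what lets independently developed web services exchange typed data without per-pair format converters.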

  9. BioXSD: the common data-exchange format for everyday bioinformatics web services.

    Science.gov (United States)

    Kalas, Matús; Puntervoll, Pål; Joseph, Alexandre; Bartaseviciūte, Edita; Töpfer, Armin; Venkataraman, Prabakar; Pettifer, Steve; Bryne, Jan Christian; Ison, Jon; Blanchet, Christophe; Rapacki, Kristoffer; Jonassen, Inge

    2010-09-15

    The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interfaces. However, efficient use of the resources is hampered by the lack of widely used, standard data-exchange formats for the basic, everyday bioinformatics data types. BioXSD has been developed as a candidate for a standard, canonical exchange format for basic bioinformatics data. BioXSD is represented by a dedicated XML Schema and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web. The BioXSD 1.0 XML Schema is freely available at http://www.bioxsd.org/BioXSD-1.0.xsd under the Creative Commons BY-ND 3.0 license. The http://bioxsd.org web page offers documentation, examples of data in BioXSD format, example workflows with source codes in common programming languages, an updated list of compatible web services and tools and a repository of feature requests from the community.

  10. An Abstract Description Approach to the Discovery and Classification of Bioinformatics Web Sources

    Energy Technology Data Exchange (ETDEWEB)

    Rocco, D; Critchlow, T J

    2003-05-01

    The World Wide Web provides an incredible resource to genomics researchers in the form of dynamic data sources--e.g. BLAST sequence homology search interfaces. The growth rate of these sources outpaces the speed at which they can be manually classified, meaning that the available data is not being utilized to its full potential. Existing research has not addressed the problems of automatically locating, classifying, and integrating classes of bioinformatics data sources. This paper presents an overview of a system for finding classes of bioinformatics data sources and integrating them behind a unified interface. We examine an approach to classifying these sources automatically that relies on an abstract description format: the service class description. This format allows a domain expert to describe the important features of an entire class of services without tying that description to any particular Web source. We present the features of this description format in the context of BLAST sources to show how the service class description relates to Web sources that are being described. We then show how a service class description can be used to classify an arbitrary Web source to determine if that source is an instance of the described service. To validate the effectiveness of this approach, we have constructed a prototype that can correctly classify approximately two-thirds of the BLAST sources we tested. We then examine these results, consider the factors that affect correct automatic classification, and discuss future work.
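
    The classification step can be pictured as checking a candidate Web source against the inputs an abstract service class requires. A toy sketch, assuming simple field-name matching; the real system matches far richer features of a source than form-field names:

```python
def matches_service_class(form_fields, service_class):
    # A source is a candidate instance of the class when every required
    # input in the abstract description has a counterpart field in the
    # source's query form.
    return set(service_class["required_inputs"]) <= set(form_fields)

# Hypothetical abstract description of the BLAST service class:
blast_class = {"name": "BLAST", "required_inputs": {"sequence", "database", "program"}}

ncbi_like_form = ["sequence", "database", "program", "expect"]
weather_form = ["city", "date"]
is_blast = matches_service_class(ncbi_like_form, blast_class)    # matches
not_blast = matches_service_class(weather_form, blast_class)     # does not
```

    Because the description is written once per service class rather than per source, newly discovered sources can be classified without any manual per-site work.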

  11. BioXSD: the common data-exchange format for everyday bioinformatics web services

    DEFF Research Database (Denmark)

    Kalas, M.; Puntervoll, P.; Joseph, A.;

    2010-01-01

    Motivation: The world-wide community of life scientists has access to a large number of public bioinformatics databases and tools, which are developed and deployed using diverse technologies and designs. More and more of the resources offer programmatic web-service interface. However, efficient use...... and defines syntax for biological sequences, sequence annotations, alignments and references to resources. We have adapted a set of web services to use BioXSD as the input and output format, and implemented a test-case workflow. This demonstrates that the approach is feasible and provides smooth...... interoperability. Semantics for BioXSD is provided by annotation with the EDAM ontology. We discuss in a separate section how BioXSD relates to other initiatives and approaches, including existing standards and the Semantic Web....

  12. A Simplified Database-Oriented Web Framework

    Institute of Scientific and Technical Information of China (English)

    ZHU Qiao-ming; ZHAO Lei; QIAN Pei-de

    2004-01-01

    The paper first analyses the disadvantages of traditional Web database systems, and then puts forward a simplified framework called the database-oriented Web framework (DOWF). In this framework, the pages and the data are all managed by the database system. To extract keywords or to search for a particular page accessed before, users access static pages through a common script procedure (CSP), and dynamic pages through a functional script procedure (FSP) together with the CSP. The article explains in detail how to implement DOWF, and analyses the mechanism of a DOWF site by implementing a prototype system. Finally, the article describes the features of DOWF in search, security, reuse of pages, offline waiting, etc.

  13. The Web as an educational tool for/in learning/teaching bioinformatics statistics.

    Science.gov (United States)

    Oliver, J; Pisano, M E; Alonso, T; Roca, P

    2005-12-01

    Statistics provides essential tools in Bioinformatics for interpreting the results of a database search and for managing the enormous amounts of information produced by genomics, proteomics and metabolomics. The goal of this project was the development of a software tool that would be as simple as possible for demonstrating the use of statistics in Bioinformatics. Computer Simulation Methods (CSMs) developed using Microsoft Excel were chosen for their broad range of applications, immediate and easy formula calculation, immediate testing, easy graphical representation, and general use and acceptance by the scientific community. The result of these endeavours is a set of utilities which can be accessed from the following URL: http://gmein.uib.es/bioinformatica/statistics. When tested on students with previous coursework in traditional statistical teaching methods, the overall consensus was that Web-based instruction had numerous advantages, but that traditional methods with manual calculations were still needed for theory and practice. Once the basic statistical formulas had been mastered, Excel spreadsheets and graphics proved very useful for exploring many parameters rapidly without tedious manual calculation. CSMs will be of great importance for the training of students and professionals in the field of bioinformatics, and for upcoming applications in self-directed learning and continuing education.

  14. WebLab: a data-centric, knowledge-sharing bioinformatic platform.

    Science.gov (United States)

    Liu, Xiaoqiao; Wu, Jianmin; Wang, Jun; Liu, Xiaochuan; Zhao, Shuqi; Li, Zhe; Kong, Lei; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge

    2009-07-01

    With the rapid progress of biological research, great demand has arisen for integrative knowledge-sharing systems to efficiently support collaboration of biological researchers from various fields. To fulfill such requirements, we have developed a data-centric knowledge-sharing platform, WebLab, for biologists to fetch, analyze, manipulate and share data under an intuitive web interface. Dedicated space is provided for users to store their input data and analysis results. Users can upload local data or fetch public data from remote databases, and then perform analysis using more than 260 integrated bioinformatic tools. These tools can be further organized as customized analysis workflows to accomplish complex tasks automatically. In addition to conventional biological data, WebLab also provides rich support for scientific literature, such as searching the full text of uploaded papers and exporting citations into well-known citation managers such as EndNote and BibTeX. To facilitate team work among colleagues, WebLab provides a powerful and flexible sharing mechanism, which allows users to share input data, analysis results, scientific literature and customized workflows with specified users or groups under sophisticated privilege settings. WebLab is publicly available at http://weblab.cbi.pku.edu.cn, with all source code released as Free Software.

  15. The World-Wide Web: An Interface between Research and Teaching in Bioinformatics

    Directory of Open Access Journals (Sweden)

    James F. Aiton

    1994-01-01

    Full Text Available The rapid expansion occurring in World-Wide Web activity is beginning to make the concepts of ‘global hypermedia’ and ‘universal document readership’ realistic objectives of the new revolution in information technology. One consequence of this increase in usage is that educators and students are becoming more aware of the diversity of the knowledge base which can be accessed via the Internet. Although computerised databases and information services have long played a key role in bioinformatics, these same resources can also be used to provide core materials for teaching and learning. The large datasets and archives that have been compiled for biomedical research can be enhanced with the addition of a variety of multimedia elements (images, digital videos, animation, etc.). The use of this digitally stored information in structured and self-directed learning environments is likely to increase as activity across the World-Wide Web increases.

  16. A New Framework for Focused Web Crawling

    Institute of Scientific and Technical Information of China (English)

    PENG Tao; HE Fengling; ZUO Wanli

    2006-01-01

    Focused crawlers are important tools to support applications such as specialized Web portals, online searching, and Web search engines. A topic-driven crawler chooses the best URLs and relevant pages to pursue during Web crawling. It is difficult to deal with irrelevant pages. This paper presents a novel focused crawler framework. In our focused crawler, we propose a method to overcome some of the limitations of dealing with irrelevant pages. We also introduce the implementation of our focused crawler and present some important metrics and an evaluation function for ranking page relevance. The experimental results show that our crawler can obtain more "important" pages and achieves high precision and recall.
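
    The best-first behaviour of a focused crawler can be sketched with a priority queue over the frontier. The relevance function below (fraction of topic terms present in a page) is a toy stand-in for the paper's evaluation function:

```python
import heapq

def relevance(page_text, topic_terms):
    # Toy evaluation function: fraction of topic terms found in the page.
    words = set(page_text.lower().split())
    return sum(term in words for term in topic_terms) / len(topic_terms)

def crawl_order(frontier, topic_terms):
    # Best-first ordering: pop the most topic-relevant page first.
    heap = [(-relevance(text, topic_terms), url) for url, text in frontier.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

frontier = {
    "a.example": "genome sequence alignment tools",
    "b.example": "football scores today",
    "c.example": "protein sequence database search",
}
order = crawl_order(frontier, ["sequence", "protein", "genome"])
```

    Off-topic pages sink to the bottom of the queue, which is how a focused crawler spends its bandwidth on relevant regions of the Web.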

  17. XMPP for cloud computing in bioinformatics supporting discovery and invocation of asynchronous web services

    Directory of Open Access Journals (Sweden)

    Willighagen Egon L

    2009-09-01

    Full Text Available Abstract Background The life sciences make heavy use of the web for both data provision and analysis. However, the increasing amount of available data and the diversity of analysis tools call for machine-accessible interfaces in order to be effective. HTTP-based Web service technologies, like the Simple Object Access Protocol (SOAP) and REpresentational State Transfer (REST) services, are today the most common technologies for this in bioinformatics. However, these methods have severe drawbacks, including lack of discoverability and the inability for services to send status notifications. Several complementary workarounds have been proposed, but the results are ad-hoc solutions of varying quality that can be difficult to use. Results We present a novel approach based on the open standard Extensible Messaging and Presence Protocol (XMPP), consisting of an extension (IO Data) to comprise discovery, asynchronous invocation, and definition of data types in the service. That XMPP cloud services are capable of asynchronous communication implies that clients do not have to poll repetitively for status; instead, the service sends the results back to the client upon completion. Implementations for Bioclipse and Taverna are presented, as are various XMPP cloud services in bio- and cheminformatics. Conclusion XMPP with its extensions is a powerful protocol for cloud services that demonstrates several advantages over traditional HTTP-based Web services: (1) services are discoverable without the need of an external registry, (2) asynchronous invocation eliminates the need for ad-hoc solutions like polling, and (3) input and output types defined in the service allow for generation of clients on the fly without the need of an external semantics description. The many advantages over existing technologies make XMPP a highly interesting candidate for next-generation online services in bioinformatics.
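
    The push-on-completion pattern that distinguishes this approach from HTTP polling can be sketched as follows. The class below is an illustration only; a real deployment would use an XMPP library and the IO Data extension rather than this in-process stub:

```python
# Sketch of callback-based (push) invocation: the client registers a
# callback once, instead of repeatedly polling the service for status.
class AsyncService:
    def __init__(self):
        self._pending = []

    def invoke(self, job, on_done):
        # The client hands over a callback along with the job.
        self._pending.append((job, on_done))

    def run_pending(self):
        # On completion, the service pushes each result back to its client.
        for job, on_done in self._pending:
            on_done(f"result of {job}")
        self._pending.clear()

results = []
service = AsyncService()
service.invoke("blast-search", results.append)
service.run_pending()
```

    With polling, the client would instead issue repeated status requests; the push model removes that traffic and delivers the result the moment it exists.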

  18. Should we have blind faith in bioinformatics software? Illustrations from the SNAP web-based tool.

    Directory of Open Access Journals (Sweden)

    Sébastien Robiou-du-Pont

    Full Text Available Bioinformatics tools have gained popularity in biology but little is known about their validity. We aimed to assess the early contribution of 415 single nucleotide polymorphisms (SNPs) associated with eight cardio-metabolic traits at the genome-wide significance level in adults in the Family Atherosclerosis Monitoring In earLY Life (FAMILY) birth cohort. We used the popular web-based tool SNAP to assess the availability of the 415 SNPs in the Illumina Cardio-Metabochip genotyped in the FAMILY study participants. We then compared the SNAP output with the Cardio-Metabochip file provided by Illumina, using the chromosome and chromosomal positions of SNPs from the NCBI Human Genome Browser (Genome Reference Consortium Human Build 37). With the HapMap 3 release 2 reference, 201 out of 415 SNPs were reported as missing in the Cardio-Metabochip by the SNAP output. However, the Cardio-Metabochip file revealed that 152 of these 201 SNPs were in fact present in the Cardio-Metabochip array (a false-negative rate of 36.6%). With the more recent 1000 Genomes Project release, we found a false-negative rate of 17.6% by comparing the outputs of SNAP and the Illumina product file. We did not find any 'false positive' SNPs (SNPs specified as available in the Cardio-Metabochip by SNAP, but not by the Cardio-Metabochip Illumina file). The Cohen's Kappa coefficient, which calculates the percentage of agreement between both methods, indicated that the validity of SNAP was fair to moderate depending on the reference used (HapMap 3 or 1000 Genomes). In conclusion, we demonstrate that the SNAP outputs for the Cardio-Metabochip are invalid. This study illustrates the importance of systematically assessing the validity of bioinformatics tools in an independent manner. We propose a series of guidelines to improve practices in the fast-moving field of bioinformatics software implementation.
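
    The agreement statistic used in the study is standard Cohen's kappa. A minimal sketch, with entirely hypothetical presence/absence calls (not the study's data):

```python
def cohens_kappa(labels_a, labels_b):
    # Cohen's kappa: observed agreement corrected for the agreement
    # expected by chance given each rater's label frequencies.
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a) | set(labels_b)
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe) if pe != 1 else 1.0

# Hypothetical availability calls for ten SNPs from two methods:
snap_calls =     [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
illumina_calls = [1, 1, 1, 0, 1, 0, 1, 1, 1, 0]
kappa = cohens_kappa(snap_calls, illumina_calls)
```

    Kappa near 1 indicates near-perfect agreement; values in the 0.2-0.6 range are conventionally read as "fair to moderate", the range the study reports for SNAP.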

  19. The BioExtract Server: a web-based bioinformatic workflow platform.

    Science.gov (United States)

    Lushbough, Carol M; Jennewein, Douglas M; Brendel, Volker P

    2011-07-01

    The BioExtract Server (bioextract.org) is an open, web-based system designed to aid researchers in the analysis of genomic data by providing a platform for the creation of bioinformatic workflows. Scientific workflows are created within the system by recording tasks performed by the user. These tasks may include querying multiple, distributed data sources, saving query results as searchable data extracts, and executing local and web-accessible analytic tools. The series of recorded tasks can then be saved as a reproducible, sharable workflow available for subsequent execution with the original or modified inputs and parameter settings. Integrated data resources include interfaces to the National Center for Biotechnology Information (NCBI) nucleotide and protein databases, the European Molecular Biology Laboratory (EMBL-Bank) non-redundant nucleotide database, the Universal Protein Resource (UniProt), and the UniProt Reference Clusters (UniRef) database. The system offers access to numerous preinstalled, curated analytic tools and also provides researchers with the option of selecting computational tools from a large list of web services including the European Molecular Biology Open Software Suite (EMBOSS), BioMoby, and the Kyoto Encyclopedia of Genes and Genomes (KEGG). The system further allows users to integrate local command line tools residing on their own computers through a client-side Java applet.

  20. Ergatis: a web interface and scalable software system for bioinformatics workflows

    Science.gov (United States)

    Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.

    2010-01-01

    Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634

  1. Galaxy Workflows for Web-based Bioinformatics Analysis of Aptamer High-throughput Sequencing Data

    Directory of Open Access Journals (Sweden)

    William H Thiel

    2016-01-01

    Full Text Available Development of RNA and DNA aptamers for diagnostic and therapeutic applications is a rapidly growing field. Aptamers are identified through iterative rounds of selection in a process termed SELEX (Systematic Evolution of Ligands by EXponential enrichment). High-throughput sequencing (HTS) revolutionized the modern SELEX process by identifying millions of aptamer sequences across multiple rounds of aptamer selection. However, these vast aptamer HTS datasets necessitated bioinformatics techniques. Herein, we describe a semiautomated approach to analyzing aptamer HTS datasets using the Galaxy Project, a web-based open-source collection of bioinformatics tools originally developed to analyze genome, exome, and transcriptome HTS data. Using a series of Workflows created in the Galaxy webserver, we demonstrate efficient processing of aptamer HTS data and compilation of a database of unique aptamer sequences. Additional Workflows were created to characterize the abundance and persistence of aptamer sequences within a selection and to filter sequences based on these parameters. A key advantage of this approach is that the online nature of the Galaxy webserver and its graphical interface allow for the analysis of HTS data without the need to compile code or install multiple programs.
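
    The abundance and persistence metrics can be sketched in a few lines. This is a toy illustration of the bookkeeping, with invented sequences, not the Galaxy Workflows themselves:

```python
from collections import Counter

def aptamer_stats(rounds):
    # Abundance: total read count of each unique sequence across rounds.
    # Persistence: number of selection rounds in which it appears.
    abundance, persistence = Counter(), Counter()
    for reads in rounds:
        counts = Counter(reads)
        abundance.update(counts)
        persistence.update(counts.keys())
    return abundance, persistence

rounds = [
    ["ACGT", "ACGT", "GGCC"],   # selection round 1
    ["ACGT", "TTAA"],           # selection round 2
    ["ACGT", "GGCC", "GGCC"],   # selection round 3
]
abundance, persistence = aptamer_stats(rounds)
```

    Filtering on these two counters (e.g. keep sequences seen in every round above some read count) is how candidate aptamers are shortlisted from millions of raw reads.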

  2. Enabling the democratization of the genomics revolution with a fully integrated web-based bioinformatics platform

    Science.gov (United States)

    Li, Po-E; Lo, Chien-Chi; Anderson, Joseph J.; Davenport, Karen W.; Bishop-Lilly, Kimberly A.; Xu, Yan; Ahmed, Sanaa; Feng, Shihai; Mokashi, Vishwesh P.; Chain, Patrick S.G.

    2017-01-01

    Continued advancements in sequencing technologies have fueled the development of new sequencing applications and promise to flood current databases with raw data. A number of factors prevent the seamless and easy use of these data, including the breadth of project goals, the wide array of tools that individually perform fractions of any given analysis, the large number of associated software/hardware dependencies, and the detailed expertise required to perform these analyses. To address these issues, we have developed an intuitive web-based environment with a wide assortment of integrated and cutting-edge bioinformatics tools in pre-configured workflows. These workflows, coupled with the ease of use of the environment, provide even novice next-generation sequencing users with the ability to perform many complex analyses with only a few mouse clicks and, within the context of the same environment, to visualize and further interrogate their results. This bioinformatics platform is an initial attempt at Empowering the Development of Genomics Expertise (EDGE) in a wide range of applications for microbial research. PMID:27899609

  3. TogoWS: integrated SOAP and REST APIs for interoperable bioinformatics Web services.

    Science.gov (United States)

    Katayama, Toshiaki; Nakao, Mitsuteru; Takagi, Toshihisa

    2010-07-01

    Web services have become widely used in bioinformatics analysis, but incompatibilities in interfaces and data types prevent users from making full use of combinations of these services. Therefore, we have developed the TogoWS service to provide an integrated interface with advanced features. In the TogoWS REST (REpresentational State Transfer) API (application programming interface), we introduce a unified access method for major database resources through intuitive URIs that can be used to search, retrieve, parse and convert database entries. The TogoWS SOAP API resolves compatibility issues found in server- and client-side SOAP implementations. The TogoWS service is freely available at: http://togows.dbcls.jp/.
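
    The "intuitive URIs" idea can be sketched as a simple path builder. The /entry/<database>/<id>[/<field>].<format> pattern below is an illustration of the style described above, not the authoritative grammar; consult the TogoWS documentation for the exact scheme:

```python
def togows_entry_uri(database, entry_id, fmt="json", field=None):
    # Compose a TogoWS-style entry-retrieval URI from its parts.
    path = f"http://togows.dbcls.jp/entry/{database}/{entry_id}"
    if field:
        path += f"/{field}"
    return f"{path}.{fmt}"

uri = togows_entry_uri("uniprot", "P04637", fmt="fasta")
```

    Because every database is addressed the same way, a client can swap databases, fields and output formats without learning a new interface per resource.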

  4. Extending Symfony 2 web application framework

    CERN Document Server

    Armand, Sébastien

    2014-01-01

    Symfony is a high performance PHP framework for developing MVC web applications. Symfony1 allowed for ease of use but its shortcoming was the difficulty of extending it. However, this difficulty has now been eradicated by the more powerful and extensible Symfony2. Information on more advanced techniques for extending Symfony can be difficult to find, so you need one resource that contains the advanced features in a way you can understand. This tutorial offers solutions to all your Symfony extension problems. You will get to grips with all the extension points that Symfony, Twig, and Doctrine o

  5. The OGC Sensor Web Enablement framework

    Science.gov (United States)

    Cox, S. J.; Botts, M.

    2006-12-01

    Sensor observations are at the core of natural sciences. Improvements in data-sharing technologies offer the promise of much greater utilisation of observational data. A key to this is interoperable data standards. The Open Geospatial Consortium's (OGC) Sensor Web Enablement initiative (SWE) is developing open standards for web interfaces for the discovery, exchange and processing of sensor observations, and tasking of sensor systems. The goal is to support the construction of complex sensor applications through real-time composition of service chains from standard components. The framework is based around a suite of standard interfaces, and standard encodings for the messages transferred between services. The SWE interfaces include: Sensor Observation Service (SOS)-parameterized observation requests (by observation time, feature of interest, property, sensor); Sensor Planning Service (SPS)-tasking a sensor-system to undertake future observations; Sensor Alert Service (SAS)-subscription to an alert, usually triggered by a sensor result exceeding some value. The interface design generally follows the pattern established in the OGC Web Map Service (WMS) and Web Feature Service (WFS) interfaces, where the interaction between a client and service follows a standard sequence of requests and responses. The first obtains a general description of the service capabilities, followed by obtaining the detail required to formulate a data request, and finally a request for a data instance or stream. These may be implemented in a stateless "REST" idiom, or using conventional "web-services" (SOAP) messaging. In a deployed system, the SWE interfaces are supplemented by Catalogue, data (WFS) and portrayal (WMS) services, as well as authentication and rights management.
The standard SWE data formats are Observations and Measurements (O&M) which encodes observation metadata and results, Sensor Model Language (SensorML) which describes sensor-systems, Transducer Model Language (TML) which
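
    The final step of the capabilities-then-request sequence can be sketched as a key-value-pair GetObservation URL. The parameter names below follow the general SOS KVP style but are illustrative; the OGC SOS specification defines the exact binding:

```python
from urllib.parse import urlencode

def sos_get_observation_url(endpoint, offering, observed_property, t_begin, t_end):
    # Assemble an SOS-style GetObservation request as a KVP query string.
    params = {
        "service": "SOS",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "eventTime": f"{t_begin}/{t_end}",
    }
    return f"{endpoint}?{urlencode(params)}"

url = sos_get_observation_url("http://example.org/sos", "tide-gauges",
                              "sea_level", "2006-12-01", "2006-12-02")
```

    The endpoint, offering and property names here are placeholders; in practice they come from the preceding GetCapabilities response.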

  6. JWIG: Yet Another Framework for Maintainable and Secure Web Applications

    DEFF Research Database (Denmark)

    Møller, Anders; Schwarz, Mathias Romme

    2009-01-01

    Although numerous frameworks for web application programming have been developed in recent years, writing web applications remains a challenging task. Guided by a collection of classical design principles, we propose yet another framework. It is based on a simple but flexible server-oriented architecture that coherently supports general aspects of modern web applications, including dynamic XML construction, session management, data persistence, caching, and authentication, but it also simplifies programming of server-push communication and integration of XHTML-based applications and XML-based web services. The resulting framework provides a novel foundation for developing maintainable and secure web applications.

  7. The web server of IBM's Bioinformatics and Pattern Discovery group: 2004 update.

    Science.gov (United States)

    Huynh, Tien; Rigoutsos, Isidore

    2004-07-01

    In this report, we provide an update on the services and content which are available on the web server of IBM's Bioinformatics and Pattern Discovery group. The server, which is operational around the clock, provides access to a large number of methods that have been developed and published by the group's members. There is an increasing number of problems that these tools can help tackle; these problems range from the discovery of patterns in streams of events and the computation of multiple sequence alignments, to the discovery of genes in nucleic acid sequences, the identification--directly from sequence--of structural deviations from alpha-helicity and the annotation of amino acid sequences for antimicrobial activity. Additionally, annotations for more than 130 archaeal, bacterial, eukaryotic and viral genomes are now available on-line and can be searched interactively. The tools and code bundles continue to be accessible from http://cbcsrv.watson.ibm.com/Tspd.html whereas the genomics annotations are available at http://cbcsrv.watson.ibm.com/Annotations/.

  8. Food web framework for size-structured populations

    DEFF Research Database (Denmark)

    Hartvig, Martin; Andersen, Ken Haste; Beyer, Jan

    2011-01-01

    We synthesise traditional unstructured food webs, allometric body size scaling, trait-based modelling, and physiologically structured modelling to provide a novel and ecologically relevant tool for size-structured food webs. The framework allows food web models to include ontogenetic growth...

  9. A P2P Framework for Developing Bioinformatics Applications in Dynamic Cloud Environments

    Directory of Open Access Journals (Sweden)

    Chun-Hung Richard Lin

    2013-01-01

    Full Text Available Bioinformatics has advanced from in-house computing infrastructure to cloud computing to tackle the vast quantity of biological data. This advance enables large numbers of collaborative researchers to share their work around the world. At the same time, retrieving biological data over the Internet becomes more and more difficult because of its explosive growth and frequent changes. Various efforts have been made to address the problems of data discovery and delivery in the cloud framework, but most of them are hindered by reliance on a MapReduce master server to track all available data. In this paper, we propose an alternative approach, called PRKad, which exploits a Peer-to-Peer (P2P) model to achieve efficient data discovery and delivery. PRKad is a Kademlia-based implementation with Round-Trip-Time (RTT) as the associated key, and it locates data using a Distributed Hash Table (DHT) and the XOR metric. The simulation results show that PRKad retrieves data with low link latency. As an interdisciplinary application of P2P computing for bioinformatics, PRKad also provides good scalability for serving a greater number of users in dynamic cloud environments.
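
    The XOR metric at the heart of Kademlia-style lookup is simple to state. A minimal sketch in a toy 4-bit identifier space; this illustrates the distance function only, not PRKad's RTT keying or routing tables:

```python
def xor_distance(id_a, id_b):
    # Kademlia's distance between two identifiers is their bitwise XOR,
    # interpreted as an unsigned integer.
    return id_a ^ id_b

def closest_nodes(key, node_ids, k=2):
    # The k nodes whose identifiers are XOR-closest to the key,
    # as consulted during a DHT lookup.
    return sorted(node_ids, key=lambda n: xor_distance(key, n))[:k]

nodes = [0b0010, 0b0111, 0b1000, 0b1110]
nearest = closest_nodes(0b0110, nodes)
```

    Because XOR distance is symmetric and unique per pair, each lookup step can halve the remaining search space, giving the logarithmic hop counts DHTs rely on.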

  10. Teaching Bioinformatics and Neuroinformatics by Using Free Web-Based Tools

    Science.gov (United States)

    Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson

    2010-01-01

    This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…

  11. Incorporating a Collaborative Web-Based Virtual Laboratory in an Undergraduate Bioinformatics Course

    Science.gov (United States)

    Weisman, David

    2010-01-01

    Face-to-face bioinformatics courses commonly include a weekly, in-person computer lab to facilitate active learning, reinforce conceptual material, and teach practical skills. Similarly, fully-online bioinformatics courses employ hands-on exercises to achieve these outcomes, although students typically perform this work offsite. Combining a…

  12. Incorporating a Collaborative Web-Based Virtual Laboratory in an Undergraduate Bioinformatics Course

    Science.gov (United States)

    Weisman, David

    2010-01-01

    Face-to-face bioinformatics courses commonly include a weekly, in-person computer lab to facilitate active learning, reinforce conceptual material, and teach practical skills. Similarly, fully-online bioinformatics courses employ hands-on exercises to achieve these outcomes, although students typically perform this work offsite. Combining a…

  13. Teaching Bioinformatics and Neuroinformatics by Using Free Web-Based Tools

    Science.gov (United States)

    Grisham, William; Schottler, Natalie A.; Valli-Marill, Joanne; Beck, Lisa; Beatty, Jackson

    2010-01-01

    This completely computer-based module's purpose is to introduce students to bioinformatics resources. We present an easy-to-adopt module that weaves together several important bioinformatic tools so students can grasp how these tools are used in answering research questions. Students integrate information gathered from websites dealing with…

  14. Spring MVC Framework for Web 2.0

    Directory of Open Access Journals (Sweden)

    2012-06-01

    Full Text Available When building rich-user-experience web applications, an abundance of web application frameworks is available but little guidance on which one to choose. Web 2.0 applications allow individuals to manage their content online and to share it with other users and services on the Web. Such sharing requires access control to be put in place. Existing access control solutions, however, are unsatisfactory, as they do not offer the functionality that users need in the open and user-driven Web environment. Of all these web development frameworks, the most popular is the MVC framework. Model-view-controller (MVC) is a software architecture, currently considered an architectural pattern in software engineering. The pattern isolates

  15. The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. The DBCLS BioHackathon Consortium*

    Directory of Open Access Journals (Sweden)

    Katayama Toshiaki

    2010-08-01

    Full Text Available Abstract Web services have become a key technology for bioinformatics, since life science databases are globally decentralized and the exponential increase in the amount of available data demands efficient systems that do not require transferring entire databases for every step of an analysis. However, various incompatibilities among database resources and analysis services make it difficult to connect and integrate these into interoperable workflows. To resolve this situation, we invited domain specialists from web service providers, client software developers, Open Bio* projects, the BioMoby project and researchers of emerging areas where a standard exchange data format is not well established, for an intensive collaboration entitled the BioHackathon 2008. The meeting was hosted by the Database Center for Life Science (DBCLS) and the Computational Biology Research Center (CBRC) and was held in Tokyo from February 11th to 15th, 2008. In this report we highlight the work accomplished and the common issues that arose from this event, including the standardization of data exchange formats and services in the emerging fields of glycoinformatics, biological interaction networks, text mining, and phyloinformatics. In addition, common shared object development based on BioSQL, as well as technical challenges in large data management, asynchronous services, and security, are discussed. Consequently, we improved the interoperability of web services in several fields; however, further cooperation among major database centers and continued collaborative efforts between service providers and software developers are still necessary for effective advances in bioinformatics web service technologies.

  16. A Framework of Web Data Integrated LBS Middleware

    Institute of Scientific and Technical Information of China (English)

    MENG Xiaofeng; YIN Shaoyi; XIAO Zhen

    2006-01-01

    In this paper, we propose a flexible location-based service (LBS) middleware framework to make the development and deployment of new location-based applications much easier. Considering the World Wide Web as a huge data source of location-related information, we integrate commonly used web data extraction techniques into the middleware framework, exposing a unified web data interface to upper-layer applications to make them more attractive. Besides, the framework also addresses some common LBS issues, including positioning, location modeling, location-dependent query processing, and privacy and security management.

  17. A Framework for Deep Web Crawler Using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    K.F.Bharati

    2013-03-01

    Full Text Available The Web has become one of the largest and most readily accessible repositories of human knowledge. Traditional search engines index only the surface Web, whose pages are easily found. The focus has now moved to the invisible or hidden Web, which consists of a large warehouse of useful data such as images, sounds, presentations and many other types of media. To use such data, a specialized technique is needed to locate those sites, as search engines do for the surface Web. This paper focuses on an effective design of a hidden Web crawler that can automatically discover pages from the hidden Web by employing a multi-agent Web mining system. A framework for the deep web using a genetic algorithm is applied to the resource discovery problem, and the results show improvements in the crawling strategy and harvest rate.
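
    The genetic-algorithm idea here is that a crawler's query keywords can be evolved: a bit mask over candidate keywords is an individual, and fitness stands in for the harvest rate that keyword set achieves against a search form. The sketch below is illustrative only (keywords, scores, and the fitness function are made up, not the paper's algorithm):

```python
import random

random.seed(0)  # make the run repeatable

KEYWORDS = ["genome", "protein", "image", "audio", "slides", "blast"]
# Hypothetical per-keyword harvest scores; a real crawler would measure
# these from the result pages returned by the search form.
SCORE = {"genome": 5, "protein": 4, "image": 2, "audio": 1, "slides": 1, "blast": 3}

def fitness(mask):
    # Reward total harvest, penalise query size to favour compact keyword sets.
    chosen = [k for k, bit in zip(KEYWORDS, mask) if bit]
    return sum(SCORE[k] for k in chosen) - len(chosen)

def mutate(mask):
    i = random.randrange(len(mask))
    return mask[:i] + (1 - mask[i],) + mask[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=40, pop_size=8):
    pop = [tuple(random.randint(0, 1) for _ in KEYWORDS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print([k for k, bit in zip(KEYWORDS, best) if bit])
```

Because the best half of each generation survives unchanged, the top fitness never decreases, which mirrors how the crawler's keyword set would improve its harvest rate over successive submissions.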

  18. An effective assemble-oriented framework for grid Web service

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhang; CHEN Zhi-gang; DENG Xiao-heng; CHEN Li-xin

    2007-01-01

    An effective assemble-oriented framework for grid Web services based on the open grid service architecture was proposed, in which a Web service semantics network constructed through software reuse enhances the locating of assemble-oriented service resources. The structure of successfully assembled Web services was exploited to design the semantics network, the logical and physical structure of Web service resources was separated, and logical resources derived from Web service type IDs were combined. Experimental results show that the success ratio of Web service requests reaches 100% when a complete assembly semantics set is provided. This model guarantees the reliability of assembled Web services and lays the foundation for automatic Web service interaction, customized application services and dynamic service configuration.

  19. WEB MINING BASED FRAMEWORK FOR ONTOLOGY LEARNING

    OpenAIRE

    Ramesh, C.; Chalapati Rao, K. V.; Govardhan, A.

    2015-01-01

    Today, the notion of the Semantic Web has emerged as a prominent solution to the problem of organizing the immense information provided by the World Wide Web, and its focus on supporting better co-operation between humans and machines is noteworthy. Ontology forms the major component of the Semantic Web in its realization. However, the manual method of ontology construction is time-consuming, costly, error-prone and inflexible to change; in addition, it requires a complete participation o...

  20. A Mediation Framework for Mobile Web Service Provisioning

    CERN Document Server

    Srirama, Satish Narayana; Prinz, Wolfgang; 10.1109/EDOCW.2006.9

    2010-01-01

    Web Services and mobile data services are the newest trends in information systems engineering in the wired and wireless domains, respectively. Web Services have a broad range of service distributions, while mobile phones have a large and expanding user base. To address the confluence of Web Services and pervasive mobile devices and communication environments, a basic mobile Web Service provider was developed for smart phones. The performance of this Mobile Host was also analyzed in detail. Further analysis of the Mobile Host, to provide proper QoS and to check the Mobile Host's feasibility in P2P networks, identified the necessity of a mediation framework. The paper describes the research conducted with the Mobile Host, identifies the tasks of the mediation framework and then discusses the feasible realization details of such a mobile Web Services mediation framework.

  1. A Framework for Dynamic Web Services Composition

    NARCIS (Netherlands)

    Lécué, Freddy; Silva, Eduardo; Ferreira Pires, Luis

    2008-01-01

    Dynamic composition of web services is a promising approach and at the same time a challenging research area for the dissemination of service-oriented applications. It is widely recognised that service semantics is a key element for the dynamic composition of Web services, since it allows the unambi

  2. A Framework for Dynamic Web Services Composition

    NARCIS (Netherlands)

    Lécué, F.; Goncalves da Silva, E.M.; Ferreira Pires, L.

    2007-01-01

    Dynamic composition of web services is a promising approach and at the same time a challenging research area for the dissemination of service-oriented applications. It is widely recognised that service semantics is a key element for the dynamic composition of Web services, since it allows the unambi

  3. Framework for Supporting Web-Based Collaborative Applications

    Science.gov (United States)

    Dai, Wei

    The article proposes an intelligent framework for supporting Web-based applications. The framework focuses on innovative use of existing resources and technologies in the form of services, and leverages the theoretical foundation of services science and research from services computing. The main focus of the framework is to deliver benefits to users in various roles, such as service requesters, service providers, and business owners, to maximize their productivity when engaging with each other via the Web. The article opens with research motivations and questions, analyses the existing state of research in the field, and describes the approach to implementing the proposed framework. Finally, an e-health application is discussed to evaluate the effectiveness of the framework, where participants such as general practitioners (GPs), patients, and health-care workers collaborate via the Web.

  4. A New Bio-Informatics Framework: Research on 3D Sensor Data of Human Activities

    Directory of Open Access Journals (Sweden)

    Sajid Ali

    2015-05-01

    Full Text Available Due to the increasing attraction of motion capture technology and the usage of captured data in a wide range of research-oriented applications, a framework has been developed as an improved version of the MOCAP TOOLBOX on the Matlab platform. First, we introduce a script for handling public motion capture data conveniently. Various functions, implemented through dynamic programming using the Body Segment Parameters (BSP), are edited and configure the positions of markers according to the data. The framework is used to visualize and refine data without the MLS viewer and the C3D editor software. It opens a valuable way of using sensor data in many research areas, such as gait movements, marker analysis, compression and motion patterns, bioinformatics, and animation. Evaluated on CMU and ACCAD public mocap data, it achieves a more correct configuration of 3D markers than the prior art, especially for C3D files. Another distinction of this work is that it handles distortion from extra markers and provides a meaningful way to use captured data.

  5. A Framework for Web-Based Mechanical Design and Analysis

    Institute of Scientific and Technical Information of China (English)

    Chiaming Yen; Wujeng Li

    2002-01-01

    In this paper, a Web-based Mechanical Design and Analysis Framework (WMDAF) is proposed. This WMDAF allows designers to develop web-based computer aided programs in a systematic way during the collaborative mechanical system design and analysis process. The system is based on an emerging web-based Content Management System (CMS) called the eXtended Object Oriented Portal System (XOOPS). Due to the open source status of the XOOPS CMS, programs developed with this framework can be further customized to ...

  6. Toward a Unified Framework for Web Service Trustworthiness

    DEFF Research Database (Denmark)

    Miotto, N.; Dragoni, Nicola

    2012-01-01

    The intrinsic openness of the Service-Oriented Computing vision makes it crucial to locate useful services and recognize them as trustworthy. What does it mean that a Web service is trustworthy? How can a software agent evaluate the trustworthiness of a Web service? In this paper we present ongoing research aiming at providing an answer to these key issues to realize this vision. In particular, starting from an analysis of the weaknesses of current approaches, we discuss the possibility of a unified framework for Web service trustworthiness. The founding principle of our novel framework is that “hard...

  7. A Secured Framework for Geographical Information Applications on Web

    Directory of Open Access Journals (Sweden)

    Mennatallah H. Ibrahim

    2014-01-01

    Full Text Available Current geographical information applications increasingly require managing spatial data through the Web. Users of geographical information applications need not only to display spatial data but also to modify them interactively. As a result, the security risks facing geographical information applications are also increasing. In this paper, a secured framework is proposed whose goal is to provide fine-grained access control for web-based geographic information applications. A case study is finally presented to demonstrate the proposed framework's feasibility and effectiveness.

  8. An Automatic Web Service Composition Framework Using QoS-Based Web Service Ranking Algorithm.

    Science.gov (United States)

    Mallayya, Deivamani; Ramachandran, Baskaran; Viswanathan, Suganya

    2015-01-01

    Web services have become the technology of choice for service-oriented computing to meet the interoperability demands of web applications. In the Internet era, the exponential growth of web services makes "quality of service" an essential parameter for discriminating between web services. In this paper, a user-preference-based web service ranking (UPWSR) algorithm is proposed to rank web services based on user preferences and the QoS aspects of the web service. When the user's request cannot be fulfilled by a single atomic service, several existing services should be composed and delivered as a composition. The proposed framework allows the user to specify local and global constraints for composite web services, which improves flexibility. The UPWSR algorithm identifies best-fit services for each task in the user request and, by limiting the number of candidate services for each task, reduces the time to generate the composition plans. To tackle the problem of web service composition, the QoS-aware automatic web service composition (QAWSC) algorithm proposed in this paper is based on the QoS aspects of the web services and on user preferences. The proposed framework allows the user to provide feedback about the composite service, which improves the reputation of the services.
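
    QoS-based ranking of the kind the abstract describes typically reduces to a weighted-sum score over QoS attributes, with user weights expressing preference. A minimal, hypothetical sketch (service names, attributes, and weights are illustrative, not the UPWSR algorithm itself):

```python
# Weighted-sum QoS ranking: higher reliability/availability is better,
# higher latency is worse, so latency enters the score negatively.
def rank_services(services, weights):
    def score(qos):
        return (weights["reliability"] * qos["reliability"]
                + weights["availability"] * qos["availability"]
                - weights["latency"] * qos["latency"])
    return sorted(services, key=lambda s: score(s["qos"]), reverse=True)

candidates = [
    {"name": "geo-A", "qos": {"reliability": 0.99, "availability": 0.95, "latency": 0.30}},
    {"name": "geo-B", "qos": {"reliability": 0.90, "availability": 0.99, "latency": 0.05}},
    {"name": "geo-C", "qos": {"reliability": 0.70, "availability": 0.80, "latency": 0.02}},
]
prefs = {"reliability": 0.5, "availability": 0.3, "latency": 0.2}
print([s["name"] for s in rank_services(candidates, prefs)])  # → ['geo-B', 'geo-A', 'geo-C']
```

Trimming each task's candidate list to the top few of such a ranking is what keeps the downstream composition search tractable.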

  9. A Specialized Framework for Data Retrieval Web Applications

    OpenAIRE

    2005-01-01

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application ...

  10. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for bioinformatics resource discovery and disparate data and service integration

    Science.gov (United States)

    2010-01-01

    Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded available solutions. We

  11. Applications and methods utilizing the Simple Semantic Web Architecture and Protocol (SSWAP for bioinformatics resource discovery and disparate data and service integration

    Directory of Open Access Journals (Sweden)

    Nelson Rex T

    2010-06-01

    Full Text Available Abstract Background Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of data between information resources difficult and labor intensive. A recently described semantic web protocol, the Simple Semantic Web Architecture and Protocol (SSWAP; pronounced "swap") offers the ability to describe data and services in a semantically meaningful way. We report how three major information resources (Gramene, SoyBase and the Legume Information System [LIS]) used SSWAP to semantically describe selected data and web services. Methods We selected high-priority Quantitative Trait Locus (QTL), genomic mapping, trait, phenotypic, and sequence data and associated services such as BLAST for publication, data retrieval, and service invocation via semantic web services. Data and services were mapped to concepts and categories as implemented in legacy and de novo community ontologies. We used SSWAP to express these offerings in OWL Web Ontology Language (OWL), Resource Description Framework (RDF) and eXtensible Markup Language (XML) documents, which are appropriate for their semantic discovery and retrieval. We implemented SSWAP services to respond to web queries and return data. These services are registered with the SSWAP Discovery Server and are available for semantic discovery at http://sswap.info. Results A total of ten services delivering QTL information from Gramene were created. From SoyBase, we created six services delivering information about soybean QTLs, and seven services delivering genetic locus information. For LIS we constructed three services, two of which allow the retrieval of DNA and RNA FASTA sequences with the third service providing nucleic acid sequence comparison capability (BLAST). Conclusions The need for semantic integration technologies has preceded

  12. A framework for web browser-based medical simulation using WebGL.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2012-01-01

    This paper presents a web browser-based software framework that provides accessibility, portability, and platform independence for medical simulation. Typical medical simulation systems are restricted to the underlying platform and device, which limits widespread use. Our framework allows realistic and efficient medical simulation using only the web browser, for anytime, anywhere access on a variety of platforms ranging from desktop PCs to tablets. The framework consists of visualization, simulation, and hardware integration modules that are fundamental components for multimodal interactive simulation. Benchmark tests are performed to validate the rendering and computing performance of our framework with the latest web browsers, including Chrome and Firefox. The results are quite promising, opening up the possibility of developing web-based medical simulation technology.

  13. Web Service Architecture Framework for Embedded Devices

    Science.gov (United States)

    Yanzick, Paul David

    2009-01-01

    The use of Service Oriented Architectures, namely web services, has become a widely adopted method for transfer of data between systems across the Internet as well as the Enterprise. Adopting a similar approach to embedded devices is also starting to emerge as personal devices and sensor networks are becoming more common in the industry. This…

  14. CoP Sensing Framework on Web-Based Environment

    Science.gov (United States)

    Mustapha, S. M. F. D. Syed

    Web technologies and Web applications have shown similarly high growth rates in daily usage and user acceptance. Web applications have not only penetrated traditional domains such as education and business but have also encroached into areas such as politics, social life, lifestyle, and culture. The emergence of Web technologies has enabled Web access even for a person on the move, through PDAs or mobile phones connected using Wi-Fi, HSDPA, or other communication protocols. These two phenomena drive the need to build Web-based systems as supporting tools for many mundane activities. In doing so, one focus of research has been the implementation challenges of building Web-based support systems in different types of environment. This chapter describes the implementation issues in building a community learning framework that can be supported on a Web-based platform. The Community of Practice (CoP) has been chosen as the community learning theory for the case study and analysis, as it challenges the creativity of the architectural design of the Web system to capture the presence of learning activities. The chapter details the characteristics of the CoP, to understand the inherent intricacies of modeling it in a Web-based environment; the evidence of CoP that needs to be traced automatically in a slick manner, such that the evidence-capturing process is unobtrusive; and the technologies needed to embrace full adoption of a Web-based support system for the community learning framework.

  15. A framework for dynamic indexing from hidden web

    Directory of Open Access Journals (Sweden)

    Hasan Mahmud

    2011-09-01

    Full Text Available The proliferation of dynamic websites operating on databases requires generating web pages on the fly, which is too sophisticated for most search engines to index. In an attempt to crawl the contents of dynamic web pages, we have tried to come up with a simple approach to index the huge amounts of dynamic content hidden behind search forms. Our key contribution in this paper is the design and implementation of a simple framework to index dynamic web pages, using the Hadoop MapReduce framework to update and maintain the index. In our approach, starting from an initial URL, our crawler downloads both static and dynamic web pages, detects form interfaces, adaptively selects keywords to generate the most promising search results, automatically fills in search form interfaces, submits the dynamic URL and processes the results until certain conditions are satisfied.
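
    The adaptive keyword-selection step such crawlers share can be sketched simply: harvest candidate words from the result pages of each (simulated) form submission, then pick the most frequent word not yet used as the next query term. This is an illustrative stand-in for the paper's method, with whitespace tokenisation assumed:

```python
from collections import Counter

def pick_next_keyword(result_pages, used):
    """Return the most frequent word across result pages not already queried."""
    counts = Counter(w for page in result_pages for w in page.split())
    for word, _ in counts.most_common():
        if word not in used:
            return word
    return None  # vocabulary exhausted

pages = ["genome assembly genome", "protein assembly", "assembly pipeline"]
used = {"assembly"}
print(pick_next_keyword(pages, used))  # → genome
```

Feeding each new keyword back through the form and re-harvesting is what lets the crawler's coverage of the hidden database grow without prior knowledge of its contents.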

  16. JBioWH: an open-source Java framework for bioinformatics data integration.

    Science.gov (United States)

    Vera, Roberto; Perez-Riverol, Yasset; Perez, Sonia; Ligeti, Balázs; Kertész-Farkas, Attila; Pongor, Sándor

    2013-01-01

    The Java BioWareHouse (JBioWH) project is an open-source platform-independent programming framework that allows a user to build his/her own integrated database from the most popular data sources. JBioWH can be used for intensive querying of multiple data sources and the creation of streamlined task-specific data sets on local PCs. JBioWH is based on a MySQL relational database scheme and includes JAVA API parser functions for retrieving data from 20 public databases (e.g. NCBI, KEGG, etc.). It also includes a client desktop application for (non-programmer) users to query data. In addition, JBioWH can be tailored for use in specific circumstances, including the handling of massive queries for high-throughput analyses or CPU intensive calculations. The framework is provided with complete documentation and application examples and it can be downloaded from the Project Web site at http://code.google.com/p/jbiowh. A MySQL server is available for demonstration purposes at hydrax.icgeb.trieste.it:3307. Database URL: http://code.google.com/p/jbiowh.

  17. Next-Generation Web Frameworks in Python

    CERN Document Server

    Daly, Liza

    2007-01-01

    With its flexibility, readability, and mature code libraries, Python is a natural choice for developing agile and maintainable web applications. Several frameworks have emerged in the last few years that share ideas with Ruby on Rails and leverage the expressive nature of Python. This Short Cut will tell you what you need to know about the hottest full-stack frameworks: Django, Pylons, and TurboGears. Their philosophies, relative strengths, and development status are described in detail. What you won't find out is, "Which one should I use?" The short answer is that all of them can be used to build web appl

  18. A Framework for Interactively Helpful Web Forms

    DEFF Research Database (Denmark)

    Bohøj, Morten; Bouvin, Niels Olof; Gammelmark, Henrik

    2012-01-01

    AdapForms is a framework for adaptive forms, consisting of a form definition language designating structure and constraints upon acceptable input, and a software architecture that continuously validates and adapts the form presented to the user. The validation is performed server-side, which...

  19. A specialized framework for data retrieval Web applications

    Energy Technology Data Exchange (ETDEWEB)

    Jerzy Nogiec; Kelley Trombly-Freytag; Dana Walbridge

    2004-07-12

    Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.

  20. A Specialized Framework for Data Retrieval Web Applications

    Directory of Open Access Journals (Sweden)

    Jerzy Nogiec

    2005-06-01

    Full Text Available Although many general-purpose frameworks have been developed to aid in web application development, they typically tend to be both comprehensive and complex. To address this problem, a specialized server-side Java framework designed specifically for data retrieval and visualization has been developed. The framework's focus is on maintainability and data security. The functionality is rich with features necessary for simplifying data display design, deployment, user management and application debugging, yet the scope is deliberately kept limited to allow for easy comprehension and rapid application development. The system clearly decouples the application processing and visualization, which in turn allows for clean separation of layout and processing development. Duplication of standard web page features such as toolbars and navigational aids is therefore eliminated. The framework employs the popular Model-View-Controller (MVC) architecture, but it also uses the filter mechanism for several of its base functionalities, which permits easy extension of the provided core functionality of the system.
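
    The filter mechanism this record relies on wraps the controller in a chain of cross-cutting steps (authentication, logging, tracing) so the core processing stays decoupled from them. A language-neutral sketch of the pattern in Python (the framework itself is Java; filter names and the request/response shapes here are invented for illustration):

```python
# Each filter receives the request and a next_ callable; it may short-circuit
# (return its own response) or delegate down the chain to the controller.
def make_chain(filters, handler):
    def invoke(request):
        def run(i, req):
            if i == len(filters):
                return handler(req)
            return filters[i](req, lambda r: run(i + 1, r))
        return run(0, request)
    return invoke

def auth_filter(req, next_):
    if not req.get("user"):
        return {"status": 403}        # short-circuit: unauthenticated
    return next_(req)

def trace_filter(req, next_):
    resp = next_(req)
    resp["traced"] = True             # post-process the response
    return resp

def controller(req):
    return {"status": 200, "body": f"hello {req['user']}"}

app = make_chain([auth_filter, trace_filter], controller)
print(app({"user": "alice"}))
print(app({}))
```

Adding a new base functionality then means inserting one more filter into the list, without touching the controller, which is the "easy extension" the abstract claims.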

  1. A Framework for Interactively Helpful Web Forms

    DEFF Research Database (Denmark)

    Bohøj, Morten; Bouvin, Niels Olof; Gammelmark, Henrik

    2012-01-01

    AdapForms is a framework for adaptive forms, consisting of a form definition language designating structure and constraints upon acceptable input, and a software architecture that continuously validates and adapts the form presented to the user. The validation is performed server-side, which enables the use of complex business logic without duplicate code. Thus, the state of the form is kept persistently at the server, and the system ensures that all submitted forms are valid and type safe.

  2. Detection of putative new mutacins by bioinformatic analysis using available web tools

    Directory of Open Access Journals (Sweden)

    Nicolas Guillaume G

    2011-07-01

    Full Text Available Abstract In order to characterise new bacteriocins produced by Streptococcus mutans, we performed complete bioinformatic analyses by scanning the genome sequences of strains UA159 and NN2025. By searching the genomic context adjacent to two-component signal transduction systems, we predicted the existence of many putative new bacteriocin maturation pathways, some of them exclusive to a group of Streptococcus. Computational genomic and proteomic analysis combined with predictive functional analysis represents an alternative way to rapidly identify new putative bacteriocins, as well as new potential antimicrobial drugs, compared with the more traditional methods of drug discovery using antagonism tests.
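
    The screen described, finding small ORFs in the genomic neighbourhood of two-component signal transduction system (TCS) genes, can be sketched as a walk over an ordered gene list. Gene names, types, and the window/length thresholds below are invented for illustration, not the paper's actual parameters:

```python
# Flag small ORFs within `window` gene positions of a TCS gene as
# putative bacteriocin precursors (bacteriocins are typically short peptides).
def candidate_bacteriocins(genes, window=2, max_len=80):
    tcs_positions = [i for i, g in enumerate(genes) if g["type"] == "TCS"]
    hits = []
    for i, g in enumerate(genes):
        if g["type"] == "ORF" and g["aa_len"] <= max_len:
            if any(abs(i - p) <= window for p in tcs_positions):
                hits.append(g["name"])
    return hits

genome = [
    {"name": "orfX", "type": "ORF", "aa_len": 60},   # small, next to TCS → hit
    {"name": "comD", "type": "TCS", "aa_len": 450},  # histidine kinase
    {"name": "comE", "type": "TCS", "aa_len": 250},  # response regulator
    {"name": "orfY", "type": "ORF", "aa_len": 55},   # small, next to TCS → hit
    {"name": "bigZ", "type": "ORF", "aa_len": 900},  # too large → excluded
]
print(candidate_bacteriocins(genome))  # → ['orfX', 'orfY']
```

Real pipelines add sequence-level checks (leader peptide motifs, adjacent transporter genes) on top of this positional filter before calling a candidate.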

  3. The OAuth 2.0 Web Authorization Protocol for the Internet Addiction Bioinformatics (IABio) Database.

    Science.gov (United States)

    Choi, Jeongseok; Kim, Jaekwon; Lee, Dong Kyun; Jang, Kwang Soo; Kim, Dai-Jin; Choi, In Young

    2016-03-01

    Internet addiction (IA) has become a widespread and problematic phenomenon as smart devices pervade society. Moreover, internet gaming disorder leads to increases in social expenditures for both individuals and nations alike. Although the prevention and treatment of IA are becoming more important, the diagnosis of IA remains problematic. Understanding the neurobiological mechanisms of behavioral addictions is essential for the development of specific and effective treatments. Although there are many databases related to other addictions, a database for IA has not yet been developed. In addition, bioinformatics databases, especially genetic databases, require a high level of security and should be designed based on medical information standards. In this respect, our study proposes the OAuth standard protocol for database access authorization. The proposed IA Bioinformatics (IABio) database system is based on internet user authentication, in line with guidelines for medical information standards, and uses OAuth 2.0 as its access control technology. This study designed and developed the system requirements and configuration. The OAuth 2.0 protocol is expected to establish the security of personal medical information and to be applied to genomic research on IA.
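
    The OAuth 2.0 pattern the study applies can be sketched offline as follows: a client authenticates against an authorization server, receives a bearer token, and presents it to the protected resource. This is a rough illustration of the token flow only; the endpoint names, scope string and credentials are invented, not the IABio system's actual interfaces:

```python
# Offline sketch of the OAuth 2.0 bearer-token pattern: issue a token at
# the authorization server, validate it at the protected resource.
import secrets
import time

class AuthorizationServer:
    def __init__(self):
        self._tokens = {}                        # token -> (scope, expiry)

    def token_endpoint(self, client_id, client_secret, scope):
        # Client-credentials style grant: authenticate the client, issue a token.
        if (client_id, client_secret) != ("iabio-app", "s3cret"):
            return {"error": "invalid_client"}
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (scope, time.time() + 3600)
        return {"access_token": token, "token_type": "Bearer", "expires_in": 3600}

    def validate(self, token, required_scope):
        scope, expiry = self._tokens.get(token, (None, 0))
        return scope == required_scope and time.time() < expiry

class GeneticDataResource:
    """Stand-in for a protected genomic-data endpoint."""
    def __init__(self, auth):
        self.auth = auth

    def get_records(self, bearer_token):
        if not self.auth.validate(bearer_token, "genomic.read"):
            return 401, None                     # unauthorized
        return 200, ["record-1", "record-2"]

auth = AuthorizationServer()
grant = auth.token_endpoint("iabio-app", "s3cret", "genomic.read")
status, data = GeneticDataResource(auth).get_records(grant["access_token"])
print(status, data)
```

    The key property, as in the abstract, is that the resource never sees the client's credentials, only a scoped, expiring token.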

  4. A framework for efficient spatial web object retrieval

    DEFF Research Database (Denmark)

    Wu, Dingming; Cong, Gao; Jensen, Christian S.

    2012-01-01

    The conventional Internet is acquiring a geospatial dimension. Web documents are being geo-tagged and geo-referenced objects such as points of interest are being associated with descriptive text documents. The resulting fusion of geo-location and documents enables new kinds of queries that take...... of the framework demonstrate that the paper’s proposal is capable of excellent performance...

  5. GeneNetwork: framework for web-based genetics

    NARCIS (Netherlands)

    Sloan, Zachary; Arends, Danny; Broman, Karl W.; Centeno, Arthur; Furlotte, Nicholas; Nijveen, H.; Yan, Lei; Zhou, Xiang; Williams, Robert W.; Prins, Pjotr

    2016-01-01

    GeneNetwork (GN) is a free and open source (FOSS) framework for web-based genetics that can be deployed anywhere. GN allows biologists to upload high-throughput experimental data, such as expression data from microarrays and RNA-seq, and also `classic' phenotypes, such as disease phenotypes. These p

  6. An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics.

    Science.gov (United States)

    Taylor, Ronald C

    2010-12-21

    Bioinformatics researchers are now confronted with analysis of ultra-large-scale data sets, a problem that will only increase at an alarming rate in coming years. Recent developments in open source software, namely the Hadoop project and its associated software, provide a foundation for scaling to petabyte-scale data warehouses on Linux clusters, providing fault-tolerant parallelized analysis of such data using a programming style named MapReduce. An overview is given of the current usage within the bioinformatics community of Hadoop, a top-level Apache Software Foundation project, and of associated open source software projects. The concepts behind Hadoop and the associated HBase project are defined, and current bioinformatics software packages that employ Hadoop are described. The focus is on next-generation sequencing, as the leading application area to date. Hadoop and the MapReduce programming paradigm already have a substantial base in the bioinformatics community, especially in the field of next-generation sequencing analysis, and such use is increasing. This is due to the cost-effectiveness of Hadoop-based analysis, both on commodity Linux clusters and in the cloud via data upload to cloud vendors who have implemented Hadoop/HBase, and due to the effectiveness and ease-of-use of the MapReduce method for parallelizing many data analysis algorithms.
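
    The MapReduce style named in the abstract can be illustrated with a toy bioinformatics task: counting k-mers across sequencing reads. Real Hadoop distributes these phases across a cluster; in this sketch the map, shuffle and reduce phases run locally so the data flow is visible (the reads are invented):

```python
# Toy MapReduce: count 3-mers across a handful of sequencing reads.
from itertools import groupby

def map_phase(read, k=3):
    # Emit (k-mer, 1) pairs, exactly what a Hadoop mapper would write out.
    return [(read[i:i + k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    # Group intermediate pairs by key, as the framework does between phases.
    pairs.sort(key=lambda kv: kv[0])
    return {key: [v for _, v in group]
            for key, group in groupby(pairs, key=lambda kv: kv[0])}

def reduce_phase(key, values):
    return key, sum(values)

reads = ["GATTACA", "TTACAGA"]
intermediate = [pair for read in reads for pair in map_phase(read)]
counts = dict(reduce_phase(k, vs) for k, vs in shuffle(intermediate).items())
print(counts["TTA"])  # "TTA" occurs once in each read
```

    Because each mapper touches one read and each reducer one key, both phases parallelize trivially, which is what makes the paradigm attractive for sequencing data.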

  7. ASP.NET web API build RESTful web applications and services on the .NET framework

    CERN Document Server

    Kanjilal, Joydip

    2013-01-01

    This book is a step-by-step, practical tutorial with a simple approach to help you build RESTful web applications and services on the .NET framework quickly and efficiently. This book is for ASP.NET web developers who want to explore REST-based services with C# 5. This book contains many real-world code examples with explanations whenever necessary. Some experience with C# and ASP.NET 4 is expected.

  8. The Impact of a Web-Based Research Simulation in Bioinformatics on Students' Understanding of Genetics

    Science.gov (United States)

    Gelbart, Hadas; Brill, Gilat; Yarden, Anat

    2009-01-01

    Providing learners with opportunities to engage in activities similar to those carried out by scientists was addressed in a web-based research simulation in genetics developed for high school biology students. The research simulation enables learners to apply their genetics knowledge while giving them an opportunity to participate in an authentic…

  9. MSeqDR: A Centralized Knowledge Repository and Bioinformatics Web Resource to Facilitate Genomic Investigations in Mitochondrial Disease.

    Science.gov (United States)

    Shen, Lishuang; Diroma, Maria Angela; Gonzalez, Michael; Navarro-Gomez, Daniel; Leipzig, Jeremy; Lott, Marie T; van Oven, Mannis; Wallace, Douglas C; Muraresku, Colleen Clarke; Zolkipli-Cunningham, Zarazuela; Chinnery, Patrick F; Attimonelli, Marcella; Zuchner, Stephan; Falk, Marni J; Gai, Xiaowu

    2016-06-01

    MSeqDR is the Mitochondrial Disease Sequence Data Resource, a centralized and comprehensive genome and phenome bioinformatics resource built by the mitochondrial disease community to facilitate clinical diagnosis and research investigations of individual patient phenotypes, genomes, genes, and variants. A central Web portal (https://mseqdr.org) integrates community knowledge from expert-curated databases with genomic and phenotype data shared by clinicians and researchers. MSeqDR also functions as a centralized application server for Web-based tools to analyze data across both mitochondrial and nuclear DNA, including investigator-driven whole exome or genome dataset analyses through MSeqDR-Genesis. MSeqDR-GBrowse genome browser supports interactive genomic data exploration and visualization with custom tracks relevant to mtDNA variation and mitochondrial disease. MSeqDR-LSDB is a locus-specific database that currently manages 178 mitochondrial diseases, 1,363 genes associated with mitochondrial biology or disease, and 3,711 pathogenic variants in those genes. MSeqDR Disease Portal allows hierarchical tree-style disease exploration to evaluate their unique descriptions, phenotypes, and causative variants. Automated genomic data submission tools are provided that capture ClinVar compliant variant annotations. PhenoTips will be used for phenotypic data submission on deidentified patients using human phenotype ontology terminology. The development of a dynamic informed patient consent process to guide data access is underway to realize the full potential of these resources.

  10. A web based Publish-Subscribe framework for mobile computing

    Directory of Open Access Journals (Sweden)

    Cosmina Ivan

    2014-05-01

    Full Text Available The growing popularity of mobile devices is permanently changing the Internet user's computing experience. Smartphones and tablets are beginning to replace the desktop as the primary means of interacting with various information technology and web resources. While mobile devices facilitate consuming web resources in the form of web services, the growing demand for consuming services on mobile devices is creating a complex ecosystem in the mobile environment. This research addresses the communication challenges involved in mobile distributed networks and proposes an event-driven communication approach for information dissemination. It investigates different communication techniques, such as polling, long-polling and server-side push, as client-server interaction mechanisms, and the latest web standard, WebSocket, as a communication protocol within a Publish/Subscribe paradigm. Finally, this paper introduces and evaluates the proposed framework, a hybrid approach combining WebSocket and event-based publish/subscribe for operating in mobile environments.
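
    The event-driven publish/subscribe core of such a framework can be sketched minimally: a broker keeps per-topic subscriber lists and pushes each published event out, instead of clients polling for changes. This is an illustrative sketch (class and topic names invented); in a real deployment each callback would be a send on a WebSocket connection:

```python
# Minimal publish/subscribe broker with server-side push semantics.
from collections import defaultdict

class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, event):
        # Push: the broker calls out to subscribers as events arrive,
        # rather than subscribers repeatedly asking "anything new?".
        for callback in self._subscribers[topic]:
            callback(event)

inbox = []
broker = Broker()
broker.subscribe("news/sports", inbox.append)
broker.publish("news/sports", {"headline": "match result"})
broker.publish("news/politics", {"headline": "not delivered to this client"})
print(inbox)
```

    The design choice matters on mobile precisely because push eliminates the wasted radio wake-ups that polling and long-polling incur.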

  11. Browser-based Analysis of Web Framework Applications

    CERN Document Server

    Kersten, Benjamin; 10.4204/EPTCS.35.5

    2010-01-01

    Although web applications evolved to mature solutions providing sophisticated user experience, they also became complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages as they are aggregated from multiple sources and as there are lots of possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages considering the server-side process and path complexity a black box. However, these tests do not detect the cause of an error which has to be located manually instead. This paper proposes to generate metadata on the paths and parts involved during server-side processing to facilitate backtracking origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses user interface components of web frameworks.

  12. Browser-based Analysis of Web Framework Applications

    Directory of Open Access Journals (Sweden)

    Benjamin Kersten

    2010-09-01

    Full Text Available Although web applications evolved to mature solutions providing sophisticated user experience, they also became complex for the same reason. Complexity primarily affects the server-side generation of dynamic pages as they are aggregated from multiple sources and as there are lots of possible processing paths depending on parameters. Browser-based tests are an adequate instrument to detect errors within generated web pages considering the server-side process and path complexity a black box. However, these tests do not detect the cause of an error which has to be located manually instead. This paper proposes to generate metadata on the paths and parts involved during server-side processing to facilitate backtracking origins of detected errors at development time. While there are several possible points of interest to observe for backtracking, this paper focuses user interface components of web frameworks.

  13. Designing a Framework to Develop WEB Graphical Interfaces for ORACLE Databases - Web Dialog

    Directory of Open Access Journals (Sweden)

    Georgiana-Petruţa Fîntîneanu

    2009-01-01

    Full Text Available The present article describes a project consisting of designing a framework of applications used to create graphical interfaces to an Oracle distributed database. The development of the project involved the latest technologies: the Oracle database server, the Tomcat web server, JDBC (a Java library for accessing databases), and JSP with Tag Libraries (for the development of graphical interfaces).

  14. Optimizing medical data quality based on multiagent web service framework.

    Science.gov (United States)

    Wu, Ching-Seh; Khoury, Ibrahim; Shah, Hemant

    2012-07-01

    One of the most important issues in e-healthcare information systems is to optimize the quality of medical data extracted from distributed and heterogeneous environments, which can greatly improve diagnostic and treatment decision making. This paper proposes a multiagent web service framework based on service-oriented architecture for the optimization of medical data quality in the e-healthcare information system. Based on the design of the multiagent web service framework, an evolutionary algorithm (EA) for the dynamic optimization of medical data quality is proposed. The framework consists of two main components: first, an EA is used to dynamically optimize the composition of medical processes into an optimal task sequence according to specific quality attributes; second, a multiagent framework discovers, monitors, and reports any inconsistency between the optimized task sequence and the actual medical records. To demonstrate the proposed framework, experimental results for a breast cancer case study are provided. Furthermore, to show the unique performance of our algorithm, a comparison with other works in the literature is presented.

  15. C3: A Collaborative Web Framework for NASA Earth Exchange

    Science.gov (United States)

    Foughty, E.; Fattarsi, C.; Hardoyo, C.; Kluck, D.; Wang, L.; Matthews, B.; Das, K.; Srivastava, A.; Votava, P.; Nemani, R. R.

    2010-12-01

    The NASA Earth Exchange (NEX) is a new collaboration platform for the Earth science community that provides a mechanism for scientific collaboration and knowledge sharing. NEX combines NASA advanced supercomputing resources, Earth system modeling, workflow management, NASA remote sensing data archives, and a collaborative communication platform to deliver a complete work environment in which users can explore and analyze large datasets, run modeling codes, collaborate on new or existing projects, and quickly share results among the Earth science communities. NEX is designed primarily for use by the NASA Earth science community to address scientific grand challenges. The NEX web portal component provides an on-line collaborative environment for sharing of Earth science models, data, analysis tools and scientific results by researchers. In addition, the NEX portal also serves as a knowledge network that allows researchers to connect and collaborate based on the research they are involved in, specific geographic area of interest, field of study, etc. Features of the NEX web portal include: Member profiles, resource sharing (data sets, algorithms, models, publications), communication tools (commenting, messaging, social tagging), project tools (wikis, blogs) and more. The NEX web portal is built on the proven technologies and policies of DASHlink.arc.nasa.gov (one of NASA's first science social media websites). The core component of the web portal is the C3 framework, which was built using Django and which is being deployed as a common framework for a number of collaborative sites throughout NASA.

  16. Arcade: A Web-Java Based Framework for Distributed Computing

    Science.gov (United States)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  17. An Agent-Based Focused Crawling Framework for Topic- and Genre-Related Web Document Discovery

    OpenAIRE

    Pappas, Nikolaos; Katsimpras, Georgios; Stamatatos, Efstathios

    2012-01-01

    The discovery of web documents about certain topics is an important task for web-based applications including web document retrieval, opinion mining and knowledge extraction. In this paper, we propose an agent-based focused crawling framework able to retrieve topic- and genre-related web documents. Starting from a simple topic query, a set of focused crawler agents explore in parallel topic-specific web paths using dynamic seed URLs that belong to certain web genres and are collected from web...

  18. Implementation of the Laravel Framework in a Web-Based Academic Grade Processing Application

    Directory of Open Access Journals (Sweden)

    Sari Susanti

    2017-04-01

    Abstract: Grades are one of the important things in school. Regulation No. 66 of 2013 of the Minister of Education and Culture of the Republic of Indonesia on educational assessment standards states that assessment results from educators and education units must be reported to parents and the government in the form of grades and descriptions of competency achievement. Student grades and competency descriptions are still processed manually, which takes a long time. An application that can process grades is therefore needed, and a web application for processing student grades is one solution to the slow processing of grades. The grade-processing web application was built using the waterfall model, covering analysis, design, coding and testing. On this website, assessment is processed according to the 2013 curriculum standard, which has three competency components: knowledge, skills and attitude. The final results of these three components are processed into report-card grades. The web application was built using the PHP programming language with MySQL database storage. The research concludes that the grade-processing web application is a solution that supports grade processing for homeroom teachers and makes it easy for students to view their grades.   Keywords: grade processing, web application, Laravel framework, website, 2013 curriculum.

  19. A flexible integration framework for a Semantic Geospatial Web application

    Science.gov (United States)

    Yuan, Ying; Mei, Kun; Bian, Fuling

    2008-10-01

    With the growth of World Wide Web technologies, the access to and use of geospatial information has changed radically over the past decade. Previously, the data processed by a GIS, as well as its methods, resided locally and contained information that was sufficiently unambiguous within the respective information community. Now, both data and methods may be retrieved and combined from anywhere in the world, escaping their local contexts. The last few years have seen a growing interest in the field of the Semantic Geospatial Web. With the development of semantic web technologies, it has become possible to address the heterogeneity/interoperation problem in the GIS community. Semantic Geospatial Web applications can support a wide variety of tasks including data integration, interoperability, knowledge reuse, spatial reasoning and many others. This paper proposes a flexible framework called GeoSWF (short for Geospatial Semantic Web Framework), which supports the semantic integration of distributed and heterogeneous geospatial information resources as well as semantic query and spatial-relationship reasoning. We design the architecture of GeoSWF by extending the MVC pattern. GeoSWF uses the geo-2007.owl proposed by the W3C as the reference ontology for geospatial information and designs different application ontologies according to the situation of the heterogeneous geospatial information resources. A Geospatial Ontology Creating Algorithm (GOCA) is designed to convert geospatial information into ontology instances represented in RDF/OWL. On top of these ontology instances, GeoSWF carries out semantic reasoning using the rule set stored in the knowledge base to generate new system queries. The query results are ranked by ordering the Euclidean distance of each ontology instance. Finally, the paper gives conclusions and future work.
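
    The final ranking step the abstract describes, ordering result instances by Euclidean distance from the query location, is simple enough to sketch directly. The instance data and coordinate fields below are invented for illustration:

```python
# Rank ontology instances by plain Euclidean distance in coordinate
# space from the query point, as the abstract states (not great-circle
# distance). Instance URIs and coordinates are made up.
import math

instances = [
    {"uri": "ex:site-a", "lat": 30.52, "lon": 114.31},
    {"uri": "ex:site-b", "lat": 30.60, "lon": 114.20},
    {"uri": "ex:site-c", "lat": 30.50, "lon": 114.35},
]

def rank_by_distance(query_lat, query_lon, results):
    def distance(inst):
        return math.hypot(inst["lat"] - query_lat, inst["lon"] - query_lon)
    return sorted(results, key=distance)

ranked = rank_by_distance(30.51, 114.32, instances)
print([inst["uri"] for inst in ranked])
```

    Euclidean distance on raw latitude/longitude is only a crude proximity measure near a single locale; a production system would more likely rank by geodesic distance.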

  20. Next generation of weather generators on web service framework

    Science.gov (United States)

    Chinnachodteeranun, R.; Hung, N. D.; Honda, K.; Ines, A. V. M.

    2016-12-01

    A weather generator is a statistical model that synthesizes possible realizations of long-term historical weather in the future. It generates several tens to hundreds of realizations stochastically based on statistical analysis. Realizations are essential input for crop models simulating crop growth and yield. Moreover, they can contribute to analyzing the uncertainty that weather introduces into crop development stages, and to decision support systems on, e.g., water management and fertilizer management. Performing crop modeling requires multidisciplinary skills, which limits the usage of a weather generator to the research group that developed it and poses a barrier for newcomers. To improve the procedures for running weather generators, as well as the methodology for acquiring realizations in a standard way, we implemented a framework that provides weather generators as web services supporting service interoperability. Legacy weather generator programs were wrapped in the web service framework. The service interfaces were implemented based on an international standard, the Sensor Observation Service (SOS) defined by the Open Geospatial Consortium (OGC). Clients can request realizations generated by the model through the SOS web service. The hierarchical data preparation processes required for a weather generator are also implemented as web services and seamlessly wired. Analysts and applications can easily invoke the services over a network. The services facilitate the development of agricultural applications and also reduce the workload of analysts in iterative data preparation and in handling legacy weather generator programs. This architectural design and implementation can serve as a prototype for constructing further services on top of an interoperable sensor network system. The framework opens an opportunity for other sectors, such as application developers and scientists in other fields, to utilize weather generators.

  1. Framework to Solve Load Balancing Problem in Heterogeneous Web Servers

    CERN Document Server

    Sharma, Ms Deepti

    2011-01-01

    For popular websites, the most important concern is handling the incoming load dynamically among web servers so that they can respond to their clients without any wait or failure. Different websites use different strategies to distribute load among web servers, but most schemes concentrate on only one factor, the number of requests; none of the schemes consider that different types of requests require different levels of processing effort to answer, a status record of all the web servers associated with one domain name, or a mechanism to handle the situation when one of the servers is not working. Therefore, there is a fundamental need to develop a strategy for dynamic load allocation on the server side. In this paper, an effort has been made to introduce a cluster-based framework to solve the load distribution problem. This framework aims to distribute load among clusters on the basis of their operational capabilities. Moreover, the experimental results are shown with the help of example, algorithm...
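
    The capability-aware dispatch idea can be sketched as follows: route each request to the live cluster with the lowest relative utilization, so a powerful cluster absorbs proportionally more work, and skip clusters that are down. The capacities and the per-request cost model are invented for illustration, not taken from the paper:

```python
# Capability-proportional dispatch with failover across server clusters.

class Cluster:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity     # relative operational capability
        self.load = 0.0
        self.alive = True

    def utilization(self):
        return self.load / self.capacity

def dispatch(clusters, request_cost):
    # Choose the live cluster with the lowest relative utilization; a
    # cluster with 4x the capacity ends up serving roughly 4x the load.
    live = [c for c in clusters if c.alive]
    if not live:
        raise RuntimeError("no cluster available")
    target = min(live, key=Cluster.utilization)
    target.load += request_cost
    return target.name

clusters = [Cluster("fast", capacity=8.0), Cluster("slow", capacity=2.0)]
assignments = [dispatch(clusters, request_cost=1.0) for _ in range(10)]
print(assignments.count("fast"), assignments.count("slow"))
```

    Weighting by relative utilization rather than raw request counts is exactly the gap the abstract points out in request-count-only schemes; the `alive` flag covers the dead-server case.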

  2. Web-based supplier relationship framework using agent systems

    Institute of Scientific and Technical Information of China (English)

    Oboulhas Conrad Tsahat Onesime; XU Xiao-fei(徐晓飞); ZHAN De-chen(战德臣)

    2004-01-01

    In order to enable both manufacturers and suppliers to be profitable in today's highly competitive markets, manufacturers and suppliers must be quick in selecting the best partners, establishing strategic relationships, and collaborating with each other so that they can satisfy the changing competitive manufacturing requirements. A web-based supplier relationship (SR) framework is therefore proposed, using multi-agent systems and linear programming techniques to reduce supply cost, increase flexibility and shorten response time. The web-based SR approach is an ideal platform for information exchange that helps buyers and suppliers maintain the availability of materials in the right quantity, at the right place, and at the right time, and keeps the customer-supplier relationship more transparent. A multi-agent system prototype was implemented by simulation, which shows the feasibility of the proposed architecture.

  3. Electronic Laboratory Notebook on Web2py Framework

    Directory of Open Access Journals (Sweden)

    2010-09-01

    Full Text Available Proper experimental record-keeping is an important cornerstone in research and development for the purpose of auditing. The gold standard of record-keeping is based on the judicious use of physical, permanent notebooks. However, advances in technology have resulted in large amounts of electronic records, making it virtually impossible to maintain a full set of records in physical notebooks. Electronic laboratory notebook systems aim to meet the stringency for keeping records electronically. This manuscript describes CyNote, an electronic laboratory notebook system that is compliant with 21 CFR Part 11 controls on electronic records, the requirements set by the US Food and Drug Administration for electronic records. CyNote is implemented on the web2py framework and adheres to the architectural paradigm of model-view-controller (MVC), allowing extension modules to be built for CyNote. CyNote is available at http://cynote.sf.net.

  4. Enhanced Architecture of a Web Warehouse based on Quality Evaluation Framework to Incorporate Quality Aspects in Web Warehouse Creation

    Directory of Open Access Journals (Sweden)

    Umm-e-Mariya Shah

    2011-01-01

    Full Text Available In recent years, the World Wide Web (WWW) has become a vast source of information about all areas of interest. Relevant information retrieval is difficult in the web space, as there is no universal configuration and organization of web data. Taking advantage of data warehouse functionality and integrating it with the web to retrieve relevant data is the core concept of the web warehouse: a repository that stores relevant web data for business decision making. The basic function of a web warehouse is to collect and store information for users' analyses. The quality of web warehouse data strongly affects data analysis; to enhance the quality of decision making, different quality dimensions must be incorporated into the web warehouse architecture. In this paper, an enhanced web warehouse architecture is proposed and discussed. The enhancement of the existing architecture is based on a quality evaluation framework, and adds three layers to the existing architecture to ensure quality at various phases of web warehouse creation. The source assessment, query evaluation and data quality layers enhance the quality of the data stored in the web warehouse.

  5. Context Aware Concurrent Execution Framework for Web Browsers

    DEFF Research Database (Denmark)

    Saeed, Aamir; Erbad, Aiman; Olsen, Rasmus Løvenstein

    2016-01-01

    Compute-hungry multimedia web applications need to utilize all the resources of a device efficiently. HTML5 web workers provide a non-sharing concurrency platform that enables multimedia web applications to utilize the available multicore hardware. HTML5 web workers are implemented by major browser...

  6. Comparison of Physics Frameworks for WebGL-Based Game Engine

    Directory of Open Access Journals (Sweden)

    Yogya Resa

    2014-03-01

    Full Text Available Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, many possibilities in the game development area remain unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open source physics frameworks, Bullet, Cannon, and JigLib, into a WebGL-based game engine. Using experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open source physics frameworks into a WebGL-based game engine, and that Bullet is the best physics framework to be integrated into the WebGL-based game engine.

  7. Enabling the democratization of the genomics revolution with a fully integrated web-based bioinformatics platform, Version 1.5 and 1.x.

    Energy Technology Data Exchange (ETDEWEB)

    2017-05-18

    EDGE bioinformatics was developed to help biologists process Next Generation Sequencing data (in the form of raw FASTQ files), even if they have little to no bioinformatics expertise. EDGE is a highly integrated and interactive web-based platform that is capable of running many of the standard analyses that biologists require for viral, bacterial/archaeal, and metagenomic samples. EDGE provides the following analytical workflows: quality trimming and host removal, assembly and annotation, comparisons against known references, taxonomy classification of reads and contigs, whole genome SNP-based phylogenetic analysis, and PCR analysis. EDGE provides an intuitive web-based interface for user input, allows users to visualize and interact with selected results (e.g. JBrowse genome browser), and generates a final detailed PDF report. Results in the form of tables, text files, graphic files, and PDFs can be downloaded. A user management system allows tracking of an individual’s EDGE runs, along with the ability to share, post publicly, delete, or archive their results.

  8. New architecture for the sensor web: the SWAP framework.

    CSIR Research Space (South Africa)

    Moodley, D

    2006-11-01

    Full Text Available Sensor Web is a revolutionary concept towards achieving a collaborative, coherent, consistent, and consolidated sensor data collection, fusion and distribution system. Sensor Webs can perform as an extensive monitoring and sensing system...

  9. A Framework for Transparently Accessing Deep Web Sources

    Science.gov (United States)

    Dragut, Eduard Constantin

    2010-01-01

    An increasing number of Web sites expose their content via query interfaces, many of them offering the same type of products/services (e.g., flight tickets, car rental/purchasing). They constitute the so-called "Deep Web". Accessing the content on the Deep Web has been a long-standing challenge for the database community. For a user interested in…

  11. A Framework for Web 2.0 Learning Design

    Science.gov (United States)

    Bower, Matt; Hedberg, John G.; Kuswara, Andreas

    2010-01-01

    This paper describes an approach to conceptualising and performing Web 2.0-enabled learning design. Based on the Technological, Pedagogical and Content Knowledge model of educational practice, the approach conceptualises Web 2.0 learning design by relating Anderson and Krathwohl's Taxonomy of Learning, Teaching and Assessing, and different types…

  13. eSNaPD: a versatile, web-based bioinformatics platform for surveying and mining natural product biosynthetic diversity from metagenomes.

    Science.gov (United States)

    Reddy, Boojala Vijay B; Milshteyn, Aleksandr; Charlop-Powers, Zachary; Brady, Sean F

    2014-08-14

    Environmental Surveyor of Natural Product Diversity (eSNaPD) is a web-based bioinformatics and data aggregation platform that aids in the discovery of gene clusters encoding both novel natural products and new congeners of medicinally relevant natural products using (meta)genomic sequence data. Using PCR-generated sequence tags, the eSNaPD data-analysis pipeline profiles biosynthetic diversity hidden within (meta)genomes by comparing sequence tags to a reference data set of characterized gene clusters. Sample mapping, molecule discovery, library mapping, and new clade visualization modules facilitate the interrogation of large (meta)genomic sequence data sets for diverse downstream analyses, including, but not limited to, the identification of environments rich in untapped biosynthetic diversity, targeted molecule discovery efforts, and chemical ecology studies. eSNaPD is designed to generate a global atlas of biosynthetic diversity that can facilitate a systematic, sequence-based interrogation of nature's biosynthetic potential.
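The profiling step described above (comparing PCR-generated sequence tags against a reference set of characterized gene clusters) can be caricatured as nearest-reference assignment by percent identity. This is a toy sketch only: the sequences, cluster names, `identity` helper and the 75% threshold are invented for illustration and are not eSNaPD's actual pipeline.

```javascript
// Toy stand-in for tag profiling: assign a sequence tag to the closest
// reference gene-cluster tag by percent identity, or call it novel.
function identity(a, b) {
  let same = 0;
  for (let i = 0; i < a.length; i++) if (a[i] === b[i]) same++;
  return same / a.length;
}

function profileTag(tag, references, minIdentity = 0.75) {
  let best = null;
  for (const [cluster, refTag] of Object.entries(references)) {
    const id = identity(tag, refTag);
    if (id >= minIdentity && (best === null || id > best.identity)) {
      best = { cluster, identity: id };
    }
  }
  // Tags below the threshold against every reference are flagged novel.
  return best === null ? { cluster: 'novel', identity: 0 } : best;
}

const refs = { erythromycin: 'ATGGCA', vancomycin: 'TTGCCA' };
console.log(profileTag('ATGGCT', refs).cluster); // erythromycin
console.log(profileTag('GGGGGG', refs).cluster); // novel
```

Tags that fall below the threshold against every reference would be the candidates flagged as untapped biosynthetic diversity.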

  14. Context Aware Concurrent Execution Framework for Web Browser

    DEFF Research Database (Denmark)

    Saeed, Aamir; Erbad, Aiman Mahmood; Olsen, Rasmus Løvenstein

    by balancing load across workers/CPU cores. This work presents load-balancing algorithms between web workers using parameters such as scheduler throughput, computation priority and game entity locality. An award-winning web-based multimedia game (raptjs.com) is used to test the performance of the load balance...... algorithms. The preliminary results indicate that the performance of the game improved with effective load-balancing across web workers. Three load-balancing algorithms were developed on top of DOHA, an open source JavaScript execution layer for multimedia applications. The load between web workers is transferred...... via serialized objects. The application performance is measured in terms of jitter and the number of frames per second. The results showed improved performance with lower jitter and higher FPS. These techniques can be used by developers to improve the application design giving...

  15. Context Aware Concurrent Execution Framework for Web Browsers

    DEFF Research Database (Denmark)

    Saeed, Aamir; Erbad, Aiman; Olsen, Rasmus Løvenstein

    2016-01-01

    be maximized by balancing load across workers/CPU cores. This work presents load-balancing algorithms between web workers using parameters such as scheduler throughput, computation priority and game entity locality. An award-winning web-based multimedia game (raptjs.com) is used to evaluate the performance...... of the load-balancing algorithms. The preliminary results indicated that the performance of the game improved with the proposed load balancing across web workers. The load-balancing algorithms were developed on top of DOHA, an open source JavaScript execution layer for multimedia applications. The load between...... web workers is transferred via serialized objects. Effects of load transfer on the overall application performance were measured by analyzing jitter and the number of frames per second. The load-balancing algorithms and load-transfer mechanism can be used by developers to improve...
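The load-balancing idea running through the two records above can be sketched as a greedy least-loaded scheduler that places the largest tasks first. This is an illustrative assumption, not the DOHA implementation; the task names and cost values are invented.

```javascript
// Toy least-loaded scheduler: assign each task to the web worker
// with the smallest accumulated cost (greedy longest-processing-time).
function balance(tasks, workerCount) {
  const loads = new Array(workerCount).fill(0);
  const assignment = [];
  // Sort descending by cost so large tasks are placed first.
  const sorted = [...tasks].sort((a, b) => b.cost - a.cost);
  for (const task of sorted) {
    let target = 0;
    for (let w = 1; w < workerCount; w++) {
      if (loads[w] < loads[target]) target = w;
    }
    loads[target] += task.cost;
    assignment.push({ task: task.id, worker: target });
  }
  return { assignment, loads };
}

const { loads } = balance(
  [{ id: 'physics', cost: 5 }, { id: 'ai', cost: 3 },
   { id: 'render', cost: 4 }, { id: 'audio', cost: 2 }],
  2
);
console.log(loads); // [ 7, 7 ]: work split evenly across two workers
```

A real scheduler would also weigh the priority and locality parameters the records mention, and migrate tasks between workers via serialized objects.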

  16. iHOPerator: user-scripting a personalized bioinformatics Web, starting with the iHOP website

    Directory of Open Access Journals (Sweden)

    Wilkinson Mark D

    2006-12-01

    Full Text Available Abstract Background User-scripts are programs stored in Web browsers that can manipulate the content of websites prior to display in the browser. They provide a novel mechanism by which users can conveniently gain increased control over the content and the display of the information presented to them on the Web. As the Web is the primary medium by which scientists retrieve biological information, any improvements in the mechanisms that govern the utility or accessibility of this information may have profound effects. GreaseMonkey is a Mozilla Firefox extension that facilitates the development and deployment of user-scripts for the Firefox web-browser. We utilize this to enhance the content and the presentation of the iHOP (information Hyperlinked Over Proteins website. Results The iHOPerator is a GreaseMonkey user-script that augments the gene-centred pages on iHOP by providing a compact, configurable visualization of the defining information for each gene and by enabling additional data, such as biochemical pathway diagrams, to be collected automatically from third party resources and displayed in the same browsing context. Conclusion This open-source script provides an extension to the iHOP website, demonstrating how user-scripts can personalize and enhance the Web browsing experience in a relevant biological setting. The novel, user-driven controls over the content and the display of Web resources made possible by user-scripts, such as the iHOPerator, herald the beginning of a transition from a resource-centric to a user-centric Web experience. We believe that this transition is a necessary step in the development of Web technology that will eventually result in profound improvements in the way life scientists interact with information.

  17. A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques

    OpenAIRE

    Asma Adala; Nabil Tabbane; Sami Tabbane

    2011-01-01

    As a greater number of Web Services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow describing the functionality of services in a machine interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge about semantic Web service languages and related toolkits. In this paper, we propose a discovery framework tha...

  18. Analyzing HT-SELEX data with the Galaxy Project tools--A web based bioinformatics platform for biomedical research.

    Science.gov (United States)

    Thiel, William H; Giangrande, Paloma H

    2016-03-15

    The development of DNA and RNA aptamers for research as well as diagnostic and therapeutic applications is a rapidly growing field. In the past decade, the process of identifying aptamers has been revolutionized with the advent of high-throughput sequencing (HTS). However, bioinformatics tools that enable the average molecular biologist to analyze these large datasets and expedite the identification of candidate aptamer sequences have been lagging behind the HTS revolution. The Galaxy Project was developed in order to efficiently analyze genome, exome, and transcriptome HTS data, and we have now applied these tools to aptamer HTS data. The Galaxy Project's public webserver is an open source collection of bioinformatics tools that are powerful, flexible, dynamic, and user friendly. The online nature of the Galaxy webserver and its graphical interface allow users to analyze HTS data without compiling code or installing multiple programs. Herein we describe how tools within the Galaxy webserver can be adapted to pre-process, compile, filter and analyze aptamer HTS data from multiple rounds of selection.

  19. ScaleMT: a free/open-source framework for building scalable machine translation web services

    OpenAIRE

    Sánchez-Cartagena, Víctor M.; Pérez-Ortiz, Juan Antonio

    2009-01-01

    Machine translation web services usage is growing rapidly, mainly because of the translation quality and reliability of the service provided by the Google Ajax Language API. To allow open-source machine translation projects to compete with Google's and gain visibility on the internet, we have developed ScaleMT: a free/open-source framework that exposes existing machine translation engines as public web services. This framework is highly scalable as it can run coordinately on many serv...

  20. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-02-01

    Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources-file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources-the ability to read and write contacts list, local files, etc.-to dozens of potentially malicious Web domains. 
We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content.

  1. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    Science.gov (United States)

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign
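The policy gap these two records describe (bridges with full local access that are not protected by the same origin policy) suggests an origin check at the bridge boundary. The following is a hypothetical sketch of such a guard; `createBridgeGuard`, the method names and the whitelist are invented for illustration and are not PhoneGap's actual defense.

```javascript
// Hypothetical bridge guard: only frames whose origin is on the app's
// whitelist may invoke a privileged native bridge method.
function createBridgeGuard(allowedOrigins) {
  const allowed = new Set(allowedOrigins);
  return function invoke(callerOrigin, method) {
    if (!allowed.has(callerOrigin)) {
      // Foreign-origin content (e.g. an ad iframe) is rejected,
      // closing the "fracking" hole described above.
      throw new Error(`origin ${callerOrigin} may not call ${method}`);
    }
    return `${method} permitted for ${callerOrigin}`;
  };
}

const invoke = createBridgeGuard(['https://app.example.com']);
console.log(invoke('https://app.example.com', 'contacts.read'));
// prints "contacts.read permitted for https://app.example.com";
// a call from https://ads.example.net would throw instead.
```

The papers' point is precisely that this composition of web and local access control is missing in deployed frameworks, so every bridge has the rights of the whole app.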

  2. Undergraduate Bioinformatics Workshops Provide Perceived Skills

    Directory of Open Access Journals (Sweden)

    Robin Herlands Cresiski

    2014-07-01

    Full Text Available Bioinformatics is becoming an important part of the undergraduate curriculum, but expertise and well-evaluated teaching materials may not be available on every campus. Here, a guest speaker was utilized to introduce bioinformatics, and web-available exercises were adapted for student investigation. Students used web-based nucleotide comparison tools to examine the medical and evolutionary relevance of an unidentified genetic sequence. Based on pre- and post-workshop surveys, there were significant gains in the students' understanding of bioinformatics, as well as their perceived skills in using bioinformatics tools. The relevance of bioinformatics to a student's career seemed dependent on career aspirations.

  3. GeoSymbio: a hybrid, cloud-based web application of global geospatial bioinformatics and ecoinformatics for Symbiodinium-host symbioses.

    Science.gov (United States)

    Franklin, Erik C; Stat, Michael; Pochon, Xavier; Putnam, Hollie M; Gates, Ruth D

    2012-03-01

    The genus Symbiodinium encompasses a group of unicellular, photosynthetic dinoflagellates that are found free living or in hospite with a wide range of marine invertebrate hosts including scleractinian corals. We present GeoSymbio, a hybrid web application that provides an online, easy to use and freely accessible interface for users to discover, explore and utilize global geospatial bioinformatic and ecoinformatic data on Symbiodinium-host symbioses. The novelty of this application lies in the combination of a variety of query and visualization tools, including dynamic searchable maps, data tables with filter and grouping functions, and interactive charts that summarize the data. Importantly, this application is hosted remotely or 'in the cloud' using Google Apps, and therefore does not require any specialty GIS, web programming or data programming expertise from the user. The current version of the application utilizes Symbiodinium data based on the ITS2 genetic marker from PCR-based techniques, including denaturing gradient gel electrophoresis, sequencing and cloning of specimens collected during 1982-2010. All data elements of the application are also downloadable as spatial files, tables and nucleic acid sequence files in common formats for desktop analysis. The application provides a unique tool set to facilitate research on the basic biology of Symbiodinium and expedite new insights into their ecology, biogeography and evolution in the face of a changing global climate. GeoSymbio can be accessed at https://sites.google.com/site/geosymbio/.

  4. Design and implementation of an architectural framework for web portals in a ubiquitous pervasive environment.

    Science.gov (United States)

    Raza, Muhammad Taqi; Yoo, Seung-Wha; Kim, Ki-Hyung; Joo, Seong-Soon; Jeong, Wun-Cheol

    2009-01-01

    Web Portals function as a single point of access to information on the World Wide Web (WWW). The web portal always contacts the portal's gateway for the information flow, which causes network traffic over the Internet. Moreover, it provides real-time/dynamic access to the stored information, but not access to real-time information. This inherent functionality of web portals limits their role for resource-constrained digital devices in the Ubiquitous era (U-era). This paper presents a framework for the web portal in the U-era. We have introduced the concept of Local Regions in the proposed framework, so that local queries can be solved locally rather than having to be routed over the Internet. Moreover, our framework enables one-to-one device communication for real-time information flow. To provide an in-depth analysis, we first provide an analytical model for query processing at the servers for our framework-oriented web portal. Finally, we have deployed a testbed, one of the world's largest IP-based wireless sensor network testbeds, and real-time measurements are observed that prove the efficacy and workability of the proposed framework.
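The Local Region idea above (solve local queries locally, route the rest over the Internet) can be sketched as a two-tier resolver. The store contents and function names are illustrative assumptions, not the paper's implementation.

```javascript
// Two-tier resolver: try the Local Region store first so local queries
// never cross the gateway; fall back to the (simulated) remote portal.
function makeResolver(localStore, remoteFetch) {
  return function resolve(key) {
    if (key in localStore) {
      return { value: localStore[key], source: 'local-region' };
    }
    return { value: remoteFetch(key), source: 'portal-gateway' };
  };
}

const resolve = makeResolver(
  { 'sensor-42/temperature': 21.5 },       // data held in the Local Region
  (key) => `remote:${key}`                 // stand-in for the Internet hop
);

console.log(resolve('sensor-42/temperature')); // served locally, no Internet traffic
console.log(resolve('weather/forecast'));      // routed over the gateway
```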

  5. A Framework for Effective User Interface Design for Web-Based Electronic Commerce Applications

    Directory of Open Access Journals (Sweden)

    Justyna Burns

    2001-01-01

    Full Text Available Efficient delivery of relevant product information is increasingly becoming the central basis of competition between firms. The interface design represents the central component for successful information delivery to consumers. However, interface design for web-based information systems is probably more an art than a science at this point in time. Much research is needed to understand properties of an effective interface for electronic commerce. This paper develops a framework identifying the relationship between user factors, the role of the user interface and overall system success for web-based electronic commerce. The paper argues that web-based systems for electronic commerce have some similar properties to decision support systems (DSS and adapts an established DSS framework to the electronic commerce domain. Based on a limited amount of research studying web browser interface design, the framework identifies areas of research needed and outlines possible relationships between consumer characteristics, interface design attributes and measures of overall system success.

  6. Application of a Reference Framework for Integration of Web Resources in Dotlrn--Case Study of Physics--Topic: Waves

    Science.gov (United States)

    Gomez, Fabinton Sotelo; Ordóñez, Armando

    2016-01-01

    Previously a framework for integrating web resources providing educational services in dotLRN was presented. The present paper describes the application of this framework in a rural school in Cauca--Colombia. The case study includes two web resources about the topic of waves (physics) which is oriented in secondary education. Web classes and…

  7. Development of web-based collaborative framework for the simulation of embedded systems

    Directory of Open Access Journals (Sweden)

    Woong Yang

    2016-10-01

    In this study, a Web-based collaboration framework has been developed that provides a flexible connection between the macroscopic virtual environment and the physical environment. This framework is able to verify and manage physical environments. It can also resolve the bottlenecks encountered during the expansion and development of IoT (Internet of Things) environments.

  8. The DWAN Framework: Application of a Web Annotation Framework for the General Humanities to the Domain of Language Resources

    NARCIS (Netherlands)

    Lenkiewicz, Przemyslaw; Shkaravska, Olha; Goosen, Twan; Broeder, Daan; Windhouwer, Menzo; Roth, Stephanie; Olsson, Olof; Calzolari, Nicoletta (Conference Chair); Choukri, Khalid; Declerck, Thierry; Loftsson, Hrafn; Maegaard, Bente; Mariani, Joseph; Moreno, Asuncion; Odijk, Jan; Piperidis, Stelios

    2014-01-01

    Researchers share large amounts of digital resources, which offer new chances for cooperation. Collaborative annotation systems are meant to support this. Often these systems are targeted at a specific task or domain, e.g., annotation of a corpus. The DWAN framework for web annotation is generic and

  9. Semantic Web Portal in University Research Community Framework

    Directory of Open Access Journals (Sweden)

    Rahmat Hidayat

    2012-01-01

    Full Text Available One way to overcome the weaknesses of the Semantic Web and make it more user friendly is by displaying, browsing and semantically querying data. In this research, we propose the Semantic Web Research Community Portal at the Faculty of Information Science and Technology, Universiti Kebangsaan Malaysia (FTSM RC) as a lightweight Semantic Web platform. This platform assists users in managing content and visualizing relevant semantic data as research is periodically updated. In such a way it will strengthen information related to research, publications, departments, organizations, events, and groups of researchers. Moreover, it will streamline the publication process, making it easier for academic staff, support staff, and the faculty itself to publish faculty and research information. In the end, this will provide end users with a better view of the structure of research at the university, allowing users to conduct cross-communication between faculty and study groups by using the search information.

  10. JClarens: A Java Framework for Developing and Deploying Web Services for Grid Computing

    CERN Document Server

    Thomas, M; Van Lingen, F; Newman, H; Bunn, J; Ali, A; McClatchey, R; Anjum, A; Azim, T; Rehman, W; Khan, F; In, J U; Thomas, Michael; Steenberg, Conrad; Lingen, Frank van; Newman, Harvey; Bunn, Julian; Ali, Arshad; Clatchey, Richard Mc; Anjum, Ashiq; Azim, Tahir; Rehman, Waqas ur; Khan, Faisal; In, Jang Uk

    2005-01-01

    High Energy Physics (HEP) and other scientific communities have adopted Service Oriented Architectures (SOA) as part of a larger Grid computing effort. This effort involves the integration of many legacy applications and programming libraries into a SOA framework. The Grid Analysis Environment (GAE) is such a service oriented architecture based on the Clarens Grid Services Framework and is being developed as part of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) at European Laboratory for Particle Physics (CERN). Clarens provides a set of authorization, access control, and discovery services, as well as XMLRPC and SOAP access to all deployed services. Two implementations of the Clarens Web Services Framework (Python and Java) offer integration possibilities for a wide range of programming languages. This paper describes the Java implementation of the Clarens Web Services Framework called JClarens. and several web services of interest to the scientific and Grid community that hav...

  11. A patient centred framework for improving LTC quality of life through Web 2.0 technology.

    Science.gov (United States)

    Pulman, Andy

    2010-03-01

    The NHS and Social Care Model - a blueprint supporting organisations in improving services for people with long-term conditions (LTCs) - noted options to support people with LTCs might include technological tools supporting personalised care and choice and providing resources for patients to self-care and self-manage. Definitions concerning the integration of health information and support with Web 2.0 technology are primarily concerned with approaches from the healthcare perspective. There is a need to design a patient centred framework, encapsulating the use of Web 2.0 technology for people with LTCs who want to support, mitigate or improve quality of life. Existing theoretical frameworks offer a means of informing the design and measurement of this framework. This article describes how Web 2.0 technology could impact on the quality of life of individuals with LTCs and suggests a starting point for developing a theoretically informed patient centred framework.

  12. A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques

    Directory of Open Access Journals (Sweden)

    Asma Adala

    2011-01-01

    Full Text Available As a greater number of Web Services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow describing the functionality of services in a machine interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge about semantic Web service languages and related toolkits. In this paper, we propose a discovery framework that enables semantic Web service discovery based on keywords written in natural language. We describe a novel approach for automatic discovery of semantic Web services which employs Natural Language Processing techniques to match a user request, expressed in natural language, with a semantic Web service description. Additionally, we present an efficient semantic matching technique to compute the semantic distance between ontological concepts.
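The semantic-distance computation mentioned at the end of the abstract can be illustrated with a simple edge-counting measure over a toy is-a hierarchy. The concepts and the measure are assumptions for illustration, not the authors' actual matching technique.

```javascript
// Toy ontology: child -> parent (single-inheritance is-a hierarchy).
const isA = {
  Hotel: 'Accommodation', Hostel: 'Accommodation',
  Accommodation: 'Service', CarRental: 'Service',
};

// Chain of ancestors from a concept up to the root (inclusive).
function ancestors(c) {
  const chain = [c];
  while (isA[c] !== undefined) { c = isA[c]; chain.push(c); }
  return chain;
}

// Edge-counting distance: steps from each concept to their lowest
// common ancestor; Infinity if the concepts are unrelated.
function semanticDistance(a, b) {
  const up = ancestors(b);
  const chainA = ancestors(a);
  for (let i = 0; i < chainA.length; i++) {
    const j = up.indexOf(chainA[i]);
    if (j !== -1) return i + j;
  }
  return Infinity;
}

console.log(semanticDistance('Hotel', 'Hostel'));    // 2 (via Accommodation)
console.log(semanticDistance('Hotel', 'CarRental')); // 3 (via Service)
```

In a discovery framework like the one described, such a distance would rank candidate services against concepts extracted from the user's natural-language request.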

  13. A Framework For An E-Learning System Based on Semantic Web

    Directory of Open Access Journals (Sweden)

    Tarek M. Mahmoud

    2013-08-01

    Full Text Available E-Learning is efficient, task-relevant and just-in-time learning grown from the learning requirements of the new, dynamically changing, distributed business world. The term “Semantic Web” encompasses efforts to build a new WWW architecture that enhances content with formal semantics, which enables better possibilities for navigating through cyberspace and accessing its contents. The Semantic Web is a product of Web 2.0 (the second generation of the web) that is supported by automated semantic agents for processing user data to help the user with ease of use and personalization of services. The paper describes a framework for the implementation of an e-learning system based on the Semantic Web, using a desktop C# application, data visualization tools, an HTML web page parser, RDF generating tools, and the SPARQL RDF query language. Hopefully we have elucidated the enormous potential of making web content machine-understandable.

  14. Proposed Quality Evaluation Framework to Incorporate Quality Aspects in Web Warehouse Creation

    CERN Document Server

    Shah, Umm-e-Mariya; Shamim, Azra; Mehmood, Yasir

    2011-01-01

    A Web Warehouse is a read-only repository maintained on the web to effectively handle relevant data. A web warehouse is a system comprised of various subsystems and processes, and it supports organizations in decision making. The quality of data stored in a web warehouse can affect the quality of the decisions made. For valuable decision making it is necessary to consider quality aspects in the design and modelling of a web warehouse. Thus data quality is one of the most important issues of the web warehousing system. Quality must be incorporated at different stages of web warehousing system development. It is necessary to enhance existing data warehousing systems to increase data quality, resulting in the storage of high-quality data in the repository and efficient decision making. In this paper a Quality Evaluation Framework is proposed keeping in view the quality dimensions associated with different phases of a web warehouse. Furthermore, the proposed framework is validated empirically with the help of quan...

  15. Building Points - Montana Structures/Addresses Framework - Web Service

    Data.gov (United States)

    NSGIC GIS Inventory (aka Ramona) — Map service for the Montana Structures MSDI Framework. The service will only display at scales of 1:100,000 or larger. Structures are grouped into general categories...

  16. Building Points - Montana Structures/Addresses Framework - Web Service

    Data.gov (United States)

    NSGIC State | GIS Inventory — Map service for the Montana Structures MSDI Framework. The service will only display at scales of 1:100,000 or larger. Structures are grouped into general categories...

  17. Analysis Of Cancer Omics Data In A Semantic Web Framework

    CERN Document Server

    Holford, Matt; Cheung, Kei; Krauthammer, Michael

    2010-01-01

    Our work concerns the elucidation of the cancer (epi)genome, transcriptome and proteome to better understand the complex interplay between a cancer cell's molecular state and its response to anti-cancer therapy. To study the problem, we have previously focused on data warehousing technologies and statistical data integration. In this paper, we present recent work on extending our analytical capabilities using Semantic Web technology. A key new component presented here is a SPARQL endpoint to our existing data warehouse. This endpoint allows the merging of observed quantitative data with existing data from semantic knowledge sources such as Gene Ontology (GO). We show how such variegated quantitative and functional data can be integrated and accessed in a universal manner using Semantic Web tools. We also demonstrate how Description Logic (DL) reasoning can be used to infer previously unstated conclusions from existing knowledge bases. As proof of concept, we illustrate the ability of our setup to answer compl...
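The SPARQL-style merging of quantitative and functional data described above can be miniaturized as pattern matching over an in-memory triple store. The triples, gene names and the '?variable' convention are toy assumptions for illustration, not the authors' actual endpoint or data.

```javascript
// Minimal triple store and a single-pattern matcher, SPARQL-style:
// strings beginning with '?' are variables to be bound.
const triples = [
  ['TP53', 'hasGOAnnotation', 'apoptotic_process'],
  ['TP53', 'expressionLevel', 'high'],
  ['BRCA1', 'hasGOAnnotation', 'DNA_repair'],
];

function match(pattern) {
  return triples.flatMap((t) => {
    const binding = {};
    for (let i = 0; i < 3; i++) {
      if (pattern[i].startsWith('?')) binding[pattern[i]] = t[i];
      else if (pattern[i] !== t[i]) return []; // constant mismatch: drop triple
    }
    return [binding];
  });
}

// "Which subjects carry a GO annotation, and which term?"
console.log(match(['?gene', 'hasGOAnnotation', '?term']));
// [ { '?gene': 'TP53', '?term': 'apoptotic_process' },
//   { '?gene': 'BRCA1', '?term': 'DNA_repair' } ]
```

A real SPARQL endpoint adds joins across patterns and federation across knowledge sources such as GO, which is where the merging of observed quantitative data with curated annotations happens.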

  18. Analysis of Bioinformatics Databases Research Literature Based on "Web of Science"

    Institute of Scientific and Technical Information of China (English)

    杨长平; 吴登俊

    2009-01-01

    Using bibliometric methods, this paper statistically analyzes the research literature on bioinformatics databases indexed in Web of Science from 1995 to 2007. It examines the distribution of this literature by year, language, journal, author, document type and subject, as well as the top 10 countries and institutions by publication output, in order to understand the worldwide progress of this research field.

  19. Everything Integrated: A Framework for Associative Writing in the Web

    OpenAIRE

    Miles-Board, Timothy

    2004-01-01

    Hypermedia is the vision of the complete integration of all information in any media, including text, image, audio and video. The depth and diversity of the World-Wide Web, the most successful and farthest-reaching hypermedia system to date, has tremendous potential to provide such an integrated docuverse. This thesis explores the issues and challenges surrounding the realisation of this potential through the process of Associative Writing - the authoring and publishing of integrated hypertex...

  20. Standardized mappings--a framework to combine different semantic mappers into a standardized web-API.

    Science.gov (United States)

    Neuhaus, Philipp; Doods, Justin; Dugas, Martin

    2015-01-01

    Automatic coding of medical terms is an important, but highly complicated and laborious task. To compare and evaluate different strategies a framework with a standardized web-interface was created. Two UMLS mapping strategies are compared to demonstrate the interface. The framework is a Java Spring application running on a Tomcat application server. It accepts different parameters and returns results in JSON format. To demonstrate the framework, a list of medical data items was mapped by two different methods: similarity search in a large table of terminology codes versus search in a manually curated repository. These mappings were reviewed by a specialist. The evaluation shows that the framework is flexible (due to standardized interfaces like HTTP and JSON), performant and reliable. Accuracy of automatically assigned codes is limited (up to 40%). Combining different semantic mappers into a standardized Web-API is feasible. This framework can be easily enhanced due to its modular design.
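The framework's core idea (heterogeneous mappers behind one standardized JSON interface) can be sketched as a dispatcher that normalizes each strategy's native result. The strategy names, the UMLS code and the response fields here are invented for illustration and are not the framework's actual API.

```javascript
// Two toy mapper strategies with heterogeneous native result shapes.
const bySimilarity = (term) =>
  term.toLowerCase().startsWith('hypert') ? ['C0020538', 0.83] : null;
const byCuratedRepo = (term) =>
  term === 'Hypertension' ? { cui: 'C0020538' } : null;

// Standardized dispatcher: every strategy's answer is normalized into
// one JSON shape, so clients never see the mapper-specific formats.
function mapTerm(term, strategy) {
  let match = null;
  if (strategy === 'similarity') {
    const raw = bySimilarity(term);
    if (raw) match = { code: raw[0], confidence: raw[1] };
  } else if (strategy === 'curated') {
    const raw = byCuratedRepo(term);
    if (raw) match = { code: raw.cui, confidence: 1.0 };
  } else {
    throw new Error(`unknown strategy: ${strategy}`);
  }
  return JSON.stringify({ term, strategy, match });
}

console.log(mapTerm('Hypertension', 'curated'));
// {"term":"Hypertension","strategy":"curated","match":{"code":"C0020538","confidence":1}}
```

Behind a web-API this uniform shape is what lets reviewers compare strategies (table similarity search versus curated repository) over the same request parameters.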

  1. Spring Web MVC Framework for rapid open source J2EE application development: a case study

    Directory of Open Access Journals (Sweden)

    Praveen Gupta

    2010-06-01

    Full Text Available Today's Web application development environment is highly competitive, and applications need to be developed accurately, economically, and efficiently. We are interested in increasing productivity and decreasing complexity. This has been an underlying theme in a movement to change the way programmers approach developing Java 2 Platform, Enterprise Edition (J2EE) Web applications. Our focus is how to create J2EE-compliant software without using Enterprise JavaBeans (EJB). One of the best alternatives is the Spring Framework, which provides fewer services but is much less intrusive than EJB. The driving force behind this shift is the need for greater productivity and reduced complexity in the area of Web application software development and implementation. In this paper, we briefly describe Spring's underlying architecture and present a case study using the Spring Web MVC Framework.

  2. Bioinformatics projects supporting life-sciences learning in high schools.

    Science.gov (United States)

    Marques, Isabel; Almeida, Paulo; Alves, Renato; Dias, Maria João; Godinho, Ana; Pereira-Leal, José B

    2014-01-01

    The interdisciplinary nature of bioinformatics makes it an ideal framework to develop activities enabling enquiry-based learning. We describe here the development and implementation of a pilot project to use bioinformatics-based research activities in high schools, called "Bioinformatics@school." It includes web-based research projects that students can pursue alone or under teacher supervision and a teacher training program. The project is organized so as to enable discussion of key results between students and teachers. After successful trials in two high schools, as measured by questionnaires, interviews, and assessment of knowledge acquisition, the project is expanding by the action of the teachers involved, who are helping us develop more content and are recruiting more teachers and schools.

  3. Bioinformatics projects supporting life-sciences learning in high schools.

    Directory of Open Access Journals (Sweden)

    Isabel Marques

    2014-01-01

    Full Text Available The interdisciplinary nature of bioinformatics makes it an ideal framework to develop activities enabling enquiry-based learning. We describe here the development and implementation of a pilot project to use bioinformatics-based research activities in high schools, called "Bioinformatics@school." It includes web-based research projects that students can pursue alone or under teacher supervision and a teacher training program. The project is organized so as to enable discussion of key results between students and teachers. After successful trials in two high schools, as measured by questionnaires, interviews, and assessment of knowledge acquisition, the project is expanding by the action of the teachers involved, who are helping us develop more content and are recruiting more teachers and schools.

  4. Collaborative Framework with User Personalization for Efficient web Search : A D3 Mining approach

    Directory of Open Access Journals (Sweden)

    V.Vijayadeepa

    2011-04-01

    Full Text Available User personalization has become a more important task for web search engines. We develop a unified model to provide user personalization for efficient web search. We collect implicit feedback from users by tracking their behavior on the web page, based on their actions there. We track actions such as save, copy, bookmark, time spent, and logging into the database, which are used to build the unified model. Our model serves as a collaborative framework with which related users can mine information collaboratively in a small amount of time. Based on the feedback from users, we categorize the users and the search query. We build the unified model from the categorized information and use it to provide personalized results to the user during web search. Our methodology minimizes search time and provides a greater amount of relevant information.
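
    The implicit-feedback aggregation described above can be sketched as a simple weighted score. The action names follow the abstract, but the weights, the time factor, and the "engaged"/"casual" categories are invented for illustration; the paper's actual model is not specified at this level of detail.

```python
# Illustrative weights for the tracked implicit-feedback actions.
WEIGHTS = {"save": 3.0, "copy": 2.0, "bookmark": 2.5, "login": 1.0}
TIME_WEIGHT = 0.01  # contribution per second spent on the page

def relevance(events, seconds_on_page):
    """Aggregate tracked actions into one implicit-relevance score."""
    return sum(WEIGHTS.get(e, 0.0) for e in events) + TIME_WEIGHT * seconds_on_page

def categorize(score, threshold=3.0):
    """Coarse user/query categorization feeding the unified model."""
    return "engaged" if score >= threshold else "casual"

score = relevance(["save", "copy"], seconds_on_page=120)
print(round(score, 2), categorize(score))  # 6.2 engaged
```

    A real system would learn the weights from click-through data rather than fix them by hand, but the shape of the computation is the same.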

  5. Towards a Simple and Efficient Web Search Framework

    Science.gov (United States)

    2014-11-01

    any useful information about the various aspects of a topic. For example, for the query "raspberry pi", it covers topics such as "what is raspberry pi" ... topics generated by the LDA topic model for query "raspberry pi". One simple explanation is that web texts are too noisy and unfocused for the LDA process ... "making a raspberry pi". However, the topics generated based on the 10 top-ranked documents do not make much sense to us in terms of their keywords

  6. AN ENHANCED PRE-PROCESSING RESEARCH FRAMEWORK FOR WEB LOG DATA USING A LEARNING ALGORITHM

    Directory of Open Access Journals (Sweden)

    V.V.R. Maheswara Rao

    2011-01-01

    Full Text Available With the continued growth and proliferation of Web services and Web-based information systems, the volumes of user data have reached astronomical proportions. Before analyzing such data using web mining techniques, the web log has to be pre-processed, integrated, and transformed. As the World Wide Web is continuously and rapidly growing, web miners must utilize intelligent tools in order to find, extract, filter, and evaluate the desired information. The data pre-processing stage is the most important phase for investigation of web user usage behaviour. To do this, one must extract only the human user accesses from the web log data, which is a critical and complex task. The web log is incremental in nature, so conventional data pre-processing techniques have proved unsuitable; an extensive learning algorithm is required in order to get the desired information. This paper introduces an extensive research framework capable of pre-processing web log data completely and efficiently. The learning algorithm of the proposed framework separates human user and search engine accesses intelligently, in less time. In order to create suitable target data, the further essential pre-processing tasks of data cleansing, user identification, sessionization, and path completion are designed to work together. The framework reduces the error rate and significantly improves the learning performance of the algorithm. The work ensures the goodness of each split by using popular measures such as entropy and the Gini index. This framework helps to investigate web user usage behaviour efficiently. Experimental results supporting these claims are given in this paper.
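
    Two of the pre-processing tasks named above, data cleansing (dropping search-engine accesses) and sessionization, can be sketched concretely. This is a minimal baseline, not the paper's learning algorithm: the user-agent markers and the 30-minute timeout are conventional defaults chosen for illustration.

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)
BOT_MARKERS = ("bot", "crawler", "spider")  # crude search-engine filter

def is_human(user_agent):
    ua = user_agent.lower()
    return not any(marker in ua for marker in BOT_MARKERS)

def sessionize(entries):
    """Group cleaned log entries (ip, timestamp, url, user_agent) into
    per-user sessions, splitting on gaps longer than the timeout."""
    sessions = {}
    for ip, ts, url, ua in sorted(entries, key=lambda e: (e[0], e[1])):
        if not is_human(ua):
            continue  # data cleansing: drop search-engine accesses
        user_sessions = sessions.setdefault(ip, [[]])
        last = user_sessions[-1]
        if last and ts - last[-1][0] > SESSION_TIMEOUT:
            user_sessions.append([])   # gap too long: start a new session
            last = user_sessions[-1]
        last.append((ts, url))
    return sessions

logs = [("1.1.1.1", datetime(2011, 1, 1, 10, 0), "/a", "Mozilla/5.0"),
        ("1.1.1.1", datetime(2011, 1, 1, 12, 0), "/b", "Mozilla/5.0"),
        ("2.2.2.2", datetime(2011, 1, 1, 10, 0), "/", "Googlebot/2.1")]
print(sessionize(logs))  # one human user with two sessions; the bot is filtered out
```

    The paper's contribution is replacing the crude user-agent heuristic above with a learned classifier; the surrounding sessionization scaffolding stays essentially the same.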

  7. The Gaia Framework: Version Support In Web Based Open Hypermedia

    DEFF Research Database (Denmark)

    Kejser, Thomas; Grønbæk, Kaj

    2003-01-01

    The GAIA framework prototype, described herein, explores the possibilities and problems that arise when combining versioning and open hypermedia paradigms. It will be argued that it - by adding versioning as a separate service in the hypermedia architecture - is possible to build consistent versi...

  8. A Smart Modeling Framework for Integrating BMI-enabled Models as Web Services

    Science.gov (United States)

    Jiang, P.; Elag, M.; Kumar, P.; Peckham, S. D.; Liu, R.; Marini, L.; Hsu, L.

    2015-12-01

    Service-oriented computing provides an opportunity to couple web service models using semantic web technology. Through this approach, models that are exposed as web services can be maintained in their own local environment, making it easy for modelers to maintain and update them. In integrated modeling, the service-oriented loose-coupling approach requires (1) a set of models as web services, (2) model metadata describing the external features of a model (e.g., variable name, unit, computational grid) and (3) a model integration framework. We present an architecture for coupling self-describing web service models by utilizing a smart modeling framework. We expose models that are encapsulated with CSDMS (Community Surface Dynamics Modeling System) Basic Model Interfaces (BMI) as web services. The BMI-enabled models are self-describing, uncovering their metadata through BMI functions. After a BMI-enabled model is serviced, a client can initialize, execute and retrieve the meta-information of the model by calling its BMI functions over the web. Furthermore, a revised version of EMELI (Peckham, 2015), an Experimental Modeling Environment for Linking and Interoperability, is chosen as the framework for coupling BMI-enabled web service models. EMELI allows users to combine a set of component models into a complex model by standardizing model interfaces using BMI, and provides a set of utilities smoothing the integration process (e.g., temporal interpolation). We modify the original EMELI so that the revised modeling framework is able to initialize, execute and find the dependencies of BMI-enabled web service models. Using the revised EMELI, an example is presented of integrating a set of topoflow model components that are BMI-enabled and exposed as web services. Reference: Peckham, S.D. (2014) EMELI 1.0: An experimental smart modeling framework for automatic coupling of self-describing models, Proceedings of HIC 2014
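
    The BMI functions this record relies on can be sketched as a minimal Python class. The method names follow the CSDMS BMI convention (initialize, update, finalize, get_value), but the toy model, its single variable, and its dynamics are invented; a real BMI implementation exposes many more functions (grid, time, and unit queries among them).

```python
class ToyBmiModel:
    """A self-describing model behind a CSDMS-BMI-style interface.
    When the model is published as a web service, each of these
    methods becomes a remotely callable endpoint."""

    def initialize(self, config=None):
        self.time = 0.0
        self.storage = (config or {}).get("initial_storage", 10.0)

    def get_output_var_names(self):
        return ("water_storage",)      # metadata uncovered via BMI

    def get_value(self, name):
        if name != "water_storage":
            raise KeyError(name)
        return self.storage

    def update(self):
        self.storage *= 0.9            # toy dynamics: 10% loss per step
        self.time += 1.0

    def finalize(self):
        self.storage = None

model = ToyBmiModel()
model.initialize()
model.update()
print(model.get_value("water_storage"))
```

    Because every coupled component answers the same small set of calls, a framework like EMELI can interrogate, schedule, and connect models it has never seen before.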

  9. A Proposal of Ajax Framework for Web-based Supervisory and Control Systems

    Science.gov (United States)

    Yanagihara, Shintaro; Ishihara, Akira; Ishii, Toshinao; Kitsuki, Junichi; Seo, Kazuo

    In recent years, with the spread of Web applications and performance gains in Web browsers, demand for web-based supervisory and control (WSCADA) systems based on RIA (Rich Internet Application) technology has increased. To develop the CRUD operations (Create, Read, Update, Delete, corresponding to the basic database operations) of RIA-based web applications, various frameworks and libraries are available. To develop behavior operations, however, a lot of code must be written manually. The typical operations of WSCADA are behavior operations, so even when RIA frameworks and libraries are used to develop WSCADA, development productivity does not improve. Although conceptual models and development environments have been proposed for typical web applications consisting mostly of CRUD operations, those for WSCADA remain an unsolved problem. This paper proposes a user interface model and a development environment for the monitoring user interface program of WSCADA. We focus on enhancing the productivity of WSCADA development and propose the Monitoring User Interface Model (MUM), which extends the Model-View-Controller (MVC) model. We design an Ajax framework and a development environment based on our model. We define the DisplayItem as an advanced View and the MonitoringItem as an advanced Model, and divide the Controller into the Interaction and the Behavior. Our Ajax framework, based on standard web browser technologies, provides the mapping between conceptual model elements; we define a domain-specific language for writing the mapping and design a development environment for auto-generating the Behavior program from it. In this paper, we evaluate our model and development environment through the experimental development of a typical WSCADA. As a result, the development cost of WSCADA based on our framework is only one fifth of that based on a typical Ajax library.

  10. A Flexible Web Presentation Framework Model

    Institute of Scientific and Technical Information of China (English)

    刘一田; 刘士进

    2013-01-01

    In the Web 2.0 era, more and more websites interact with users through dynamic scripting. The heavy use of client-side scripts leads to code with poor adaptability, maintainability, and extensibility, incompatibility across mainstream browsers, still-frequent jumps between pages, and unregulated resource loading, all of which degrade application performance and user experience. This paper proposes a flexible Web presentation framework model, FWF, built on an AJAX + MVC pattern. It defines a component model and, driven by policy adapters and an event mechanism, largely solves the software adaptability problem. UI components are encapsulated in an object-oriented way, cleanly layering the Model, View, and Controller, and built-in resource loading rules shorten resource loading time, thereby improving the user experience. Extensibility of Web components is achieved through the module extension mechanism of the OSGi framework. In addition, experiments on a prototype instance demonstrate the flexibility and performance of the framework.

  11. WebMedSA: a web-based framework for segmenting and annotating medical images using biomedical ontologies

    Science.gov (United States)

    Vega, Francisco; Pérez, Wilson; Tello, Andrés.; Saquicela, Victor; Espinoza, Mauricio; Solano-Quinde, Lizandro; Vidal, Maria-Esther; La Cruz, Alexandra

    2015-12-01

    Advances in medical imaging have fostered medical diagnosis based on digital images. Consequently, the number of studies based on diagnosis from medical images is increasing; thus, collaborative work and tele-radiology systems are required to scale up effectively to this trend. We tackle the problem of collaborative access to medical images and present WebMedSA, a framework to manage large datasets of medical images. WebMedSA relies on a PACS and supports ontological annotation, as well as segmentation and visualization of the images based on their semantic description. Ontological annotations can be performed directly on the volumetric image or at different image planes (e.g., axial, coronal, or sagittal); furthermore, annotations can be complemented after applying a segmentation technique. WebMedSA is based on three main steps: (1) an RDF-ization process for extracting, anonymizing, and serializing the metadata comprised in DICOM medical images into RDF/XML; (2) integration of different biomedical ontologies (using the L-MOM library), making the approach ontology independent; and (3) segmentation and visualization of the annotated data, which is further used to generate new annotations according to expert knowledge, and their validation. Initial user evaluations suggest that WebMedSA facilitates the exchange of knowledge between radiologists and provides a basis for collaborative work among them.
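
    Step (1), the RDF-ization of DICOM metadata, can be sketched with the standard library alone. The vocabulary namespace, the tag names in the sample header, and the anonymization rule are invented for illustration; a real pipeline would read actual DICOM tags with a dedicated library and follow a published anonymization profile.

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DCM = "http://example.org/dicom#"   # hypothetical vocabulary namespace

def anonymize(header):
    """Drop direct patient identifiers before serialization."""
    return {k: v for k, v in header.items()
            if k not in ("PatientName", "PatientID")}

def to_rdf_xml(image_uri, header):
    """Serialize anonymized DICOM-style metadata as RDF/XML."""
    ET.register_namespace("rdf", RDF)
    ET.register_namespace("dcm", DCM)
    root = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF}}}Description",
                         {f"{{{RDF}}}about": image_uri})
    for tag, value in anonymize(header).items():
        ET.SubElement(desc, f"{{{DCM}}}{tag}").text = str(value)
    return ET.tostring(root, encoding="unicode")

header = {"PatientName": "DOE^JOHN", "Modality": "MR", "SliceThickness": 1.5}
print(to_rdf_xml("http://example.org/image/1", header))
```

    Once the metadata is in RDF, linking it to terms from different biomedical ontologies (step 2) is a matter of adding further triples about the same image URI.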

  12. Evaluation of Frameworks in the Development of Web Applications with Python

    Directory of Open Access Journals (Sweden)

    Jimmy Rolando Molina Ríos

    2016-09-01

    Full Text Available Due to the growing interaction of users with web systems, the need arises to combine the functionality of classic desktop applications with the accessibility and low cost of publishing web applications, leading to the choice of the framework best suited to developers' needs. This research presents a comparative analysis of frameworks that work with the Python language for web application development. The analysis was formulated using an evaluation model based on the quality characteristics proposed in the ISO/IEC 9126 standard, which in turn allow sub-characteristics, attributes, and metrics to be established for evaluating the quality of web applications. The result is an evaluation matrix for the frameworks Django, Pyramid, TurboGears, and Web2py. The results showed the strengths and weaknesses of each framework and were the basis for determining that Django is the best framework for implementing web system development. This framework met all the indicators of the evaluation model; the results presented at the end of the document determine that, taking the quality metrics into account, one can choose the framework best suited for web application development in the city of Machala. Before carrying out an evaluation, it is considered essential to know and understand how the elements to be compared work; to that end, it is advisable to use tables to compare their characteristics, drawing on reliable websites that provide documentation on the frameworks, and to employ quality standards in making the determination.

  13. A framework for sharing and integrating remote sensing and GIS models based on Web service.

    Science.gov (United States)

    Chen, Zeqiang; Lin, Hui; Chen, Min; Liu, Deer; Bao, Ying; Ding, Yulin

    2014-01-01

    Sharing and integrating Remote Sensing (RS) and Geographic Information System/Science (GIS) models is critical for developing practical application systems. Facilitating model sharing and model integration is a problem for model publishers and model users, respectively. To address this problem, a framework based on Web services for sharing and integrating RS and GIS models is proposed in this paper. The fundamental idea of the framework is to publish heterogeneous RS and GIS models as standard Web services for sharing and interoperation, and then to integrate the RS and GIS models using those Web services. For the former, a "black box" approach and a visual method are employed to facilitate publishing the models as Web services. For the latter, model integration based on a geospatial workflow and a semantics-supported matching method is introduced. Under this framework, model sharing and integration is applied to developing the Pearl River Delta water environment monitoring system. The results show that the framework can facilitate model sharing and model integration for model publishers and model users.
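
    The "black box" publishing idea, wrapping heterogeneous models behind one uniform interface so a workflow can chain them, can be sketched as follows. The wrapper class, the toy RS index model, and the GIS classification rule are all invented for illustration; they are not the paper's services.

```python
# Uniform "black box" wrapper: any callable becomes a service-like
# component with a declared interface, so a workflow engine can chain
# components without knowing their internals.
class ModelService:
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

    def execute(self, params):
        args = {k: params[k] for k in self.inputs}   # interface check
        return dict(zip(self.outputs, self.fn(**args)))

def run_workflow(services, params):
    """Chain services: each step's outputs feed later steps' inputs."""
    state = dict(params)
    for svc in services:
        state.update(svc.execute(state))
    return state

# Invented toy models: an RS vegetation index feeding a GIS classifier.
ndvi = ModelService("rs_index", ["red", "nir"], ["index"],
                    lambda red, nir: ((nir - red) / (nir + red),))
classify = ModelService("gis_classify", ["index"], ["label"],
                        lambda index: ("vegetated" if index > 0.3 else "bare",))

print(run_workflow([ndvi, classify], {"red": 0.1, "nir": 0.5}))
```

    In the actual framework each `execute` would be a remote Web service call, and the matching of one model's outputs to another's inputs would be guided by semantics rather than by shared parameter names.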

  14. 6-D, A Process Framework for the Design and Development of Web-based Systems.

    Science.gov (United States)

    Christian, Phillip

    2001-01-01

    Explores how the 6-D framework can form the core of a comprehensive systemic strategy and help provide a supporting structure for more robust design and development while allowing organizations to support whatever methods and models best suit their purpose. 6-D stands for the phases of Web design and development: Discovery, Definition, Design,…

  15. A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System

    Science.gov (United States)

    Chim, Hung; Deng, Xiaotie

    2008-01-01

    We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…

  17. Designing a Virtual Olympic Games Framework by Using Simulation in Web 2.0 Technologies

    Science.gov (United States)

    Stoilescu, Dorian

    2013-01-01

    Instructional simulation had major difficulties in the past for offering limited possibilities in practice and learning. This article proposes a link between instructional simulation and Web 2.0 technologies. More exactly, I present the design of the Virtual Olympic Games Framework (VOGF), as a significant demonstration of how interactivity in…

  18. ChEMBL Beaker: A Lightweight Web Framework Providing Robust and Extensible Cheminformatics Services

    Directory of Open Access Journals (Sweden)

    Michał Nowotka

    2014-11-01

    Full Text Available ChEMBL Beaker is an open source web framework exposing a versatile chemistry-focused API (Application Programming Interface) to support the development of new cheminformatics applications. This paper describes the current functionality offered by Beaker and outlines the future technology roadmap.

  19. FRIAA: A FRamework for Web-based Interactive Astronomy Analysis using AMQP Messaging

    Science.gov (United States)

    Young, M. D.; Gopu, A.; Hayashi, S.; Cox, J. A.

    2013-10-01

    This paper describes a web-based FRamework for Interactive Astronomy Analysis (FRIAA) being developed as part of the One Degree Imager - Pipeline, Portal, and Archive (ODI-PPA) Science Gateway. The framework provides astronomers with the ability to invoke data processing modules, including IRAF and SExtractor, on large datasets within their ODI-PPA web account, without requiring them to download the data or to access remote compute resources. Currently available functionality includes contour plots, point source detection and photometry, surface photometry, and catalog source matching. The web browser front-end, developed using the Zend PHP platform and the Bootstrap library, makes Remote Procedure Calls (RPC) to the back-end modules using AMQP-based messaging. The compute-intensive data processing codes are executed on powerful, dedicated nodes of a compute cluster at Indiana University.
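
    The RPC-over-messaging pattern the front-end uses can be sketched with standard library queues standing in for an AMQP broker. The "photometry" module and its payload are invented; the pattern itself, a request queue plus correlation-ID-keyed reply queues, is what matters.

```python
import queue
import threading
import uuid

request_q = queue.Queue()
reply_qs = {}  # correlation id -> per-call reply queue

def worker():
    """Back-end module: consumes requests, runs the job, replies."""
    while True:
        corr_id, module, payload = request_q.get()
        if module == "photometry":                 # stand-in for a real module
            result = {"flux": sum(payload["pixels"])}
        else:
            result = {"error": f"unknown module {module}"}
        reply_qs[corr_id].put(result)
        request_q.task_done()

def rpc_call(module, payload):
    """Front-end: publish a request, block on the correlated reply."""
    corr_id = uuid.uuid4().hex
    reply_qs[corr_id] = queue.Queue()
    request_q.put((corr_id, module, payload))
    try:
        return reply_qs[corr_id].get(timeout=5)
    finally:
        del reply_qs[corr_id]

threading.Thread(target=worker, daemon=True).start()
print(rpc_call("photometry", {"pixels": [1, 2, 3]}))  # {'flux': 6}
```

    With a real AMQP broker the correlation ID and reply queue travel in the message properties, which lets the browser-facing PHP front-end and the Python compute nodes live on different machines.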

  20. A theorem proving framework for the formal verification of Web Services Composition

    Directory of Open Access Journals (Sweden)

    Petros Papapanagiotou

    2011-08-01

    Full Text Available We present a rigorous framework for the composition of Web Services within a higher order logic theorem prover. Our approach is based on the proofs-as-processes paradigm that enables inference rules of Classical Linear Logic (CLL) to be translated into pi-calculus processes. In this setting, composition is achieved by representing available web services as CLL sentences, proving the requested composite service as a conjecture, and then extracting the constructed pi-calculus term from the proof. Our framework, implemented in HOL Light, not only uses an expressive logic that allows us to incorporate multiple Web Services properties in the composition process, but also provides guarantees of soundness and correctness for the composition.

  1. A theorem proving framework for the formal verification of Web Services Composition

    CERN Document Server

    Papapanagiotou, Petros; 10.4204/EPTCS.61.1

    2011-01-01

    We present a rigorous framework for the composition of Web Services within a higher order logic theorem prover. Our approach is based on the proofs-as-processes paradigm that enables inference rules of Classical Linear Logic (CLL) to be translated into pi-calculus processes. In this setting, composition is achieved by representing available web services as CLL sentences, proving the requested composite service as a conjecture, and then extracting the constructed pi-calculus term from the proof. Our framework, implemented in HOL Light, not only uses an expressive logic that allows us to incorporate multiple Web Services properties in the composition process, but also provides guarantees of soundness and correctness for the composition.

  2. Toward a More Flexible Web-Based Framework for Multidisciplinary Design

    Science.gov (United States)

    Rogers, J. L.; Salas, A. O.

    1999-01-01

    In today's competitive environment, both industry and government agencies are under pressure to reduce the time and cost of multidisciplinary design projects. New tools have been introduced to assist in this process by facilitating the integration of and communication among diverse disciplinary codes. One such tool, a framework for multidisciplinary design, is defined as a hardware-software architecture that enables integration, execution, and communication among diverse disciplinary processes. An examination of current frameworks reveals weaknesses in various areas, such as sequencing, monitoring, controlling, and displaying the design process. The objective of this research is to explore how Web technology can improve these areas of weakness and lead toward a more flexible framework. This article describes a Web-based system that optimizes and controls the execution sequence of design processes in addition to monitoring the project status and displaying the design results.

  3. A user-centred evaluation framework for the Sealife semantic web browsers.

    Science.gov (United States)

    Oliver, Helen; Diallo, Gayo; de Quincey, Ed; Alexopoulou, Dimitra; Habermann, Bianca; Kostkova, Patty; Schroeder, Michael; Jupp, Simon; Khelif, Khaled; Stevens, Robert; Jawaheer, Gawesh; Madle, Gemma

    2009-10-01

    Semantically-enriched browsing has enhanced the browsing experience by providing contextualized, dynamically generated Web content and quicker access to searched-for information. However, adoption of Semantic Web technologies is limited, and user perception outside the IT domain remains sceptical. Furthermore, little attention has been given to evaluating semantic browsers with real users to demonstrate the enhancements and obtain valuable feedback. The Sealife project investigates semantic browsing and its application to the life science domain. Sealife's main objective is to develop the notion of context-based information integration by extending three existing Semantic Web browsers (SWBs) to link the existing Web to the eScience infrastructure. This paper describes a user-centred evaluation framework that was developed to evaluate the Sealife SWBs, eliciting feedback on users' perceptions of ease of use and information findability. Three sources of data: i) web server logs; ii) user questionnaires; and iii) semi-structured interviews were analysed, and comparisons were made between each browser and a control system. The evaluation framework successfully elicited users' perceptions of the three distinct SWBs. The results indicate that the browser with the most mature and polished interface was rated higher for usability, and semantic links were used by the users of all three browsers. Confirmation or contradiction of our original hypotheses in relation to SWBs is detailed, along with observations of implementation issues.

  4. Pragmatic service development and customisation with the CEDA OGC Web Services framework

    Science.gov (United States)

    Pascoe, Stephen; Stephens, Ag; Lowe, Dominic

    2010-05-01

    The CEDA OGC Web Services framework (COWS) emphasises rapid service development by providing a lightweight layer of OGC web service logic on top of Pylons, a mature web application framework for the Python language. This approach gives developers a flexible web service development environment without compromising access to the full range of web application tools and patterns: Model-View-Controller paradigm, XML templating, Object-Relational-Mapper integration and authentication/authorization. We have found this approach useful for exploring evolving standards and implementing protocol extensions to meet the requirements of operational deployments. This paper outlines how COWS is being used to implement customised WMS, WCS, WFS and WPS services in a variety of web applications from experimental prototypes to load-balanced cluster deployments serving 10-100 simultaneous users. In particular we will cover 1) The use of Climate Science Modeling Language (CSML) in complex-feature aware WMS, WCS and WFS services, 2) Extending WMS to support applications with features specific to earth system science and 3) A cluster-enabled Web Processing Service (WPS) supporting asynchronous data processing. The COWS WPS underpins all backend services in the UK Climate Projections User Interface where users can extract, plot and further process outputs from a multi-dimensional probabilistic climate model dataset. The COWS WPS supports cluster job execution, result caching, execution time estimation and user management. The COWS WMS and WCS components drive the project-specific NCEO and QESDI portals developed by the British Atmospheric Data Centre. These portals use CSML as a backend description format and implement features such as multiple WMS layer dimensions and climatology axes that are beyond the scope of general purpose GIS tools and yet vital for atmospheric science applications.

  5. A Framework to Enhance Quality of Service for Content Delivery Network Using Web Services: A Review

    Directory of Open Access Journals (Sweden)

    K.Manivannan

    2011-09-01

    Full Text Available Content Delivery Networks (CDNs) are anticipated to provide better-performing delivery of content on the Internet through worldwide coverage, which would otherwise be a barrier for new content delivery network providers. The emergence of the Web as a ubiquitous medium for sharing content and services has led to the rapid growth of the Internet. At the same time, the number of users accessing Web-based content and services is growing exponentially. This has placed a heavy demand on Internet bandwidth and on the Web systems hosting content and application services. As a result, many Web sites are unable to manage this demand and offer their services in a timely manner. Content Delivery Networks (CDNs) have emerged to overcome these limitations by offering infrastructure and mechanisms to deliver content and services in a scalable manner, enhancing users' Web experience. The proposed research provides a framework designed to enhance the QoS of Web service processes for real-time servicing. QoS parameters of various domains can be combined to provide differentiated services, dynamically allocating available resources among customers while delivering high-quality real-time multimedia content. While a customer is accessing the service, real-time streams can be adapted to highly variable network conditions to give suitable quality in spite of factors upsetting quality of service. To reach these goals, web service processes are adapted to supply more information for determining the quality and size of the delivered object. The framework includes components for QoS monitoring and adaptation, prediction of QoS faults, and recovery actions in case of failure. The aim of this research is to encourage research on the quality of composite services in service-oriented architectures with security measures.

  6. Unified framework for generation of 3D web visualization for mechatronic systems

    Science.gov (United States)

    Severa, O.; Goubej, M.; Konigsmarkova, J.

    2015-11-01

    The paper deals with development of a unified framework for generation of 3D visualizations of complex mechatronic systems. It provides a high-fidelity representation of executed motion by allowing direct employment of a machine geometry model acquired from a CAD system. Open-architecture multi-platform solution based on latest web standards is achieved by utilizing a web browser as a final 3D renderer. The results are applicable both for simulations and development of real-time human machine interfaces. Case study of autonomous underwater vehicle control is provided to demonstrate the applicability of the proposed approach.

  7. Neuroimaging, Genetics, and Clinical Data Sharing in Python Using the CubicWeb Framework

    Science.gov (United States)

    Grigis, Antoine; Goyard, David; Cherbonnier, Robin; Gareau, Thomas; Papadopoulos Orfanos, Dimitri; Chauvat, Nicolas; Di Mascio, Adrien; Schumann, Gunter; Spooren, Will; Murphy, Declan; Frouin, Vincent

    2017-01-01

    In neurosciences or psychiatry, the emergence of large multi-center population imaging studies raises numerous technological challenges. From distributed data collection, across different institutions and countries, to final data publication service, one must handle the massive, heterogeneous, and complex data from genetics, imaging, demographics, or clinical scores. These data must be both efficiently obtained and downloadable. We present a Python solution, based on the CubicWeb open-source semantic framework, aimed at building population imaging study repositories. In addition, we focus on the tools developed around this framework to overcome the challenges associated with data sharing and collaborative requirements. We describe a set of three highly adaptive web services that transform the CubicWeb framework into a (1) multi-center upload platform, (2) collaborative quality assessment platform, and (3) publication platform endowed with massive-download capabilities. Two major European projects, IMAGEN and EU-AIMS, are currently supported by the described framework. We also present a Python package that enables end users to remotely query neuroimaging, genetics, and clinical data from scripts. PMID:28360851

  8. Towards a framework for assessment and management of cumulative human impacts on marine food webs.

    Science.gov (United States)

    Giakoumi, Sylvaine; Halpern, Benjamin S; Michel, Loïc N; Gobert, Sylvie; Sini, Maria; Boudouresque, Charles-François; Gambi, Maria-Cristina; Katsanevakis, Stelios; Lejeune, Pierre; Montefalcone, Monica; Pergent, Gerard; Pergent-Martini, Christine; Sanchez-Jerez, Pablo; Velimirov, Branko; Vizzini, Salvatrice; Abadie, Arnaud; Coll, Marta; Guidetti, Paolo; Micheli, Fiorenza; Possingham, Hugh P

    2015-08-01

    Effective ecosystem-based management requires understanding ecosystem responses to multiple human threats, rather than focusing on single threats. To understand ecosystem responses to anthropogenic threats holistically, it is necessary to know how threats affect different components within ecosystems and ultimately alter ecosystem functioning. We used a case study of a Mediterranean seagrass (Posidonia oceanica) food web and expert knowledge elicitation in an application of the initial steps of a framework for assessment of cumulative human impacts on food webs. We produced a conceptual seagrass food web model, determined the main trophic relationships, identified the main threats to the food web components, and assessed the components' vulnerability to those threats. Some threats had high (e.g., coastal infrastructure) or low impacts (e.g., agricultural runoff) on all food web components, whereas others (e.g., introduced carnivores) had very different impacts on each component. Partitioning the ecosystem into its components enabled us to identify threats previously overlooked and to reevaluate the importance of threats commonly perceived as major. By incorporating this understanding of system vulnerability with data on changes in the state of each threat (e.g., decreasing domestic pollution and increasing fishing) into a food web model, managers may be better able to estimate and predict cumulative human impacts on ecosystems and to prioritize conservation actions. © 2015 Society for Conservation Biology.
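    The framework's initial steps, assessing each component's vulnerability to each threat and combining that with the state of each threat, can be sketched as a simple weighted sum per food-web component. All threat intensities and vulnerability weights below are hypothetical illustrations, not the study's elicited values.

    ```python
    # Illustrative sketch of a cumulative human impact score: for each food-web
    # component, sum threat intensity multiplied by the component's vulnerability
    # to that threat. All numbers are hypothetical, not from the study.
    def cumulative_impact(threat_intensity, vulnerability):
        """threat_intensity: {threat: 0..1}; vulnerability: {component: {threat: weight}}.
        Returns {component: cumulative impact score}."""
        return {
            component: sum(threat_intensity[t] * w for t, w in weights.items())
            for component, weights in vulnerability.items()
        }

    threats = {"coastal_infrastructure": 0.8, "agricultural_runoff": 0.3}
    vuln = {
        "seagrass":   {"coastal_infrastructure": 0.9, "agricultural_runoff": 0.2},
        "carnivores": {"coastal_infrastructure": 0.4, "agricultural_runoff": 0.1},
    }
    scores = cumulative_impact(threats, vuln)
    print(scores)  # seagrass scores higher than carnivores here
    ```

    Partitioning the ecosystem into components, as the record describes, corresponds to giving each component its own row of vulnerability weights rather than a single ecosystem-wide value.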

  9. A Framework For Extracting Information From Web Using VTD-XML's XPath

    Directory of Open Access Journals (Sweden)

    C. Subhashini

    2012-03-01

    Full Text Available The exponential growth of the WWW (World Wide Web) has created a vast pool of information as well as several challenges, such as extracting potentially useful and unknown information from it. Many websites are built with HTML; because of its unstructured layout, it is difficult to obtain effective and precise data from the web using HTML alone. The advent of XML (Extensible Markup Language) offers a better solution for extracting useful knowledge from the WWW, because XML is a general-purpose specification for exchanging data over the Web. In this paper, a framework is suggested to extract data from the web. The semi-structured data in a web page is first transformed into well-structured data using standard XML technologies; then the parsing technique called extended VTD-XML (Virtual Token Descriptor for XML), together with an XPath implementation, is used to extract data from the resulting well-structured XML document.
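    VTD-XML is a Java parser; purely as an illustration of the XPath-driven extraction step the abstract describes, the same idea can be sketched with Python's standard library, which supports a limited XPath subset. The document and expressions are invented.

    ```python
    # Illustrative sketch only: XPath-driven extraction over well-structured XML,
    # mimicked with Python's stdlib ElementTree (limited XPath subset).
    import xml.etree.ElementTree as ET

    # Semi-structured web data already transformed into well-formed XML.
    XML_DOC = """
    <products>
      <product category="book"><name>XML Basics</name><price>25</price></product>
      <product category="book"><name>XPath Guide</name><price>30</price></product>
      <product category="toy"><name>Robot</name><price>40</price></product>
    </products>
    """

    def extract(xml_text, xpath):
        """Return the text of elements matching a simple XPath expression."""
        root = ET.fromstring(xml_text)
        return [el.text for el in root.findall(xpath)]

    # Extract the names of all products in the "book" category.
    book_names = extract(XML_DOC, ".//product[@category='book']/name")
    print(book_names)  # ['XML Basics', 'XPath Guide']
    ```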

  10. The secondary metabolite bioinformatics portal

    DEFF Research Database (Denmark)

    Weber, Tilmann; Kim, Hyun Uk

    2016-01-01

    . In this context, this review gives a summary of tools and databases that currently are available to mine, identify and characterize natural product biosynthesis pathways and their producers based on ‘omics data. A web portal called Secondary Metabolite Bioinformatics Portal (SMBP at http...

  11. Applications and Methods Utilizing the Simple Semantic Web Architecture and Protocol (SSWAP) for Bioinformatics Resource Discovery and Disparate Data and Service Integration

    Science.gov (United States)

    Scientific data integration and computational service discovery are challenges for the bioinformatic community. This process is made more difficult by the separate and independent construction of biological databases, which makes the exchange of scientific data between information resources difficu...

  12. Analysis of Web Application Architecture Using the Model View Controller (MVC) Pattern in the Java Server Faces Framework

    Directory of Open Access Journals (Sweden)

    Gunawan Gunawan

    2016-06-01

    Full Text Available Web applications with a high volume of data transactions require particular attention to architectural patterns in order to keep system performance optimal when many users access the system at the same time. This study analyzes the performance of a web application architecture using model 2 (MVC) with the Java Server Faces (JSF) framework, with model 1 as the baseline for comparison. The method used is load and scalability testing, carried out in two ways: measuring response time as the database size increases, and measuring response time as the number of concurrent users increases, with the ramp-up period configured in Apache JMeter. The analysis shows that, on average, the model 1 architecture responds to user requests faster and more efficiently than model 2 (MVC).

  13. A new web-based framework development for fuzzy multi-criteria group decision-making.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Fuzzy multi-criteria group decision making (FMCGDM) is usually used when a group of decision-makers faces imprecise data or linguistic variables in solving a problem. However, the process involves methods that require time-consuming calculations, growing with the number of criteria, alternatives and decision-makers, to reach the optimal solution. In this study, a web-based FMCGDM framework that offers decision-makers a fast and reliable response service is proposed. The proposed framework includes commonly used tools for multi-criteria decision-making problems such as the fuzzy Delphi, fuzzy AHP and fuzzy TOPSIS methods. Integrating these methods makes it possible to exploit each method's strengths and compensate for its weaknesses. Finally, a case study of location selection for landfill waste in Morocco is performed to demonstrate how this framework can facilitate the decision-making process. The results demonstrate that the proposed framework successfully accomplishes the goal of this study.
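    As a simplified illustration of the ranking step, the classic (crisp) TOPSIS closeness coefficient can be computed as follows; the fuzzy variants used in the framework extend this with fuzzy numbers. The alternatives, weights, and criteria below are hypothetical.

    ```python
    # Illustrative sketch: classic (crisp) TOPSIS ranking. Matrix rows are
    # alternatives, columns are criteria. Hypothetical data, not from the study.
    import math

    def topsis(matrix, weights, benefit):
        """Score alternatives; benefit[j] is True if criterion j is to maximize."""
        n_alt, n_crit = len(matrix), len(matrix[0])
        # 1. Vector-normalize each column and apply criterion weights.
        norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
        v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n_crit)]
             for i in range(n_alt)]
        # 2. Ideal and anti-ideal points per criterion.
        ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
        anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
        # 3. Closeness coefficient: distance to anti-ideal over total distance.
        scores = []
        for row in v:
            d_pos, d_neg = math.dist(row, ideal), math.dist(row, anti)
            scores.append(d_neg / (d_pos + d_neg))
        return scores

    # Three candidate landfill sites scored on cost (minimize) and capacity (maximize).
    scores = topsis([[250, 16], [200, 20], [300, 12]],
                    weights=[0.5, 0.5], benefit=[False, True])
    best = max(range(len(scores)), key=scores.__getitem__)
    print(scores, best)  # site index 1 dominates and ranks highest
    ```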

  14. A Framework for Automated Testing of JavaScript Web Applications

    DEFF Research Database (Denmark)

    Artzi, Shay; Dolby, Julian; Jensen, Simon Holm;

    2011-01-01

    Current practice in testing JavaScript web applications requires manual construction of test cases, which is difficult and tedious. We present a framework for feedback-directed automated test generation for JavaScript in which execution is monitored to collect information that directs the test...... generator towards inputs that yield increased coverage. We implemented several instantiations of the framework, corresponding to variations on feedback-directed random testing, in a tool called Artemis. Experiments on a suite of JavaScript applications demonstrate that a simple instantiation...

  15. Web of Objects Based Ambient Assisted Living Framework for Emergency Psychiatric State Prediction

    Science.gov (United States)

    Alam, Md Golam Rabiul; Abedin, Sarder Fakhrul; Al Ameen, Moshaddique; Hong, Choong Seon

    2016-01-01

    Ambient assisted living can facilitate optimum health and wellness by aiding physical, mental and social well-being. In this paper, patients’ psychiatric symptoms are collected through lightweight biosensors and web-based psychiatric screening scales in a smart home environment and then analyzed through machine learning algorithms to provide ambient intelligence in a psychiatric emergency. The psychiatric states are modeled through a Hidden Markov Model (HMM), and the model parameters are estimated using a Viterbi path counting and scalable Stochastic Variational Inference (SVI)-based training algorithm. The most likely psychiatric state sequence of the corresponding observation sequence is determined, and an emergency psychiatric state is predicted through the proposed algorithm. Moreover, to enable personalized psychiatric emergency care, a web of objects-based service framework is proposed for a smart-home environment. In this framework, the biosensor observations and the psychiatric rating scales are objectified and virtualized in the web space. Then, the web of objects of sensor observations and psychiatric rating scores are used to assess the dweller’s mental health status and to predict an emergency psychiatric state. The proposed psychiatric state prediction algorithm reported 83.03 percent prediction accuracy in an empirical performance study. PMID:28360851
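    The record models psychiatric states with an HMM and determines the most likely state sequence for an observation sequence. While the paper's training combines Viterbi path counting with SVI, the decoding step itself is the standard Viterbi algorithm, sketched here with toy states and probabilities that are not the paper's parameters.

    ```python
    # Illustrative sketch: standard Viterbi decoding of the most likely hidden
    # state sequence. States, observations and probabilities are toy values.
    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Return (probability, most likely state path) for an observation sequence."""
        V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
        for o in obs[1:]:
            layer = {}
            for s in states:
                # Best predecessor for state s at this step.
                prob, prev = max((V[-1][p][0] * trans_p[p][s], p) for p in states)
                layer[s] = (prob * emit_p[s][o], V[-1][prev][1] + [s])
            V.append(layer)
        return max(V[-1].values())

    states = ("stable", "emergency")
    start_p = {"stable": 0.8, "emergency": 0.2}
    trans_p = {"stable": {"stable": 0.7, "emergency": 0.3},
               "emergency": {"stable": 0.4, "emergency": 0.6}}
    emit_p = {"stable": {"calm": 0.6, "agitated": 0.4},
              "emergency": {"calm": 0.1, "agitated": 0.9}}

    prob, path = viterbi(("calm", "agitated", "agitated"),
                         states, start_p, trans_p, emit_p)
    print(path)  # ['stable', 'emergency', 'emergency']
    ```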

  17. An efficient and flexible web services-based multidisciplinary design optimisation framework for complex engineering systems

    Science.gov (United States)

    Li, Liansheng; Liu, Jihong

    2012-08-01

    Multidisciplinary design optimisation (MDO) involves multiple disciplines, multiple coupled relationships and multiple processes, and is carried out by specialists dispersed geographically on heterogeneous platforms with different analysis and optimisation tools. Difficulties with product design data integration and data sharing among the participants seriously hamper the development and application of MDO in enterprises. Therefore, a multi-hierarchical integrated product design data model (MH-iPDM) supporting MDO in the web environment and a web services-based multidisciplinary design optimisation (Web-MDO) framework are proposed in this article. Based on enabling technologies including web services, ontology, workflow, agents, XML and evidence theory, the proposed framework enables geographically dispersed designers to work collaboratively in the MDO environment. The ontology-based workflow enables the logical reasoning of MDO to be processed dynamically. Evidence theory-based uncertainty reasoning and analysis supports the quantification, aggregation and analysis of conflicting epistemic uncertainty from multiple sources, which improves product quality. Finally, a proof-of-concept prototype system is developed using J2EE, and an example of a supersonic business jet is demonstrated to verify the autonomous execution of MDO strategies and the effectiveness of the proposed approach.

  18. A Web Application Based on Spring MVC Framework

    Institute of Scientific and Technical Information of China (English)

    舒礼莲

    2013-01-01

    This article introduces a method of developing Web applications using the Spring MVC framework. The controller is the core of the MVC pattern, so we focus on the configuration of the front controller, DispatcherServlet: how the front controller dispatches user requests to other controllers, how user-interface parameters are bound to models, how model data is accessed, and how output is rendered to view pages.

  19. Semi-automatic web service composition for the life sciences using the BioMoby semantic web framework.

    Science.gov (United States)

    DiBernardo, Michael; Pottinger, Rachel; Wilkinson, Mark

    2008-10-01

    Researchers in the life-sciences are currently limited to small-scale informatics experiments and analyses because of the lack of interoperability among life-sciences web services. This limitation can be addressed by annotating services and their interfaces with semantic information, so that interoperability problems can be reasoned about programmatically. The Moby semantic web framework is a popular and mature platform that is used for this purpose. However, the number of services that are available to select from when building a workflow is becoming unmanageable for users. As such, attempts have been made to assist with service selection and composition. These tasks fall under the general label of automated service composition. We present a prototype workflow assembly client that reduces the number of choices that users have to make by (1) restricting the overall set of services presented to them and (2) ranking services so that the most desirable ones are presented first. We demonstrate via an evaluation of this prototype that a unification of relatively simple techniques can rank desirable services highly while maintaining interactive response times.
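    Restricting the candidate service set hinges on matching semantic types between one service's output and another's input. A much-simplified sketch of such type-based chaining follows; the concept hierarchy and services are invented for illustration, not BioMoby's actual registry contents.

    ```python
    # Illustrative sketch: selecting chainable services by matching semantic
    # output/input concepts. Hypothetical ontology and services.
    SUBCLASS = {"ProteinSequence": "Sequence", "Sequence": "Data"}

    def is_a(concept, target):
        """True if concept equals target or is a (transitive) subclass of it."""
        while concept is not None:
            if concept == target:
                return True
            concept = SUBCLASS.get(concept)
        return False

    services = [
        {"name": "fetch", "inputs": ["Accession"], "output": "ProteinSequence"},
        {"name": "align", "inputs": ["Sequence"], "output": "Alignment"},
    ]

    def chainable(producer, consumer):
        """A producer can feed a consumer if its output is a (sub)concept of
        some consumer input."""
        return any(is_a(producer["output"], c) for c in consumer["inputs"])

    # fetch yields a ProteinSequence, a subclass of Sequence, so align accepts it.
    print(chainable(services[0], services[1]))  # True
    ```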

  20. Common Web Mapping and Mobile Device Framework for Display of NASA Real-time Data

    Science.gov (United States)

    Burks, Jason

    2013-01-01

    Scientists have strategic goals to deliver their unique datasets and research both to collaborative partners and, more broadly, to the public. These datasets can have a significant impact locally and globally, as has been shown by the success of the NASA Short-term Prediction Research and Transition (SPoRT) Center and SERVIR programs at Marshall Space Flight Center. Each of these organizations provides near real-time data at the best resolution possible to address concerns of the operational weather forecasting community (SPoRT) and to support environmental monitoring and disaster assessment (SERVIR). However, one of the biggest struggles in delivering the data to these and other Earth science community partners is formatting the product to fit into an end user's Decision Support System (DSS). This problem can be a significant impediment to transitioning research to operational environments, especially for disaster response, where delivery time is critical. The decision makers, in addition to the DSS, need seamless access to these same datasets from a web browser or a mobile phone when they are away from their DSS or for personnel out in the field. A framework has been developed for the MSFC Earth Science program that enables seamless delivery of scientific data to end users in multiple formats. The first format is an open geospatial format, Web Mapping Service (WMS), which is easily integrated into most DSSs. The second format is a web browser display, which can be embedded within any MSFC Science web page with just a few lines of web page coding. The third format is accessible in the form of iOS and Android native mobile applications that can be downloaded from an "app store". The framework has reduced the level of effort needed to bring new and existing NASA datasets to each of these end user platforms and helps extend the reach of science data.
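    The first delivery format is OGC WMS, which DSS clients consume via GetMap requests. A minimal sketch of assembling such a request with the Python standard library follows; the endpoint and layer name are hypothetical placeholders, not actual SPoRT/SERVIR services.

    ```python
    # Illustrative sketch: building an OGC WMS 1.3.0 GetMap request URL with the
    # Python stdlib. Endpoint and layer name are hypothetical placeholders.
    from urllib.parse import urlencode

    def wms_getmap_url(endpoint, layer, bbox, width, height,
                       crs="EPSG:4326", fmt="image/png"):
        """Assemble a WMS GetMap URL for one layer over a bounding box."""
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": layer,
            "CRS": crs,
            "BBOX": ",".join(str(v) for v in bbox),  # corners in CRS axis order
            "WIDTH": width,
            "HEIGHT": height,
            "FORMAT": fmt,
        }
        return endpoint + "?" + urlencode(params)

    url = wms_getmap_url("https://example.org/wms", "composite_imagery",
                         bbox=(30.0, -90.0, 40.0, -80.0), width=512, height=512)
    print(url)
    ```

    A DSS that speaks WMS only needs the endpoint and layer name; the client library builds requests like this one behind the scenes.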

  2. Research Based on WEB Development Framework

    Institute of Scientific and Technical Information of China (English)

    杨毅

    2015-01-01

    Program development in the framework of choice, always is a benevolent, wise thing. In particular, the development framework of WEB layer, the number is very much, and the characteristics of the common MVP, AOP, MVC, ORM, MVVM, etc., the article will be mainly for MVP, MVVM, MVC three framework for analysis, and describes its advantages and disadvantages, to facilitate the development of personnel selection.%程序开发框架的选择,始终是个仁者见仁、智者见智的事情。尤其是WEB层的开发框架,数量非常多,而且各有特色,常见的有MVC、MVP、AOP、ORM、MVVM等,文章将主要对MVC、MVP、MVVM三种框架进行分析,叙述其优缺点,以方便开发人员进行选择。

  3. Design-Grounded Assessment: A Framework and a Case Study of Web 2.0 Practices in Higher Education

    Science.gov (United States)

    Ching, Yu-Hui; Hsu, Yu-Chang

    2011-01-01

    This paper synthesizes three theoretical perspectives, including sociocultural theory, distributed cognition, and situated cognition, into a framework to guide the design and assessment of Web 2.0 practices in higher education. In addition, this paper presents a case study of Web 2.0 practices. Thirty-seven online graduate students participated in…

  4. An Implementation Framework for Semantic Web Service Composition

    Institute of Scientific and Technical Information of China (English)

    郭颂; 柳春华; 周明林

    2011-01-01

    A new framework for semantic-based Web service composition is proposed after analyzing the heterogeneity of existing Web services provided by disparate providers. The framework details the key technologies of Web service composition, such as the semantic selection strategy for Web services, the design of composition relationships, and the data transmission and invocation mechanisms between the member Web services inside a composite Web service. By describing existing Web services semantically, the framework resolves their syntactic heterogeneity and effectively improves the quality of Web service composition.

  5. Web-Enabled Framework for Real-Time Scheduler Simulator: A Teaching Tool

    Directory of Open Access Journals (Sweden)

    C. Yaashuwanth

    2010-01-01

    Full Text Available Problem statement: A Real-Time System (RTS) is one which controls an environment by receiving data, processing it, and returning the results quickly enough to affect the functioning of the environment at that time. The main objective of this research was to develop an architectural model for simulating real-time tasks in a distributed environment through the web, and to compare various scheduling algorithms. The proposed model can be used for preprogrammed scheduling policies on uniprocessor systems and provides a user-friendly Graphical User Interface (GUI). Approach: Although many scheduling algorithms have been developed, only a few of them are implemented in real-time applications. To use, test and evaluate a scheduling policy, it must be integrated into an operating system, which is a complex task. Simulation is an alternative way to evaluate a scheduling policy; unfortunately, few real-time scheduling simulators have been developed to date, and most of them require the use of a specific simulation language. Results: Task ID, deadline, priority, period, computation time and phase are the input task attributes to the scheduler simulator; a chronograph imitating the real-time execution of the input task set and computational statistics of the schedule are the output. Conclusion: The web-enabled framework proposed in this study enables the developer to evaluate the schedulability of a real-time application. Numerous benefits were cited in support of web-based deployment. The proposed framework can be used as an invaluable teaching tool; further, its GUI allows easy comparison of existing scheduling policies and simulation of the behavior and suitability of custom-defined schedulers for real-time applications.
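    A scheduler simulator of this kind takes period and computation time as inputs and produces an execution trace plus deadline statistics. A minimal sketch of one preprogrammed uniprocessor policy, rate-monotonic scheduling (chosen here purely for illustration, with a hypothetical task set):

    ```python
    # Illustrative sketch: a tiny preemptive Rate-Monotonic (RM) simulation for
    # periodic tasks on a uniprocessor, with implicit deadlines (= period).
    def simulate_rm(tasks, horizon):
        """tasks: list of (name, period, wcet). Returns (timeline, misses):
        which task ran each tick, and jobs unfinished at their next release.
        RM priority: shorter period = higher priority."""
        remaining = {name: 0 for name, _, _ in tasks}
        next_release = {name: 0 for name, _, _ in tasks}
        timeline, misses = [], []
        for t in range(horizon):
            for name, period, wcet in tasks:
                if t == next_release[name]:
                    if remaining[name] > 0:      # previous job missed its deadline
                        misses.append((name, t))
                    remaining[name] = wcet
                    next_release[name] += period
            ready = [(period, name) for name, period, _ in tasks if remaining[name] > 0]
            if ready:
                _, run = min(ready)              # highest RM priority runs
                remaining[run] -= 1
                timeline.append(run)
            else:
                timeline.append(None)            # idle tick
        return timeline, misses

    timeline, misses = simulate_rm([("T1", 4, 1), ("T2", 6, 2)], horizon=12)
    print(timeline, misses)  # no deadline misses for this task set
    ```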

  6. BP-Broker use-cases in the UncertWeb framework

    Science.gov (United States)

    Roncella, Roberto; Bigagli, Lorenzo; Schulz, Michael; Stasch, Christoph; Proß, Benjamin; Jones, Richard; Santoro, Mattia

    2013-04-01

    The UncertWeb framework is a distributed, Web-based Information and Communication Technology (ICT) system to support scientific data modeling in presence of uncertainty. We designed and prototyped a core component of the UncertWeb framework: the Business Process Broker. The BP-Broker implements several functionalities, such as: discovery of available processes/BPs, preprocessing of a BP into its executable form (EBP), publication of EBPs and their execution through a workflow-engine. According to the Composition-as-a-Service (CaaS) approach, the BP-Broker supports discovery and chaining of modeling resources (and processing resources in general), providing the necessary interoperability services for creating, validating, editing, storing, publishing, and executing scientific workflows. The UncertWeb project targeted several scenarios, which were used to evaluate and test the BP-Broker. The scenarios cover the following environmental application domains: biodiversity and habitat change, land use and policy modeling, local air quality forecasting, and individual activity in the environment. This work reports on the study of a number of use-cases, by means of the BP-Broker, namely: - eHabitat use-case: implements a Monte Carlo simulation performed on a deterministic ecological model; an extended use-case supports inter-comparison of model outputs; - FERA use-case: is composed of a set of models for predicting land-use and crop yield response to climatic and economic change; - NILU use-case: is composed of a Probabilistic Air Quality Forecasting model for predicting concentrations of air pollutants; - Albatross use-case: includes two model services for simulating activity-travel patterns of individuals in time and space; - Overlay use-case: integrates the NILU scenario with the Albatross scenario to calculate the exposure to air pollutants of individuals. Our aim was to prove the feasibility of describing composite modeling processes with a high-level, abstract

  7. SPOT--towards temporal data mining in medicine and bioinformatics.

    Science.gov (United States)

    Tusch, Guenter; Bretl, Chris; O'Connor, Martin; Das, Amar

    2008-11-06

    Mining large clinical and bioinformatics databases often includes exploration of temporal data. E.g., in liver transplantation, researchers might look for patients with an unusual time pattern of potential complications of the liver. In Knowledge-based Temporal Abstraction time-stamped data points are transformed into an interval-based representation. We extended this framework by creating an open-source platform, SPOT. It supports the R statistical package and knowledge representation standards (OWL, SWRL) using the open source Semantic Web tool Protégé-OWL.
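    The core of knowledge-based temporal abstraction, transforming time-stamped data points into an interval-based representation, can be sketched as follows. The thresholds and readings are hypothetical and not from SPOT.

    ```python
    # Illustrative sketch of knowledge-based temporal abstraction: time-stamped
    # lab values are abstracted into qualitative intervals (e.g. "high" episodes).
    def abstract_intervals(samples, is_abnormal):
        """samples: list of (time, value), sorted by time.
        Returns (start, end) intervals of consecutive abnormal samples."""
        intervals, start, last = [], None, None
        for t, v in samples:
            if is_abnormal(v):
                if start is None:
                    start = t
                last = t
            elif start is not None:
                intervals.append((start, last))
                start = last = None
        if start is not None:                 # close a trailing open interval
            intervals.append((start, last))
        return intervals

    # Daily bilirubin after transplantation; "high" means above 2.0 mg/dL
    # (hypothetical cutoff and values).
    readings = [(1, 1.1), (2, 2.5), (3, 3.0), (4, 1.4), (5, 2.2), (6, 2.1)]
    episodes = abstract_intervals(readings, lambda v: v > 2.0)
    print(episodes)  # [(2, 3), (5, 6)]
    ```

    Queries over unusual time patterns, as in the liver-transplantation example, then operate on these intervals rather than on the raw points.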

  8. Experimental development based on mapping rule between requirements analysis model and web framework specific design model.

    Science.gov (United States)

    Okuda, Hirotaka; Ogata, Shinpei; Matsuura, Saeko

    2013-12-01

    Model Driven Development is a promising approach to developing high-quality software systems. We have proposed a method of model-driven requirements analysis using the Unified Modeling Language (UML). The main feature of our method is to automatically generate a web user interface prototype from the UML requirements analysis model, so that the validity of input/output data for each page, and of page transitions, can be confirmed by directly operating the prototype. We propose a mapping rule in which design information independent of any particular web application framework implementation is defined based on the requirements analysis model, so as to improve traceability from the validated requirements analysis model to the final product. This paper discusses the result of applying our method to the development of a Group Work Support System that is currently running in our department.

  9. DescribeX: A Framework for Exploring and Querying XML Web Collections

    CERN Document Server

    Rizzolo, Flavio

    2008-01-01

    This thesis introduces DescribeX, a powerful framework that is capable of describing arbitrarily complex XML summaries of web collections, providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries where different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expression (AxPREs). DescribeX can significantly help in the understanding of both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) with multi-gigabyte web collections. A com...

  10. A General Framework for Representing, Reasoning and Querying with Annotated Semantic Web Data

    CERN Document Server

    Zimmermann, Antoine; Polleres, Axel; Straccia, Umberto

    2011-01-01

    We describe a generic framework for representing and reasoning with annotated Semantic Web data, a task becoming more important with the recent increase in inconsistent and unreliable meta-data on the web. We formalise the annotated language and the corresponding deductive system, and address the query answering problem. Previous contributions on specific RDF annotation domains are encompassed by our unified reasoning formalism, as we show by instantiating it on (i) temporal, (ii) fuzzy, and (iii) provenance annotations. Moreover, we provide a generic method for combining multiple annotation domains, allowing one to represent, e.g., temporally-annotated fuzzy RDF. Furthermore, we address the development of a query language -- AnQL -- that is inspired by SPARQL, including several features of SPARQL 1.1 (subqueries, aggregates, assignment, solution modifiers) along with the formal definitions of their semantics.
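    The domain-parametric idea, combining annotations with a domain-specific meet operation during query answering, can be sketched for the temporal instantiation; the triples, vocabulary, and query below are invented illustrations, not the paper's formalism.

    ```python
    # Illustrative sketch: RDF-style triples annotated with validity intervals,
    # and a join query whose answers carry the intersected annotation, in the
    # spirit of the temporal instantiation. Data and vocabulary are invented.
    def intersect(a, b):
        """Meet operation of the temporal annotation domain: interval overlap."""
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo <= hi else None

    triples = [
        ("alice", "worksFor", "ACME", (2000, 2010)),
        ("ACME", "locatedIn", "Rome", (1995, 2005)),
    ]

    def query_join(triples, p1, p2, window):
        """Evaluate ?x p1 ?y . ?y p2 ?z, keeping answers whose combined
        annotation still intersects the query window."""
        answers = []
        for s1, pr1, o1, t1 in triples:
            for s2, pr2, o2, t2 in triples:
                if pr1 == p1 and pr2 == p2 and o1 == s2:
                    ann = intersect(t1, t2)
                    if ann and intersect(ann, window):
                        answers.append((s1, o2, ann))
        return answers

    answers = query_join(triples, "worksFor", "locatedIn", window=(2003, 2004))
    print(answers)  # [('alice', 'Rome', (2000, 2005))]
    ```

    Swapping the meet operation (e.g. minimum for fuzzy degrees, set union for provenance) is what makes the framework generic across annotation domains.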

  11. BIRCH: A user-oriented, locally-customizable, bioinformatics system

    Directory of Open Access Journals (Sweden)

    Fristensky Brian

    2007-02-01

    Full Text Available Abstract Background Molecular biologists need sophisticated analytical tools which often demand extensive computational resources. While finding, installing, and using these tools can be challenging, pipelining data from one program to the next is particularly awkward, especially when using web-based programs. At the same time, system administrators tasked with maintaining these tools do not always appreciate the needs of research biologists. Results BIRCH (Biological Research Computing Hierarchy) is an organizational framework for delivering bioinformatics resources to a user group, scaling from a single lab to a large institution. The BIRCH core distribution includes many popular bioinformatics programs, unified within the GDE (Genetic Data Environment) graphic interface. Of equal importance, BIRCH provides the system administrator with tools that simplify the job of managing a multiuser bioinformatics system across different platforms and operating systems. These include tools for integrating locally-installed programs and databases into BIRCH, and for customizing the local BIRCH system to meet the needs of the user base. BIRCH can also act as a front end to provide a unified view of already-existing collections of bioinformatics software. Documentation for the BIRCH and locally-added programs is merged in a hierarchical set of web pages. In addition to manual pages for individual programs, BIRCH tutorials employ step by step examples, with screen shots and sample files, to illustrate both the important theoretical and practical considerations behind complex analytical tasks. Conclusion BIRCH provides a versatile organizational framework for managing software and databases, and making these accessible to a user base. Because of its network-centric design, BIRCH makes it possible for any user to do any task from anywhere.

  12. A framework for using reference ontologies as a foundation for the semantic web.

    Science.gov (United States)

    Brinkley, James F; Suciu, Dan; Detwiler, Landon T; Gennari, John H; Rosse, Cornelius

    2006-01-01

    The semantic web is envisioned as an evolving set of local ontologies that are gradually linked together into a global knowledge network. Many such local "application" ontologies are being built, but it is difficult to link them together because of incompatibilities and lack of adherence to ontology standards. "Reference" ontologies are an emerging ontology type that attempt to represent deep knowledge of basic science in a principled way that allows them to be re-used in multiple ways, just as the basic sciences are re-used in clinical applications. As such they have the potential to be a foundation for the semantic web if methods can be developed for deriving application ontologies from them. We describe a computational framework for this purpose that is generalized from the database concept of "views", and describe the research issues that must be solved to implement such a framework. We argue that the development of such a framework is becoming increasingly feasible due to a convergence of advances in several fields.

  13. A Web GIS Framework for Participatory Sensing Service: An Open Source-Based Implementation

    Directory of Open Access Journals (Sweden)

    Yu Nakayama

    2017-04-01

    Full Text Available Participatory sensing is the process in which individuals or communities collect and analyze systematic data using mobile phones and cloud services. To efficiently develop participatory sensing services, some server-side technologies have been proposed. Although they provide a good platform for participatory sensing, they are not optimized for spatial data management and processing. For the purpose of spatial data collection and management, many web GIS approaches have been studied. However, they still have not focused on the optimal framework for participatory sensing services. This paper presents a web GIS framework for participatory sensing service (FPSS). The proposed FPSS enables an integrated deployment of spatial data capture, storage, and data management functions. In various types of participatory sensing experiments, users can collect and manage spatial data in a unified manner. This feature is realized by the optimized system architecture and use case based on the general requirements for participatory sensing. We developed an open source GIS-based implementation of the proposed framework, which can overcome the financial difficulties that are one of the major obstacles to deploying sensing experiments. We confirmed with the prototype that participatory sensing experiments can be performed efficiently with the proposed FPSS.

  14. Climate change-contaminant interactions in marine food webs: Toward a conceptual framework.

    Science.gov (United States)

    Alava, Juan José; Cheung, William W L; Ross, Peter S; Sumaila, U Rashid

    2017-10-01

    Climate change is reshaping the way in which contaminants move through the global environment, in large part by changing the chemistry of the oceans and affecting the physiology, health, and feeding ecology of marine biota. Climate change-associated impacts on the structure and function of marine food webs, with consequent changes in contaminant transport, fate, and effects, are likely to have significant repercussions for those human populations that rely on fisheries resources for food, recreation, or culture. Published studies on climate change-contaminant interactions with a focus on food web bioaccumulation were systematically reviewed to explore how climate change and ocean acidification may impact contaminant levels in marine food webs. We propose here a conceptual framework to illustrate the impacts of climate change on contaminant accumulation in marine food webs, as well as the downstream consequences for ecosystem goods and services. The potential impacts on social and economic security for coastal communities that depend on fisheries for food are discussed. Climate change-contaminant interactions may alter the bioaccumulation of two priority contaminant classes: the fat-soluble persistent organic pollutants (POPs), such as polychlorinated biphenyls (PCBs), as well as the protein-binding methylmercury (MeHg). These interactions include phenomena deemed to be either climate change dominant (i.e., climate change leads to an increase in contaminant exposure) or contaminant dominant (i.e., contamination leads to an increase in climate change susceptibility). We illustrate the pathways of climate change-contaminant interactions using case studies in the Northeastern Pacific Ocean. The important role of ecological and food web modeling to inform decision-making in managing ecological and human health risks of chemical pollutant contamination under climate change is also highlighted. Finally, we identify the need to develop integrated policies that manage the
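The "climate change dominant" interaction described above can be sketched numerically: contaminant concentration multiplies by a trophic magnification factor (TMF) at each step up the food chain, and a warming-related increase in dietary uptake compounds across steps. The TMF values and the warming factor below are illustrative assumptions of ours, not values from the paper.

```python
# Illustrative sketch (not the authors' model): MeHg-style biomagnification
# up a simple food chain, with a hypothetical warming multiplier applied
# to uptake at every trophic transfer.

def biomagnify(base_conc, tmfs, warming_factor=1.0):
    """Return concentrations at each trophic level.

    tmfs: trophic magnification factors per step (assumed values);
    warming_factor > 1 crudely inflates uptake per transfer."""
    concs = [base_conc]
    for tmf in tmfs:
        concs.append(concs[-1] * tmf * warming_factor)
    return concs

# Hypothetical 4-level chain: phytoplankton -> zooplankton -> fish -> seal
baseline = biomagnify(0.01, [3.0, 4.0, 5.0])       # no warming: 0.6 at the top
warmed = biomagnify(0.01, [3.0, 4.0, 5.0], 1.1)    # +10% uptake per transfer
print(baseline[-1], warmed[-1])
```

Because the warming multiplier compounds once per trophic transfer, a modest 10% change per step yields a roughly 33% higher top-predator burden in this toy example, which is the kind of amplification the framework aims to make explicit.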

  15. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Document Server

    CERN. Geneva

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Co...

  16. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    CERN Document Server

    Andreeva, J; Karavakis, E; Kokoszkiewicz, L; Nowotka, M; Saiz, P; Tuckett, D

    2012-01-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Comp...
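The portability claim in this abstract rests on a standard pattern: the application consumes an abstract data-source interface, so switching monitoring back ends means writing one adapter rather than touching the views. A minimal language-neutral sketch (all names ours, not the Experiment Dashboard's):

```python
# Sketch of encapsulating the data source behind an interface so the
# view/aggregation layer is independent of where the data comes from.

from abc import ABC, abstractmethod

class JobSource(ABC):
    """Abstract data source; a real deployment would back this with a web API."""
    @abstractmethod
    def fetch_jobs(self):
        """Return a list of {'site': ..., 'status': ...} records."""

class StaticSource(JobSource):
    """In-memory adapter used here in place of a live monitoring service."""
    def __init__(self, records):
        self.records = records
    def fetch_jobs(self):
        return self.records

def summarize(source: JobSource):
    """View-model step: aggregate job counts per status for rendering."""
    counts = {}
    for job in source.fetch_jobs():
        counts[job["status"]] = counts.get(job["status"], 0) + 1
    return counts

print(summarize(StaticSource([{"site": "CERN", "status": "done"},
                              {"site": "FNAL", "status": "failed"},
                              {"site": "CERN", "status": "done"}])))
# {'done': 2, 'failed': 1}
```

Porting to a new data source then only requires a second `JobSource` implementation; `summarize` and everything above it stay unchanged.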

  17. Capataz: a framework for distributing algorithms via the World Wide Web

    Directory of Open Access Journals (Sweden)

    Gonzalo J. Martínez

    2015-08-01

    Full Text Available In recent years, some scientists have embraced the distributed computing paradigm. As experiments and simulations demand ever more computing power, coordinating the efforts of many different processors is often the only reasonable resort. We developed an open-source distributed computing framework based on web technologies, and named it Capataz. Acting as an HTTP server, web browsers running on many different devices can connect to it to contribute to the execution of distributed algorithms written in JavaScript. Capataz takes advantage of architectures with many cores using web workers. This paper presents an improvement in Capataz's usability and why it was needed. In previous experiments the total time of distributed algorithms proved to be susceptible to changes in the execution time of the jobs. The system now adapts by bundling jobs together if they are too simple. The computational experiment to test the solution is a brute force estimation of pi. The benchmark results show that by bundling jobs, the overall performance is greatly increased.
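The benchmark idea above, Monte Carlo estimation of pi split into many small jobs that are dispatched in bundles to amortize per-job overhead, can be sketched as follows. This is our own single-process illustration of the bundling pattern, not Capataz code, and the bundle size is an arbitrary choice:

```python
# Monte Carlo pi estimation split into small "jobs", dispatched in bundles,
# mirroring the adaptive job-bundling described in the abstract.

import random

def pi_job(samples, rng):
    """One job: count random points falling inside the unit quarter-circle."""
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))

def run_bundled(total_jobs, samples_per_job, bundle_size, seed=42):
    rng = random.Random(seed)
    hits, done = 0, 0
    while done < total_jobs:
        # Dispatch 'bundle_size' jobs at once; in Capataz each bundle would
        # go to one browser, amortizing network overhead per job.
        batch = min(bundle_size, total_jobs - done)
        hits += sum(pi_job(samples_per_job, rng) for _ in range(batch))
        done += batch
    return 4.0 * hits / (total_jobs * samples_per_job)

print(run_bundled(total_jobs=100, samples_per_job=1000, bundle_size=10))
```

With 100,000 total samples the estimate lands close to 3.14; the interesting knob in the real system is `bundle_size`, which trades per-job dispatch overhead against scheduling granularity.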

  18. A Web-Based Hospital Information System Application for the Pharmacy Subsystem Using the Prado Framework

    Directory of Open Access Journals (Sweden)

    Eko Handoyo

    2009-05-01

    Full Text Available Information technology is one of today's most rapidly developing technologies. With its advances, access to available data and information can be fast, efficient, and accurate. This study aims to outline a model of a hospital information system using Web services, through the development of a hospital information system application for the pharmacy subsystem. With this application, users can easily obtain services and information on all hospital activities, particularly pharmacy management, online, wherever and whenever they are. The application is web-based, built with the Prado framework on top of the PHP programming language, with MySQL as its database. It was designed around the needs of hospitals in general; a requirements analysis for a hospital information system was of course carried out first, so that information could be provided on a web basis. This Hospital Information System application can serve as a means of providing services and information to its users, whether doctors, staff and employees, or patients of a hospital, wherever and whenever they are. Users obtain accurate information because the available information is continuously updated. The application would be improved by stronger data security and the addition of modules

  19. A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices

    Science.gov (United States)

    Zhao, Shuai; Yu, Le; Cheng, Bo

    2016-01-01

    With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are “silo” solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users’ data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations. PMID:27690038

  20. A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices

    Directory of Open Access Journals (Sweden)

    Shuai Zhao

    2016-09-01

    Full Text Available With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are “silo” solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users’ data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations.

  1. A Real-Time Web of Things Framework with Customizable Openness Considering Legacy Devices.

    Science.gov (United States)

    Zhao, Shuai; Yu, Le; Cheng, Bo

    2016-09-28

    With the development of the Internet of Things (IoT), resources and applications based on it have emerged on a large scale. However, most efforts are "silo" solutions where devices and applications are tightly coupled. Infrastructures are needed to connect sensors to the Internet, open up and break the current application silos and move to a horizontal application mode. Based on the concept of Web of Things (WoT), many infrastructures have been proposed to integrate the physical world with the Web. However, issues such as no real-time guarantee, lack of fine-grained control of data, and the absence of explicit solutions for integrating heterogeneous legacy devices, hinder their widespread and practical use. To address these issues, this paper proposes a WoT resource framework that provides the infrastructures for the customizable openness and sharing of users' data and resources under the premise of ensuring the real-time behavior of their own applications. The proposed framework is validated by actual systems and experimental evaluations.
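"Customizable openness" as described in these abstracts amounts to fine-grained, per-consumer control over which parts of a sensor observation are shared. A minimal sketch of that idea (field names, consumer roles, and the policy shape are all invented for illustration, not taken from the paper):

```python
# Sketch of per-field data sharing: the resource owner's policy maps each
# consumer role to the set of observation fields it may read.

def make_view(observation, policy, consumer):
    """Return only the fields the policy grants to this consumer."""
    allowed = policy.get(consumer, set())   # unknown consumers get nothing
    return {k: v for k, v in observation.items() if k in allowed}

observation = {"temperature": 21.5, "location": "lab-3", "owner_note": "calibrate"}
policy = {"public": {"temperature"},
          "partner": {"temperature", "location"}}

print(make_view(observation, policy, "public"))   # {'temperature': 21.5}
print(make_view(observation, policy, "partner"))
```

In the framework's setting, a filter like this would sit between a device's raw observation stream and each external subscriber, so opening data never means opening all of it.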

  2. An automated and integrated framework for dust storm detection based on ogc web processing services

    Science.gov (United States)

    Xiao, F.; Shea, G. Y. K.; Wong, M. S.; Campbell, J.

    2014-11-01

    Dust storms are known to have adverse effects on public health. Atmospheric dust loading is also one of the major uncertainties in global climatic modelling, as it is known to have a significant impact on the radiation budget and atmospheric stability. The complexity of building scientific dust storm models is coupled with advances in scientific computation, ongoing computing platform development, and the development of heterogeneous Earth Observation (EO) networks. It is a challenging task to develop an integrated and automated scheme for dust storm detection that combines geo-processing frameworks, scientific models and EO data together to enable dust storm detection and tracking in a dynamic and timely manner. This study develops an automated and integrated framework for dust storm detection and tracking based on the Web Processing Service (WPS) initiated by the Open Geospatial Consortium (OGC). The presented WPS framework consists of EO data retrieval components, a dust storm detecting and tracking component, and a service chain orchestration engine. The EO data processing component is implemented based on the OPeNDAP standard. The dust storm detecting and tracking component combines three earth scientific models: the SBDART model (for computing the aerosol optical depth (AOT) of dust particles), the WRF model (for simulating meteorological parameters) and the HYSPLIT model (for simulating dust storm transport processes). The service chain orchestration engine is implemented based on Business Process Execution Language for Web Services (BPEL4WS) using open-source software. The output results, including the horizontal and vertical AOT distribution of dust particles as well as their transport paths, were represented using KML/XML and displayed in Google Earth. A severe dust storm, which occurred over East Asia from 26 to 28 April 2012, is used to test the applicability of the proposed WPS framework. Our aim here is to solve a specific instance of a complex EO data

  3. Report on the EMBER Project--A European Multimedia Bioinformatics Educational Resource

    Science.gov (United States)

    Attwood, Terri K.; Selimas, Ioannis; Buis, Rob; Altenburg, Ruud; Herzog, Robert; Ledent, Valerie; Ghita, Viorica; Fernandes, Pedro; Marques, Isabel; Brugman, Marc

    2005-01-01

    EMBER was a European project aiming to develop bioinformatics teaching materials on the Web and CD-ROM to help address the recognised skills shortage in bioinformatics. The project grew out of pilot work on the development of an interactive web-based bioinformatics tutorial and the desire to repackage that resource with the help of a professional…

  4. Flow cytometry bioinformatics.

    Directory of Open Access Journals (Sweden)

    Kieran O'Neill

    , and software are also key parts of flow cytometry bioinformatics. Data standards include the widely adopted Flow Cytometry Standard (FCS), defining how data from cytometers should be stored, but also several new standards under development by the International Society for Advancement of Cytometry (ISAC) to aid in storing more detailed information about experimental design and analytical steps. Open data is slowly growing with the opening of the CytoBank database in 2010 and FlowRepository in 2012, both of which allow users to freely distribute their data, and the latter of which has been recommended as the preferred repository for MIFlowCyt-compliant data by ISAC. Open software is most widely available in the form of a suite of Bioconductor packages, but is also available for web execution on the GenePattern platform.

  5. Semantic Framework for Mapping Object-Oriented Model to Semantic Web Languages

    Directory of Open Access Journals (Sweden)

    Petr Ježek

    2015-02-01

    Full Text Available The article deals with and discusses two main approaches in building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not be written by the programmer directly to the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.

  6. Semantic framework for mapping object-oriented model to semantic web languages.

    Science.gov (United States)

    Ježek, Petr; Mouček, Roman

    2015-01-01

    The article deals with and discusses two main approaches in building semantic structures for electrophysiological metadata. It is the use of conventional data structures, repositories, and programming languages on one hand and the use of formal representations of ontologies, known from knowledge representation, such as description logics or semantic web languages on the other hand. Although knowledge engineering offers languages supporting richer semantic means of expression and technologically advanced approaches, conventional data structures and repositories are still popular among developers, administrators and users because of their simplicity, overall intelligibility, and lower demands on technical equipment. The choice of conventional data resources and repositories, however, raises the question of how and where to add semantics that cannot be naturally expressed using them. As one of the possible solutions, this semantics can be added into the structures of the programming language that accesses and processes the underlying data. To support this idea we introduced a software prototype that enables its users to add semantically richer expressions into a Java object-oriented code. This approach does not burden users with additional demands on the programming environment since reflective Java annotations were used as an entry for these expressions. Moreover, additional semantics need not be written by the programmer directly to the code, but it can be collected from non-programmers using a graphic user interface. The mapping that allows the transformation of the semantically enriched Java code into the Semantic Web language OWL was proposed and implemented in a library named the Semantic Framework. This approach was validated by the integration of the Semantic Framework in the EEG/ERP Portal and by the subsequent registration of the EEG/ERP Portal in the Neuroscience Information Framework.
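The core mechanism described above, reflective annotations on object-oriented classes driving the generation of OWL statements, can be sketched in a language-neutral way. The paper uses Java annotations and the OWL language; the sketch below is Python, and the ontology IRI, metadata attribute, and property names are all our own illustrative inventions:

```python
# Sketch (not the Semantic Framework's API): class-level metadata stands in
# for reflective annotations and drives generation of simple Turtle/OWL text.

def to_turtle(cls):
    """Emit an OWL class plus datatype-property declarations from metadata."""
    meta = cls.__semantic__          # stand-in for a reflective annotation
    base = "http://example.org/onto#"
    lines = [f"<{base}{meta['owl_class']}> a owl:Class ."]
    for field, rng in meta["properties"].items():
        lines.append(f"<{base}{field}> a owl:DatatypeProperty ; "
                     f"rdfs:domain <{base}{meta['owl_class']}> ; "
                     f"rdfs:range xsd:{rng} .")
    return "\n".join(lines)

class Experiment:
    # In the real framework this metadata would come from Java annotations,
    # possibly filled in by non-programmers through a GUI.
    __semantic__ = {"owl_class": "EEGExperiment",
                    "properties": {"samplingRate": "integer",
                                   "startTime": "dateTime"}}

print(to_turtle(Experiment))
```

The point of the pattern is that the domain class stays ordinary application code; the semantic layer reads its metadata reflectively and produces the ontology representation as a by-product.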

  7. Bioinformatics: perspectives for the future.

    Science.gov (United States)

    Costa, Luciano da Fontoura

    2004-12-30

    I give here a very personal perspective on bioinformatics and its future, starting by discussing the origin of the term (and area) of bioinformatics and proceeding by trying to foresee the development of related issues, including pattern recognition/data mining, the need to reintegrate biology, the potential of complex networks as a powerful and flexible framework for bioinformatics, and the interplay between bio- and neuroinformatics. Human resource development and market perspectives are also addressed. Given the complexity and vastness of these issues and concepts, as well as the limited size of a scientific article and the finite patience of the reader, these perspectives are surely incomplete and biased. However, it is expected that some of the questions and trends that are identified will motivate discussions during the IcoBiCoBi round table (with the same name as this article) and perhaps provide a more ample perspective among the participants of that conference and the readers of this text.

  8. pWeb: A High-Performance, Parallel-Computing Framework for Web-Browser-Based Medical Simulation.

    Science.gov (United States)

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2014-01-01

    This work presents pWeb, a new language and compiler for the parallelization of client-side compute-intensive web applications such as surgical simulations. The recently introduced HTML5 standard has enabled creating unprecedented applications on the web. The low performance of the web browser compared to native applications, however, remains the bottleneck for computationally intensive tasks, including visualization of complex scenes, real-time physical simulations and image processing. The new proposed language is built upon web workers for multithreaded programming in HTML5. The language provides the fundamental functionalities of parallel programming languages as well as the fork/join parallel model, which is not supported by web workers. The language compiler automatically generates an equivalent parallel script that complies with the HTML5 standard. A case study on realistic rendering for surgical simulations demonstrates enhanced performance with a compact set of instructions.
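The fork/join model that pWeb layers on top of web workers has a direct analogue in most languages' standard libraries. For illustration only (this is not pWeb code, and the per-pixel task is invented), the same pattern in Python: fork a pool of workers over independent work items, then join by collecting results in order:

```python
# Fork/join sketch analogous to what the abstract describes: independent
# per-item tasks are forked to a worker pool and joined back in order.

from concurrent.futures import ThreadPoolExecutor

def shade(pixel):
    """Stand-in for a per-pixel rendering task."""
    return pixel * pixel % 255

def render(pixels, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:   # fork
        results = list(pool.map(shade, pixels))             # join (ordered)
    return results

print(render(range(8)))   # [0, 1, 4, 9, 16, 25, 36, 49]
```

Plain web workers expose only message passing, which is why pWeb has to synthesize this fork/join structure for the programmer; languages with pooled executors get it from the library.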

  9. Navigating the changing learning landscape: perspective from bioinformatics.ca.

    Science.gov (United States)

    Brazas, Michelle D; Ouellette, B F Francis

    2013-09-01

    With the advent of YouTube channels in bioinformatics, open platforms for problem solving in bioinformatics, active web forums in computing analyses and online resources for learning to code or use a bioinformatics tool, the more traditional continuing education bioinformatics training programs have had to adapt. Bioinformatics training programs that solely rely on traditional didactic methods are being superseded by these newer resources. Yet such face-to-face instruction is still invaluable in the learning continuum. Bioinformatics.ca, which hosts the Canadian Bioinformatics Workshops, has blended more traditional learning styles with current online and social learning styles. Here we share our growing experiences over the past 12 years and look toward what the future holds for bioinformatics training programs.

  10. Analysis of a Real Online Social Network Using Semantic Web Frameworks

    Science.gov (United States)

    Erétéo, Guillaume; Buffa, Michel; Gandon, Fabien; Corby, Olivier

    Social Network Analysis (SNA) provides graph algorithms to characterize the structure of social networks, strategic positions in these networks, specific sub-networks and decompositions of people and activities. Online social platforms like Facebook form huge social networks, enabling people to connect, interact and share their online activities across several social applications. We extended SNA operators using semantic web frameworks to include the semantics of these graph-based representations when analyzing such social networks and to deal with the diversity of their relations and interactions. We present here the results of this approach when it was used to analyze a real social network with 60,000 users connecting, interacting and sharing content.

  11. SENHANCE: A Semantic Web framework for integrating social and hardware sensors in e-Health.

    Science.gov (United States)

    Pagkalos, Ioannis; Petrou, Loukas

    2016-09-01

    Self-reported data are very important in Healthcare, especially when combined with data from sensors. Social Networking Sites, such as Facebook, are a promising source of not only self-reported data but also social data, which are otherwise difficult to obtain. Due to their unstructured nature, providing information that is meaningful to health professionals from this source is a daunting task. To this end, we employ Social Network Applications as Social Sensors that gather structured data and use Semantic Web technologies to fuse them with hardware sensor data, effectively integrating both sources. We show that this combination of social and hardware sensor observations creates a novel space that can be used for a variety of feature-rich e-Health applications. We present the design of our prototype framework, SENHANCE, and our findings from its pilot application in the NutriHeAl project, where a Facebook app is integrated with Fitbit digital pedometers for Lifestyle monitoring.

  12. The NIF DISCO Framework: facilitating automated integration of neuroscience content on the web.

    Science.gov (United States)

    Marenco, Luis; Wang, Rixin; Shepherd, Gordon M; Miller, Perry L

    2010-06-01

    This paper describes the capabilities of DISCO, an extensible approach that supports integrative Web-based information dissemination. DISCO is a component of the Neuroscience Information Framework (NIF), an NIH Neuroscience Blueprint initiative that facilitates integrated access to diverse neuroscience resources via the Internet. DISCO facilitates the automated maintenance of several distinct capabilities using a collection of files 1) that are maintained locally by the developers of participating neuroscience resources and 2) that are "harvested" on a regular basis by a central DISCO server. This approach allows central NIF capabilities to be updated as each resource's content changes over time. DISCO currently supports the following capabilities: 1) resource descriptions, 2) "LinkOut" to a resource's data items from NCBI Entrez resources such as PubMed, 3) Web-based interoperation with a resource, 4) sharing a resource's lexicon and ontology, 5) sharing a resource's database schema, and 6) participation by the resource in neuroscience-related RSS news dissemination. The developers of a resource are free to choose which DISCO capabilities their resource will participate in. Although DISCO is used by NIF to facilitate neuroscience data integration, its capabilities have general applicability to other areas of research.

  13. A Generic Framework for Extraction of Knowledge from Social Web Sources (Social Networking Websites) for an Online Recommendation System

    Directory of Open Access Journals (Sweden)

    Javubar Sathick

    2015-04-01

    Full Text Available Mining social web data is a challenging task, and finding user interest for personalized and non-personalized recommendation systems is another important task. Knowledge sharing among web users has become crucial in determining the usage of web data and personalizing content in various social websites as per the user's wish. This paper aims to design a framework for extracting knowledge from web sources so that end users can make the right decision at a crucial juncture. The web data is collected from various web sources, structured appropriately, and stored as an ontology-based data repository. The proposed framework implements an online recommender application for learners who pursue their graduation in an open and distance learning environment. This framework possesses three phases: data repository, knowledge engine, and online recommendation system. The data repository holds common data acquired from various web sources. The knowledge engine collects semantic data from the ontology-based data repository and maps it to the user through the query processor component. The online recommendation system is used to make recommendations to the user for a decision-making process. This research work is implemented with the help of an experimental case study which deals with an online recommendation system for the career guidance of a learner. The online recommendation application is implemented with the help of the R tool, an NLP parser and a clustering algorithm. This research study will help users to attain semantic knowledge from heterogeneous web sources and to make decisions.
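The career-guidance case study matches a learner's extracted interests to candidate tracks. A deliberately minimal stand-in for that matching step (the paper uses R and a clustering algorithm; the track names, keywords, and overlap scoring here are our own illustration):

```python
# Toy recommendation step: pick the career track whose keyword set overlaps
# most with the learner's extracted interests.

def recommend(interests, tracks):
    """Return the track key sharing the most keywords with the learner."""
    return max(tracks, key=lambda t: len(interests & tracks[t]))

tracks = {"data-science": {"statistics", "python", "ml"},
          "web-dev": {"javascript", "css", "http"}}
learner = {"python", "statistics", "writing"}
print(recommend(learner, tracks))   # data-science
```

In the proposed framework the `interests` set would come from the knowledge engine querying the ontology-based repository, rather than being supplied directly.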

  14. The Live Access Server - A Web-Services Framework for Earth Science Data

    Science.gov (United States)

    Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.

    2005-12-01

    The Live Access Server (LAS) is a general purpose Web server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access Server software has continued to evolve by expanding the types of data (in-situ observations and curvilinear grids) it can serve and by taking advantage of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components that are responsible for the user interaction (the User Interface Server) from the components that are responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity backed by a database store for holding the user interface widget components. The User Interface Server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is

  15. A Data Capsule Framework For Web Services: Providing Flexible Data Access Control To Users

    CERN Document Server

    Kannan, Jayanthkumar; Chun, Byung-Gon

    2010-01-01

    This paper introduces the notion of a secure data capsule, which refers to an encapsulation of sensitive user information (such as a credit card number) along with code that implements an interface suitable for the use of such information (such as charging for purchases) by a service (such as an online merchant). In our capsule framework, users provide their data in the form of such capsules to web services rather than as raw data. Capsules can be deployed in a variety of ways, on a trusted third party, on the user's own computer, or at the service itself, through the use of a variety of hardware or software modules, such as a virtual machine monitor or trusted platform module: the only requirement is that the deployment mechanism must ensure that the user's data is only accessed via the interface sanctioned by the user. The framework further allows a user to specify policies regarding which services or machines may host her capsule, what parties are allowed to access the interface, and with what parameter...
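The capsule idea above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the class name, the `charge` interface and the party list are invented for the example, and the "receipt" stands in for a real payment-gateway interaction.

```python
# Illustrative sketch (not the paper's implementation): a capsule wraps a
# sensitive value and exposes only a sanctioned interface plus a user policy.

class DataCapsule:
    def __init__(self, secret, allowed_parties):
        self._secret = secret                  # raw data, never returned directly
        self._allowed = set(allowed_parties)   # user policy: who may call the interface

    def charge(self, party, amount):
        """Sanctioned interface: use the secret without revealing it."""
        if party not in self._allowed:
            raise PermissionError(f"{party} is not sanctioned by the owner")
        # A real deployment would contact a payment gateway here; we only
        # return an opaque receipt derived from the hidden card number.
        return f"receipt:{hash((self._secret, party, amount)) & 0xffff:04x}"

capsule = DataCapsule("4111-1111-1111-1111", allowed_parties={"merchant.example"})
receipt = capsule.charge("merchant.example", 9.99)
try:
    capsule.charge("evil.example", 9.99)
    blocked = False
except PermissionError:
    blocked = True
```

The key property is that the raw value never crosses the interface; only results of sanctioned operations do.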

  16. A secure and easy-to-implement web-based communication framework for caregiving robot teams

    Science.gov (United States)

    Tuna, G.; Daş, R.; Tuna, A.; Örenbaş, H.; Baykara, M.; Gülez, K.

    2016-03-01

    In recent years, robots have become more commonplace in our lives, from factory floors to museums, festivals and shows, and they have started to change how we work and play. With an increase in the elderly population, they have also started to be used for caregiving services, and hence many countries have been investing in robot development. The advancements in robotics and wireless communications have led to the emergence of autonomous caregiving robot teams which cooperate to accomplish a set of tasks assigned by human operators. Although wireless communications and devices are flexible and convenient, they are vulnerable to many risks compared to traditional wired networks. Since robots with wireless communication capability transmit all data types, including sensory, coordination, and control data, through radio frequencies, they are open to intruders and attackers unless protected, and their openness may lead to many security issues such as data theft, passive listening, and service interruption. In this paper, a secure web-based communication framework is proposed to address potential security threats due to wireless communication in robot-robot and human-robot interaction. The proposed framework is simple and practical, and can be used by caregiving robot teams in the exchange of sensory data as well as coordination and control data.
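One basic ingredient of any such framework is message integrity over the untrusted wireless link. The sketch below is a hypothetical illustration of that single ingredient (the key, field names and payload are invented; the paper's actual protocol is not reproduced here): each robot signs its sensory payload with a shared key, and the receiver rejects tampered messages.

```python
import hmac, hashlib, json

# Hypothetical sketch: sign sensory payloads with a team-shared key so that
# tampering on the wireless link is detectable by the receiver.

SHARED_KEY = b"team-secret"  # in practice, provisioned securely per robot team

def sign(payload: dict) -> dict:
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def verify(message: dict) -> bool:
    expected = hmac.new(SHARED_KEY, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign({"robot": "r1", "temp_c": 36.6})
ok = verify(msg)                                           # untouched message verifies
tampered = dict(msg, body=msg["body"].replace("36.6", "40.0"))
bad = verify(tampered)                                     # altered body fails
```

Confidentiality (encryption) and replay protection would be layered on top of this in a complete design.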

  17. A CommonKADS Model Framework for Web Based Agricultural Decision Support System

    Directory of Open Access Journals (Sweden)

    Jignesh Patel

    2015-01-01

    Full Text Available Increased demand for farm products and depletion of natural resources compel the agriculture community to increase the use of Information and Communication Technology (ICT) in various farming processes. Agricultural Decision Support Systems (DSS) have proved useful in this regard. The majority of available agricultural DSSs are either crop or task specific. Less emphasis has been placed on the development of comprehensive DSSs, which are non-specific regarding crops or farming processes. The crop or task specific DSSs are mainly developed with rule-based or knowledge-transfer-based approaches. DSSs based on these methodologies lack the ability for scaling up and generalization. The knowledge engineering modeling approach is more suitable for the development of large and generalized DSSs. Unfortunately, the model-based knowledge engineering approach has not been much exploited for the development of agricultural DSSs. CommonKADS is one of the popular modeling frameworks used for the development of Knowledge Based Systems (KBS). The paper presents the organization, agent, task, communication, knowledge and design models based on the CommonKADS approach for the development of scalable agricultural DSSs. A specific web-based DSS application is used for demonstrating the multi-agent CommonKADS modeling approach. The system offers decision support for irrigation scheduling and weather-based disease forecasting for the popular crops of India. The proposed framework, along with the required expert knowledge, provides a platform on which larger DSSs can be built for any crop at a given location.

  18. An Introduction to Bioinformatics

    Institute of Scientific and Technical Information of China (English)

    SHENG Qi-zheng; De Moor Bart

    2004-01-01

    As a newborn interdisciplinary field, bioinformatics is receiving increasing attention from biologists, computer scientists, statisticians, mathematicians and engineers. This paper briefly introduces the birth, importance, and extensive applications of bioinformatics in the different fields of biological research. A major challenge in bioinformatics - the unraveling of gene regulation - is discussed in detail.

  19. The Impact of Instruction in the WWWDOT Framework on Students' Disposition and Ability to Evaluate Web Sites as Sources of Information

    Science.gov (United States)

    Zhang, Shenglan; Duke, Nell K.

    2011-01-01

    Much research has demonstrated that students are largely uncritical users of Web sites as sources of information. Research-tested frameworks are needed to increase elementary-age students' awareness of the need and ability to critically evaluate Web sites as sources of information. This study is a randomized field trial of such a framework called…

  1. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    Science.gov (United States)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  2. Research on WebGIS Analysis Framework Based on Cloud Computing

    Institute of Scientific and Technical Information of China (English)

    王凤领

    2014-01-01

    Cloud computing is an emerging Internet-centred business computing model and an important current technical topic. WebGIS is a geographic information system that uses Internet/Intranet technology and HTTP to realize distributed acquisition, storage, processing, analysis and querying, as well as display and output, of geographic information in an Internet/Intranet environment. By analysing the characteristics of cloud computing and WebGIS, the cloud computing model architecture and the WebGIS system hierarchy, this paper combines cloud computing with WebGIS and establishes a WebGIS framework based on cloud computing. The framework provides real-time geographic information services for massive spatial data storage, spatial analysis and spatial information retrieval, improving system stability and efficiency.

  3. A semantic web framework to integrate cancer omics data with biological knowledge.

    Science.gov (United States)

    Holford, Matthew E; McCusker, James P; Cheung, Kei-Hoi; Krauthammer, Michael

    2012-01-25

    The RDF triple provides a simple linguistic means of describing limitless types of information. Triples can be flexibly combined into a unified data source we call a semantic model. Semantic models open new possibilities for the integration of variegated biological data. We use Semantic Web technology to explicate high throughput clinical data in the context of fundamental biological knowledge. We have extended Corvus, a data warehouse which provides a uniform interface to various forms of Omics data, by providing a SPARQL endpoint. With the querying and reasoning tools made possible by the Semantic Web, we were able to explore quantitative semantic models retrieved from Corvus in the light of systematic biological knowledge. For this paper, we merged semantic models containing genomic, transcriptomic and epigenomic data from melanoma samples with two semantic models of functional data - one containing Gene Ontology (GO) data, the other, regulatory networks constructed from transcription factor binding information. These two semantic models were created in an ad hoc manner but support a common interface for integration with the quantitative semantic models. Such combined semantic models allow us to pose significant translational medicine questions. Here, we study the interplay between a cell's molecular state and its response to anti-cancer therapy by exploring the resistance of cancer cells to Decitabine, a demethylating agent. We were able to generate a testable hypothesis to explain how Decitabine fights cancer - namely, that it targets apoptosis-related gene promoters predominantly in Decitabine-sensitive cell lines, thus conveying its cytotoxic effect by activating the apoptosis pathway. Our research provides a framework whereby similar hypotheses can be developed easily.
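The triple-and-query pattern described above can be illustrated without any Semantic Web stack. The sketch below is a toy stand-in for an RDF store and SPARQL endpoint (the entity and predicate names are invented for the example; a real system would use rdflib or a SPARQL service such as the Corvus endpoint mentioned in the abstract): facts are (subject, predicate, object) tuples and a query is a pattern with wildcards.

```python
# Toy triple store: each fact is a (subject, predicate, object) tuple, and a
# query is a pattern in which None matches anything (a SPARQL-like idea).

triples = {
    ("BRCA1", "hasGOAnnotation", "DNA repair"),
    ("BRCA1", "expressedIn", "melanoma_sample_1"),
    ("TP53", "hasGOAnnotation", "apoptosis"),
    ("DNMT1", "inhibitedBy", "Decitabine"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the pattern, sorted for stable output."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

go_terms = query(p="hasGOAnnotation")   # all GO annotations in the model
brca1 = query(s="BRCA1")                # everything known about BRCA1
```

Merging "semantic models" then amounts to taking the union of such triple sets, which is what makes the integration of variegated data sources so direct.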

  4. Methods for analyzing the evolutionary relationship of NF-κB proteins using free, web-driven bioinformatics and phylogenetic tools.

    Science.gov (United States)

    Finnerty, John R; Gilmore, Thomas D

    2015-01-01

    Phylogenetic analysis enables one to reconstruct the functional evolution of proteins. Current understanding of NF-κB signaling derives primarily from studies of a relatively small number of laboratory models - mainly vertebrates and insects - that represent a tiny fraction of animal evolution. As such, NF-κB has been the subject of limited phylogenetic analysis. The recent discovery of NF-κB proteins in "basal" marine animals (e.g., sponges, sea anemones, corals) and NF-κB-like proteins in non-metazoan lineages extends the origin of NF-κB signaling by several hundred million years and provides the opportunity to investigate the early evolution of this pathway using phylogenetic approaches. Here, we describe a combination of bioinformatic and phylogenetic analyses based on menu-driven, open-source computer programs that are readily accessible to molecular biologists without formal training in phylogenetic methods. These phylogenetically based comparisons of NF-κB proteins are powerful in that they reveal deep conservation and repeated instances of parallel evolution in the sequence and structure of NF-κB in distant animal groups, which suggest that important functional constraints limit the evolution of this protein.

  5. Presenting a framework for knowledge management within a web-enabled Living Lab

    Directory of Open Access Journals (Sweden)

    Lizette de Jager

    2012-02-01

    Full Text Available Background: The background to this study showed that many communities, countries and continents are only now realising the importance of discovering innovative collaborative knowledge. Knowledge management (KM) enables organisations to retain tacit knowledge. It has many advantages, like competitiveness, retaining workers' knowledge as corporate assets and assigning value to it. The value of knowledge can never depreciate; it can only grow and become more valuable as new knowledge is added continuously to existing knowledge. Objective: The objective of this study was to present a framework for KM processes and for using social media tools in a Living Lab (LL) environment. Methods: In order to find a way to help organisations to retain tacit knowledge, the researchers conducted in-depth research. They used case studies and Grounded Theory (GT) to explore KM, social media tools and technologies, as well as the LL environment. They emailed an online questionnaire and followed it up telephonically. The study targeted academic, support and administrative staff in higher education institutions nationwide to establish their level of KM knowledge, understanding of concepts and levels of application. Results: The researchers concluded that the participants did not know the term KM and therefore were not using KM. They only used information hubs, or general university systems, like Integrated Technology Software (ITS), to capture and store information. The researchers suggested including social media, and managing them as tools, to help CoPs to meet their knowledge requirements. The researchers therefore presented a framework that uses semantic technologies and social media to address the problem. Conclusion: The success of the LL approach in developing new web-enabled LLs allows organisations to amalgamate various networks. The social media help organisations to gather, classify and verify knowledge.

  6. Unsupervised web event extraction framework

    Institute of Scientific and Technical Information of China (English)

    何一鸣

    2011-01-01

    To acquire real event information published on the internet effectively and easily, an unsupervised web event extraction framework is proposed. The framework extracts events from table-style web pages by exploiting the parallel structure of the DOM tree; the events extracted from table pages are then used as seeds to summarize corresponding patterns for detail pages, and the summarized patterns are used to extract further events from detail pages. The framework was applied to a large number of website pages and its extraction results were compared with a common wrapper-generation algorithm; the results show that the framework is effective and outperforms the wrapper algorithm in the quality of detail-page extraction.

  7. String Mining in Bioinformatics

    Science.gov (United States)

    Abouelhoda, Mohamed; Ghanem, Moustafa

    Sequence analysis is a major area in bioinformatics encompassing the methods and techniques for studying biological sequences, DNA, RNA, and proteins, on the linear structure level. The focus of this area is generally on the identification of intra- and inter-molecular similarities. Identifying intra-molecular similarities boils down to detecting repeated segments within a given sequence, while identifying inter-molecular similarities amounts to spotting common segments among two or multiple sequences. From a data mining point of view, sequence analysis is nothing but string or pattern mining specific to biological strings. For a long time, however, this point of view has not been explicitly embraced in either the data mining or the sequence analysis textbooks, which may be attributed to the co-evolution of the two apparently independent fields. In other words, although the word "data mining" is almost missing in the sequence analysis literature, its basic concepts have been implicitly applied. Interestingly, recent research in biological sequence analysis has introduced efficient solutions to many problems in data mining, such as querying and analyzing time series [49,53], extracting information from web pages [20], fighting spam mails [50], detecting plagiarism [22], and spotting duplications in software systems [14].
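The two tasks named above - repeated segments within one sequence and common segments between sequences - can both be sketched with a simple k-mer index (toy DNA strings; practical string-mining tools use suffix arrays or similar indexes for scale):

```python
from collections import Counter

def kmers(seq, k):
    """All length-k substrings of a sequence, in order."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def repeated_segments(seq, k):
    """Intra-molecular similarity: k-mers occurring more than once in one sequence."""
    counts = Counter(kmers(seq, k))
    return {m for m, c in counts.items() if c > 1}

def common_segments(seq_a, seq_b, k):
    """Inter-molecular similarity: k-mers shared between two sequences."""
    return set(kmers(seq_a, k)) & set(kmers(seq_b, k))

s1 = "ACGTACGTTT"
s2 = "TTACGTAA"
reps = repeated_segments(s1, 4)      # "ACGT" occurs at positions 0 and 4
shared = common_segments(s1, s2, 4)  # segments common to s1 and s2
```

This is exactly the data-mining view of sequence analysis: the biology enters only through the alphabet and the choice of k.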

  8. CURRENT USAGE OF COMPONENT BASED PRINCIPLES FOR DEVELOPING WEB APPLICATIONS WITH FRAMEWORKS: A LITERATURE REVIEW

    OpenAIRE

    Matija Novak; Ivan Švogor

    2016-01-01

    Component based software development has become a very popular paradigm in many software engineering branches. In the early phase of Web 2.0 appearance, it was also popular for web application development. The analyzed papers show that between that period and today the use of component based techniques for web application development slowed somewhat; however, recent development indicates a comeback, most apparent in W3C's web components working group. In this article we wa...

  9. VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model.

    Science.gov (United States)

    Yu, Bowen; Silva, Claudio T

    2017-01-01

    Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
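The subset flow model described above can be sketched very compactly. In this illustrative sketch (the function names are invented; VisFlow's actual API differs), every node transmits a set of row indices of one original table, so a later node - or a rendering property - can always be attributed to concrete rows, and brushing-and-linking becomes set intersection:

```python
# Subset flow sketch: nodes exchange subsets of row indices of one input
# table, never new tables, so selections stay attributable to original rows.

table = [
    {"name": "A", "value": 3},
    {"name": "B", "value": 9},
    {"name": "C", "value": 5},
    {"name": "D", "value": 9},
]

def source():
    """The source node emits the full subset: every row index."""
    return set(range(len(table)))

def filter_node(subset, predicate):
    """A filter node narrows the incoming subset by a row predicate."""
    return {i for i in subset if predicate(table[i])}

def brush(subset_a, subset_b):
    """Brushing-and-linking: the linked selection is the intersection."""
    return subset_a & subset_b

high = filter_node(source(), lambda r: r["value"] >= 5)           # rows 1, 2, 3
named = filter_node(source(), lambda r: r["name"] in {"A", "B"})  # rows 0, 1
linked = brush(high, named)                                       # row 1 only
```

Because only index subsets flow, assigning a rendering property (say, a highlight colour) to `linked` is unambiguous in every coordinated view.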

  10. HOME USERS SECURITY AND THE WEB BROWSER INBUILT SETTINGS, FRAMEWORK TO SETUP IT AUTOMATICALLY

    Directory of Open Access Journals (Sweden)

    Mohammed Serrhini

    2013-01-01

    Full Text Available We are living in the electronic age, where electronic transactions such as e-mail, e-banking, e-commerce and e-learning are becoming more and more prominent. The web browser is today almost the only software used to access these online services. Hackers know that browsers are installed on all computers and can be used to compromise a machine by distributing malware via malicious or hacked websites. Such sites also use JavaScript to manipulate browsers and can drive the user's system to failure. Each browser has inbuilt feature settings that define its behavior; unfortunately, most end users are unwilling to enable or disable these features securely, because many of them still do not understand even basic security concepts or the variety of security technologies present in a browser. This study discusses in depth specific modern browser inbuilt feature settings and the associated security risks, and we present a framework developed to enhance user surfing safety by automatically configuring the feature settings of all installed browsers securely; we call it Automatic Safe Browser Launcher. To solidify the claim, we check each browser before and after with a free tool (browser_tests-1.03), a collection of test cases that probe browser vulnerability. The more securely configured features your browser has, the better protected you are from online threats.

  11. Deploying mutation impact text-mining software with the SADI Semantic Web Services framework.

    Science.gov (United States)

    Riazanov, Alexandre; Laurila, Jonas Bergman; Baker, Christopher J O

    2011-01-01

    Mutation impact extraction is an important task designed to harvest relevant annotations from scientific documents for reuse in multiple contexts. Our previous work on text mining for mutation impacts resulted in (i) the development of a GATE-based pipeline that mines texts for information about impacts of mutations on proteins, (ii) the population of this information into our OWL DL mutation impact ontology, and (iii) establishing an experimental semantic database for storing the results of text mining. This article explores the possibility of using the SADI framework as a medium for publishing our mutation impact software and data. SADI is a set of conventions for creating web services with semantic descriptions that facilitate automatic discovery and orchestration. We describe a case study exploring and demonstrating the utility of the SADI approach in our context. We describe several SADI services we created based on our text mining API and data, and demonstrate how they can be used in a number of biologically meaningful scenarios through a SPARQL interface (SHARE) to SADI services. In all cases we pay special attention to the integration of mutation impact services with external SADI services providing information about related biological entities, such as proteins, pathways, and drugs. We have identified that SADI provides an effective way of exposing our mutation impact data such that it can be leveraged by a variety of stakeholders in multiple use cases. The solutions we provide for our use cases can serve as examples to potential SADI adopters trying to solve similar integration problems.

  12. Proposing a Framework for Exploration of Crime Data Using Web Structure and Content Mining

    Directory of Open Access Journals (Sweden)

    Amin Shahraki Moghaddam

    2013-10-01

    Full Text Available The purpose of this study is to propose a framework, implement the high-level architecture of a scalable universal crawler to address the reliability gap, and present the evaluation process of forensic data analysis of criminal suspects. For law enforcement agencies, criminal web data provide relevant, anonymously obtainable information: digital data from suspects' social networks are used in forensic analysis, but assessing these pieces of information is difficult. In practice, the operator must manually pull the relevant information out of the text on a website, find the links and classify them into a database structure; only then is the data set ready for testing with the various criminal network evaluation tools. This procedure is inefficient because it is error-prone, and the quality of the analyzed data depends on the expertise and experience of the investigator, so the reliability of the tests is not constant and better results come only from knowledgeable operators. The objective of this study is therefore to show the process of investigating criminal suspects through forensic data analysis and to close the reliability gap by proposing a structure and applying the high-level architecture of a scalable universal crawler.
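The core of any such universal crawler is the frontier logic: visit pages breadth-first, extract links, and deduplicate. The sketch below shows only that logic; the web is replaced by an invented in-memory page-to-links map so the example runs offline (a real crawler would fetch each URL and parse links out of the response instead).

```python
from collections import deque

# Stand-in for the web: page -> outgoing links (invented for the example).
FAKE_WEB = {
    "profile": ["posts", "friends"],
    "posts": ["post1", "post2"],
    "friends": ["profile"],          # cycle back to the seed
    "post1": [], "post2": [],
}

def crawl(seed, max_pages=10):
    """Breadth-first crawl from a seed page, visiting each URL at most once."""
    seen, order = {seed}, []
    frontier = deque([seed])
    while frontier and len(order) < max_pages:
        url = frontier.popleft()
        order.append(url)            # a real crawler would fetch/store here
        for link in FAKE_WEB.get(url, []):
            if link not in seen:     # deduplicate: cycles do not loop forever
                seen.add(link)
                frontier.append(link)
    return order

visited = crawl("profile")
```

The deduplication set is what makes the crawler robust to the link cycles typical of social network pages, and `max_pages` bounds the crawl for scalability testing.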

  13. Integrated Agile Development Framework for Web Applications

    Institute of Scientific and Technical Information of China (English)

    徐鹏; 李佩; 郝青

    2016-01-01

    To meet the requirements of web application development for higher efficiency and lower cost, agile development is widely accepted. But in traditional agile development procedures, project process management and the technical framework are isolated, which greatly limits the practical effects of agile development. An integrated web application framework for agile development, with high adaptability and flexibility, is put forward in this article. The integrated development framework consists of a process management framework and a technical framework in close collaboration. Based on the principles of agile development, the process management framework defines the goals and tasks to be completed at each stage of the application development process. The technical framework supports the whole lifecycle of web application development through reusable components, templates and software frameworks. Furthermore, the technical framework is deployed on PaaS, which greatly improves re-use efficiency. A series of applications developed using the framework demonstrates its effectiveness.

  14. Deep Learning in Bioinformatics

    OpenAIRE

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2016-01-01

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current res...

  15. An Integrated WebGIS Framework for Volunteered Geographic Information and Social Media in Soil and Water Conservation

    Science.gov (United States)

    Werts, Joshua D.; Mikhailova, Elena A.; Post, Christopher J.; Sharp, Julia L.

    2012-04-01

    Volunteered geographic information and social networking in a WebGIS has the potential to increase public participation in soil and water conservation, promote environmental awareness and change, and provide timely data that may be otherwise unavailable to policymakers in soil and water conservation management. The objectives of this study were: (1) to develop a framework for combining current technologies, computing advances, data sources, and social media; and (2) develop and test an online web mapping interface. The mapping interface integrates Microsoft Silverlight, Bing Maps, ArcGIS Server, Google Picasa Web Albums Data API, RSS, Google Analytics, and Facebook to create a rich user experience. The website allows the public to upload photos and attributes of their own subdivisions or sites they have identified and explore other submissions. The website was made available to the public in early February 2011 at http://www.AbandonedDevelopments.com and evaluated for its potential long-term success in a pilot study.

  17. A topological framework for interactive queries on 3D models in the Web.

    Science.gov (United States)

    Figueiredo, Mauro; Rodrigues, José I; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacent and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we present also the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time making the presented toolkit appropriate for interactive Web3D applications.
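The kind of adjacency and incidence index such a toolkit answers queries from can be sketched directly. This is an illustrative sketch, not TopTri's data structure: triangles are vertex-index triples (a tetrahedron here), and two maps record which faces contain each edge and each vertex.

```python
from collections import defaultdict

# A closed tetrahedron: 4 vertices, 4 triangular faces.
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

edge_faces = defaultdict(list)    # incidence: edge -> faces containing it
vertex_faces = defaultdict(list)  # incidence: vertex -> faces containing it
for f, (a, b, c) in enumerate(triangles):
    for u, v in ((a, b), (b, c), (a, c)):
        edge_faces[tuple(sorted((u, v)))].append(f)
    for vtx in (a, b, c):
        vertex_faces[vtx].append(f)

def adjacent_faces(face_id):
    """Adjacency query: faces sharing an edge with the given face."""
    a, b, c = triangles[face_id]
    out = set()
    for u, v in ((a, b), (b, c), (a, c)):
        out.update(edge_faces[tuple(sorted((u, v)))])
    out.discard(face_id)
    return out

# A mesh is manifold when no edge belongs to more than two faces.
manifold = all(len(fs) <= 2 for fs in edge_faces.values())
neighbors = adjacent_faces(0)
```

With the index built once on the server, queries like `adjacent_faces` are cheap lookups, which is why precomputed topology enables the interactive iso-line and local-minimum queries the paper describes.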

  18. Desarrollo del Front-End de una aplicación web con el framework EXT JS 5 [Front-end development of a web application with the EXT JS 5 framework]

    OpenAIRE

    Carrero Salcedo, Enrique

    2015-01-01

    This final degree project deals with the development of a web application using one of the most powerful JavaScript frameworks on the market, EXT JS 5, which is developed by Sencha. The report is divided into 9 chapters, which explain the basic knowledge a developer needs in order to start using EXT JS 5; the advantages and drawbacks of this framework compared with others; and a general description of how to develop the front-end of an appli...

  19. Semantic web services for web databases

    CERN Document Server

    Ouzzani, Mourad

    2011-01-01

    Semantic Web Services for Web Databases introduces an end-to-end framework for querying Web databases using novel Web service querying techniques. This includes a detailed framework for the query infrastructure for Web databases and services. Case studies are covered in the last section of this book. Semantic Web Services For Web Databases is designed for practitioners and researchers focused on service-oriented computing and Web databases.

  20. Web scraping technologies in an API world.

    Science.gov (United States)

    Glez-Peña, Daniel; Lourenço, Anália; López-Fernández, Hugo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino

    2014-09-01

    Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that Web services cannot fully cover. A number of Web databases and tools do not support Web services, and existing Web services do not cover all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still positioned to offer a valid and valuable service to a wide range of bioinformatics applications, ranging from simple extraction robots to online meta-servers. This article reviews existing scraping frameworks and tools, identifying their strengths and limitations in terms of extraction capabilities. The main focus is on showing how straightforward it is today to set up a data scraping pipeline, with minimal programming effort, and answer a number of practical needs. For exemplification purposes, we introduce a biomedical data extraction scenario where the desired data sources, well known in clinical microbiology and similar domains, do not yet offer programmatic interfaces. Moreover, we describe the operation of WhichGenes and PathJam, two bioinformatics meta-servers that use scraping as a means to cope with gene set enrichment analysis.
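    As a rough illustration of how little code such an extraction robot needs, here is a sketch using only Python's standard-library html.parser; the HTML snippet and the gene-link structure are hypothetical stand-ins for a fetched page (a real pipeline would read the markup with urllib.request):

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (text, href) pairs from anchor tags: the core of a scraping robot."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

# In a real pipeline the HTML would come from urllib.request.urlopen(url).read();
# a static snippet stands in for a fetched page here.
html = '<ul><li><a href="/gene/BRCA1">BRCA1</a></li><li><a href="/gene/TP53">TP53</a></li></ul>'
scraper = LinkScraper()
scraper.feed(html)
print(scraper.links)  # [('BRCA1', '/gene/BRCA1'), ('TP53', '/gene/TP53')]
```

    From the extracted pairs, a downstream step would typically normalize identifiers and feed them into an analysis workflow.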

  1. Mapping objects through a data motor NoSQL case study: framework for web applications development

    Directory of Open Access Journals (Sweden)

    Roger Calderón-Moreno

    2016-01-01

    Full Text Available This article emerged from an academic initiative observing that software development under the object-oriented programming (OOP) paradigm is confronted with a relational data-storage model, a mismatch that developers try to mitigate either through type conversions or through intermediate tools such as object-relational mapping, each with its own advantages and disadvantages. The project therefore explored the use of a non-relational (NoSQL) storage engine. With the design and development of the framework for generating Web applications, the user can define the objects to include in the application, which are stored in the MongoDB engine, which organizes data as documents. The dynamic structure of these documents can be used in many projects, including many that would traditionally be built on relational databases. To socialize and evaluate the work, instruments were designed to collect information from users experienced in databases and software development. The results highlight that software developers have a clear grasp of object persistence through object-relational mapping (ORM), that learning these development techniques by writing custom code or using APIs involves a high degree of complexity, and that most respondents (60%) are aware that such implementations degrade application performance. Respondents also welcomed alternatives for organizing and storing information beyond the relational approach used for many years.
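    The object-to-document mapping the article contrasts with ORM can be sketched in a few lines; the Student class and the in-memory collection below are hypothetical stand-ins for a MongoDB collection (real code would use a driver such as pymongo):

```python
class Document:
    """Minimal object-to-document mapping: every attribute set on the object
    becomes a field in the stored document, mirroring MongoDB's schemaless model."""
    def to_doc(self):
        return dict(vars(self), _type=type(self).__name__)

class InMemoryCollection:
    # stands in for a MongoDB collection; no fixed schema is declared anywhere
    def __init__(self):
        self.docs = []

    def insert(self, obj):
        self.docs.append(obj.to_doc())

    def find(self, **query):
        # match documents whose fields equal all query values
        return [d for d in self.docs
                if all(d.get(k) == v for k, v in query.items())]

class Student(Document):      # hypothetical domain object
    def __init__(self, name, program):
        self.name, self.program = name, program

col = InMemoryCollection()
col.insert(Student("Ana", "systems"))
col.insert(Student("Luis", "biology"))
print(col.find(program="systems")[0]["name"])  # Ana
```

    Because documents carry their own structure, two objects of different shapes can live in the same collection without any schema migration, which is the flexibility the article attributes to the NoSQL approach.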

  2. Enabling the dynamic coupling between sensor web and Earth system models - The Self-Adaptive Earth Predictive Systems (SEPS) framework

    Science.gov (United States)

    di, L.; Yu, G.; Chen, N.

    2007-12-01

    The self-adaptation concept is the central piece of the control theory widely and successfully used in engineering and military systems. Such a system contains a predictor and a measurer. The predictor takes an initial condition and makes an initial prediction, and the measurer then measures the state of a real-world phenomenon. A built-in feedback mechanism automatically feeds the measurement back to the predictor. The predictor compares the measurement with the prediction to calculate the prediction error and adjusts its internal state based on the error. Thus, the predictor learns from the error and makes a more accurate prediction in the next step. By adopting the self-adaptation concept, we proposed the Self-adaptive Earth Predictive System (SEPS) concept for enabling the dynamic coupling between the sensor web and Earth system models. The concept treats Earth System Models (ESM) and Earth Observations (EO) as integral components of the SEPS coupled by the SEPS framework. EO measures the Earth system state while ESM predicts the evolution of the state. A feedback mechanism processes EO measurements and feeds them into ESM during model runs or as initial conditions. A feed-forward mechanism analyzes the ESM predictions against science goals for scheduling optimized/targeted observations. The SEPS framework automates the feedback and feed-forward mechanisms (the FF-loop). Based on open consensus-based standards, a general SEPS framework can be developed to support the dynamic, interoperable coupling between ESMs and EO. Such a framework can support the plug-and-play capability of both ESMs and diverse sensors and data systems as long as they support the standard interfaces. This presentation discusses the SEPS concept, the service-oriented architecture (SOA) of the SEPS framework, the choice of standards for the framework, and the implementation. The presentation also presents examples of SEPS to demonstrate dynamic, interoperable, and live coupling of

  3. Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks

    OpenAIRE

    Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly

    2014-01-01

    Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc.

  4. A Framework for Synchronous Web-Based Professional Development: Measuring the Impact of Webinar Instruction

    Science.gov (United States)

    Rich, Rachel L.

    2011-01-01

    Through the evolution and proliferation of the Internet, distance and online education have become more prevalent in modern society. Synchronous web-based professional development continues to gain popularity. Although online education has grown in popularity and breadth, there has been a lack of research about the impact of synchronous web-based…

  5. cl-dash: rapid configuration and deployment of Hadoop clusters for bioinformatics research in the cloud.

    Science.gov (United States)

    Hodor, Paul; Chawla, Amandeep; Clark, Andrew; Neal, Lauren

    2016-01-15

    One of the solutions proposed for addressing the challenge of the overwhelming abundance of genomic sequence and other biological data is the use of the Hadoop computing framework. Appropriate tools are needed to set up computational environments that facilitate research of novel bioinformatics methodology using Hadoop. Here, we present cl-dash, a complete starter kit for setting up such an environment. Configuring and deploying new Hadoop clusters can be done in minutes. Use of Amazon Web Services ensures no initial investment and minimal operation costs. Two sample bioinformatics applications help the researcher understand and learn the principles of implementing an algorithm using the MapReduce programming pattern. Source code is available at https://bitbucket.org/booz-allen-sci-comp-team/cl-dash.git. hodor_paul@bah.com. © The Author 2015. Published by Oxford University Press.
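    The MapReduce programming pattern that cl-dash's sample applications teach can be sketched in pure Python; this toy k-mer count is an illustration under invented reads, not cl-dash's actual code:

```python
from itertools import groupby

def map_kmers(seq, k=3):
    # map step: emit (k-mer, 1) pairs from one sequencing read
    return [(seq[i:i + k], 1) for i in range(len(seq) - k + 1)]

def shuffle(pairs):
    # shuffle step: sort by key and group values, as Hadoop does between map and reduce
    pairs = sorted(pairs)
    return {key: [v for _, v in group]
            for key, group in groupby(pairs, key=lambda p: p[0])}

def reduce_counts(key, values):
    # reduce step: sum the partial counts for one k-mer
    return key, sum(values)

reads = ["ACGTAC", "GTACGT"]          # toy reads, not real data
mapped = [pair for read in reads for pair in map_kmers(read)]
counts = dict(reduce_counts(k, vs) for k, vs in shuffle(mapped).items())
print(counts)  # {'ACG': 2, 'CGT': 2, 'GTA': 2, 'TAC': 2}
```

    On a real cluster, the map and reduce functions run on different nodes and the shuffle is handled by the framework; the contract between the three steps is exactly what the sketch shows.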

  6. A Mobile Agent-based Web Page Constructing Framework MiPage

    Science.gov (United States)

    Fukuta, Naoki; Ozono, Tadachika; Shintani, Toramatsu

    In this paper, we present a programming framework, `MiPage', for realizing intelligent WWW applications based on mobile agent technology. In the framework, an agent is programmed using the hypertext markup language and a logic programming language. To realize the framework, we designed a new logic programming environment, `MiLog', and an agent program compiler, `MiPage Compiler'. The framework enables us to enhance both the richness of the services and the manageability of the application.

  7. COEUS: “semantic web in a box” for biomedical applications

    Directory of Open Access Journals (Sweden)

    Lopes Pedro

    2012-12-01

    Full Text Available Abstract Background As the “omics” revolution unfolds, the growth in data quantity and diversity is bringing about the need for pioneering bioinformatics software, capable of significantly improving the research workflow. To cope with these computer science demands, biomedical software engineers are adopting emerging semantic web technologies that better suit the life sciences domain. The latter’s complex relationships are easily mapped into semantic web graphs, enabling a superior understanding of collected knowledge. Despite increased awareness of semantic web technologies in bioinformatics, their use is still limited. Results COEUS is a new semantic web framework, aiming at a streamlined application development cycle and following a “semantic web in a box” approach. The framework provides a single package including advanced data integration and triplification tools, base ontologies, a web-oriented engine and a flexible exploration API. Resources can be integrated from heterogeneous sources, including CSV and XML files or SQL and SPARQL query results, and mapped directly to one or more ontologies. Advanced interoperability features include REST services, a SPARQL endpoint and LinkedData publication. These enable the creation of multiple applications for web, desktop or mobile environments, and empower a new knowledge federation layer. Conclusions The platform, targeted at biomedical application developers, provides a complete skeleton ready for rapid application deployment, enhancing the creation of new semantic information systems. COEUS is available as open source at http://bioinformatics.ua.pt/coeus/.

  8. Application of Spring Security Framework in Web Development

    Institute of Scientific and Technical Information of China (English)

    周文红; 晏素芬; 蒋玉芳; 邓朝晖

    2013-01-01

    This article introduces securing Web applications using the Spring Security framework. It focuses on how to write configuration snippets in a Web program to enable security checks. The framework's working core is an HTTP request filter chain. Spring Security supports users, passwords and role-based permissions, and it can secure both the Web presentation layer and the business layer, making it a good Web security solution for authentication and authorization.

  9. Attend in groups: a weakly-supervised deep learning framework for learning from web data

    OpenAIRE

    Zhuang, Bohan; Liu, Lingqiao; Li, Yao; Shen, Chunhua; Reid, Ian

    2016-01-01

    Large-scale datasets have driven the rapid development of deep neural networks for visual recognition. However, annotating a massive dataset is expensive and time-consuming. Web images and their labels are, in comparison, much easier to obtain, but direct training on such automatically harvested images can lead to unsatisfactory performance, because the noisy labels of Web images adversely affect the learned recognition models. To address this drawback we propose an end-to-end weakly-supervis...

  10. Reference framework for integrating web resources as e-learning services in .LRN

    Directory of Open Access Journals (Sweden)

    Fabinton Sotelo Gómez

    2015-11-01

    Full Text Available Learning management system (LMS) platforms such as dot LRN (.LRN) have been widely disseminated and used as teaching tools. However, despite their great potential, most of these platforms do not allow easy integration of common Web services. Integrating external resources into an LMS is critical to extending the quantity and quality of its educational services. This article presents a set of criteria and architectural guidelines for integrating Web resources for e-learning into the .LRN platform. To this end, three steps are performed: first, the possible integration technologies to be used are described; second, the Web resources that provide educational services and can be integrated into LMS platforms are analyzed; finally, the architectural aspects of the platform relevant to integration are identified. The main contributions of this paper are a characterization of the Web resources and educational services available today on the Web, and the definition of criteria and guidelines for integrating Web resources into .LRN.

  11. A Novel Framework for Medical Web Information Foraging Using Hybrid ACO and Tabu Search.

    Science.gov (United States)

    Drias, Yassine; Kechid, Samir; Pasi, Gabriella

    2016-01-01

    We present in this paper a novel approach based on multi-agent technology for Web information foraging. For this purpose we propose an architecture with two important phases. The first is a learning process for localizing the most relevant pages that might interest the user, performed on a fixed instance of the Web. The second takes into account the openness and dynamicity of the Web: it consists of incremental learning that starts from the result of the first phase and reshapes the outcomes to account for the changes the Web undergoes. The system was implemented using a colony of artificial ants hybridized with tabu search in order to achieve more effectiveness and efficiency. To validate our proposal, experiments were conducted on MedlinePlus, a real website dedicated to research in the health domain, in contrast to previous works where experiments were performed on web-log datasets. The main results are promising, both for those related to strong Web regularities and for the response time, which is very short and hence complies with the real-time constraint.

  12. The MPI Bioinformatics Toolkit for protein sequence analysis.

    Science.gov (United States)

    Biegert, Andreas; Mayer, Christian; Remmert, Michael; Söding, Johannes; Lupas, Andrei N

    2006-07-01

    The MPI Bioinformatics Toolkit is an interactive web service which offers access to a great variety of public and in-house bioinformatics tools. They are grouped into different sections that support sequence searches, multiple alignment, secondary and tertiary structure prediction and classification. Several public tools are offered in customized versions that extend their functionality. For example, PSI-BLAST can be run against regularly updated standard databases, customized user databases or selectable sets of genomes. Another tool, Quick2D, integrates the results of various secondary structure, transmembrane and disorder prediction programs into one view. The Toolkit provides a friendly and intuitive user interface with an online help facility. As a key feature, various tools are interconnected so that the results of one tool can be forwarded to other tools. One could run PSI-BLAST, parse out a multiple alignment of selected hits and send the results to a cluster analysis tool. The Toolkit framework and the tools developed in-house will be packaged and freely available under the GNU Lesser General Public Licence (LGPL). The Toolkit can be accessed at http://toolkit.tuebingen.mpg.de.

  13. Bioinformatics and Cancer

    Science.gov (United States)

    Researchers take on challenges and opportunities to mine "Big Data" for answers to complex biological questions. Learn how bioinformatics uses advanced computing, mathematics, and technological platforms to store, manage, analyze, and understand data.

  14. Deep learning in bioinformatics.

    Science.gov (United States)

    Min, Seonwoo; Lee, Byunghan; Yoon, Sungroh

    2016-07-29

    In the era of big data, transformation of biomedical big data into valuable knowledge has been one of the most important challenges in bioinformatics. Deep learning has advanced rapidly since the early 2000s and now demonstrates state-of-the-art performance in various fields. Accordingly, application of deep learning in bioinformatics to gain insight from data has been emphasized in both academia and industry. Here, we review deep learning in bioinformatics, presenting examples of current research. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues of deep learning in bioinformatics and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies.
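    As a toy illustration of the learning loop underlying the deep models reviewed here, a single logistic unit can be trained by gradient descent to flag GC-rich sequences; the sequences and hyperparameters below are invented for the sketch, and real models stack many such units into deep networks:

```python
import math

def features(seq):
    # base-composition features: fractions of A, C, G, T
    return [seq.count(b) / len(seq) for b in "ACGT"]

def train(data, epochs=500, lr=0.5):
    # stochastic gradient descent on a single logistic unit
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = 1 / (1 + math.exp(-z)) - y     # gradient of the cross-entropy loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# invented toy data: label 1 = GC-rich, label 0 = AT-rich
data = ([(features(s), 1) for s in ["GGCC", "GCGC", "CCGG"]]
        + [(features(s), 0) for s in ["AATT", "ATAT", "TTAA"]])
w, b = train(data)
z = sum(wi * xi for wi, xi in zip(w, features("GCCG"))) + b
p = 1 / (1 + math.exp(-z))
print(p > 0.5)  # True: the unit flags the unseen GC-rich sequence
```

    Deep architectures replace the hand-chosen composition features with layers of such units that learn their own representations, which is what gives them their edge on omics, imaging and signal data.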

  15. Knowledge sharing for pediatric pain management via a Web 2.0 framework.

    Science.gov (United States)

    Abidi, Syed Sibte Raza; Hussini, Salah; Sriraj, Wimorat; Thienthong, Somboon; Finley, G Allen

    2009-01-01

    The experiential knowledge of pediatric health practitioners encompasses vital insights into the clinical efficacy of diagnostic and therapeutic methods for pediatric pain management. Yet, this knowledge is not readily disseminated to other practitioners and translated into practice guidelines. We argue that a peer-to-peer knowledge sharing mechanism can serve as a key change agent to improve the attitudes, beliefs and methods for pediatric pain management. We are using collaborative technologies, in the realm of Web 2.0, to develop a web-based knowledge sharing medium for fostering a community of pediatric pain practitioners that engages in collaborative learning and problem solving. We present the design and use of a web portal featuring a discussion forum to facilitate experiential knowledge sharing based on our LINKS knowledge sharing model.

  16. Evaluating Web-Based Learning and Instruction (WBLI): A Case Study and Framework for Evaluation.

    Science.gov (United States)

    Michalski, Greg V.

    The purpose of this paper is to suggest an alternative approach to perform relevant and useful evaluations of Web-based learning and instruction (WBLI) that will accommodate performance and keep pace with the growing capabilities of the Internet. Discussion includes the advantages of WBLI, multimedia and streaming use in WBLI, building the…

  17. Application of probabilistic and fuzzy cognitive approaches in semantic web framework for medical decision support.

    Science.gov (United States)

    Papageorgiou, Elpiniki I; Huszka, Csaba; De Roo, Jos; Douali, Nassim; Jaulent, Marie-Christine; Colaert, Dirk

    2013-12-01

    This study focused on medical knowledge representation and reasoning using probabilistic and fuzzy influence processes, implemented in the semantic web, for decision support tasks. Bayesian belief networks (BBNs) and fuzzy cognitive maps (FCMs), as dynamic influence graphs, were applied to handle the task of medical knowledge formalization for decision support. In order to perform reasoning on these knowledge models, a general-purpose reasoning engine, EYE, with the necessary plug-ins was developed in the semantic web. The two formal approaches constitute the proposed decision support system (DSS), which aims to recognize the appropriate guidelines for a medical problem and to propose an easily understandable course of action to guide practitioners. The urinary tract infection (UTI) problem was selected as the proof-of-concept example to examine the proposed formalization techniques implemented in the semantic web. The medical guidelines for UTI treatment were formalized into BBN and FCM knowledge models. To assess the formal models' performance, 55 patient cases were extracted from a database and analyzed. The results showed that the suggested approaches formalized medical knowledge efficiently in the semantic web and provided a front-end decision suggesting antibiotics for UTI.
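    A minimal sketch of one FCM inference step, the iteration such knowledge models rely on; the three concepts and their weights below are illustrative, not taken from the cited UTI model:

```python
import math

def fcm_step(activations, weights):
    """One inference step of a fuzzy cognitive map:
    a_i' = sigmoid(a_i + sum over j != i of weights[j][i] * a_j)."""
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    n = len(activations)
    return [sig(activations[i] + sum(weights[j][i] * activations[j]
                                     for j in range(n) if j != i))
            for i in range(n)]

# Hypothetical 3-concept map: symptoms -> infection -> antibiotic decision.
W = [[0.0, 0.8, 0.0],   # symptoms reinforce the infection concept
     [0.0, 0.0, 0.9],   # infection drives the treatment decision
     [0.0, -0.4, 0.0]]  # treatment suppresses infection
state = [1.0, 0.0, 0.0]
for _ in range(10):      # iterate until the activations stabilize
    state = fcm_step(state, W)
print(all(0.0 < a < 1.0 for a in state))  # True: the sigmoid keeps activations in (0, 1)
```

    A BBN would instead propagate conditional probabilities over a directed acyclic graph; the FCM's signed weights are what make the causal influences easy for practitioners to read.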

  18. Towards Distributed Information Retrieval in the Semantic Web: Query Reformulation Using the oMAP Framework

    NARCIS (Netherlands)

    Straccia, U.; Troncy, R.

    2006-01-01

    This paper introduces a general methodology for performing distributed search in the Semantic Web. We propose to define this task as a three steps process, namely resource selection, query reformulation/ontology alignment and rank aggregation/data fusion. For the second problem, we have implemented

  19. An Intelligent Semantic E-Learning Framework Using Context-Aware Semantic Web Technologies

    Science.gov (United States)

    Huang, Weihong; Webster, David; Wood, Dawn; Ishaya, Tanko

    2006-01-01

    Recent developments of e-learning specifications such as Learning Object Metadata (LOM), Sharable Content Object Reference Model (SCORM), Learning Design and other pedagogy research in semantic e-learning have shown a trend of applying innovative computational techniques, especially Semantic Web technologies, to promote existing content-focused…

  1. A Study on Web Service Resource Framework

    Institute of Scientific and Technical Information of China (English)

    韩涛

    2007-01-01

    Against the background of the development of Web Services and Grid computing, this paper introduces the reasons behind the emergence of WSRF, then describes the concrete WSRF specifications, and finally compares WSRF with OGSI from the Grid domain and with WSMF from the Web services domain.

  2. A Conceptual Framework for a Web-based Knowledge Construction Support System.

    Science.gov (United States)

    Kang, Myunghee; Byun, Hoseung Paul

    2001-01-01

    Provides a conceptual model for a Web-based Knowledge Construction Support System (KCSS) that helps learners acquire factual knowledge and supports the construction of new knowledge through individual internalization and collaboration with other people. Considers learning communities, motivation, cognitive styles, learning strategies,…

  3. A WEB-BASED FRAMEWORK FOR VISUALIZING INDUSTRIAL SPATIOTEMPORAL DISTRIBUTION USING STANDARD DEVIATIONAL ELLIPSE AND SHIFTING ROUTES OF GRAVITY CENTERS

    Directory of Open Access Journals (Sweden)

    Y. Song

    2017-09-01

    Full Text Available Analysing the spatiotemporal distribution patterns of different industries and their dynamics can help us learn the macro-level developing trends of those industries and in turn provide references for industrial spatial planning. However, the analysis process is a challenging task that requires an easy-to-understand information presentation mechanism and a powerful computational technology to support visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such visual analytics. The framework uses the standard deviational ellipse (SDE) and shifting routes of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset in Mainland China from 1960 to 2015, which contains fine-grained location information (i.e., the coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment results show that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with a large data volume, such as crime or disease.
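    The two descriptive statistics the framework visualizes can be sketched directly; this follows one common formulation of the standard deviational ellipse, and the point coordinates are invented:

```python
import math

def gravity_center(points):
    # gravity (mean) center of a set of 2D points
    xs, ys = zip(*points)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def sde(points):
    """Standard deviational ellipse (one common formulation): returns the
    mean center, the rotation angle theta, and the standard deviations
    along the two rotated axes."""
    cx, cy = gravity_center(points)
    dx = [x - cx for x, _ in points]
    dy = [y - cy for _, y in points]
    a = sum(x * x for x in dx) - sum(y * y for y in dy)
    c = 2.0 * sum(x * y for x, y in zip(dx, dy))
    # tan(theta) = (A + sqrt(A^2 + C^2)) / C, with A and C as above
    theta = math.atan2(a + math.hypot(a, c), c)
    n = len(points)
    sx = math.sqrt(sum((x * math.cos(theta) - y * math.sin(theta)) ** 2
                       for x, y in zip(dx, dy)) / n)
    sy = math.sqrt(sum((x * math.sin(theta) + y * math.cos(theta)) ** 2
                       for x, y in zip(dx, dy)) / n)
    return (cx, cy), theta, (sx, sy)

# points on the line y = x: the ellipse should be rotated 45 degrees
center, theta, axes = sde([(0, 0), (1, 1), (2, 2)])
print(center, round(math.degrees(theta)))  # (1.0, 1.0) 45

# shifting route of gravity centers across two years (invented coordinates)
route = [gravity_center(pts) for pts in ([(0, 0), (2, 0)], [(1, 1), (3, 1)])]
print(route)  # [(1.0, 0.0), (2.0, 1.0)]
```

    In the paper's setting, each yearly set of enterprise coordinates yields one ellipse and one gravity center, and Spark parallelizes these per-year aggregations over the full registration dataset.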

  4. a Web-Based Framework for Visualizing Industrial Spatiotemporal Distribution Using Standard Deviational Ellipse and Shifting Routes of Gravity Centers

    Science.gov (United States)

    Song, Y.; Gui, Z.; Wu, H.; Wei, Y.

    2017-09-01

    Analysing the spatiotemporal distribution patterns of different industries and their dynamics can help us learn the macro-level developing trends of those industries and in turn provide references for industrial spatial planning. However, the analysis process is a challenging task that requires an easy-to-understand information presentation mechanism and a powerful computational technology to support visual analytics of big data on the fly. For this reason, this research proposes a web-based framework to enable such visual analytics. The framework uses the standard deviational ellipse (SDE) and shifting routes of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories. The calculation of gravity centers and ellipses is parallelized using Apache Spark to accelerate the processing. In the experiments, we use the enterprise registration dataset in Mainland China from 1960 to 2015, which contains fine-grained location information (i.e., the coordinates of each individual enterprise), to demonstrate the feasibility of this framework. The experiment results show that the developed visual analytics method is helpful for understanding the multi-level patterns and developing trends of different industries in China. Moreover, the proposed framework can be used to analyse any natural or social spatiotemporal point process with a large data volume, such as crime or disease.

  5. A simple versatile solution for collecting multidimensional clinical data based on the CakePHP web application framework.

    Science.gov (United States)

    Biermann, Martin

    2014-04-01

    Clinical trials aiming for regulatory approval of a therapeutic agent must be conducted according to Good Clinical Practice (GCP). Clinical Data Management Systems (CDMS) are specialized software solutions geared toward GCP trials. They are, however, less suited for data management in small non-GCP research projects. For use in researcher-initiated non-GCP studies, we developed a client-server database application based on the open-source CakePHP framework. The underlying MySQL database uses a simple data model based on only five data tables. The graphical user interface can be run in any web browser inside the hospital network. Data are validated upon entry. Data contained in external database systems can be imported interactively. Data are automatically anonymized on import, with the key lists identifying the subjects kept in a restricted part of the database. Data analysis is performed by separate statistics and analysis software connecting to the database via a generic Open Database Connectivity (ODBC) interface. Since its first pilot implementation in 2011, the solution has been applied to seven different clinical research projects covering different clinical problems in different organ systems, such as cancer of the thyroid and prostate glands. This paper shows how the adoption of a generic web application framework is a feasible, flexible, low-cost, and user-friendly way of managing multidimensional research data in researcher-initiated non-GCP clinical projects. Copyright © 2014 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
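    The anonymize-on-import behaviour described above can be sketched as follows; the record fields and code format are hypothetical, and a real system would persist the key list in a restricted database area rather than in memory:

```python
import itertools

class Anonymizer:
    """On import, replace subject identifiers with study codes; the key
    list mapping codes back to identities is kept separately (in the real
    system it would live in a restricted part of the database)."""
    def __init__(self):
        self.key_list = {}                 # study code -> original identifier
        self._codes = itertools.count(1)

    def anonymize(self, record):
        ident = record.pop("patient_id")   # hypothetical field name
        record["subject_code"] = self._code_for(ident)
        return record

    def _code_for(self, ident):
        for code, known in self.key_list.items():
            if known == ident:
                return code                # same subject -> same code
        code = "SUBJ-%04d" % next(self._codes)
        self.key_list[code] = ident
        return code

rows = [{"patient_id": "NO-123456", "tsh": 2.1},
        {"patient_id": "NO-123456", "tsh": 1.8}]
anon = Anonymizer()
out = [anon.anonymize(dict(r)) for r in rows]
print(out[0]["subject_code"], out[1]["subject_code"])  # SUBJ-0001 SUBJ-0001
```

    Keeping the code assignment deterministic per subject is what lets repeated measurements of the same patient remain linkable in the anonymized analysis tables.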

  6. Towards a Semantic Web of Things: A Hybrid Semantic Annotation, Extraction, and Reasoning Framework for Cyber-Physical System.

    Science.gov (United States)

    Wu, Zhenyu; Xu, Yuan; Yang, Yunong; Zhang, Chunhong; Zhu, Xinning; Ji, Yang

    2017-02-20

    Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for the further composition, collaboration, and decision-making processes in a CPS. Though several efforts have integrated semantics with WoT, such as knowledge engineering methods based on semantic sensor networks (SSN), these still cannot represent the complex relationships between devices when dynamic composition and collaboration occur, and they depend entirely on manual construction of a knowledge base, with low scalability. In this paper, to address these limitations, we propose the semantic Web of Things (SWoT) framework for CPS (SWoT4CPS). SWoT4CPS provides a hybrid solution combining ontological engineering methods that extend SSN with machine learning methods based on an entity linking (EL) model. To verify its feasibility and performance, we demonstrate the framework by implementing a temperature anomaly diagnosis and automatic control use case in a building automation system. Evaluation results on the EL method show that linking domain knowledge to DBpedia achieves relatively high accuracy and that the time complexity is at a tolerable level. Advantages and disadvantages of SWoT4CPS, along with future work, are also discussed.

  7. Towards a Semantic Web of Things: A Hybrid Semantic Annotation, Extraction, and Reasoning Framework for Cyber-Physical System

    Science.gov (United States)

    Wu, Zhenyu; Xu, Yuan; Yang, Yunong; Zhang, Chunhong; Zhu, Xinning; Ji, Yang

    2017-01-01

    Web of Things (WoT) facilitates the discovery and interoperability of Internet of Things (IoT) devices in a cyber-physical system (CPS). Moreover, a uniform knowledge representation of physical resources is quite necessary for the further composition, collaboration, and decision-making processes in a CPS. Though several efforts have integrated semantics with WoT, such as knowledge engineering methods based on semantic sensor networks (SSN), these still cannot represent the complex relationships between devices when dynamic composition and collaboration occur, and they depend entirely on manual construction of a knowledge base, with low scalability. In this paper, to address these limitations, we propose the semantic Web of Things (SWoT) framework for CPS (SWoT4CPS). SWoT4CPS provides a hybrid solution combining ontological engineering methods that extend SSN with machine learning methods based on an entity linking (EL) model. To verify its feasibility and performance, we demonstrate the framework by implementing a temperature anomaly diagnosis and automatic control use case in a building automation system. Evaluation results on the EL method show that linking domain knowledge to DBpedia achieves relatively high accuracy and that the time complexity is at a tolerable level. Advantages and disadvantages of SWoT4CPS, along with future work, are also discussed. PMID:28230725

  8. An Interactive Web-Based Analysis Framework for Remote Sensing Cloud Computing

    Science.gov (United States)

    Wang, X. Z.; Zhang, H. M.; Zhao, J. H.; Lin, Q. H.; Zhou, Y. C.; Li, J. H.

    2015-07-01

    Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. Providing an effective way for cloud users to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive, and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide end users with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed based on an open-source distributed file system. In it, massive remote sensing data are stored as public data, while the intermediate and input data are stored as private data. The elastic, scalable, and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker containers, open-source software such as IPython, NumPy, GDAL, and GRASS GIS is deployed. Users can write scripts in IPython Notebook web pages through the web browser to process data, and the scripts are submitted to the IPython kernel to be executed. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines, and physical machines, we conclude that the cloud computing environment built with Docker makes the greatest use of the host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for I/O, CPU, and memory, which offers a security guarantee when processing remote sensing data in IPython Notebook. Users can write

  9. Regulatory bioinformatics for food and drug safety.

    Science.gov (United States)

    Healy, Marion J; Tong, Weida; Ostroff, Stephen; Eichler, Hans-Georg; Patak, Alex; Neuspiel, Margaret; Deluyker, Hubert; Slikker, William

    2016-10-01

    "Regulatory Bioinformatics" strives to develop and implement a standardized and transparent bioinformatic framework to support the implementation of existing and emerging technologies in regulatory decision-making. It has great potential to improve public health through the development and use of clinically important medical products and tools to manage the safety of the food supply. However, the application of regulatory bioinformatics also poses new challenges and requires new knowledge and skill sets. In the latest Global Coalition on Regulatory Science Research (GCRSR) governed conference, Global Summit on Regulatory Science (GSRS2015), regulatory bioinformatics principles were presented with respect to global trends, initiatives and case studies. The discussion revealed that datasets, analytical tools, skills and expertise are rapidly developing, in many cases via large international collaborative consortia. It also revealed that significant research is still required to realize the potential applications of regulatory bioinformatics. While there is significant excitement in the possibilities offered by precision medicine to enhance treatments of serious and/or complex diseases, there is a clear need for further development of mechanisms to securely store, curate and share data, integrate databases, and standardized quality control and data analysis procedures. A greater understanding of the biological significance of the data is also required to fully exploit vast datasets that are becoming available. The application of bioinformatics in the microbiological risk analysis paradigm is delivering clear benefits both for the investigation of food borne pathogens and for decision making on clinically important treatments. It is recognized that regulatory bioinformatics will have many beneficial applications by ensuring high quality data, validated tools and standardized processes, which will help inform the regulatory science community of the requirements

  10. UbiUnity: a Dynamic Visual Simulation Framework for Web of Things

    OpenAIRE

    Lavirotte, Stéphane; Tigli, Jean-Yves; Rocher, Gérald; El Beze, Léa; Palma, Adam

    2015-01-01

    International audience; The development of smart spaces is a complex and challenging task. The choice of suitable sensors and actuators to deploy in these physical testbeds is difficult without experimentation. Moreover, several challenges still remain in improving and testing new fields of application based on Web of Things (WoT). In this paper, we present UbiUnity, a dynamic visual simulator environment which can be used during the design phase of smart spaces. Our approach allows to define...

  11. Development of a change framework to study SME web site evolution

    OpenAIRE

    Alonso Mendo, Fernando

    2007-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. It has been suggested that the adoption of e-commerce by Small and Medium-sized Enterprises (SMEs) follows a sequence of stages with each representing increasing complexity and benefits. These models imply a development of their web sites in successive iterations or redesigns from basic use of the Internet (as a marketing tool) to the most advanced level of sophistication and integration....

  12. BigExcel : a web-based framework for exploring big data in Social Sciences

    OpenAIRE

    2015-01-01

    This research was pursued through an Amazon Web Services Education Research Grant. The first author was the recipient of an Erasmus Mundus scholarship. This paper argues that there are three fundamental challenges that need to be overcome in order to foster the adoption of big data technologies in non-computer science related disciplines: addressing issues of accessibility of such technologies for non-computer scientists, supporting the ad hoc exploration of large data sets with minimal ef...

  13. A Framework for Web-Based Interprofessional Education for Midwifery and Medical Students.

    Science.gov (United States)

    Reis, Pamela J; Faser, Karl; Davis, Marquietta

    2015-01-01

    Scheduling interprofessional team-based activities for health sciences students who are geographically dispersed, with divergent and often competing schedules, can be challenging. The use of Web-based technologies such as 3-dimensional (3D) virtual learning environments in interprofessional education is a relatively new phenomenon, which offers promise in helping students come together in online teams when face-to-face encounters are not possible. The purpose of this article is to present the experience of a nurse-midwifery education program in a Southeastern US university in delivering Web-based interprofessional education for nurse-midwifery and third-year medical students utilizing the Virtual Community Clinic Learning Environment (VCCLE). The VCCLE is a 3D, Web-based, asynchronous, immersive clinic environment into which students enter to meet and interact with instructor-controlled virtual patient and virtual preceptor avatars and then move through a classic diagnostic sequence in arriving at a plan of care for women throughout the lifespan. By participating in the problem-based management of virtual patients within the VCCLE, students learn both clinical competencies and competencies for interprofessional collaborative practice, as described by the Interprofessional Education Collaborative Core Competencies for Interprofessional Collaborative Practice. This article is part of a special series of articles that address midwifery innovations in clinical practice, education, interprofessional collaboration, health policy, and global health. © 2015 by the American College of Nurse-Midwives.

  14. Chemistry in Bioinformatics

    Science.gov (United States)

    Murray-Rust, Peter; Mitchell, John BO; Rzepa, Henry S

    2005-01-01

    Chemical information is now seen as critical for most areas of the life sciences. But unlike bioinformatics, where data are openly available and freely re-usable, most chemical information is closed and cannot be re-distributed without permission. This has led to a failure to adopt modern informatics and software techniques, and therefore a paucity of chemistry in bioinformatics. New technology, however, offers the hope of making chemical data (compounds and properties) free during the authoring process. We argue that the technology is already available; we require a collective agreement to enhance publication protocols. PMID:15941476

  15. Chemistry in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Mitchell John

    2005-06-01

    Full Text Available Abstract Chemical information is now seen as critical for most areas of the life sciences. But unlike bioinformatics, where data are openly available and freely re-usable, most chemical information is closed and cannot be re-distributed without permission. This has led to a failure to adopt modern informatics and software techniques, and therefore a paucity of chemistry in bioinformatics. New technology, however, offers the hope of making chemical data (compounds and properties) free during the authoring process. We argue that the technology is already available; we require a collective agreement to enhance publication protocols.

  16. Web Page Recommendation Using Web Mining

    Directory of Open Access Journals (Sweden)

    Modraj Bhavsar

    2014-07-01

    Full Text Available On the World Wide Web, huge amounts of content of various kinds are generated, so web recommendation has become an important part of web applications for returning relevant results to users. Different kinds of web recommendations are made available to users every day, including images, video, audio, query suggestions and web pages. In this paper we aim at providing a framework for web page recommendation. (1) First we describe the basics of web mining and the types of web mining; (2) we then detail each web mining technique; (3) we propose an architecture for personalized web page recommendation.

  17. AN INTERACTIVE WEB-BASED ANALYSIS FRAMEWORK FOR REMOTE SENSING CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    X. Z. Wang

    2015-07-01

    Full Text Available Spatiotemporal data, especially remote sensing data, are widely used in ecological, geographical, agricultural, and military research and applications. With the development of remote sensing technology, more and more remote sensing data are accumulated and stored in the cloud. An effective way for cloud users to access and analyse these massive spatiotemporal data in web clients has become an urgent issue. In this paper, we propose a new scalable, interactive and web-based cloud computing solution for massive remote sensing data analysis. We build a spatiotemporal analysis platform to provide the end user with a safe and convenient way to access massive remote sensing data stored in the cloud. The lightweight cloud storage system used to store public data and users' private data is constructed on an open-source distributed file system: massive remote sensing data are stored as public data, while intermediate and input data are stored as private data. The elastic, scalable and flexible cloud computing environment is built using Docker, an open-source lightweight container technology for the Linux operating system. In the Docker container, open-source software such as IPython, NumPy, GDAL and GRASS GIS is deployed. Users can write scripts in the IPython Notebook web page through the web browser to process data, and the scripts will be submitted to the IPython kernel for execution. By comparing the performance of remote sensing data analysis tasks executed in Docker containers, KVM virtual machines and physical machines, we conclude that the cloud computing environment built with Docker makes the best use of host system resources and can handle more concurrent spatiotemporal computing tasks. Docker provides resource isolation for IO, CPU and memory, which offers a security guarantee when processing remote sensing data in the IPython Notebook.
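    Raster analyses of the kind this platform targets reduce to array arithmetic once a band has been read (for example via GDAL). A minimal NumPy sketch of one such task, the NDVI vegetation index, with tiny invented arrays standing in for real bands:

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalised Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Compute only where the denominator is non-zero; leave zeros elsewhere.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Tiny synthetic 2x2 "tiles" standing in for real GDAL-read bands.
red = np.array([[10, 20], [30, 0]])
nir = np.array([[50, 60], [30, 0]])
print(ndvi(red, nir))
```

In a notebook-based workflow like the one described, the same function would be applied tile by tile to bands read from the cloud storage layer.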

  18. HOME USERS SECURITY AND THE WEB BROWSER INBUILT SETTINGS, FRAMEWORK TO SETUP IT AUTOMATICALLY

    OpenAIRE

    Mohammed Serrhini; Abdelazziz Ait Moussa

    2013-01-01

    We are living in the electronic age, where electronic transactions such as e-mail, e-banking, e-commerce and e-learning are becoming more and more prominent. To access these services online, the web browser is today almost the only software used. These days, hackers know that browsers are installed on all computers and can be used to compromise a machine by distributing malware via malicious or hacked websites. These sites also use JavaScript to manipulate browsers and can drive user system t...

  19. A Management Framework for Web Service QoS Based on WSDL

    Institute of Scientific and Technical Information of China (English)

    李璐

    2014-01-01

    With the development of the Internet, web services are widely used, and choosing a web service that satisfies users' non-functional requirements is an important problem. To address this problem, a management framework for web service QoS based on WSDL is presented. It introduces multi-dimensional QoS to comprehensively analyse the QoS requirements of web services and defines multi-dimensional QoS attributes for them; on this basis, a WSDL-based QoS management framework for web services is implemented.
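    One common way to operationalize multi-dimensional QoS of the kind described above is min-max normalisation plus a weighted sum; the paper's exact scoring scheme is not given here, so the attributes, weights and values below are invented for illustration:

```python
# Rank candidate services on several QoS attributes: normalise each
# attribute to [0, 1], invert "cost-type" attributes (lower is better),
# then take a weighted sum. All names and numbers are illustrative.
def qos_score(services, weights, cost_attrs):
    names = list(weights)
    lo = {a: min(s[a] for s in services.values()) for a in names}
    hi = {a: max(s[a] for s in services.values()) for a in names}
    scores = {}
    for svc, attrs in services.items():
        total = 0.0
        for a in names:
            span = hi[a] - lo[a] or 1.0
            norm = (attrs[a] - lo[a]) / span
            if a in cost_attrs:  # lower response time / price is better
                norm = 1.0 - norm
            total += weights[a] * norm
        scores[svc] = total
    return scores

services = {
    "svc1": {"availability": 0.99, "response_ms": 120, "price": 5},
    "svc2": {"availability": 0.95, "response_ms": 40, "price": 9},
}
weights = {"availability": 0.5, "response_ms": 0.3, "price": 0.2}
scores = qos_score(services, weights, cost_attrs={"response_ms", "price"})
print(scores)
```

Such scores could be published alongside the WSDL description so that selection happens at binding time.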

  20. The Aspergillus Mine - publishing bioinformatics

    DEFF Research Database (Denmark)

    Vesth, Tammi Camilla; Rasmussen, Jane Lind Nybo; Theobald, Sebastian

    so with no computational specialist. Here we present a setup for analysis and publication of genome data of 70 species of Aspergillus fungi. The platform is based on R, Python and uses the RShiny framework to create interactive web‐applications. It allows all participants to create interactive...... analysis which can be shared with the team and in connection with publications. We present analysis for investigation of genetic diversity, secondary and primary metabolism and general data overview. The platform, the Aspergillus Mine, is a collection of analysis tools based on data from collaboration...... with the Joint Genome Institute. The Aspergillus Mine is not intended as a genomic data sharing service but instead focuses on creating an environment where the results of bioinformatic analysis is made available for inspection. The data and code is public upon request and figures can be obtained directly from...

  1. A Web-based Collaborative System Framework for Green Building Certification

    Directory of Open Access Journals (Sweden)

    Jack C.P. Cheng

    2013-11-01

    Full Text Available The built environment is moving towards sustainable development, and the number of green buildings has increased worldwide in recent years. Green buildings are environmentally, socially and economically desirable; however, the certification of green buildings is often expensive and labor-intensive. The document preparation and review process for green building certification is iterative in nature and requires the collaboration of many project participants, certification organizations and third-party engineering consultants. This paper aims at developing a system framework that can assist the green building certification process. Different certification standards have been studied, and their scope and credit calculation were compared to understand the requirements of the system framework. Based on the study, the required features were designed as follows: (1) role-based access control, (2) document manager and related document discovery, (3) workflow manager, (4) credit manager, and (5) knowledge manager. The features of the system provide a collaborative environment and reduce the repetitive work commonly occurring in manual processes. The features of the system framework are discussed and illustrated considering the Hong Kong BEAM Plus standard. This paper also describes how the system framework enhances collaboration in the certification process.

  2. Web Tutorials on Systems Thinking Using the Driver-Pressure-State-Impact-Response (DPSIR) Framework

    Science.gov (United States)

    This set of tutorials provides an overview of incorporating systems thinking into decision-making, an introduction to the DPSIR framework as one approach that can assist in the decision analysis process, and an overview of DPSIR tools, including concept mapping and keyword lists,...

  3. Designing Multi-Channel Web Frameworks for Cultural Tourism Applications: The MUSE Case Study.

    Science.gov (United States)

    Garzotto, Franca; Salmon, Tullio; Pigozzi, Massimiliano

    A framework for the design of multi-channel (MC) applications in the cultural tourism domain is presented. Several heterogeneous interface devices are supported including location-sensitive mobile units, on-site stationary devices, and personalized CDs that extend the on-site experience beyond the visit time thanks to personal memories gathered…

  4. Towards a career in bioinformatics.

    Science.gov (United States)

    Ranganathan, Shoba

    2009-12-03

    The 2009 annual conference of the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, founded in 1998, was organized as the 8th International Conference on Bioinformatics (InCoB), Sept. 9-11, 2009 at Biopolis, Singapore. InCoB has actively engaged researchers from the areas of life sciences and systems biology, as well as clinicians, to facilitate greater synergy between these groups. To encourage bioinformatics students and new researchers, tutorials and a student symposium, the Singapore Symposium on Computational Biology (SYMBIO), were organized, along with the Workshop on Education in Bioinformatics and Computational Biology (WEBCB) and the Clinical Bioinformatics (CBAS) Symposium. However, to many students and young researchers, pursuing a career in a multi-disciplinary area such as bioinformatics poses a Himalayan challenge. A collection of tips is presented here to provide signposts on the road to a career in bioinformatics. An overview of the application of bioinformatics to traditional and emerging areas, published in this supplement, is also presented to suggest possible future avenues of bioinformatics investigation. A case study on the application of e-learning tools in an undergraduate bioinformatics curriculum provides information on how to impart targeted education to sustain bioinformatics in the Asia-Pacific region. The next InCoB is scheduled to be held in Tokyo, Japan, Sept. 26-28, 2010.

  5. Moby and Moby 2: creatures of the deep (web).

    Science.gov (United States)

    Vandervalk, Ben P; McCarthy, E Luke; Wilkinson, Mark D

    2009-03-01

    Facile and meaningful integration of data from disparate resources is the 'holy grail' of bioinformatics. Some resources have begun to address this problem by providing their data using Semantic Web standards, specifically the Resource Description Framework (RDF) and the Web Ontology Language (OWL). Unfortunately, adoption of Semantic Web standards has been slow overall, and even in cases where the standards are being utilized, interconnectivity between resources is rare. In response, we have seen the emergence of centralized 'semantic warehouses' that collect public data from third parties, integrate it, translate it into OWL/RDF and provide it to the community as a unified and queryable resource. One limitation of the warehouse approach is that queries are confined to the resources that have been selected for inclusion. A related problem, perhaps of greater concern, is that the majority of bioinformatics data exists in the 'Deep Web'-that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. The inability to utilize Uniform Resource Identifiers (URIs) to address this data is a barrier to its accessibility via URI-centric Semantic Web technologies. Here we examine 'The State of the Union' for the adoption of Semantic Web standards in the health care and life sciences domain by key bioinformatics resources, explore the nature and connectivity of several community-driven semantic warehousing projects, and report on our own progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the Deep Web transparently accessible through SPARQL queries.
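    The RDF model underlying these efforts is simply subject-predicate-object triples queried by pattern matching. A toy in-memory illustration (not a real SPARQL engine; the identifiers are invented for the example):

```python
# A toy triple store illustrating the RDF data model: facts are stored as
# (subject, predicate, object) triples and queried by pattern matching,
# with None playing the role of a SPARQL variable.
triples = {
    ("ex:BRCA1", "ex:encodes", "ex:BRCA1_protein"),
    ("ex:BRCA1_protein", "ex:involvedIn", "ex:DNA_repair"),
    ("ex:TP53", "ex:involvedIn", "ex:DNA_repair"),
}

def match(pattern, store):
    """Return every triple consistent with the pattern (None = wildcard)."""
    s, p, o = pattern
    return [
        t for t in store
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# "Which subjects are involved in DNA repair?"
hits = match((None, "ex:involvedIn", "ex:DNA_repair"), triples)
print(sorted(t[0] for t in hits))
```

Real deployments use RDF libraries and SPARQL endpoints, but the data shape is exactly this.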

  6. A Web Based Framework for Pre-release Testing of Mobile Applications

    Directory of Open Access Journals (Sweden)

    Hamdy Abeer

    2016-01-01

    Full Text Available Mobile applications are becoming an integral part of daily life and of businesses' marketing plans. They are helpful in promoting a business and in attracting and retaining customers. Software testing is vital to ensure the delivery of high-quality mobile applications that can be accessed across different platforms and meet business and technical requirements. This paper proposes a web-based tool, namely Pons, for the distribution of pre-release mobile applications for the purpose of manual testing. Pons facilitates building, running, and manually testing Android applications directly in the browser. It gets developers and end users engaged in testing the applications in one place, alleviates the tester's burden of installing and maintaining testing environments, and provides a platform for developers to rapidly iterate on the software and integrate changes over time. Thus, it speeds up the pre-release testing process, reduces its cost and increases customer satisfaction.

  7. Bioinformatics for Exploration

    Science.gov (United States)

    Johnson, Kathy A.

    2006-01-01

    For the purpose of this paper, bioinformatics is defined as the application of computer technology to the management of biological information. It can be thought of as the science of developing computer databases and algorithms to facilitate and expedite biological research. This is a crosscutting capability that supports nearly all human health areas ranging from computational modeling, to pharmacodynamics research projects, to decision support systems within autonomous medical care. Bioinformatics serves to increase the efficiency and effectiveness of the life sciences research program. It provides data, information, and knowledge capture which further supports management of the bioastronautics research roadmap - identifying gaps that still remain and enabling the determination of which risks have been addressed.

  8. Feature selection in bioinformatics

    Science.gov (United States)

    Wang, Lipo

    2012-06-01

    In bioinformatics, there are often a large number of input features. For example, there are millions of single nucleotide polymorphisms (SNPs), genetic variations which determine the difference between any two unrelated individuals. In microarrays, thousands of genes can be profiled in each test. It is important to find out which input features (e.g., SNPs or genes) are useful in the classification of a certain group of people or the diagnosis of a given disease. In this paper, we investigate some powerful feature selection techniques and apply them to problems in bioinformatics. We are able to identify a very small number of input features sufficient for the tasks at hand, and we demonstrate this with some real-world data.
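    Filter-style selection of the sort described can be as simple as scoring each feature by between-class separation. A pure-Python sketch on invented data, using a crude t-like statistic (not the authors' specific method):

```python
# Score each feature by how far apart the two class means are relative
# to the pooled spread, then keep the top-ranked features. This is the
# filter approach often used for gene/SNP selection; data are invented.
def feature_scores(samples, labels):
    n_features = len(samples[0])
    scores = []
    for j in range(n_features):
        a = [s[j] for s, y in zip(samples, labels) if y == 0]
        b = [s[j] for s, y in zip(samples, labels) if y == 1]
        mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
        var_a = sum((x - mean_a) ** 2 for x in a) / len(a)
        var_b = sum((x - mean_b) ** 2 for x in b) / len(b)
        spread = (var_a + var_b) ** 0.5 or 1e-12  # guard against zero spread
        scores.append(abs(mean_a - mean_b) / spread)
    return scores

# Feature 0 separates the two classes cleanly; feature 1 is noise.
X = [[1.0, 5.0], [1.2, 3.0], [9.8, 4.0], [10.1, 4.5]]
y = [0, 0, 1, 1]
scores = feature_scores(X, y)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)
```

With millions of SNPs the same ranking idea applies, just with statistics and multiple-testing corrections suited to the data.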

  9. A web-based collaborative framework for facilitating decision making on a 3D design developing process

    Directory of Open Access Journals (Sweden)

    Purevdorj Nyamsuren

    2015-07-01

    Full Text Available Increased competitive challenges are forcing companies to find better ways to bring their applications to market faster. Distributed development environments can help companies improve their time-to-market by enabling parallel activities. However, such environments still have limitations in real-time communication and real-time collaboration during the product development process. This paper describes a web-based collaborative framework that has been developed to support decision making in a 3D design development process. The paper describes a 3D design file for discussion that contains all relevant annotations on its surface, and their visualization in the user interface for design changes. The framework includes a native CAD data conversion module, a real-time communication module based on 3D data, a revision control module for 3D data, and sub-modules such as data storage and data management. We also discuss some issues raised in the project and the steps underway to address them.

  10. Tool Integration Framework for Bio-Informatics

    Science.gov (United States)

    2007-04-01


  11. Distributed computing in bioinformatics.

    Science.gov (United States)

    Jain, Eric

    2002-01-01

    This paper provides an overview of methods and current applications of distributed computing in bioinformatics. Distributed computing is a strategy of dividing a large workload among multiple computers to reduce processing time, or to make use of resources such as programs and databases that are not available on all computers. Participating computers may be connected either through a local high-speed network or through the Internet.
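    The split-apply-combine idea described here scales down to a single machine with Python's standard multiprocessing module; a sketch dividing a toy GC-content workload among worker processes:

```python
# Divide a sequence-analysis workload among local worker processes,
# the same strategy the overview describes (a single-machine sketch;
# real deployments span many networked hosts). Sequences are invented.
from multiprocessing import Pool

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

if __name__ == "__main__":
    sequences = ["ATGC", "GGGG", "ATAT", "GCGC"]
    with Pool(processes=2) as pool:
        # The pool splits the list across workers and reassembles results.
        results = pool.map(gc_content, sequences)
    print(results)
```

Replacing the local pool with a cluster scheduler or grid middleware changes where the workers run, not the shape of the program.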

  12. Advance in structural bioinformatics

    CERN Document Server

    Wei, Dongqing; Zhao, Tangzhen; Dai, Hao

    2014-01-01

    This text examines in detail mathematical and physical modeling, computational methods and systems for obtaining and analyzing biological structures, using pioneering research cases as examples. As such, it emphasizes programming and problem-solving skills. It provides information on structural bioinformatics at various levels, with individual chapters covering introductory to advanced aspects, from fundamental methods and guidelines on acquiring and analyzing genomics and proteomics sequences, the structures of protein, DNA and RNA, to the basics of physical simulations and methods for conform

  13. Phylogenetic trees in bioinformatics

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom L [Los Alamos National Laboratory

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of even a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes the study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, the available methods and software, and identifies areas for additional research and development.
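    Distance-based tree estimation can be illustrated with UPGMA, one of the simplest such methods: repeatedly join the closest pair of clusters and average their distances. A minimal sketch on an invented three-taxon distance matrix (real inference uses the substitution models and tree searches described above):

```python
# UPGMA clustering sketch: join the two closest taxa/clusters, replace
# them with a merged cluster whose distances are size-weighted averages,
# and repeat until one tree remains (as nested tuples). Toy distances.
def upgma(dist, names):
    """dist: dict mapping frozenset({label, label}) -> distance."""
    clusters = {n: n for n in names}  # cluster label -> subtree
    sizes = {n: 1 for n in names}
    while len(clusters) > 1:
        # Find the closest pair of current clusters.
        a, b = min(
            ((x, y) for x in clusters for y in clusters if x < y),
            key=lambda p: dist[frozenset(p)],
        )
        merged = a + b
        clusters[merged] = (clusters.pop(a), clusters.pop(b))
        sizes[merged] = sizes[a] + sizes[b]
        # Size-weighted average distance from the new cluster to the rest.
        for c in clusters:
            if c != merged:
                dist[frozenset((merged, c))] = (
                    dist[frozenset((a, c))] * sizes[a]
                    + dist[frozenset((b, c))] * sizes[b]
                ) / sizes[merged]
    return next(iter(clusters.values()))

names = ["A", "B", "C"]
dist = {frozenset(p): d for p, d in
        [(("A", "B"), 2.0), (("A", "C"), 8.0), (("B", "C"), 8.0)]}
tree = upgma(dist, names)
print(tree)  # A and B, the closest pair, join first
```

Likelihood-based methods replace the distance averaging with substitution-model scoring, but the combinatorial search over topologies is what makes the problem hard.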

  14. A collaborative (web-GIS) framework based on empirical data collected from three case studies in Europe for risk management of hydro-meteorological hazards

    NARCIS (Netherlands)

    Aye, Z.C.; Sprague, T.; Cortes Arevalo, V.J.; Prenger-Berninghoff, K.; Jaboyedoff, M.; Derron, M.H.

    2016-01-01

    This paper presents a collaborative framework of an interactive web-GIS platform integrated with a multi-criteria evaluation tool. The platform aims to support the engagement of different stakeholders and the encouragement of a collaborative, decision-making process for flood and landslide management.

  15. State of the nation in data integration for bioinformatics.

    Science.gov (United States)

    Goble, Carole; Stevens, Robert

    2008-10-01

    Data integration is a perennial issue in bioinformatics, with many systems being developed and many technologies offered as a panacea for its resolution. The fact that it is still a problem indicates a persistence of underlying issues. Progress has been made, but we should ask "what lessons have been learnt?", and "what still needs to be done?" Semantic Web and Web 2.0 technologies are the latest to find traction within bioinformatics data integration. Now we can ask whether the Semantic Web, mashups, or their combination, have the potential to help. This paper is based on the opening invited talk by Carole Goble given at the Health Care and Life Sciences Data Integration for the Semantic Web Workshop collocated with WWW2007. The paper expands on that talk. We attempt to place some perspective on past efforts, highlight the reasons for success and failure, and indicate some pointers to the future.

  16. Survey of MapReduce frame operation in bioinformatics.

    Science.gov (United States)

    Zou, Quan; Li, Xu-Bin; Jiang, Wen-Rui; Lin, Zi-Yu; Li, Gui-Lin; Chen, Ke

    2014-07-01

    Bioinformatics is challenged by the fact that traditional analysis tools have difficulty in processing large-scale data from high-throughput sequencing. The open source Apache Hadoop project, which adopts the MapReduce framework and a distributed file system, has recently given bioinformatics researchers an opportunity to achieve scalable, efficient and reliable computing performance on Linux clusters and on cloud computing services. In this article, we present MapReduce frame-based applications that can be employed in the next-generation sequencing and other biological domains. In addition, we discuss the challenges faced by this field as well as the future works on parallel computing in bioinformatics.
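    The MapReduce pattern itself is small; an in-process Python sketch of k-mer counting over toy reads, with the same map, shuffle and reduce phases a Hadoop cluster would distribute:

```python
# MapReduce in miniature: map each sequencing read to (k-mer, 1) pairs,
# shuffle pairs into per-key groups, then reduce each group by summing.
# Hadoop runs these same phases across a cluster; reads here are toys.
from collections import defaultdict

def map_phase(read, k=3):
    for i in range(len(read) - k + 1):
        yield read[i:i + k], 1

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {kmer: sum(counts) for kmer, counts in groups.items()}

reads = ["ATGCA", "TGCAT"]
pairs = [p for read in reads for p in map_phase(read)]
counts = reduce_phase(shuffle(pairs))
print(counts["TGC"])  # this 3-mer appears in both reads
```

The scalability the survey discusses comes from running the map and reduce phases on many nodes with the shuffle handled by the framework.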

  17. Web scraping technologies in an API world

    OpenAIRE

    Glez-Peña, Daniel; Lourenço, Anália; López-Fernández, Hugo; Reboiro-Jato, Miguel; Fdez-Riverola, Florentino

    2014-01-01

    Web services are the de facto standard in biomedical data integration. However, there are data integration scenarios that cannot be fully covered by Web services. A number of Web databases and tools do not support Web services, and existing Web services do not cover for all possible user data demands. As a consequence, Web data scraping, one of the oldest techniques for extracting Web contents, is still in position to offer a valid and valuable service to a wide range of bioinformatics applic...
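    Web scraping in its simplest form is parsing fetched HTML for the pieces of interest; a standard-library sketch extracting hyperlink targets (the snippet stands in for a page that urllib would fetch):

```python
# Minimal scraper in the spirit described above: walk an HTML page with
# the standard-library parser and collect every <a href="..."> target.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Stand-in for HTML fetched from a web database without an API.
page = ('<p>See <a href="https://www.uniprot.org">UniProt</a> and '
        '<a href="https://www.ensembl.org">Ensembl</a>.</p>')
parser = LinkExtractor()
parser.feed(page)
print(parser.links)
```

Production scrapers add fetching, rate limiting and robustness to markup changes, which is exactly why the article treats scraping as a fallback when no Web service exists.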

  18. Research of Web Service Technology under Enterprise Framework%企业架构下Web Service技术的研究

    Institute of Scientific and Technical Information of China (English)

    谢景伟

    2011-01-01

    Web Service是一种面向服务的分布式计算模式,具有良好的、真正意义上的系统平台异构性和语言的独立性.本文首先介绍了Web Service的基本概念和特点.阐述了Web Service的核心技术:SOAP,WSDL,UDDI.最后分析了Web Service的安全性.

  19. The growing need for microservices in bioinformatics

    Directory of Open Access Journals (Sweden)

    Christopher L Williams

    2016-01-01

    Full Text Available Objective: Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, rendered overall solutions utilizing a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Context: Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. Aims: To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Conclusions: Use of the microservices framework

  20. The growing need for microservices in bioinformatics.

    Science.gov (United States)

    Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J

    2016-01-01

    Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and, at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality as compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds the potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework which can adapt to changing requirements. To present a well-established software design and deployment strategy as a solution for current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and
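
    The single-purpose service described above can be sketched in a few lines using only the standard library; the `/gc` endpoint, JSON payload shape, and port below are illustrative assumptions, not part of the original work.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    if not seq:
        return 0.0
    return sum(base in "GC" for base in seq) / len(seq)

class GCServiceHandler(BaseHTTPRequestHandler):
    """A microservice limited in functional scope: one endpoint, one computation."""

    def do_POST(self):
        if self.path != "/gc":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        seq = json.loads(self.rfile.read(length))["sequence"]
        body = json.dumps({"gc": gc_content(seq)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("localhost", 8080), GCServiceHandler).serve_forever()
```

    Because the service owns a single computation, it can be replaced or redeployed without touching the rest of the pipeline, which is the functional-isolation property the abstract describes.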

  1. Implementing bioinformatic workflows within the bioextract server.

    Science.gov (United States)

    Lushbough, Carol M; Bergman, Michael K; Lawrence, Carolyn J; Jennewein, Doug; Brendel, Volker

    2008-01-01

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed service designed to provide researchers with the ability to query multiple data sources via the web, save results as searchable data sets, and execute analytic tools. As the researcher works with the system, their tasks are saved in the background. At any time these steps can be saved as a workflow that can then be executed again and/or modified later.
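
    The save-steps-as-a-workflow idea can be sketched as follows; the `Workflow` class and the two example steps are hypothetical stand-ins for BioExtract's saved queries and analytic tools, not its actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Workflow:
    """Records analysis steps as they happen so they can be replayed later."""
    steps: list = field(default_factory=list)

    def record(self, name: str, func: Callable[[Any], Any]) -> None:
        # Each interactive action is appended in the background.
        self.steps.append((name, func))

    def run(self, data: Any) -> Any:
        # Re-execute the saved steps in order on new input.
        for name, func in self.steps:
            data = func(data)
        return data

# Hypothetical steps mimicking "query data sources, then run analytic tools".
wf = Workflow()
wf.record("filter_long", lambda seqs: [s for s in seqs if len(s) >= 4])
wf.record("uppercase", lambda seqs: [s.upper() for s in seqs])

result = wf.run(["acgt", "at", "ggcctt"])
```

    The saved step list is itself data, so it can be stored, modified, and executed again, which is the core of the workflow feature described above.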

  2. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    Science.gov (United States)

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  3. Learning Genetics through an Authentic Research Simulation in Bioinformatics

    Science.gov (United States)

    Gelbart, Hadas; Yarden, Anat

    2006-01-01

    Following the rationale that learning is an active process of knowledge construction as well as enculturation into a community of experts, we developed a novel web-based learning environment in bioinformatics for high-school biology majors in Israel. The learning environment enables the learners to actively participate in a guided inquiry process…

  4. Holistic Web-based Virtual Micro Controller Framework for Research and Education

    Directory of Open Access Journals (Sweden)

    Sven Seiler

    2012-11-01

    Full Text Available Education in the field of embedded system programming became an even more important aspect of the qualification of young engineers during the last decade. This development is accompanied by a rapidly increasing complexity of the software environments used with such devices, so a qualified and solid teaching methodology is necessary, accompanied by industry-driven technological innovation with an emphasis on programming. As part of three European lifelong-learning projects, the paper's authors developed a comprehensive blended-learning concept for teaching embedded systems and robotics. It comprises basic exercises in microcontroller programming up to high-level student robotics challenges, and the implemented measures are supported by a distance-learning environment. In this context, the programming of embedded systems and microcontroller technology has to be seen as the precursor to more complex robotic systems, and as highly important for later successful professional work with these technologies. The current paper introduces the most novel part: the online-accessible Virtual Micro Controller Platform (VMCU) and its underlying simulation framework. This approach addresses the major existing problems in engineering education: outdated hardware and limited lab time. The paper answers the question of the advantages of using virtual hardware in an educational environment.
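
    The virtual-microcontroller idea can be illustrated with a toy register machine; the instruction set below is invented for this sketch and is not the VMCU's actual ISA.

```python
class VirtualMCU:
    """Toy simulated controller: four registers and a tiny hypothetical ISA."""

    def __init__(self):
        self.regs = {"r0": 0, "r1": 0, "r2": 0, "r3": 0}

    def execute(self, program):
        pc = 0  # program counter over a list of instruction tuples
        while pc < len(program):
            op, *args = program[pc]
            if op == "mov":      # mov rX, imm  -> rX = imm
                self.regs[args[0]] = args[1]
            elif op == "add":    # add rX, rY   -> rX += rY
                self.regs[args[0]] += self.regs[args[1]]
            elif op == "jnz":    # jnz rX, idx  -> jump to idx if rX != 0
                if self.regs[args[0]] != 0:
                    pc = args[1]
                    continue
            pc += 1

mcu = VirtualMCU()
# Count r1 down from 3, accumulating the values 3 + 2 + 1 into r0.
mcu.execute([
    ("mov", "r0", 0),
    ("mov", "r1", 3),
    ("mov", "r2", -1),
    ("add", "r0", "r1"),   # index 3: loop body
    ("add", "r1", "r2"),   # r1 -= 1
    ("jnz", "r1", 3),
])
```

    Students can single-step such a simulator in a browser without any physical board, which is the "no outdated hardware, no lab-time limits" advantage the abstract claims.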

  5. Constructing Web subject gateways using Dublin Core, the Resource Description Framework and Topic Maps

    Directory of Open Access Journals (Sweden)

    Jesús Tramullas

    2006-01-01

    Full Text Available Introduction. Specialised subject gateways have become an essential tool for locating and accessing digital information resources, with the added value of organisation and prior evaluation catering for the needs of the varying communities using them. Within the framework of a research project on the subject, a software tool has been developed that enables subject gateways to be developed and managed. Method. General guidelines for the work were established which set out the main principles for the technical aspects of the application, on the one hand, and for aspects of the treatment and management of information, on the other. All this has been integrated into a prototype model for developing software tools. Analysis. The needs analysis established the conditions to be fulfilled by the application. A detailed study of the available options for the treatment of information on metadata proved that the best option was to use the Dublin Core, and that the metadata set should be included, in turn, in RDF tags, or in tags based on XML. Results. The project has resulted in the development of two versions of an application called Potnia (versions 1 and 2), which fulfil the requirements set out in the main principles, and which have been tested by users in real application environments. Conclusion. The tagging layout found to be the best, and the one used by the writers, is based on integrating the Dublin Core metadata set within the Topic Maps paradigm, formatted in XTM.
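
    A minimal sketch of wrapping a gateway resource in Dublin Core elements; the namespace URI is the standard DCMI one, while the plain `record` wrapper element is a simplification of the RDF/XTM layouts discussed above.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"  # standard Dublin Core element namespace
ET.register_namespace("dc", DC)

def dc_record(title: str, creator: str, subject: str) -> ET.Element:
    """Describe one gateway resource with a minimal Dublin Core element set."""
    record = ET.Element("record")
    for tag, value in (("title", title), ("creator", creator), ("subject", subject)):
        el = ET.SubElement(record, f"{{{DC}}}{tag}")
        el.text = value
    return record

rec = dc_record("Potnia gateway entry", "Jesús Tramullas", "subject gateways")
xml_text = ET.tostring(rec, encoding="unicode")
```

    Because each metadata field is a namespaced element rather than free text, the same record can later be embedded in RDF or XTM containers, which is the layering the article's conclusion recommends.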

  6. Bioinformatics of prokaryotic RNAs.

    Science.gov (United States)

    Backofen, Rolf; Amman, Fabian; Costa, Fabrizio; Findeiß, Sven; Richter, Andreas S; Stadler, Peter F

    2014-01-01

    The genomes of most prokaryotes give rise to surprisingly complex transcriptomes, comprising not only protein-coding mRNAs, often organized as operons, but also dozens or even hundreds of highly structured small regulatory RNAs and unexpectedly large levels of anti-sense transcripts. Comprehensive surveys of prokaryotic transcriptomes, and the need to characterize their non-coding components as well, depend heavily on computational methods and workflows, many of which have been developed, or at least adapted, specifically for use with bacterial and archaeal data. This review provides an overview of the state of the art of RNA bioinformatics, focusing on applications to prokaryotes.

  7. A library-based bioinformatics services program.

    Science.gov (United States)

    Yarfitz, S; Ketchell, D S

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups identified areas of greatest need and led to the development of a three-pronged program: consultation, education, and resource development. Outcomes of this program include bioinformatics consultation services, library-based and graduate level courses, networking of sequence analysis tools, and a biological research Web site. Bioinformatics clients are drawn from diverse departments and include clinical researchers in need of tools that are not readily available outside of basic sciences laboratories. Evaluation and usage statistics indicate that researchers, regardless of departmental affiliation or position, require support to access molecular biology and genetics resources. Centralizing such services in the library is a natural synergy of interests and enhances the provision of traditional library resources. Successful implementation of a library-based bioinformatics program requires both subject-specific and library and information technology expertise.

  8. A library-based bioinformatics services program*

    Science.gov (United States)

    Yarfitz, Stuart; Ketchell, Debra S.

    2000-01-01

    Support for molecular biology researchers has been limited to traditional library resources and services in most academic health sciences libraries. The University of Washington Health Sciences Libraries have been providing specialized services to this user community since 1995. The library recruited a Ph.D. biologist to assess the molecular biological information needs of researchers and design strategies to enhance library resources and services. A survey of laboratory research groups identified areas of greatest need and led to the development of a three-pronged program: consultation, education, and resource development. Outcomes of this program include bioinformatics consultation services, library-based and graduate level courses, networking of sequence analysis tools, and a biological research Web site. Bioinformatics clients are drawn from diverse departments and include clinical researchers in need of tools that are not readily available outside of basic sciences laboratories. Evaluation and usage statistics indicate that researchers, regardless of departmental affiliation or position, require support to access molecular biology and genetics resources. Centralizing such services in the library is a natural synergy of interests and enhances the provision of traditional library resources. Successful implementation of a library-based bioinformatics program requires both subject-specific and library and information technology expertise. PMID:10658962

  9. A Generic Framework for Extraction of Knowledge from Social Web Sources (Social Networking Websites) for an Online Recommendation System

    Science.gov (United States)

    Sathick, Javubar; Venkat, Jaya

    2015-01-01

    Mining social web data is a challenging task and finding user interest for personalized and non-personalized recommendation systems is another important task. Knowledge sharing among web users has become crucial in determining usage of web data and personalizing content in various social websites as per the user's wish. This paper aims to design a…

  10. Pattern recognition in bioinformatics.

    Science.gov (United States)

    de Ridder, Dick; de Ridder, Jeroen; Reinders, Marcel J T

    2013-09-01

    Pattern recognition is concerned with the development of systems that learn to solve a given problem using a set of example instances, each represented by a number of features. These problems include clustering, the grouping of similar instances; classification, the task of assigning a discrete label to a given instance; and dimensionality reduction, combining or selecting features to arrive at a more useful representation. The use of statistical pattern recognition algorithms in bioinformatics is pervasive. Classification and clustering are often applied to high-throughput measurement data arising from microarray, mass spectrometry and next-generation sequencing experiments for selecting markers, predicting phenotype and grouping objects or genes. Less explicitly, classification is at the core of a wide range of tools such as predictors of genes, protein function, functional or genetic interactions, etc., and used extensively in systems biology. A course on pattern recognition (or machine learning) should therefore be at the core of any bioinformatics education program. In this review, we discuss the main elements of a pattern recognition course, based on material developed for courses taught at the BSc, MSc and PhD levels to an audience of bioinformaticians, computer scientists and life scientists. We pay attention to common problems and pitfalls encountered in applications and in interpretation of the results obtained.
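
    The classification task described above can be illustrated with a nearest-centroid classifier, one of the simplest statistical pattern recognition methods; the two-feature "expression" values below are made up for the sketch.

```python
from statistics import mean

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> centroid."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {
        label: tuple(mean(col) for col in zip(*vectors))
        for label, vectors in by_label.items()
    }

def classify(centroids, features):
    """Assign the label whose centroid is closest in squared Euclidean distance."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sqdist(centroids[label], features))

# Hypothetical two-gene expression measurements for two phenotypes.
training = [((1.0, 1.2), "A"), ((0.8, 1.0), "A"), ((3.0, 3.1), "B"), ((3.2, 2.9), "B")]
centroids = train_centroids(training)
```

    This mirrors the review's framing: instances are represented by feature vectors, a model is learned from labeled examples, and new instances receive a discrete label.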

  11. Design and Development of a Sharable Clinical Decision Support System Based on a Semantic Web Service Framework.

    Science.gov (United States)

    Zhang, Yi-Fan; Gou, Ling; Tian, Yu; Li, Tian-Chang; Zhang, Mao; Li, Jing-Song

    2016-05-01

    Clinical decision support (CDS) systems provide clinicians and other health care stakeholders with patient-specific assessments or recommendations to aid in the clinical decision-making process. Despite their demonstrated potential for improving health care quality, the widespread availability of CDS systems has been limited mainly by the difficulty and cost of sharing CDS knowledge among heterogeneous healthcare information systems. The purpose of this study was to design and develop a sharable clinical decision support (S-CDS) system that meets this challenge. The fundamental knowledge base consists of independent and reusable knowledge modules (KMs) to meet core CDS needs, wherein each KM is semantically well defined based on the standard information model, terminologies, and representation formalisms. A semantic web service framework was developed to identify, access, and leverage these KMs across diverse CDS applications and care settings. The S-CDS system has been validated in two distinct client CDS applications. Model-level evaluation results confirmed coherent knowledge representation. Application-level evaluation results reached an overall accuracy of 98.66 % and a completeness of 96.98 %. The evaluation results demonstrated the technical feasibility and application prospect of our approach. Compared with other CDS engineering efforts, our approach facilitates system development and implementation and improves system maintainability, scalability and efficiency, which contribute to the widespread adoption of effective CDS within the healthcare domain.
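
    The knowledge-module idea can be sketched as small, independent rule objects evaluated against a patient record; the module name, fields, and threshold below are illustrative assumptions, not taken from the S-CDS system.

```python
class KnowledgeModule:
    """A self-contained, reusable rule: applicability test plus recommendation."""

    def __init__(self, name, applies, recommend):
        self.name = name
        self.applies = applies      # patient dict -> bool
        self.recommend = recommend  # patient dict -> recommendation string

def evaluate(modules, patient):
    """Run every applicable knowledge module against one patient record."""
    return [m.recommend(patient) for m in modules if m.applies(patient)]

# Hypothetical module; real KMs would be bound to standard terminologies.
modules = [
    KnowledgeModule(
        "hypertension-check",
        applies=lambda p: "systolic_bp" in p,
        recommend=lambda p: "review blood pressure" if p["systolic_bp"] >= 140 else "bp ok",
    ),
]
advice = evaluate(modules, {"systolic_bp": 150})
```

    Keeping each module independent is what lets different client CDS applications share the same knowledge base, as the abstract describes.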

  12. Penerapan Framework Yii dalam Pembangunan Sistem Informasi Asrama Santri Pondok Pesantren sebagai Media Pencarian Asrama Berbasis Web

    Directory of Open Access Journals (Sweden)

    Erliyah Nurul Jannah

    2015-10-01

    Full Text Available The need for information technology in the modern era is inevitable. It occurs in most institutions, including Islamic boarding schools. Parents of Islamic boarding school students have difficulty in choosing a proper dorm for their child in the new academic year. This happens because there are many choices provided by the boarding school: the dormitories vary in terms of cost, facilities, and activities. Therefore, it is necessary to build a Dorm Information System (SIRAMA) to assist parents and students in searching for a dormitory that best meets their criteria. SIRAMA is a web-based application that serves as an information medium about the dormitories at the boarding school. The information consists of the initial dormitory entrance cost, monthly fees, boarding facilities, and the dorm's schedule of activities. SIRAMA was developed using the waterfall model, implemented in PHP with the Yii framework, and tested with black-box testing and user acceptance testing. The result shows that SIRAMA is capable of recommending a list of dormitories that meets the students' or parents' criteria. SIRAMA is also very well accepted by its users.
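
    The dormitory search can be sketched as a simple criteria filter; the dorm names, fees, and facility fields below are invented examples, not SIRAMA's actual data model.

```python
def match_dorms(dorms, max_monthly_fee=None, required_facilities=()):
    """Return the names of dorms meeting the given cost and facility criteria."""
    matches = []
    for dorm in dorms:
        if max_monthly_fee is not None and dorm["monthly_fee"] > max_monthly_fee:
            continue  # too expensive for this family
        if not set(required_facilities) <= set(dorm["facilities"]):
            continue  # missing a required facility
        matches.append(dorm["name"])
    return matches

# Hypothetical dormitory records.
dorms = [
    {"name": "Al-Hikmah", "monthly_fee": 150, "facilities": ["wifi", "library"]},
    {"name": "An-Nur", "monthly_fee": 90, "facilities": ["wifi"]},
    {"name": "As-Salam", "monthly_fee": 120, "facilities": ["library"]},
]
result = match_dorms(dorms, max_monthly_fee=130, required_facilities=["wifi"])
```

    In the web application this filtering would run server-side against the dormitory database, with the criteria coming from a search form.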

  13. Emergent Computation Emphasizing Bioinformatics

    CERN Document Server

    Simon, Matthew

    2005-01-01

    Emergent Computation is concerned with recent applications of Mathematical Linguistics or Automata Theory. This subject has a primary focus upon "Bioinformatics" (the Genome and arising interest in the Proteome), but the closing chapter also examines applications in Biology, Medicine, Anthropology, etc. The book is composed of an organized examination of DNA, RNA, and the assembly of amino acids into proteins. Rather than examine these areas from a purely mathematical viewpoint (that excludes much of the biochemical reality), the author uses scientific papers written mostly by biochemists based upon their laboratory observations. Thus while DNA may exist in its double stranded form, triple stranded forms are not excluded. Similarly, while bases exist in Watson-Crick complements, mismatched bases and abasic pairs are not excluded, nor are Hoogsteen bonds. Just as there are four bases naturally found in DNA, the existence of additional bases is not ignored, nor amino acids in addition to the usual complement of...

  14. Bioinformatics meets parasitology.

    Science.gov (United States)

    Cantacessi, C; Campbell, B E; Jex, A R; Young, N D; Hall, R S; Ranganathan, S; Gasser, R B

    2012-05-01

    The advent and integration of high-throughput '-omics' technologies (e.g. genomics, transcriptomics, proteomics, metabolomics, glycomics and lipidomics) are revolutionizing the way biology is done, allowing the systems biology of organisms to be explored. These technologies are now providing unique opportunities for global, molecular investigations of parasites. For example, studies of a transcriptome (all transcripts in an organism, tissue or cell) have become instrumental in providing insights into aspects of gene expression, regulation and function in a parasite, which is a major step to understanding its biology. The purpose of this article was to review recent applications of next-generation sequencing technologies and bioinformatic tools to large-scale investigations of the transcriptomes of parasitic nematodes of socio-economic significance (particularly key species of the order Strongylida) and to indicate the prospects and implications of these explorations for developing novel methods of parasite intervention.

  15. Engineering BioInformatics

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    With the completion of human genome sequencing, a new era of bioinformatics starts. On one hand, due to the advance of high-throughput DNA microarray technologies, functional genomics data such as gene expression information has increased exponentially and will continue to do so for the foreseeable future. Conventional means of storing, analysing and comparing related data are already overburdened. Moreover, the rich information in genes, their functions and their associated wide biological implications requires new technologies for analysing data that employ sophisticated statistical and machine learning algorithms, powerful computers and intensive interaction among different data sources, such as sequence data, gene expression data, proteomics data and metabolic pathway information, to discover complex genomic structures and functional patterns together with other biological processes and so gain a comprehensive understanding of cell physiology.

  16. Bioinformatics and moonlighting proteins

    Directory of Open Access Journals (Sweden)

    Sergio Hernández

    2015-06-01

    Full Text Available Multitasking or moonlighting is the capability of some proteins to execute two or more biochemical functions. Usually, moonlighting proteins are revealed experimentally by serendipity. For this reason, it would be helpful if bioinformatics could predict this multifunctionality, especially given the large amounts of sequences from genome projects. In the present work, we analyse and describe several approaches that use sequences, structures, interactomics and current bioinformatics algorithms and programs to try to overcome this problem. Among these approaches are: (a) remote homology searches using Psi-Blast, (b) detection of functional motifs and domains, (c) analysis of data from protein-protein interaction databases (PPIs), (d) matching the query protein sequence to 3D databases (i.e., algorithms such as PISITE), (e) mutation correlation analysis between amino acids by algorithms such as MISTIC. Programs designed to identify functional motifs/domains detect mainly the canonical function but usually fail in the detection of the moonlighting one, Pfam and ProDom being the best methods. Remote homology search by Psi-Blast combined with data from interactomics databases (PPIs) has the best performance. Structural information and mutation correlation analysis can help us to map the functional sites. Mutation correlation analysis can only be used in very specific situations (it requires the existence of multialigned family protein sequences), but can suggest how the evolutionary process of second-function acquisition took place. The multitasking protein database MultitaskProtDB (http://wallace.uab.es/multitask/), previously published by our group, has been used as a benchmark for all of the analyses.

  17. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  18. Virtual Bioinformatics Distance Learning Suite

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  19. Research on an Automated Testing Framework for Web Applications

    Institute of Scientific and Technical Information of China (English)

    商宇

    2011-01-01

    To improve the efficiency of automated testing of Web systems, this article proposes a new automated testing framework for Web applications, mainly intended for automated testing during the regression phase. The framework integrates the free tools STAF, Bugzilla and JUnit into WTAF, an automated testing framework that can be accessed through Web pages, obtains error information in real time, and sends failed cases directly to the bug-tracking system. By applying automated testing techniques, the framework effectively addresses the low efficiency of manual testing of Web applications.
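
    The regression-runner idea (run cases, forward failures to a tracker) can be sketched as follows; the callback is a stand-in for WTAF's Bugzilla integration and the case names are hypothetical.

```python
def run_regression_suite(cases, report_failure):
    """Run each named case; forward failures to a bug-tracker callback."""
    passed, failed = 0, 0
    for name, func in cases:
        try:
            func()
            passed += 1
        except AssertionError as exc:
            failed += 1
            report_failure(name, str(exc))  # e.g. file a ticket in the tracker
    return passed, failed

def case_pass():
    assert 1 + 1 == 2

def case_fail():
    assert "result" in "", "empty result set"

reported = []  # stand-in for the bug-tracking system
passed, failed = run_regression_suite(
    [("login_page_loads", case_pass), ("search_returns_results", case_fail)],
    lambda name, msg: reported.append((name, msg)),
)
```

    Routing failures straight into the tracker is what removes the manual copy-and-paste step that makes regression testing slow by hand.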

  20. Virtual bioinformatics distance learning suite*.

    Science.gov (United States)

    Tolvanen, Martti; Vihinen, Mauno

    2004-05-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material over the Internet. Currently, we provide two fully computer-based courses, "Introduction to Bioinformatics" and "Bioinformatics in Functional Genomics." Here we will discuss the application of distance learning in bioinformatics training and our experiences gained during the 3 years that we have run the courses, with about 400 students from a number of universities. The courses are available at bioinf.uta.fi.

  1. Description and testing of the Geo Data Portal: Data integration framework and Web processing services for environmental science collaboration

    Science.gov (United States)

    Blodgett, David L.; Booth, Nathaniel L.; Kunicki, Thomas C.; Walker, Jordan I.; Viger, Roland J.

    2011-01-01

    Interest in sharing interdisciplinary environmental modeling results and related data is increasing among scientists. The U.S. Geological Survey Geo Data Portal project enables data sharing by assembling open-standard Web services into an integrated data retrieval and analysis Web application design methodology that streamlines time-consuming and resource-intensive data management tasks. Data-serving Web services allow Web-based processing services to access Internet-available data sources. The Web processing services developed for the project create commonly needed derivatives of data in numerous formats. Coordinate reference system manipulation and spatial statistics calculation components implemented for the Web processing services were confirmed using ArcGIS 9.3.1, a geographic information science software package. Outcomes of the Geo Data Portal project support the rapid development of user interfaces for accessing and manipulating environmental data.
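
    One commonly needed derivative such a processing service computes is an area-weighted spatial statistic; the cell values and overlap areas below are invented, and this is a sketch of the calculation, not the Geo Data Portal's implementation.

```python
def area_weighted_mean(values, areas):
    """Area-weighted mean of gridded values over a region of interest."""
    total_area = sum(areas)
    if total_area <= 0:
        raise ValueError("total area must be positive")
    return sum(v * a for v, a in zip(values, areas)) / total_area

# Hypothetical gridded precipitation cells overlapping a watershed,
# with the area of overlap of each cell acting as its weight.
cell_values = [10.0, 20.0, 30.0]
cell_overlap_areas = [1.0, 1.0, 2.0]
mean_precip = area_weighted_mean(cell_values, cell_overlap_areas)
```

    A Web processing service packages this kind of computation behind an open-standard interface so that clients never have to download and subset the full gridded dataset themselves.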

  2. Framework for Heterogeneous Systems Integration Based on Web Services/Ontology

    Institute of Scientific and Technical Information of China (English)

    岳元俊; 吕梦海; 詹建

    2006-01-01

    Web Services is a service-oriented architecture whose outstanding advantage is that it achieves true platform independence and language independence. Ontology is an effective approach to semantic unification when integrating heterogeneous systems. After analysing the architecture and key technologies of Web Services/Ontology, this paper proposes an ontology integration framework based on Web Services and studies the working principles of the framework. The framework can be used effectively to handle the integration of heterogeneous systems in complex environments.
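
    The ontology-as-shared-vocabulary idea can be sketched as field mappings from each system into common terms; the system names, field names, and ontology terms below are hypothetical.

```python
# Hypothetical mapping: per-system field name -> shared ontology term.
ONTOLOGY_MAP = {
    "system_a": {"patient_name": "person:name", "dob": "person:birthDate"},
    "system_b": {"full_name": "person:name", "birth_date": "person:birthDate"},
}

def to_canonical(system: str, record: dict) -> dict:
    """Translate a system-specific record into shared ontology terms."""
    mapping = ONTOLOGY_MAP[system]
    return {mapping[field]: value for field, value in record.items() if field in mapping}

# Two heterogeneous systems describing the same entity with different schemas.
a = to_canonical("system_a", {"patient_name": "Li Wei", "dob": "1990-01-01"})
b = to_canonical("system_b", {"full_name": "Li Wei", "birth_date": "1990-01-01"})
```

    Once both records are expressed in the shared terms, a Web service layer can exchange them without either system knowing the other's schema, which is the semantic unification the abstract describes.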

  3. Enhanced reproducibility of SADI web service workflows with Galaxy and Docker

    OpenAIRE

    2015-01-01

    Background: Semantic Web technologies have been widely applied in the life sciences, for example by data providers such as OpenLifeData and through web services frameworks such as SADI. The recently reported OpenLifeData2SADI project offers access to the vast OpenLifeData data store through SADI services. Findings: This article describes how to merge data retrieved from OpenLifeData2SADI with other SADI services using the Galaxy bioinformatics analysis platform, thus making this semantic data...

  4. framework

    Directory of Open Access Journals (Sweden)

    Taniana Rodríguez

    2014-01-01

    Full Text Available This work proposes an Ontological Learning architecture, which is one of the key components of the Semantic Dynamic Ontological Framework (MODS) for the Semantic Web. This general architecture supports the automatic acquisition of lexical and semantic knowledge. In particular, it allows the acquisition of knowledge about terms (words), concepts (taxonomic and non-taxonomic), relations between them, and production rules or axioms. The architecture establishes where each piece of acquired knowledge should be incorporated into the MODS structures, whether in its interpretative ontology, linguistic ontology, or lexicon. In addition, the work presents an example of the use of the architecture for the case of semantic learning.

  5. A Pedagogy-driven Framework for Integrating Web 2.0 tools into Educational Practices and Building Personal Learning Environments

    NARCIS (Netherlands)

    Rahimi, E.; Van den Berg, J.; Veen, W.

    2014-01-01

    While the concept of Web 2.0 based Personal Learning Environments (PLEs) has generated significant interest in educational settings, there is little consensus regarding what this concept means and how teachers and students can develop and deploy Web 2.0 based PLEs to support their teaching and

  6. A Pedagogy-driven Framework for Integrating Web 2.0 tools into Educational Practices and Building Personal Learning Environments

    NARCIS (Netherlands)

    Rahimi, E.; Van den Berg, J.; Veen, W.

    2014-01-01

    While the concept of Web 2.0 based Personal Learning Environments (PLEs) has generated significant interest in educational settings, there is little consensus regarding what this concept means and how teachers and students can develop and deploy Web 2.0 based PLEs to support their teaching and learn

  7. WebSelF

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær; Ernst, Erik; Brabrand, Claus

    2012-01-01

    We present WebSelF, a framework for web scraping which models the process of web scraping and decomposes it into four conceptually independent, reusable, and composable constituents. We have validated our framework through a full parameterized implementation that is flexible enough to capture...
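
    The decomposition idea can be illustrated by composing independent scraping constituents; the four constituents named below (fetch, select, validate, transform) are an illustrative reading of the framework, not necessarily WebSelF's exact four.

```python
from html.parser import HTMLParser

class TitleSelector(HTMLParser):
    """Selection constituent: extract the <title> text from an HTML page."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        self.in_title = tag == "title"

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def select_title(html):
    parser = TitleSelector()
    parser.feed(html)
    return parser.title

def scrape(fetch, select, validate, transform):
    """Compose independent, reusable constituents into one scraping function."""
    page = fetch()
    selected = select(page)
    if not validate(selected):
        raise ValueError("selection failed validation")
    return transform(selected)

result = scrape(
    fetch=lambda: "<html><head><title> Hello Web </title></head></html>",
    select=select_title,
    validate=lambda text: bool(text.strip()),
    transform=str.strip,
)
```

    Because each constituent is an ordinary function, any one of them can be swapped (a live HTTP fetcher, a different selector) without changing the others, which is the reusability property the framework models.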

  8. Research on a Web Application Framework Based on Spring MVC and MyBatis

    Institute of Scientific and Technical Information of China (English)

    徐雯; 高建华

    2012-01-01

    Heavyweight Web application frameworks based on EJB suffer from many problems, such as poor performance, high complexity and low code reusability. This paper proposes a Web application framework that combines the B/S and C/S architectures, based on the Spring MVC design pattern and MyBatis, and analyses and studies the framework's structure and composition. The TOPCard credit card business system is used as an application example to illustrate the use of Spring MVC and MyBatis in a Web system. Analysis of the experimental results shows that the Web application framework based on Spring MVC and MyBatis can solve the problems of poor performance, high complexity and low code reusability.

  9. Bioinformatics in High School Biology Curricula: A Study of State Science Standards

    Science.gov (United States)

    Wefer, Stephen H.; Sheppard, Keith

    2008-01-01

    The proliferation of bioinformatics in modern biology marks a modern revolution in science that promises to influence science education at all levels. This study analyzed secondary school science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics. The bioinformatics…

  11. Engineering bioinformatics: building reliability, performance and productivity into bioinformatics software.

    Science.gov (United States)

    Lawlor, Brendan; Walsh, Paul

    2015-01-01

    There is a lack of software engineering skills in bioinformatic contexts. We discuss the consequences of this lack, examine existing explanations and remedies to the problem, point out their shortcomings, and propose alternatives. Previous analyses of the problem have tended to treat the use of software in scientific contexts as categorically different from the general application of software engineering in commercial settings. In contrast, we describe bioinformatic software engineering as a specialization of general software engineering, and examine how it should be practiced. Specifically, we highlight the difference between programming and software engineering, list elements of the latter and present the results of a survey of bioinformatic practitioners which quantifies the extent to which those elements are employed in bioinformatics. We propose that the ideal way to bring engineering values into research projects is to bring engineers themselves. We identify the role of Bioinformatic Engineer and describe how such a role would work within bioinformatic research teams. We conclude by recommending an educational emphasis on cross-training software engineers into life sciences, and propose research on Domain Specific Languages to facilitate collaboration between engineers and bioinformaticians.

  12. Genome Exploitation and Bioinformatics Tools

    Science.gov (United States)

    de Jong, Anne; van Heel, Auke J.; Kuipers, Oscar P.

    Bioinformatic tools can greatly improve the efficiency of bacteriocin screening efforts by limiting the amount of strains. Different classes of bacteriocins can be detected in genomes by looking at different features. Finding small bacteriocins can be especially challenging due to low homology and because small open reading frames (ORFs) are often omitted from annotations. In this chapter, several bioinformatic tools/strategies to identify bacteriocins in genomes are discussed.

  13. Clustering Techniques in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Masood

    2015-01-01

    Full Text Available Dealing with data means grouping information into a set of categories, either to learn new artifacts or to understand new domains. For this purpose researchers have always looked for hidden patterns in data that can be defined and compared with other known notions based on the similarity or dissimilarity of their attributes according to well-defined rules. Data mining, with its tools of data classification and data clustering, is one of the most powerful techniques for handling data in a manner that helps researchers identify the required information. As a step toward addressing this challenge, experts have utilized clustering techniques as a means of exploring hidden structure and patterns in underlying data. With improved stability, robustness and accuracy of unsupervised data classification in many fields, including pattern recognition, machine learning, information retrieval, image analysis and bioinformatics, clustering has proven itself a reliable tool. To identify clusters in a dataset, algorithms partition the data into several groups based on within-group similarity. There is no single universal clustering algorithm; various algorithms are applied depending on the domain of the data that constitutes a cluster and the level of efficiency required. Clustering techniques are categorized according to different approaches. This paper surveys five of the most common clustering techniques in data mining: K-medoids, K-means, Fuzzy C-means, Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Self-Organizing Map (SOM) clustering.
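    Of the five surveyed techniques, K-means is the simplest to sketch: points are repeatedly assigned to the nearest of k centroids, and each centroid is then moved to the mean of its cluster (a minimal one-dimensional sketch, not a production implementation):

```python
def kmeans_1d(points, centroids, iterations=10):
    """Minimal 1-D K-means: alternate assignment and centroid update."""
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d([1.0, 2.0, 3.0, 8.0, 9.0, 10.0], [0.0, 6.0])
print(centroids)  # [2.0, 9.0]
```

    The other surveyed methods vary exactly these two steps: K-medoids restricts centroids to actual data points, Fuzzy C-means replaces hard assignment with membership degrees, and DBSCAN drops centroids entirely in favor of density reachability.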

  14. Changing the Face of Traditional Education: A Framework for Adapting a Large, Residential Course to the Web

    Directory of Open Access Journals (Sweden)

    Maureen Ellis

    2007-07-01

    Full Text Available At large, research universities, a common approach for teaching hundreds of undergraduate students at one time is the traditional, large, lecture-based course. Trends indicate that over the next decade there will be an increase in the number of large, campus courses being offered as well as larger enrollments in courses currently offered. As universities investigate alternative means to accommodate more students and their learning needs, Web-based instruction provides an attractive delivery mode for teaching large, on-campus courses. This article explores a theoretical approach regarding how Web-based instruction can be designed and developed to provide quality education for traditional, on-campus, undergraduate students. The academic debate over the merit of Web-based instruction for traditional, on-campus students has not been resolved. This study identifies and discusses instructional design theory for adapting a large, lecture-based course to the Web.

  15. 基于ASP.NET MVC框架的Web开发研究%Research on Web Development Based on ASP.NET MVC Framework

    Institute of Scientific and Technical Information of China (English)

    黄东连

    2015-01-01

    The ASP.NET MVC framework is a new-generation Web application development framework released by Microsoft. It adds a strong contender for the .NET development platform in the Internet field and gives Microsoft a place among the many MVC development frameworks. This paper first introduces the MVC design pattern, then describes how to create a Web application based on ASP.NET MVC under Microsoft Visual Studio 2012, and presents the development directory structure of the project.

  16. Applied bioinformatics: Genome annotation and transcriptome analysis

    DEFF Research Database (Denmark)

    Gupta, Vikas

    and dhurrin, which have not previously been characterized in blueberries. There are more than 44,500 spider species with distinct habitats and unique characteristics. Spiders are masters of producing silk webs to catch prey and using venom to neutralize. The exploration of the genetics behind these properties...... japonicus (Lotus), Vaccinium corymbosum (blueberry), Stegodyphus mimosarum (spider) and Trifolium occidentale (clover). From a bioinformatics data analysis perspective, my work can be divided into three parts; genome annotation, small RNA, and gene expression analysis. Lotus is a legume of significant...... has just started. We have assembled and annotated the first two spider genomes to facilitate our understanding of spiders at the molecular level. The need for analyzing the large and increasing amount of sequencing data has increased the demand for efficient, user friendly, and broadly applicable...

  17. Framework Orientado a Aspectos de Recopilación Automática de Datos para la Evolución de Usabilidad en Aplicaciones Web

    Directory of Open Access Journals (Sweden)

    Roberto Farias

    2016-08-01

    Full Text Available Usability consists of a set of attributes that make it possible to evaluate the effort a user must invest to carry out a given task using a specific piece of software. In this sense, an application is considered more usable the less effort its use requires. Today, the World Wide Web has become an enormous repository of information that is presented and made available to users through various web sites or pages whose purpose is to promote access to information and thereby guarantee equal opportunities to those who consume it. At the same time, the evolution of the technologies that support it has turned the Web into a technological platform on which it is possible to develop applications with a level of interaction and functionality similar to that of desktop (WIMP) applications. There is a wide variety of work on web usability, centered mainly on aspects such as navigation and the information architecture of web sites or pages. However, the usability of web applications that exhibit a high degree of user interaction (action-oriented), similar to desktop applications, cannot be evaluated in the same way as a web site (information-oriented) where interaction with the user is scarce and limited. Although a variety of studies take application usability in desktop or mobile environments as their object of study, no work has been identified that addresses the usability of applications in web environments. This paper proposes a framework based on AOP capable of supporting the usability evaluation process in web applications during the execution of user tasks.

  18. WebPASS PP (HR Personnel Management)

    Data.gov (United States)

    US Agency for International Development — WebPass Explorer (WebPASS Framework): USAID is partnering with DoS in the implementation of their WebPass Post Personnel (PS) Module. WebPassPS does not replace...

  19. WebPASS Explorer (HR Personnel Management)

    Data.gov (United States)

    US Agency for International Development — WebPass Explorer (WebPASS Framework): USAID is partnering with DoS in the implementation of their WebPass Post Personnel (PS) Module. WebPassPS does not replace...

  20. DESIGN AND DEVELOPMENT OF A FRAMEWORK BASED ON OGC WEB SERVICES FOR THE VISUALIZATION OF THREE DIMENSIONAL LARGE-SCALE GEOSPATIAL DATA OVER THE WEB

    Directory of Open Access Journals (Sweden)

    E. Roccatello

    2013-05-01

    This work could be seen as an extension of the Cityvu project, started in 2008 with the aim of a plug-in free OGC CityGML viewer. The resulting framework has also been integrated in existing 3DGIS software products and will be made available in the next months.

  1. DESIGN AND APPLICATION OF RIA WEBGIS FRAMEWORK BASED ON CLIENT-SIDE MVC%基于客户端MVC模式的RIA WebGIS框架设计与应用

    Institute of Scientific and Technical Information of China (English)

    郑文; 刘仁义; 杜震洪; 张丰

    2011-01-01

    The combination of WebGIS and RIA (Rich Internet Applications) technology can improve system interactivity and user experience. At present, however, there is no MVC architecture for RIA WebGIS that layers the client system according to the MVC pattern. Based on a study of the technological features of WebGIS and an analysis of existing RIA MVC frameworks, the authors propose a four-tier RIA WebGIS framework oriented to front-end MVC and dissect its model hierarchy and key technologies. Finally, the framework is applied to a prototype system, verifying its efficiency and extensibility.

  2. 开源Web自动化测试框架的改进研究%Research of Improvement for Open Source Web Automatic Testing Framework

    Institute of Scientific and Technical Information of China (English)

    黄侨; 葛世伦

    2012-01-01

    The reuse rate of "capture/playback" testing for Web applications is currently low, and the alternative of writing test scripts requires testers to have strong programming skills. To address this issue, a design for an automated test framework is proposed around the characteristics of Web applications, together with a "private language" for the framework built on Selenium, an open-source Web automated testing tool: a set of parsing rules for XML data-driven files. Finally, the Web automated testing framework based on these data-driven files is implemented. A data-driven file describes the multi-request/response model of Web application behavior and clearly defines external test data, avoiding the defect of hard-coded data. Using this framework, testers can develop test projects simply by writing XML data-driven files, which effectively lowers the barrier to testing and improves test efficiency.
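    The XML data-driven idea, test steps and their data described declaratively and parsed by the framework into executable actions, can be sketched as follows (the tag names and attributes here are hypothetical, not the paper's actual parsing rules):

```python
import xml.etree.ElementTree as ET

# Hypothetical data-driven file: each <step> is one interaction in the
# multi-request/response model; test data lives here, not in code.
DATA_DRIVEN_XML = """
<testcase name="login">
  <step action="open"  target="/login"/>
  <step action="type"  target="username" value="alice"/>
  <step action="type"  target="password" value="secret"/>
  <step action="click" target="submit"/>
</testcase>
"""

def parse_steps(xml_text):
    """Turn the XML description into a list of executable step dicts."""
    root = ET.fromstring(xml_text)
    return [dict(step.attrib) for step in root.findall("step")]

steps = parse_steps(DATA_DRIVEN_XML)
print(len(steps))         # 4
print(steps[1]["value"])  # alice
```

    A driver layer would then map each step dict onto the corresponding Selenium call, so a tester edits only the XML to create new test cases.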

  3. El framework jWebSocket y su interfaz de aplicaciones para el trabajo con Tarjetas Inteligentes

    Directory of Open Access Journals (Sweden)

    Ander Sánchez Jardines

    2013-09-01

    Full Text Available jWebSocket is both a framework and an application server for the Java platform, oriented toward the development of WebSocket-based solutions that enjoy high levels of speed, scalability and security. Its great potential for concurrency support and its free software license have led to its adoption by a large community of developers. The smart card API is an extension to the jWebSocket framework that allows it to meet the requirements for developing enterprise software and performing diverse operations with smart cards, obtaining favorable results in any browser and providing the flexibility and real-time behavior that distinguish the web. This article presents a set of advantages and features of the jWebSocket framework and of its API for interacting with smart cards, explains concepts related to the topic, and reviews the most notable solutions worldwide. An application programming interface (API) is the set of functions and procedures (or methods, in object-oriented programming) that a library offers to be used by other software as an abstraction layer.

  4. Bioinformatics/biostatistics: microarray analysis.

    Science.gov (United States)

    Eichler, Gabriel S

    2012-01-01

    The quantity and complexity of the molecular-level data generated in both research and clinical settings require the use of sophisticated, powerful computational interpretation techniques. It is for this reason that bioinformatic analysis of complex molecular profiling data has become a fundamental technology in the development of personalized medicine. This chapter provides a high-level overview of the field of bioinformatics and outlines several classic bioinformatic approaches. The highlighted approaches can be aptly applied to nearly any sort of high-dimensional genomic, proteomic, or metabolomic experiment. Reviewed technologies in this chapter include traditional clustering analysis, the Gene Expression Dynamics Inspector (GEDI), GoMiner, Gene Set Enrichment Analysis (GSEA), and the Learner of Functional Enrichment (LeFE).

  5. Preparing Teachers to Integrate Web 2.0 in School Practice: Toward a Framework for Pedagogy 2.0

    Science.gov (United States)

    Jimoyiannis, Athanassios; Tsiotakis, Panagiotis; Roussinos, Dimitrios; Siorenta, Anastasia

    2013-01-01

    Web 2.0 has captured the interest and the imagination of both educators and researchers while it is expected to exert a significant impact on instruction and learning, in the context of the 21st century education. Hailed as an open collaborative learning space, many questions remain unanswered regarding the appropriate teacher preparation and the…

  6. CONCEPTUAL FRAMEWORK WEB DESIGN PRESENTATIONS ON THE EXAMPLE OF MEDICAL SCIENTIFIC PUBLICATIONS WITH THE USE OF AS4U

    Directory of Open Access Journals (Sweden)

    K. I. Ilkanych

    2016-08-01

    Full Text Available The use of modern information web technologies that significantly affect the development of the information field in Ukraine is analyzed. Using innovative information systems and technologies for the design and operation of websites can significantly simplify the exchange of information, since what matters today is the relevance of information and its availability.

  7. Ontologies for Bioinformatics

    Directory of Open Access Journals (Sweden)

    Agnieszka Leszczynski

    2008-01-01

    Full Text Available The past twenty years have witnessed an explosion of biological data in diverse database formats governed by heterogeneous infrastructures. Not only are semantics (attribute terms) different in meaning across databases, but their organization varies widely. Ontologies are a concept imported from computing science to describe different conceptual frameworks that guide the collection, organization and publication of biological data. An ontology is similar to a paradigm but has very strict implications for formatting and meaning in a computational context. The use of ontologies is a means of communicating and resolving semantic and organizational differences between biological databases in order to enhance their integration. The purpose of interoperability (or sharing between divergent storage and semantic protocols) is to allow scientists from around the world to share and communicate with each other. This paper describes the rapid accumulation of biological data, its various organizational structures, and the role that ontologies play in interoperability.

  8. Training Experimental Biologists in Bioinformatics

    Directory of Open Access Journals (Sweden)

    Pedro Fernandes

    2012-01-01

    Full Text Available Bioinformatics, by its very nature, is devoted to a set of targets that constantly evolve. Training is probably the best response to the constant need for the acquisition of bioinformatics skills. It is interesting to assess the effects of training on the different sets of researchers that make use of it. While training bench experimentalists in the life sciences, we have observed instances of changes in their attitudes toward research that, if well exploited, can have beneficial impacts on the dialogue with professional bioinformaticians and influence the conduct of the research itself.

  9. Analysis and Research on Using Struts Framework to Develop Web Application%Struts框架在Web开发中的应用

    Institute of Scientific and Technical Information of China (English)

    吕凯; 李绍杰; 王莹莹

    2009-01-01

    Struts is a framework for developing Web applications. It adopts the MVC (Model-View-Controller) design pattern and achieves a clean separation between business logic and the user interface. This paper analyzes the technical points involved in the Struts framework, describes features such as configuration files and form validation in detail, addresses the difficult Struts issues that arise in Web development, and finally illustrates the structure of the Struts framework with an example.
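    The declarative form validation that Struts configures outside the action code can be sketched as rules applied to submitted fields (a simplified, hypothetical illustration of the pattern, not the Struts API itself):

```python
# Hypothetical declarative rules, analogous to a Struts validation file:
# the rules live in configuration, not in the action code.
RULES = {
    "username": {"required": True, "max_len": 16},
    "email":    {"required": True, "pattern": "@"},
}

def validate(form, rules):
    """Return a dict of field -> error message for failed rules."""
    errors = {}
    for field, rule in rules.items():
        value = form.get(field, "")
        if rule.get("required") and not value:
            errors[field] = "required"
        elif "max_len" in rule and len(value) > rule["max_len"]:
            errors[field] = "too long"
        elif "pattern" in rule and rule["pattern"] not in value:
            errors[field] = "invalid format"
    return errors

print(validate({"username": "alice", "email": "alice@example.com"}, RULES))
# {}
print(validate({"username": "", "email": "nope"}, RULES))
# {'username': 'required', 'email': 'invalid format'}
```

    Keeping the rules out of the controller is what lets the same validation be reused across forms and changed without recompiling business logic, which is the separation the abstract credits to MVC.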

  10. 基于Web Services的海洋表面温度场数据应用服务框架与实现%Design and Implementation of a Web Services-based Application Framework for Sea Surface Temperature Information

    Institute of Scientific and Technical Information of China (English)

    何亚文; 杜云艳; 肖如林; 侯立媛; 孙晓丹

    2009-01-01

    Based on an analysis of current research on sea surface temperature data and its applications, the authors put forward a Web Services-based application framework for sea surface temperature information, in which both the temperature data and the application models are encapsulated as Web services, improving data interoperability and application portability and transparency in a distributed environment. The use of Web services addresses the problems of heterogeneity, distribution and efficiency raised by networking. A prototype application was designed on the basis of the framework: the Application Service Platform of Sea Surface Temperature Information in the South China Sea. The platform integrates heterogeneous, distributed data services and application services, providing users with transparent, "one-stop" web applications for sea surface temperature. Users can access the platform to search for and fetch valuable information and value-added applications. On the platform, all of the heterogeneous and distributed sea surface temperature information is encrypted, decrypted, monitored, and hence made interchangeable according to international standards. The results confirm the feasibility of applying Web Services to the integration of sea surface temperature information, and the study can also serve as a reference for the integration and application of other marine information.

  11. SME 2.0: Roadmap towards Web 2.0-Based Open Innovation in SME-Networks - A Case Study Based Research Framework

    Science.gov (United States)

    Lindermann, Nadine; Valcárcel, Sylvia; Schaarschmidt, Mario; von Kortzfleisch, Harald

    Small- and medium-sized enterprises (SMEs) are of high social and economic importance since they represent 99% of European enterprises. With regard to their restricted resources, SMEs are facing a limited capacity for innovation to compete with new challenges in a complex and dynamic competitive environment. Given this context, SMEs need to increasingly cooperate to generate innovations on an extended resource base. Our research project focuses on the aspect of open innovation in SME-networks enabled by Web 2.0 applications and referring to innovative solutions of non-competitive daily life problems. Examples are industrial safety, work-life balance issues or pollution control. The project raises the question whether the use of Web 2.0 applications can foster the exchange of creativity and innovative ideas within a network of SMEs and hence catalyze new forms of innovation processes among its participants. Using Web 2.0 applications within SMEs consequently implies breaking down innovation processes to the employees’ level and thus systematically opening up a heterogeneous and broader knowledge base to idea generation. In this paper we address first steps on a roadmap towards Web 2.0-based open innovation processes within SME-networks. It presents a general framework for interaction activities leading to open innovation and recommends a regional marketplace as a viable, trust-building driver for further collaborative activities. These findings are based on field research within a specific SME-network in Rhineland-Palatinate, Germany, the “WirtschaftsForum Neuwied e.V.”, which consists of roughly 100 heterogeneous SMEs employing about 8,000 workers.

  12. A WEB-BASED FRAMEWORK FOR VISUALIZING INDUSTRIAL SPATIOTEMPORAL DISTRIBUTION USING STANDARD DEVIATIONAL ELLIPSE AND SHIFTING ROUTES OF GRAVITY CENTERS

    National Research Council Canada - National Science Library

    Y. Song; Z. Gui; H. Wu; Y. Wei

    2017-01-01

    .... The framework uses standard deviational ellipse (SDE) and shifting route of gravity centers to show the spatial distribution and yearly developing trends of different enterprise types according to their industry categories...

  13. Bioinformatics and the Undergraduate Curriculum

    Science.gov (United States)

    Maloney, Mark; Parker, Jeffrey; LeBlanc, Mark; Woodard, Craig T.; Glackin, Mary; Hanrahan, Michael

    2010-01-01

    Recent advances involving high-throughput techniques for data generation and analysis have made familiarity with basic bioinformatics concepts and programs a necessity in the biological sciences. Undergraduate students increasingly need training in methods related to finding and retrieving information stored in vast databases. The rapid rise of…

  14. Visualising "Junk" DNA through Bioinformatics

    Science.gov (United States)

    Elwess, Nancy L.; Latourelle, Sandra M.; Cauthorn, Olivia

    2005-01-01

    One of the hottest areas of science today is the field in which biology, information technology, and computer science are merged into a single discipline called bioinformatics. This field enables the discovery and analysis of biological data, including nucleotide and amino acid sequences that are easily accessed through the use of computers. As…

  15. Reproducible Bioinformatics Research for Biologists

    Science.gov (United States)

    This book chapter describes the current Big Data problem in Bioinformatics and the resulting issues with performing reproducible computational research. The core of the chapter provides guidelines and summaries of current tools/techniques that a noncomputational researcher would need to learn to pe...

  16. Bioinformatics interoperability: all together now !

    NARCIS (Netherlands)

    Meganck, B.; Mergen, P.; Meirte, D.

    2009-01-01

    The following text presents some personal ideas about the way (bio)informatics is heading, along with some examples of how our institution – the Royal Museum for Central Africa (RMCA) – is gearing up for these new times ahead. It tries to find the important trends amongst the buzzwords, and to demo

  18. A Framework and a Language for Usability Automatic Evaluation of Web Sites by Static Analysis ofHTML Source Code

    OpenAIRE

    Beirekdar, Abdo; Vanderdonckt, Jean; Noirhomme-Fraiture, Monique

    2002-01-01

    Usability guidelines are supposed to help web designers design usable sites. Unfortunately, studies carried out show that applying these guidelines is difficult for designers, essentially because of the way the guidelines are structured or formulated. One possible way to help designers in their task is to provide them with tools that evaluate the designed site (during or after design) and alert them to usability errors. Ideally, these tools would support an appropriate guidelines de...
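    One guideline that lends itself to static analysis of HTML source, namely that images should carry alternative text, can be checked with a small parser (a minimal sketch of the evaluation idea, not the authors' tool or guideline language):

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Flags <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.violations = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the start tag.
        if tag == "img" and not dict(attrs).get("alt"):
            self.violations += 1

checker = AltTextChecker()
checker.feed('<p><img src="logo.png" alt="Company logo">'
             '<img src="decor.png"></p>')
print(checker.violations)  # 1
```

    A fuller evaluator of this kind would express each guideline as a predicate over tags and attributes and report every violation with its source location.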

  19. POWER: PhylOgenetic WEb Repeater--an integrated and user-optimized framework for biomolecular phylogenetic analysis.

    Science.gov (United States)

    Lin, Chung-Yen; Lin, Fan-Kai; Lin, Chieh Hua; Lai, Li-Wei; Hsu, Hsiu-Jun; Chen, Shu-Hwa; Hsiung, Chao A

    2005-07-01

    POWER, the PhylOgenetic WEb Repeater, is a web-based service designed to perform user-friendly pipeline phylogenetic analysis. POWER uses an open-source LAMP structure and infers genetic distances and phylogenetic relationships using well-established algorithms (ClustalW and PHYLIP). POWER incorporates a novel tree builder based on the GD library to generate a high-quality tree topology according to the calculated result. POWER accepts either raw sequences in FASTA format or user-uploaded alignment output files. Through a user-friendly web interface, users can sketch a tree effortlessly in multiple steps. After a tree has been generated, users can freely set and modify parameters, select tree building algorithms, refine sequence alignments or edit the tree topology. All the information related to input sequences and the processing history is logged and downloadable for the user's reference. Furthermore, iterative tree construction can be performed by adding sequences to, or removing them from, a previously submitted job. POWER is accessible at http://power.nhri.org.tw.
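    The genetic distances such a pipeline infers can be illustrated with the simplest measure, the p-distance, i.e. the proportion of aligned sites at which two sequences differ; POWER itself relies on ClustalW and PHYLIP, so this is only a sketch of the underlying idea:

```python
def p_distance(seq1, seq2):
    """Proportion of aligned sites at which two sequences differ."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(a != b for a, b in zip(seq1, seq2))
    return diffs / len(seq1)

print(p_distance("ACGTACGTAC", "ACGTACGTAC"))  # 0.0
print(p_distance("ACGTACGTAC", "ACGTTCGTAA"))  # 0.2
```

    Distance-based tree builders of the kind PHYLIP provides start from exactly such a pairwise distance matrix and join the closest sequences first.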

  20. Geospatial Semantic Web Service Oriented Knowledge Annotation Framework%基于知识标注的地理信息语义服务框架研究

    Institute of Scientific and Technical Information of China (English)

    梁汝鹏; 李宏伟; 李文娟; 梁颖

    2012-01-01

    While benefiting from the convenience of keyword-based geospatial resource search through WCS (Web Catalog Service), users often encounter the problem of low comprehension and precision. Therefore, in this paper, a geospatial semantic Web service framework and a knowledge annotation methodology are designed on the basis of geospatial semantics theory, combining traditional geographic information service technology with emerging semantic service technology, so as to handle the problem of service discovery and provide intelligent, automated strategies for geographic information service discovery, composition and invocation. A semi-automatic knowledge annotation engine based on text-mining algorithms is designed to raise the degree of automation in service handling and to enable the release of geospatial semantic services rich in semantic information. A mechanism for matching and discovering geographic information services is established on the basis of rules and a reasoning engine, improving the efficiency of service discovery. Finally, evaluation methods for semantic annotation and service matching are established, with an annotation-correctness assessment strategy based on spatial association rules used to verify the value of the knowledge annotation. The major challenge of the work lies in identifying and resolving the differences between OGC and SWS techniques.

  1. Virginia Bioinformatics Institute awards Transdisciplinary Team Science

    OpenAIRE

    Bland, Susan

    2009-01-01

    The Virginia Bioinformatics Institute at Virginia Tech, in collaboration with Virginia Tech's Ph.D. program in genetics, bioinformatics, and computational biology, has awarded three fellowships in support of graduate work in transdisciplinary team science.

  2. CluGene: A Bioinformatics Framework for the Identification of Co-Localized, Co-Expressed and Co-Regulated Genes Aimed at the Investigation of Transcriptional Regulatory Networks from High-Throughput Expression Data.

    Directory of Open Access Journals (Sweden)

    Tania Dottorini

    Full Text Available The full understanding of the mechanisms underlying transcriptional regulatory networks requires unravelling of complex causal relationships. Genome high-throughput technologies produce a huge amount of information pertaining to gene expression and regulation; however, the complexity of the available data is often overwhelming and tools are needed to extract and organize the relevant information. This work starts from the assumption that the observation of co-occurrent events (in particular co-localization, co-expression and co-regulation) may provide a powerful starting point to begin unravelling transcriptional regulatory networks. Co-expressed genes often imply shared functional pathways; co-expressed and functionally related genes are often co-localized, too; moreover, co-expressed and co-localized genes are also potential targets for co-regulation; finally, co-regulation seems more frequent for genes mapped to proximal chromosome regions. Despite the recognized importance of analysing co-occurrent events, no bioinformatics solution allowing the simultaneous analysis of co-expression, co-localization and co-regulation is currently available. Our work resulted in developing and evaluating CluGene, software that provides tools to analyze multiple types of co-occurrences within a single interactive environment allowing the interactive investigation of combined co-expression, co-localization and co-regulation of genes. The use of CluGene will enhance the power of testing hypotheses and experimental approaches aimed at unravelling transcriptional regulatory networks. The software is freely available at http://bioinfolab.unipg.it/.
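    Of the three co-occurrences CluGene combines, co-expression is commonly scored with the Pearson correlation between expression profiles across samples (a minimal sketch of that standard measure, not CluGene's actual implementation):

```python
import math

def pearson(x, y):
    """Pearson correlation between two expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression profiles over four samples.
gene_a = [1.0, 2.0, 3.0, 4.0]
gene_b = [2.0, 4.0, 6.0, 8.0]   # perfectly co-expressed with gene_a
gene_c = [4.0, 3.0, 2.0, 1.0]   # anti-correlated with gene_a
print(round(pearson(gene_a, gene_b), 3))  # 1.0
print(round(pearson(gene_a, gene_c), 3))  # -1.0
```

    Gene pairs whose correlation exceeds a chosen threshold would then be cross-checked for co-localization (proximal chromosome positions) and shared regulatory motifs, the other two layers the tool integrates.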

  3. WebSelF

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær; Ernst, Erik; Brabrand, Claus

    2012-01-01

    previous work on web scraping. We conducted an experiment that evaluated several qualitatively different web scraping constituents (including previous work and combinations hereof) on about 11,000 HTML pages on daily versions of 17 web sites over a period of more than one year. Our framework solves three...

  4. Fuzzification of Web Objects: A Semantic Web Mining Approach

    Directory of Open Access Journals (Sweden)

    Tasawar Hussain

    2012-03-01

    Full Text Available Web mining is becoming essential to support web administrators and web users in multiple ways, such as information retrieval, website performance management, web personalization, web marketing and website design. Due to the uncontrolled exponential growth in web data, knowledge base retrieval has become a very challenging task. One viable solution to the problem is the merging of conventional web mining with semantic web technologies. This merging process benefits web users by reducing the search space and by providing information that is more relevant. Key web objects play a significant role in this process, and their extraction from a website is a challenging task. In this paper, we propose a framework which extracts key web objects from the web log file and applies semantic web techniques to mine actionable intelligence. The proposed framework can also be applied to the non-semantic web for the extraction of key web objects. We also define an objective function, named the key web object (KWO) function, to identify key web objects from the users' perspective. The KWO function helps fuzzify the extracted key web objects into three categories: Most Interested, Interested, and Least Interested. Fuzzification of web objects helps us accommodate the uncertainty among web objects about how attractive they are to users. We have validated the proposed scheme with the help of a case study.
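The fuzzification step described in this record can be sketched as follows. The scoring formula, weights, and thresholds below are assumptions for illustration only, not taken from the paper:

```python
# Illustrative sketch of fuzzifying web objects by a user-interest score.
# The scoring formula, weights, and thresholds are assumptions for
# demonstration only; the paper's actual KWO function is not reproduced here.

def kwo_score(hits: int, dwell_seconds: float, max_hits: int, max_dwell: float) -> float:
    """Combine normalized hit count and dwell time into a [0, 1] interest score."""
    return 0.5 * (hits / max_hits) + 0.5 * (dwell_seconds / max_dwell)

def fuzzify(score: float) -> str:
    """Map an interest score onto the paper's three categories."""
    if score >= 0.66:
        return "Most Interested"
    if score >= 0.33:
        return "Interested"
    return "Least Interested"

# Hypothetical web objects: URL -> (hit count, total dwell time in seconds).
objects = {"/home": (120, 300.0), "/faq": (60, 180.0), "/legal": (3, 5.0)}
max_hits = max(h for h, _ in objects.values())
max_dwell = max(d for _, d in objects.values())
categories = {url: fuzzify(kwo_score(h, d, max_hits, max_dwell))
              for url, (h, d) in objects.items()}
```

With these made-up weights, the heavily visited page lands in "Most Interested" and the rarely visited one in "Least Interested".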

  5. Application of bioinformatics in tropical medicine

    Institute of Scientific and Technical Information of China (English)

    Wiwanitkit V

    2008-01-01

    Bioinformatics is the use of information technology to help solve biological problems by designing novel and incisive algorithms and methods of analysis. Bioinformatics has become a vital discipline in the era of post-genomics. In this review article, the application of bioinformatics in tropical medicine is presented and discussed.

  6. No-boundary thinking in bioinformatics research.

    Science.gov (United States)

    Huang, Xiuzhen; Bruce, Barry; Buchan, Alison; Congdon, Clare Bates; Cramer, Carole L; Jennings, Steven F; Jiang, Hongmei; Li, Zenglu; McClure, Gail; McMullen, Rick; Moore, Jason H; Nanduri, Bindu; Peckham, Joan; Perkins, Andy; Polson, Shawn W; Rekepalli, Bhanu; Salem, Saeed; Specker, Jennifer; Wunsch, Donald; Xiong, Donghai; Zhang, Shuzhong; Zhao, Zhongming

    2013-11-06

    Currently, many agencies and research societies define "bioinformatics" as deriving knowledge from computational analysis of large volumes of biological and biomedical data. Should this be the focus of bioinformatics research? We discuss this issue in this review article. We would like to promote the idea of supporting human infrastructure (HI) with no-boundary thinking (NT) in bioinformatics (HINT).

  7. Characterizing web heuristics

    NARCIS (Netherlands)

    de Jong, Menno D.T.; van der Geest, Thea

    2000-01-01

    This article is intended to make Web designers more aware of the qualities of heuristics by presenting a framework for analyzing the characteristics of heuristics. The framework is meant to support Web designers in choosing among alternative heuristics. We hope that better knowledge of the

  8. Bioinformatics Tools for Small Genomes, Such as Hepatitis B Virus

    Directory of Open Access Journals (Sweden)

    Trevor G. Bell

    2015-02-01

    Full Text Available DNA sequence analysis is undertaken in many biological research laboratories. The workflow consists of several steps involving the bioinformatic processing of biological data. We have developed a suite of web-based online bioinformatic tools to assist with processing, analysis and curation of DNA sequence data. Most of these tools are genome-agnostic, with two tools specifically designed for hepatitis B virus sequence data. Tools in the suite are able to process sequence data from Sanger sequencing, ultra-deep amplicon resequencing (pyrosequencing) and chromatograph (trace) files, as appropriate. The tools are available online at no cost and are aimed at researchers without specialist technical computer knowledge. The tools can be accessed at http://hvdr.bioinf.wits.ac.za/SmallGenomeTools, and the source code is available online at https://github.com/DrTrevorBell/SmallGenomeTools.

  9. OPPL-Galaxy, a Galaxy tool for enhancing ontology exploitation as part of bioinformatics workflows

    Science.gov (United States)

    2013-01-01

    Background Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists’ toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than that obtained by a manual treatment. Results We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e. automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided in order to demonstrate OPPL-Galaxy’s capability for enriching, modifying and querying biomedical ontologies. Conclusions Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses. PMID:23286517

  10. Web Services Integration on the Fly

    Science.gov (United States)

    2008-12-01

    Acronyms from the report: WSBPEL, Web Services Business Process Execution Language; WS-CDL, Web Services Choreography Description Language; WSDL, Web Services Description Language. The Web Services Business Process Execution Language (WSBPEL) is a related technology addressing service orchestration. Web Services Choreography and Web Services Security are important areas

  11. Bioclipse: an open source workbench for chemo- and bioinformatics

    Directory of Open Access Journals (Sweden)

    Wagener Johannes

    2007-02-01

    Full Text Available Abstract Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open to both open source and commercial plugins. Bioclipse is freely available at http://www.bioclipse.net.

  12. Bioclipse: an open source workbench for chemo- and bioinformatics

    Science.gov (United States)

    Spjuth, Ola; Helmus, Tobias; Willighagen, Egon L; Kuhn, Stefan; Eklund, Martin; Wagener, Johannes; Murray-Rust, Peter; Steinbeck, Christoph; Wikberg, Jarl ES

    2007-01-01

    Background There is a need for software applications that provide users with a complete and extensible toolkit for chemo- and bioinformatics accessible from a single workbench. Commercial packages are expensive and closed source, hence they do not allow end users to modify algorithms and add custom functionality. Existing open source projects are more focused on providing a framework for integrating existing, separately installed bioinformatics packages, rather than providing user-friendly interfaces. No open source chemoinformatics workbench has previously been published, and no successful attempts have been made to integrate chemo- and bioinformatics into a single framework. Results Bioclipse is an advanced workbench for resources in chemo- and bioinformatics, such as molecules, proteins, sequences, spectra, and scripts. It provides 2D-editing, 3D-visualization, file format conversion, calculation of chemical properties, and much more; all fully integrated into a user-friendly desktop application. Editing supports standard functions such as cut and paste, drag and drop, and undo/redo. Bioclipse is written in Java and based on the Eclipse Rich Client Platform with a state-of-the-art plugin architecture. This gives Bioclipse an advantage over other systems as it can easily be extended with functionality in any desired direction. Conclusion Bioclipse is a powerful workbench for bio- and chemoinformatics as well as an advanced integration platform. The rich functionality, intuitive user interface, and powerful plugin architecture make Bioclipse the most advanced and user-friendly open source workbench for chemo- and bioinformatics. Bioclipse is released under the Eclipse Public License (EPL), an open source license which sets no constraints on external plugin licensing; it is totally open to both open source and commercial plugins. Bioclipse is freely available at . PMID:17316423

  13. Implementation of a user-centered framework in the development of a web-based health information database and call center.

    Science.gov (United States)

    Taylor, Heather A; Sullivan, Dori; Mullen, Cydney; Johnson, Constance M

    2011-10-01

    As healthcare consumers increasingly turn to the World Wide Web (WWW) to obtain health information, it is imperative that health-related websites be user-centered. Websites are often developed without consideration of intended users' characteristics, literacy levels, preferences, and information goals, resulting in user dissatisfaction, abandonment of the website, and ultimately the need for costly redesign. This paper provides a methodological review of a user-centered framework that incorporates best practices in literacy, information quality, and human-computer interface design and evaluation to guide the design and redesign of a consumer health website. Following the description of the methods, a case analysis is presented, demonstrating the successful application of the model in the redesign of a consumer health information website with a call center. Comparisons between the iterative revisions of the website showed improvements in usability, readability, and user satisfaction.

  14. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney of mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  15. An introduction to proteome bioinformatics.

    Science.gov (United States)

    Jones, Andrew R; Hubbard, Simon J

    2010-01-01

    This book is part of the Methods in Molecular Biology series, and provides a general overview of computational approaches used in proteome research. In this chapter, we give an overview of the scope of the book in terms of current proteomics experimental techniques and the reasons why computational approaches are needed. We then give a summary of each chapter, which together provide a picture of the state of the art in proteome bioinformatics research.

  16. Bioinformatics in high school biology curricula: a study of state science standards.

    Science.gov (United States)

    Wefer, Stephen H; Sheppard, Keith

    2008-01-01

    The proliferation of bioinformatics in modern biology marks a modern revolution in science that promises to influence science education at all levels. This study analyzed secondary school science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics. The bioinformatics content of each state's biology standards was analyzed and categorized into nine areas: Human Genome Project/genomics, forensics, evolution, classification, nucleotide variations, medicine, computer use, agriculture/food technology, and science technology and society/socioscientific issues. Findings indicated a generally low representation of bioinformatics-related content, which varied substantially across the different areas, with Human Genome Project/genomics and computer use being the lowest (8%), and evolution being the highest (64%) among states' science frameworks. This essay concludes with recommendations for reworking/rewording existing standards to facilitate the goal of promoting science literacy among secondary school students.

  17. Survey of Web Technologies

    OpenAIRE

    Špoljar, Boris

    2011-01-01

    The World Wide Web has become an important platform for developing and running applications. A vital step in developing a web application is the choice of the web technologies on which it will be built. Developers face a dizzying array of platforms, languages, frameworks and technical artifacts to choose from, and the decision carries consequences for most other decisions in the development process. The thesis contains an analysis, classification and comparison of web technologies...

  18. A Modular Framework for EEG Web Based Binary Brain Computer Interfaces to Recover Communication Abilities in Impaired People.

    Science.gov (United States)

    Placidi, Giuseppe; Petracca, Andrea; Spezialetti, Matteo; Iacoviello, Daniela

    2016-01-01

    A Brain Computer Interface (BCI) allows communication for impaired people unable to express their intentions through common channels. Electroencephalography (EEG) is an effective tool for implementing a BCI. The present paper describes a modular framework for implementing the graphic interface of binary BCIs based on the selection of symbols in a table. The proposed system is also designed to reduce the time required for writing text. This is achieved by including a motivational tool, necessary to improve the quality of the collected signals, and a predictive module based on the frequency of occurrence of letters in a language and of words in a dictionary. The proposed framework is described in a top-down approach through its modules: signal acquisition, analysis, classification, communication, visualization, and predictive engine. Being modular, the framework can easily be adapted to personalize the graphic interface to the needs of the subject who has to use the BCI, and it can be integrated with different classification strategies, communication paradigms, and dictionaries/languages. The implementation of a scenario and some experimental results on healthy subjects are also reported and discussed: the modules of the proposed scenario can be used as a starting point for further developments and for application to severely disabled people under the guidance of specialized personnel.
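The letter/word-frequency idea behind the predictive module can be sketched as a minimal prefix-based predictor. The dictionary, counts, and ranking rule here are illustrative assumptions; the paper does not specify them:

```python
# Minimal sketch of a prefix-based word predictor ranked by corpus frequency,
# in the spirit of a BCI predictive engine. The corpus below is a toy example.
from collections import Counter

corpus = ["the", "the", "the", "then", "there", "this", "this", "hello"]
freq = Counter(corpus)

def predict(prefix: str, k: int = 3) -> list[str]:
    """Return the k most frequent dictionary words starting with prefix."""
    candidates = [w for w in freq if w.startswith(prefix)]
    # Stable sort: equally frequent words keep their dictionary order.
    return sorted(candidates, key=lambda w: -freq[w])[:k]

suggestions = predict("th")
```

In a binary BCI, offering the top-ranked completions after each selected letter reduces the number of selections, and hence the writing time, exactly as the abstract describes.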

  19. Requirements for future control room and visualization features in the Web-of-Cells framework defined in the ELECTRA project

    DEFF Research Database (Denmark)

    Tornelli, Carlo; Zuelli, Roberto; Marinelli, Mattia

    2017-01-01

    This paper outlines an overview of the general requirements for the control rooms of future power systems (2030+). The roles and activities in the future control centres will evolve with respect to the switching, dispatching and restoration functions currently active. The control centre...... operators will supervise the power system and intervene - when necessary - thanks to the maturation and wide-scale deployment of flexible controls. For the identification of control room requirements, general trends in power system evolution are considered, drawing mainly on the outcomes of the ELECTRA IRP...... project, which proposes a new Web-of-Cells (WoC) power system control architecture. Dedicated visualization features are proposed, aimed at supporting control room operators' activities in a WoC-oriented approach. Furthermore, the work takes into account the point of view of network operators about future...

  20. Research on a framework for subject-oriented Deep Web crawlers

    Institute of Scientific and Technical Information of China (English)

    黄聪会; 张水平; 胡洋

    2010-01-01

    To meet users' needs for precise and personalized information retrieval, this work analyzes the characteristics of Deep Web information and proposes a crawler framework capable of searching Deep Web information across different subjects. To address the two difficult problems in the framework, Deep Web database discovery and the crawler's crawling strategy, it proposes using general-purpose search engines to speed up the discovery of Deep Web databases on different subjects, and using frequently occurring characters as queries to download as much Deep Web information as possible. Experimental results show that the techniques adopted in the framework are feasible.

  1. Pladipus Enables Universal Distributed Computing in Proteomics Bioinformatics.

    Science.gov (United States)

    Verheggen, Kenneth; Maddelein, Davy; Hulstaert, Niels; Martens, Lennart; Barsnes, Harald; Vaudel, Marc

    2016-03-04

    The use of proteomics bioinformatics substantially contributes to an improved understanding of proteomes, but this novel and in-depth knowledge comes at the cost of increased computational complexity. Parallelization across multiple computers, a strategy termed distributed computing, can be used to handle this increased complexity; however, setting up and maintaining a distributed computing infrastructure requires resources and skills that are not readily available to most research groups. Here we propose a free and open-source framework named Pladipus that greatly facilitates the establishment of distributed computing networks for proteomics bioinformatics tools. Pladipus is straightforward to install and operate thanks to its user-friendly graphical interface, allowing complex bioinformatics tasks to be run easily on a network instead of a single computer. As a result, any researcher can benefit from the increased computational efficiency provided by distributed computing, hence empowering them to tackle more complex bioinformatics challenges. Notably, it enables any research group to perform large-scale reprocessing of publicly available proteomics data, thus supporting the scientific community in mining these data for novel discoveries.

  2. A Web automated regression test framework based on Watir

    Institute of Scientific and Technical Information of China (English)

    黄梦薇; 黄大庆; 周未

    2012-01-01

    In an iterative development process, a large number of regression test cases need to be run. Because the repetition rate among these test cases is high, a web automated regression test framework is proposed to improve on fully manual testing. After comparing and studying currently available testing tools and test scripting techniques, Watir was chosen as the driver, and the keyword-driven test framework WATF was designed. Applying WATF to practical regression testing improved test efficiency by 75% compared with manual testing, while considerably reducing the human effort spent on regression tests.
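The keyword-driven mechanism this record describes can be illustrated with a minimal sketch (in Python rather than Watir's Ruby). The keywords, actions, and steps below are invented for illustration and are not taken from WATF itself:

```python
# Minimal sketch of a keyword-driven test runner: each test step is a
# (keyword, args...) tuple dispatched to a registered action. In a real
# framework the actions would drive a browser; here they just log.

log = []

# Registry mapping keywords to executable actions (illustrative only).
actions = {
    "open":  lambda url: log.append(f"open {url}"),
    "type":  lambda field, text: log.append(f"type {text!r} into {field}"),
    "click": lambda button: log.append(f"click {button}"),
}

def run(steps):
    """Execute a table of keyword-driven test steps in order."""
    for keyword, *args in steps:
        actions[keyword](*args)

# A hypothetical regression test expressed as a data table, not code.
run([
    ("open", "http://example.test/login"),
    ("type", "username", "alice"),
    ("click", "submit"),
])
```

The point of the design is that test cases become data tables that non-programmers can maintain, while the action implementations are written once against the driver.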

  3. Bioinformatics Training Network (BTN): a community resource for bioinformatics trainers

    DEFF Research Database (Denmark)

    Schneider, Maria V.; Walter, Peter; Blatter, Marie-Claude

    2012-01-01

    Funding bodies are increasingly recognizing the need to provide graduates and researchers with access to short intensive courses in a variety of disciplines, in order both to improve the general skills base and to provide solid foundations on which researchers may build their careers. In response...... and clearly tagged in relation to target audiences, learning objectives, etc. Ideally, they would also be peer reviewed, and easily and efficiently accessible for downloading. Here, we present the Bioinformatics Training Network (BTN), a new enterprise that has been initiated to address these needs and review...

  4. Programming Social Applications Building Viral Experiences with OpenSocial, OAuth, OpenID, and Distributed Web Frameworks

    CERN Document Server

    LeBlanc, Jonathan

    2011-01-01

    Social networking has made one thing clear: websites and applications need to provide users with experiences tailored to their preferences. This in-depth guide shows you how to build rich social frameworks, using open source technologies and specifications. You'll learn how to create third-party applications for existing sites, build engaging social graphs, and develop products to host your own socialized experience. Programming Social Apps focuses on the OpenSocial platform, along with Apache Shindig, OAuth, OpenID, and other tools, demonstrating how they work together to help you solve pra

  5. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as a visual phenomenon–commercial messages, for instance banner ads, that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework for analysing the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions of web advertising must include a reflection on the hypertextual setting of the web ad as well as a perspective on how users engage with web content....

  6. Bioinformatics

    DEFF Research Database (Denmark)

    Baldi, Pierre; Brunak, Søren

    as a strategic frontier between biology and computer science. Machine learning approaches (e.g. neural networks, hidden Markov models, and belief networks) are ideally suited for areas in which there is a lot of data but little theory. The goal in machine learning is to extract useful information from a body...... of data by building good probabilistic models. The particular twist behind machine learning, however, is to automate the process as much as possible. In this book, the authors present the key machine learning approaches and apply them to the computational problems encountered in the analysis of biological...

  7. Incorporating a New Bioinformatics Component into Genetics at a Historically Black College: Outcomes and Lessons

    Science.gov (United States)

    Holtzclaw, J. David; Eisen, Arri; Whitney, Erika M.; Penumetcha, Meera; Hoey, J. Joseph; Kimbro, K. Sean

    2006-01-01

    Many students at minority-serving institutions are underexposed to Internet resources such as the human genome project, PubMed, NCBI databases, and other Web-based technologies because of a lack of financial resources. To change this, we designed and implemented a new bioinformatics component to supplement the undergraduate Genetics course at…

  9. Static Consistency Checking of Web Applications with WebDSL

    NARCIS (Netherlands)

    Hemel, Z.; Groenewegen, D.M.; Kats, L.C.L.; Visser, E.

    2010-01-01

    Modern web application development frameworks provide web application developers with highlevel abstractions to improve their productivity. However, their support for static verification of applications is limited. Inconsistencies in an application are often not detected statically, but appear as

  11. Research on the working principles of visual Web applications based on the .NET Framework

    Institute of Scientific and Technical Information of China (English)

    刘杰; 索静

    2012-01-01

    The emergence of visual web programming tools has greatly improved the efficiency of web application development. This article analyzes how visual web programming tools built on the Microsoft .NET Framework are implemented. It introduces the event-driven web programming model; analyzes the key techniques behind visual development, such as visual web page design, automatic saving and restoring of page state, and the capture and automatic submission of browser-side events; and describes the processing flow of web programs, thereby explaining how visual development of web applications is achieved.

  12. BioWarehouse: a bioinformatics database warehouse toolkit

    Directory of Open Access Journals (Sweden)

    Stringer-Calvert David WJ

    2006-03-01

    Full Text Available Abstract Background This article addresses the problem of interoperation of heterogeneous bioinformatics databases. Results We introduce BioWarehouse, an open source toolkit for constructing bioinformatics database warehouses using the MySQL and Oracle relational database managers. BioWarehouse integrates its component databases into a common representational framework within a single database management system, thus enabling multi-database queries using the Structured Query Language (SQL) but also facilitating a variety of database integration tasks such as comparative analysis and data mining. BioWarehouse currently supports the integration of a pathway-centric set of databases including ENZYME, KEGG, and BioCyc, and in addition the UniProt, GenBank, NCBI Taxonomy, and CMR databases, and the Gene Ontology. Loader tools, written in the C and Java languages, parse and load these databases into a relational database schema. The loaders also apply a degree of semantic normalization to their respective source data, decreasing semantic heterogeneity. The schema supports the following bioinformatics datatypes: chemical compounds, biochemical reactions, metabolic pathways, proteins, genes, nucleic acid sequences, features on protein and nucleic-acid sequences, organisms, organism taxonomies, and controlled vocabularies. As an application example, we applied BioWarehouse to determine the fraction of biochemically characterized enzyme activities for which no sequences exist in the public sequence databases. The answer is that no sequence exists for 36% of enzyme activities to which EC numbers have been assigned. These gaps in sequence data significantly limit the accuracy of genome annotation and metabolic pathway prediction, and are a barrier for metabolic engineering. Complex queries of this type illustrate the value of the data warehousing approach to bioinformatics research. Conclusion BioWarehouse embodies significant progress on the
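The enzyme-coverage question in this abstract reduces to a LEFT JOIN over warehouse tables. Here is a toy version using an in-memory SQLite database; the schema and data are invented for illustration and do not reproduce the actual BioWarehouse schema (which targets MySQL and Oracle):

```python
# Toy warehouse query: find enzyme activities with no associated sequence.
# Schema and data are illustrative only, not the real BioWarehouse schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE enzyme (ec TEXT PRIMARY KEY, name TEXT);
CREATE TABLE sequence (id INTEGER PRIMARY KEY, ec TEXT REFERENCES enzyme(ec));
INSERT INTO enzyme VALUES ('1.1.1.1', 'alcohol dehydrogenase'),
                          ('2.7.1.1', 'hexokinase'),
                          ('9.9.9.9', 'uncharacterized activity');
INSERT INTO sequence (ec) VALUES ('1.1.1.1'), ('2.7.1.1');
""")

# LEFT JOIN keeps every enzyme; rows with no matching sequence are the gaps.
rows = db.execute("""
    SELECT e.ec FROM enzyme e
    LEFT JOIN sequence s ON s.ec = e.ec
    WHERE s.id IS NULL
""").fetchall()
missing = [ec for (ec,) in rows]
```

Because all component databases sit in one schema, a gap analysis like the 36% figure in the abstract becomes a single SQL statement rather than a cross-database scripting exercise.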

  13. P2RP: a Web-based framework for the identification and analysis of regulatory proteins in prokaryotic genomes.

    Science.gov (United States)

    Barakat, Mohamed; Ortet, Philippe; Whitworth, David E

    2013-04-20

    Regulatory proteins (RPs) such as transcription factors (TFs) and two-component system (TCS) proteins control how prokaryotic cells respond to changes in their external and/or internal state. Identification and annotation of TFs and TCSs is non-trivial, and between-genome comparisons are often confounded by different standards in annotation. There is a need for user-friendly, fast and convenient tools to allow researchers to overcome the inherent variability in annotation between genome sequences. We have developed the web-server P2RP (Predicted Prokaryotic Regulatory Proteins), which enables users to identify and annotate TFs and TCS proteins within their sequences of interest. Users can input amino acid or genomic DNA sequences, and predicted proteins therein are scanned for the possession of DNA-binding domains and/or TCS domains. RPs identified in this manner are categorised into families, unambiguously annotated, and a detailed description of their features generated, using an integrated software pipeline. P2RP results can then be outputted in user-specified formats. Biologists have an increasing need for fast and intuitively usable tools, which is why P2RP has been developed as an interactive system. As well as assisting experimental biologists to interrogate novel sequence data, it is hoped that P2RP will be built into genome annotation pipelines and re-annotation processes, to increase the consistency of RP annotation in public genomic sequences. P2RP is the first publicly available tool for predicting and analysing RP proteins in users' sequences. The server is freely available and can be accessed along with documentation at http://www.p2rp.org.
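As a toy illustration of the kind of domain scanning a pipeline like P2RP performs, the sketch below searches a protein sequence for a motif using a regular expression. The motif pattern and sequence are invented for demonstration; real domain detection typically relies on profile-based models rather than simple regular expressions:

```python
# Toy illustration of scanning a protein sequence for a sequence motif.
# The motif pattern and sequence are made-up examples, not ones P2RP uses.
import re

# Hypothetical motif: G, then A/S/T, then any two residues, then R.
MOTIF = re.compile(r"G[AST].{2}R")

def scan(sequence: str) -> list[int]:
    """Return start positions of non-overlapping motif matches."""
    return [m.start() for m in MOTIF.finditer(sequence)]

positions = scan("MKGATTRLLGSQWRA")
```

A domain hit at any position would, in a P2RP-style pipeline, feed into family categorisation and annotation of the candidate regulatory protein.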

  14. Bioinformatics in Africa: The Rise of Ghana?

    Science.gov (United States)

    Karikari, Thomas K.

    2015-01-01

    Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics. PMID:26378921

  15. Bioinformatics in Africa: The Rise of Ghana?

    Directory of Open Access Journals (Sweden)

    Thomas K Karikari

    2015-09-01

    Full Text Available Until recently, bioinformatics, an important discipline in the biological sciences, was largely limited to countries with advanced scientific resources. Nonetheless, several developing countries have lately been making progress in bioinformatics training and applications. In Africa, leading countries in the discipline include South Africa, Nigeria, and Kenya. However, one country that is less known when it comes to bioinformatics is Ghana. Here, I provide a first description of the development of bioinformatics activities in Ghana and how these activities contribute to the overall development of the discipline in Africa. Over the past decade, scientists in Ghana have been involved in publications incorporating bioinformatics analyses, aimed at addressing research questions in biomedical science and agriculture. Scarce research funding and inadequate training opportunities are some of the challenges that need to be addressed for Ghanaian scientists to continue developing their expertise in bioinformatics.

  16. Measuring the Relevancy between Tags and Citation in Social Web

    Directory of Open Access Journals (Sweden)

    Saif Ur Rehman

    2014-06-01

    Full Text Available With the advent of the web, massive amounts of information are available to internet users, who can acquire information according to their own fields of interest; for example, a large amount of information on bioinformatics is available on the web, and the computer research community can find any type of published data from any period with a single click on Google or any other well-known web search engine. Filtering the most relevant information from a large dump of online information is a challenging task that is gaining popularity in the web research community. Various scientific tools and techniques have been introduced that enable users to extract the relevant and required information, although the accuracy of the extracted information remains an open question. In the research community, citation is a very common term: citations are used to extract historical information relevant to a particular topic, but it takes considerable time for a paper to accumulate citations. In today's social bookmarking era, by contrast, the concept of tags is gaining popularity: tags are assigned to papers or topics by reviewers or readers within a short period of time. In this study, we worked to find the relevance between the citations of a research paper and the tags assigned to it. Furthermore, we obtained the titles of the cited papers and performed a comprehensive analysis of how much the tags are involved in titling subsequent research articles. This supports the argument that a tag can be used to assess the future diffusion of a research paper. For this we have provided our own framework, which we validated using articles from CiteULike, a well-known social bookmarking web site.
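
    One simple way to quantify the kind of tag-to-title relevance studied above is a Jaccard overlap between a paper's tags and the words of a title; this is a hedged stand-in for illustration, not the authors' actual measure:

```python
# Jaccard similarity between a paper's tag set and a title's word set.
def tokenize(text):
    return {w.lower().strip(".,:;") for w in text.split() if w}

def tag_title_relevance(tags, title):
    """Return |tags ∩ title words| / |tags ∪ title words|."""
    t, w = set(map(str.lower, tags)), tokenize(title)
    return len(t & w) / len(t | w) if t | w else 0.0

score = tag_title_relevance({"bioinformatics", "clustering"},
                            "Clustering methods in bioinformatics research")
print(round(score, 3))  # 2 shared terms out of 5 distinct -> 0.4
```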

  17. Establishing bioinformatics research in the Asia Pacific

    Directory of Open Access Journals (Sweden)

    Tammi Martti

    2006-12-01

    Full Text Available Abstract In 1998, the Asia Pacific Bioinformatics Network (APBioNet), Asia's oldest bioinformatics organisation, was set up to champion the advancement of bioinformatics in the Asia Pacific. By 2002, APBioNet was able to gain sufficient critical mass to initiate the first International Conference on Bioinformatics (InCoB), bringing together scientists working in the field of bioinformatics in the region. This year, the InCoB2006 Conference was organized as the 5th annual conference of the Asia-Pacific Bioinformatics Network, on Dec. 18–20, 2006 in New Delhi, India, following a series of successful events in Bangkok (Thailand), Penang (Malaysia), Auckland (New Zealand) and Busan (South Korea). This Introduction provides a brief overview of the peer-reviewed manuscripts accepted for publication in this Supplement. It exemplifies a typical snapshot of the growing research excellence in bioinformatics of the region as we embark on a trajectory of establishing a solid bioinformatics research culture in the Asia Pacific that is able to contribute fully to the global bioinformatics community.

  18. BioRuby: bioinformatics software for the Ruby programming language.

    Science.gov (United States)

    Goto, Naohisa; Prins, Pjotr; Nakao, Mitsuteru; Bonnal, Raoul; Aerts, Jan; Katayama, Toshiaki

    2010-10-15

    The BioRuby software toolkit contains a comprehensive set of free development tools and libraries for bioinformatics and molecular biology, written in the Ruby programming language. BioRuby has components for sequence analysis, pathway analysis, protein modelling and phylogenetic analysis; it supports many widely used data formats and provides easy access to databases, external programs and public web services, including BLAST, KEGG, GenBank, MEDLINE and GO. BioRuby comes with a tutorial, documentation and an interactive environment, which can be used in the shell, and in the web browser. BioRuby is free and open source software, made available under the Ruby license. BioRuby runs on all platforms that support Ruby, including Linux, Mac OS X and Windows. And, with JRuby, BioRuby runs on the Java Virtual Machine. The source code is available from http://www.bioruby.org/. katayama@bioruby.org

  19. Automation of Bioinformatics Workflows using CloVR, a Cloud Virtual Resource

    Science.gov (United States)

    Vangala, Mahesh

    2013-01-01

    Exponential growth of biological data, mainly due to revolutionary developments in NGS technologies in the past couple of years, has created a multitude of challenges in downstream data analysis using bioinformatics approaches. To handle such a tsunami of data, bioinformatics analysis must be carried out in an automated and parallel fashion. A successful analysis often requires more than a few computational steps, and bootstrapping these individual steps (scripts) into components and the components into pipelines certainly makes bioinformatics a reproducible and manageable segment of scientific research. CloVR (http://clovr.org) is one such flexible framework that facilitates the abstraction of bioinformatics workflows into executable pipelines. CloVR comes packaged with various built-in bioinformatics pipelines that can make use of multicore processing power when run on servers and/or the cloud. CloVR is also amenable to building custom pipelines based on individual laboratory requirements. CloVR is available as a single executable virtual image file that comes bundled with pre-installed and pre-configured bioinformatics tools and packages, and thus circumvents cumbersome installation difficulties. CloVR is highly portable and can be run on traditional desktop/laptop computers, central servers and cloud compute farms. In conclusion, CloVR provides built-in automated analysis pipelines for microbial genomics, with scope to develop and integrate custom workflows that make use of parallel processing power when run on compute clusters, thereby addressing the bioinformatics challenges of NGS data.
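
    The bootstrapping of individual steps into components and components into pipelines can be sketched, independently of CloVR's actual implementation, as simple function composition:

```python
# Toy pipeline components; real CloVR pipelines wrap external tools.
def quality_filter(reads):
    """Keep only reads of a minimum length (hypothetical QC step)."""
    return [r for r in reads if len(r) >= 5]

def count_gc(reads):
    """Total number of G and C bases across all reads."""
    return sum(r.count("G") + r.count("C") for r in reads)

def run_pipeline(data, steps):
    """Feed the output of each component into the next one."""
    for step in steps:
        data = step(data)
    return data

result = run_pipeline(["ACGTG", "AC", "GGGCC"], [quality_filter, count_gc])
print(result)  # 3 GC bases in ACGTG + 5 in GGGCC -> 8
```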

  20. Historical Quantitative Reasoning on the Web

    NARCIS (Netherlands)

    Meroño-Peñuela, A.; Ashkpour, A.

    2016-01-01

    The Semantic Web is an extension of the Web through standards by the World Wide Web Consortium (W3C) [4]. These standards promote common data formats and exchange protocols on the Web, most fundamentally the Resource Description Framework (RDF). Its ultimate goal is to make the Web a suitable data s
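
    RDF's core data model, statements as (subject, predicate, object) triples, can be illustrated with a stdlib-only sketch; real Semantic Web work would use a library such as rdflib, and the census-style identifiers below are invented:

```python
# A miniature triple store: data as subject-predicate-object triples.
triples = {
    ("ex:census1879", "ex:reportsPopulation", "4012693"),
    ("ex:census1879", "ex:coversRegion", "ex:Utrecht"),
    ("ex:census1889", "ex:coversRegion", "ex:Utrecht"),
}

def query(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# All census records covering the same region, regardless of year.
print(query(p="ex:coversRegion", o="ex:Utrecht"))
```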

  1. A Research Framework of Web Search Engine Usage Mining%Web搜索引擎日志挖掘研究框架

    Institute of Scientific and Technical Information of China (English)

    王继民; 李雷明子; 孟涛

    2011-01-01

    Log files of search engines record the entire interactive procedure between users and the system. Mining these logs can help us to discover the characteristics of user behaviors and to improve the performance of search systems. This paper presents a framework for web search engine usage mining, which covers the research topics of log mining, the choice of data collections, methods of data preprocessing, and an analysis and comparison of the search behaviors of users from different regions. We also explore its applications in improving the effectiveness and efficiency of search engines.
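
    A minimal sketch of usage mining over a query log; the log format below is invented for illustration (real studies work on logs released by commercial engines):

```python
from collections import Counter

# Hypothetical tab-separated log lines: timestamp, user id, query string.
log = [
    "2011-01-02 09:15\tuser1\tweb mining",
    "2011-01-02 09:17\tuser2\tpython tutorial",
    "2011-01-02 09:20\tuser3\tweb mining",
]

# Count query frequencies -- a first step in characterising user behavior.
queries = Counter(line.split("\t")[2] for line in log)
print(queries.most_common(1))  # -> [('web mining', 2)]
```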

  2. Web Application Framework Composition Based on J2EE%基于J2EE体系的Web应用框架整合

    Institute of Scientific and Technical Information of China (English)

    程洪; 钱乐秋; 马舜雄

    2005-01-01

    Based on a study of many popular web application frameworks, this paper proposes a model for composing web application frameworks, the WAFC (Web Application Framework Composition) model. Built on a layered architecture combined with design patterns, the model defines a set of constraints and adds a domain object layer, a service locator layer and a data interface layer, effectively solving problems that arise during framework composition such as redundant functionality, inconvenient communication between layers and excessive coupling. The model exploits the strengths of each framework and combines them in a loosely coupled way to form a higher-level application framework. A concrete example is also used to analyse and discuss the instantiation of the model.
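
    The extra layers the WAFC model introduces can be caricatured in a few lines; all class names here are invented for illustration, not taken from the paper:

```python
class InMemoryUserData:          # data-interface layer: hides persistence
    def __init__(self):
        self._rows = {1: "alice"}
    def find(self, uid):
        return self._rows.get(uid)

class UserService:               # domain-object / service layer
    def __init__(self, data):
        self._data = data
    def display_name(self, uid):
        name = self._data.find(uid)
        return name.title() if name else "unknown"

class ServiceLocator:            # service-locator layer: decouples the web
    _registry = {}               # layer from concrete service classes
    @classmethod
    def register(cls, key, service):
        cls._registry[key] = service
    @classmethod
    def get(cls, key):
        return cls._registry[key]

ServiceLocator.register("users", UserService(InMemoryUserData()))
print(ServiceLocator.get("users").display_name(1))  # -> Alice
```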

  3. Scheme of Mobile Web Front-end Display Based on MVVM Framework%基于MVVM架构的移动Web前端展示方案

    Institute of Scientific and Technical Information of China (English)

    封宇; 陈宁江

    2014-01-01

    To meet mobile terminal users' demand for a rich, personalized experience, this paper analyses current mobile web development frameworks based on MVC and MVP. Since neither pattern fully separates presentation logic from business logic when applied to mobile terminals, the MVVM architecture is introduced to achieve this separation and to provide a personalized experience on mobile devices. Examples show that mobile terminal applications designed on the MVVM architecture can effectively achieve complete separation of presentation logic and business logic.
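
    A minimal MVVM sketch with invented names, showing presentation logic isolated in the view-model while the view merely observes it:

```python
class ViewModel:
    """Holds presentation state and notifies bound views on change."""
    def __init__(self):
        self._observers = []
        self._greeting = ""
    def bind(self, callback):
        self._observers.append(callback)
    def set_user(self, name):
        # Presentation logic lives here, not in the view or the model.
        self._greeting = f"Hello, {name}!"
        for cb in self._observers:
            cb(self._greeting)

rendered = []
vm = ViewModel()
vm.bind(rendered.append)   # the "view" just records what it is told
vm.set_user("Ada")
print(rendered)            # -> ['Hello, Ada!']
```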

  4. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
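
    The dynamic-programming computation the article describes is the classic Needleman-Wunsch recurrence; a compact score-only version, with common classroom scoring values, is:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Optimal global alignment score of strings a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):          # aligning a prefix against gaps
        dp[i][0] = i * gap
    for j in range(1, cols):
        dp[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            diag = dp[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            dp[i][j] = max(diag, dp[i-1][j] + gap, dp[i][j-1] + gap)
    return dp[-1][-1]

print(nw_score("GATTACA", "GCATGCU"))
```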

  5. Using "Arabidopsis" Genetic Sequences to Teach Bioinformatics

    Science.gov (United States)

    Zhang, Xiaorong

    2009-01-01

    This article describes a new approach to teaching bioinformatics using "Arabidopsis" genetic sequences. Several open-ended and inquiry-based laboratory exercises have been designed to help students grasp key concepts and gain practical skills in bioinformatics, using "Arabidopsis" leucine-rich repeat receptor-like kinase (LRR…

  6. Online Bioinformatics Tutorials | Office of Cancer Genomics

    Science.gov (United States)

    Bioinformatics is a scientific discipline that applies computer science and information technology to help understand biological processes. The NIH provides a list of free online bioinformatics tutorials, either generated by the NIH Library or other institutes, which includes introductory lectures and "how to" videos on using various tools.
  9. Bioinformatics in the secondary science classroom: A study of state content standards and students' perceptions of, and performance in, bioinformatics lessons

    Science.gov (United States)

    Wefer, Stephen H.

    The proliferation of bioinformatics in modern biology marks a new revolution in science, which promises to influence science education at all levels. This thesis examined state standards for content that articulated bioinformatics, and explored secondary students' affective and cognitive perceptions of, and performance in, a bioinformatics mini-unit. The results are presented as three studies. The first study analyzed the secondary science standards of 49 U.S. states (Iowa has no science framework) and the District of Columbia for content related to bioinformatics at the introductory high school biology level. The bioinformatics content of each state's biology standards was categorized into nine areas and the prevalence of each area documented. The nine areas were: the Human Genome Project, Forensics, Evolution, Classification, Nucleotide Variations, Medicine, Computer Use, Agriculture/Food Technology, and Science Technology and Society/Socioscientific Issues (STS/SSI). Findings indicated a generally low representation of bioinformatics-related content, which varied substantially across the different areas. Recommendations are made for reworking existing standards to incorporate bioinformatics and to facilitate the goal of promoting science literacy in this emerging new field among secondary school students. The second study examined thirty-two students' affective responses to, and content mastery of, a two-week bioinformatics mini-unit. The findings indicate that the students generally were positive relative to their interest level, the usefulness of the lessons, the difficulty level of the lessons, and their likeliness to engage in additional bioinformatics, and were overall successful on the assessments. A discussion of the results and significance is followed by suggestions for future research and implementation for transferability.
The third study presents a case study of individual differences among ten secondary school students, whose cognitive and affective percepts were

  10. Bioinformatics clouds for big data manipulation

    KAUST Repository

    Dai, Lin

    2012-11-28

    As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics. This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor. 2012 Dai et al.; licensee BioMed Central Ltd.

  11. BIOINFORMATICS FOR UNDERGRADUATES OF LIFE SCIENCE COURSES

    Directory of Open Access Journals (Sweden)

    J.F. De Mesquita

    2007-05-01

    Full Text Available In recent years, bioinformatics has emerged as an important research tool. The ability to mine large databases for relevant information has become essential for different life science fields. On the other hand, providing education in bioinformatics to undergraduates is challenging from this multidisciplinary perspective. It is therefore important to introduce undergraduate students to the available information and current methodologies in bioinformatics. Here we report the results of a course using a computer-assisted and problem-based learning model. The syllabus comprised theoretical lectures covering different topics within bioinformatics and practical activities. For the latter, we developed a set of step-by-step tutorials based on case studies. The course was applied to undergraduate students of biological and biomedical courses. At the end of the course, the students were able to build up a step-by-step tutorial covering a bioinformatics issue.

  12. Bioinformatics clouds for big data manipulation

    Directory of Open Access Journals (Sweden)

    Dai Lin

    2012-11-01

    Full Text Available Abstract As advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS, Software as a Service (SaaS, Platform as a Service (PaaS, and Infrastructure as a Service (IaaS, and present our perspectives on the adoption of cloud computing in bioinformatics. Reviewers This article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor.

  13. Concepts Of Bioinformatics And Its Application In Veterinary ...

    African Journals Online (AJOL)

    Concepts Of Bioinformatics And Its Application In Veterinary Research And ... Bioinformatics is the science of managing and analyzing biological information. Because of the rapidly growing sequence biological data, bioinformatics tools and ...

  14. Bio-Search Computing: integration and global ranking of bioinformatics search results.

    Science.gov (United States)

    Masseroli, Marco; Ghisalberti, Giorgio; Ceri, Stefano

    2011-09-06

    In the Life Sciences, numerous questions can be addressed only by comprehensively searching different types of data that are inherently ordered, or are associated with ranked confidence values. We previously proposed Search Computing to support the integration of the results of search engines with other data and computational resources. This paper presents how well-known bioinformatics resources can be described as search services in the search computing framework and how integrated analyses over such services can be carried out. An initial set of bioinformatics services has been described and registered in the search computing framework, and a bioinformatics search computing (Bio-SeCo) application using these services has been created. This current prototype application, the available services that it uses, the queries that are supported, the kind of interaction that is therefore made available to the users, and the future scenarios are here described and discussed.
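
    Globally ranking results drawn from two ranked services can be sketched as follows; the combination rule (product of normalised confidences) and the service names are assumptions for illustration, not Bio-SeCo's actual ranking function:

```python
def global_rank(service_a, service_b):
    """service_*: dict mapping item -> confidence score in [0, 1].
    Items present in both services are ranked by the product of scores."""
    merged = {k: service_a[k] * service_b[k]
              for k in service_a.keys() & service_b.keys()}
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ranked outputs from two bioinformatics search services.
blast_hits  = {"geneA": 0.9, "geneB": 0.6, "geneC": 0.8}
expr_search = {"geneA": 0.5, "geneB": 0.9, "geneD": 0.7}
ranked = global_rank(blast_hits, expr_search)
print([name for name, _ in ranked])  # -> ['geneB', 'geneA']
```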

  15. Computational biology and bioinformatics in Nigeria.

    Science.gov (United States)

    Fatumo, Segun A; Adoga, Moses P; Ojo, Opeolu O; Oluwagbemi, Olugbenga; Adeoye, Tolulope; Ewejobi, Itunuoluwa; Adebiyi, Marion; Adebiyi, Ezekiel; Bewaji, Clement; Nashiru, Oyekanmi

    2014-04-01

    Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  16. Computational biology and bioinformatics in Nigeria.

    Directory of Open Access Journals (Sweden)

    Segun A Fatumo

    2014-04-01

    Full Text Available Over the past few decades, major advances in the field of molecular biology, coupled with advances in genomic technologies, have led to an explosive growth in the biological data generated by the scientific community. The critical need to process and analyze such a deluge of data and turn it into useful knowledge has caused bioinformatics to gain prominence and importance. Bioinformatics is an interdisciplinary research area that applies techniques, methodologies, and tools in computer and information science to solve biological problems. In Nigeria, bioinformatics has recently played a vital role in the advancement of biological sciences. As a developing country, the importance of bioinformatics is rapidly gaining acceptance, and bioinformatics groups comprised of biologists, computer scientists, and computer engineers are being constituted at Nigerian universities and research institutes. In this article, we present an overview of bioinformatics education and research in Nigeria. We also discuss professional societies and academic and research institutions that play central roles in advancing the discipline in Nigeria. Finally, we propose strategies that can bolster bioinformatics education and support from policy makers in Nigeria, with potential positive implications for other developing countries.

  17. MVC模式下的Web系统快速开发框架设计%Design of Rapid Development Framework of Web System Based on MVC Model

    Institute of Scientific and Technical Information of China (English)

    饶浩

    2015-01-01

    A rapid development framework for web information systems based on the Model-View-Controller (MVC) pattern is discussed and analysed in this paper, covering object-relational mapping, a wrapped database API and a URL mapping table. The system is structured as MVC, and its functions are implemented separately through operations on the underlying data model, cleanly extracting the business logic. With a shorter development cycle and a more efficient development process, the framework has advantages over traditional approaches in both development and maintenance.
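
    The URL mapping table mentioned above can be sketched as a dictionary from route patterns to controller callables; the handler names below are invented:

```python
def list_users():
    return "user list"

def show_user(uid):
    return f"user {uid}"

# The mapping table: route pattern -> controller (the C of MVC).
ROUTES = {
    "/users":      lambda parts: list_users(),
    "/users/<id>": lambda parts: show_user(parts[1]),
}

def dispatch(path):
    """Resolve a request path against the mapping table."""
    parts = path.strip("/").split("/")
    if path == "/users":
        return ROUTES["/users"](parts)
    if len(parts) == 2 and parts[0] == "users":
        return ROUTES["/users/<id>"](parts)
    return "404"

print(dispatch("/users/7"))  # -> user 7
```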

  18. When cloud computing meets bioinformatics: a review.

    Science.gov (United States)

    Zhou, Shuigeng; Liao, Ruiqi; Guan, Jihong

    2013-10-01

    In the past decades, with the rapid development of high-throughput technologies, biology research has generated an unprecedented amount of data. In order to store and process such a great amount of data, cloud computing and MapReduce were applied to many fields of bioinformatics. In this paper, we first introduce the basic concepts of cloud computing and MapReduce, and their applications in bioinformatics. We then highlight some problems challenging the applications of cloud computing and MapReduce to bioinformatics. Finally, we give a brief guideline for using cloud computing in biology research.
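
    The MapReduce pattern the review discusses can be shown in miniature with an in-memory k-mer counter: map emits key-value pairs, shuffle groups them by key, and reduce aggregates each group. This is a single-process sketch of the programming model, not a distributed implementation:

```python
from collections import defaultdict

def map_phase(read, k=3):
    """Emit (k-mer, 1) pairs from one sequencing read."""
    return [(read[i:i+k], 1) for i in range(len(read) - k + 1)]

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum the counts for each k-mer."""
    return {key: sum(values) for key, values in groups.items()}

reads = ["ACGTA", "CGTAC"]
pairs = [p for r in reads for p in map_phase(r)]
counts = reduce_phase(shuffle(pairs))
print(counts["CGT"])  # -> 2 (CGT occurs once in each read)
```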

  19. The EMBRACE web service collection.

    Science.gov (United States)

    Pettifer, Steve; Ison, Jon; Kalas, Matús; Thorne, Dave; McDermott, Philip; Jonassen, Inge; Liaquat, Ali; Fernández, José M; Rodriguez, Jose M; Pisano, David G; Blanchet, Christophe; Uludag, Mahmut; Rice, Peter; Bartaseviciute, Edita; Rapacki, Kristoffer; Hekkelman, Maarten; Sand, Olivier; Stockinger, Heinz; Clegg, Andrew B; Bongcam-Rudloff, Erik; Salzemann, Jean; Breton, Vincent; Attwood, Teresa K; Cameron, Graham; Vriend, Gert

    2010-07-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection and its associated recommendations and standards definitions.

  20. The EMBRACE web service collection

    DEFF Research Database (Denmark)

    Pettifer, S.; Ison, J.; Kalas, M.

    2010-01-01

    The EMBRACE (European Model for Bioinformatics Research and Community Education) web service collection is the culmination of a 5-year project that set out to investigate issues involved in developing and deploying web services for use in the life sciences. The project concluded that in order for web services to achieve widespread adoption, standards must be defined for the choice of web service technology, for semantically annotating both service function and the data exchanged, and a mechanism for discovering services must be provided. Building on this, the project developed: EDAM, an ontology for describing life science web services; BioXSD, a schema for exchanging data between services; and a centralized registry (http://www.embraceregistry.net) that collects together around 1000 services developed by the consortium partners. This article presents the current status of the collection...

  1. Programming NET Web Services

    CERN Document Server

    Ferrara, Alex

    2007-01-01

    Web services are poised to become a key technology for a wide range of Internet-enabled applications, spanning everything from straight B2B systems to mobile devices and proprietary in-house software. While there are several tools and platforms that can be used for building web services, developers are finding a powerful tool in Microsoft's .NET Framework and Visual Studio .NET. Designed from scratch to support the development of web services, the .NET Framework simplifies the process: programmers find that tasks that took an hour using the SOAP Toolkit take just minutes. Programming .NET

  2. Web applications using the Google Web Toolkit

    OpenAIRE

    von Wenckstern, Michael

    2013-01-01

    This diploma thesis describes how to create or convert traditional Java programs to desktop-like rich internet applications with the Google Web Toolkit. The Google Web Toolkit is an open source development environment, which translates Java code to browser and device independent HTML and JavaScript. Most of the GWT framework parts, including the Java to JavaScript compiler as well as important security issues of websites will be introduced. The famous Agricola board game will be ...

  4. RESTful Web Services Cookbook

    CERN Document Server

    Allamaraju, Subbu

    2010-01-01

    While the REST design philosophy has captured the imagination of web and enterprise developers alike, using this approach to develop real web services is no picnic. This cookbook includes more than 100 recipes to help you take advantage of REST, HTTP, and the infrastructure of the Web. You'll learn ways to design RESTful web services for client and server applications that meet performance, scalability, reliability, and security goals, no matter what programming language and development framework you use. Each recipe includes one or two problem statements, with easy-to-follow, step-by-step i

  5. Engineering Adaptive Web Applications

    DEFF Research Database (Denmark)

    Dolog, Peter

    2007-01-01

    Information and services on the web are accessible for everyone. Users of the web differ in their background, culture, political and social environment, interests and so on. Ambient intelligence was envisioned as a concept for systems which are able to adapt to user actions and needs. With the growing amount of information and services, web applications become natural candidates to adopt the concepts of ambient intelligence. Such applications can deal with diverse user intentions and actions based on the user profile and can suggest the combination of information content and services which suit the user profile the most. This paper summarizes the domain engineering framework for such adaptive web applications. The framework provides guidelines to develop adaptive web applications as members of a family. It suggests how to utilize the design artifacts as knowledge which can be used...

  6. Rich Internet Web Application Development using Google Web Toolkit

    Directory of Open Access Journals (Sweden)

    Niriksha Bhojaraj Kabbin

    2015-05-01

    Full Text Available Web applications in today's world have a great impact on businesses and are popular since they provide business benefits and are widely deployable. Developing efficient web applications with leading-edge web technologies that promise an upgraded user interface, greater scalability and interoperability, and improved performance and usability across different systems is a challenge. Google Web Toolkit (GWT) is one such framework that helps to build Rich Internet Applications (RIAs) and enables productive development of high-performance web applications. This paper makes an effort to provide an effective solution for developing quality web-based applications with an added layer of security.

  7. Agile parallel bioinformatics workflow management using Pwrake.

    OpenAIRE

    2011-01-01

    Abstract Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be over-weighted for actual bioinformatics practices. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environm...

  8. Coronavirus Genomics and Bioinformatics Analysis

    Directory of Open Access Journals (Sweden)

    Kwok-Yung Yuen

    2010-08-01

    Full Text Available The drastic increase in the number of coronaviruses discovered and coronavirus genomes being sequenced has given us an unprecedented opportunity to perform genomics and bioinformatics analysis on this family of viruses. Coronaviruses possess the largest genomes (26.4 to 31.7 kb) among all known RNA viruses, with G + C contents varying from 32% to 43%. Variable numbers of small ORFs are present between the various conserved genes (ORF1ab, spike, envelope, membrane and nucleocapsid) and downstream of the nucleocapsid gene in different coronavirus lineages. Phylogenetically, three genera, Alphacoronavirus, Betacoronavirus and Gammacoronavirus, with Betacoronavirus consisting of subgroups A, B, C and D, exist. A fourth genus, Deltacoronavirus, which includes bulbul coronavirus HKU11, thrush coronavirus HKU12 and munia coronavirus HKU13, is emerging. Molecular clock analysis using various gene loci revealed the time of the most recent common ancestor of human/civet SARS-related coronavirus to be 1999-2002, with an estimated substitution rate of 4×10^-4 to 2×10^-2 substitutions per site per year. Recombination in coronaviruses was most notable between different strains of murine hepatitis virus (MHV), between different strains of infectious bronchitis virus, between MHV and bovine coronavirus, between feline coronavirus (FCoV) type I and canine coronavirus generating FCoV type II, and between the three genotypes of human coronavirus HKU1 (HCoV-HKU1). Codon usage bias in coronaviruses was observed, with HCoV-HKU1 showing the most extreme bias; cytosine deamination and selection of CpG-suppressed clones are the two major independent biological forces that shape such codon usage bias in coronaviruses.

  9. Planning bioinformatics workflows using an expert system.

    Science.gov (United States)

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

    Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability: https://github.com/jefftc/changlab. Contact: jeffrey.t.chang@uth.tmc.edu.
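
    The backward-chaining idea at the heart of BETSY can be pictured with a toy inference engine. The sketch below is illustrative only: the rule set and goal names are invented and do not reflect BETSY's actual knowledge base or data model.

    ```python
    # Rules map a goal (a data type we want) to the inputs needed to produce it,
    # mirroring how an expert system chains backwards from the desired result.
    RULES = {
        "aligned_reads":    ["raw_reads", "reference_genome"],  # e.g. a read aligner
        "variant_calls":    ["aligned_reads"],                  # e.g. a variant caller
        "expression_table": ["aligned_reads", "annotation"],    # e.g. a quantifier
    }

    def plan(goal, available, steps=None):
        """Backward-chain from `goal` to the facts in `available`.

        Returns an ordered list of rule applications (a workflow),
        or None if the goal cannot be derived from the available data.
        """
        if steps is None:
            steps = []
        if goal in available:
            return steps
        if goal not in RULES:
            return None          # no rule produces this data type
        for prereq in RULES[goal]:
            if plan(prereq, available, steps) is None:
                return None
        steps.append(goal)       # all prerequisites satisfied: schedule this step
        return steps

    workflow = plan("variant_calls", {"raw_reads", "reference_genome"})
    ```

    Starting from only raw reads and a reference genome, the engine works out that alignment must precede variant calling, which is the essence of inferring a workflow from a knowledge base rather than hand-coding a pipeline.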

  10. The Study of an Architecture Framework of DSS Based on Web Service: Implementation of Model Service in DSS

    Institute of Scientific and Technical Information of China (English)

    杨骅飞; 王永军; 刘风宝

    2003-01-01

    From the perspective of DSS development and integration, this paper analyzes the DSS architectures of different development stages and proposes a Web Service based DSS architecture. It introduces the Web Service architecture of three participants and three basic operations, and finally elaborates the implementation framework of model services in DSS.

  11. Rabifier2: an improved bioinformatic classifier of Rab GTPases.

    Science.gov (United States)

    Surkont, Jaroslaw; Diekmann, Yoan; Pereira-Leal, José B

    2017-02-15

    The Rab family of small GTPases regulates and provides specificity to the endomembrane trafficking system; each Rab subfamily is associated with specific pathways. Thus, characterization of Rab repertoires provides functional information about organisms and evolution of the eukaryotic cell. Yet, the complex structure of the Rab family limits the application of existing methods for protein classification. Here, we present a major redesign of the Rabifier, a bioinformatic pipeline for detection and classification of Rab GTPases. It is more accurate, significantly faster than the original version and is now open source, both the code and the data, allowing for community participation. Rabifier and RabDB are freely available through the web at http://rabdb.org . The Rabifier package can be downloaded from the Python Package Index at https://pypi.python.org/pypi/rabifier , the source code is available at Github https://github.com/evocell/rabifier . jsurkont@igc.gulbenkian.pt or jleal@igc.gulbenkian.pt. Supplementary data are available at Bioinformatics online.

  12. Implementasi Rails Pada Pengembangan Aplikasi Web: Universociety

    OpenAIRE

    Fenti, Des Erita

    2010-01-01

    This study aims to develop a web application based on the Representational State Transfer (REST) architectural style using the Rails 2 framework. The application is developed using the Model-View-Controller (MVC) technique provided by the Rails framework. Development covers data modeling and the construction of a public interface, in the form of a web service, for accessing the resources contained in the application. The main objective of the web application is to see how far REST ...

  13. MetaRouter: bioinformatics for bioremediation

    Science.gov (United States)

    Pazos, Florencio; Guijas, David; Valencia, Alfonso; De Lorenzo, Victor

    2005-01-01

    Bioremediation, the exploitation of biological catalysts (mostly microorganisms) for removing pollutants from the environment, requires the integration of huge amounts of data from different sources. We have developed MetaRouter, a system for maintaining heterogeneous information related to bioremediation in a framework that allows its query, administration and mining (application of methods for extracting new knowledge). MetaRouter is an application intended for laboratories working in biodegradation and bioremediation, which need to maintain and consult public and private data, linked internally and with external databases, and to extract new information from it. Among the data-mining features is a program included for locating biodegradative pathways for chemical compounds according to a given set of constraints and requirements. The integration of biodegradation information with the corresponding protein and genome data provides a suitable framework for studying the global properties of the bioremediation network. The system can be accessed and administrated through a web interface. The full-featured system (except administration facilities) is freely available at http://pdg.cnb.uam.es/MetaRouter. Additional material: http://www.pdg.cnb.uam.es/biodeg_net/MetaRouter. PMID:15608267
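
    The pathway-finding feature described above amounts to a constrained search over a reaction graph. A minimal sketch in Python follows; the compounds, enzyme labels, and step-count constraint are invented for illustration and are not MetaRouter's actual data or algorithm.

    ```python
    from collections import deque

    # Toy biodegradation network: compound -> list of (enzyme, product)
    REACTIONS = {
        "toluene":  [("tmo", "p-cresol")],
        "p-cresol": [("cdh", "4-hydroxybenzaldehyde")],
        "4-hydroxybenzaldehyde": [("adh", "4-hydroxybenzoate")],
    }

    def find_pathway(start, target, max_steps=5):
        """Breadth-first search for a degradation route within `max_steps`.

        Returns the list of (enzyme, product) steps, or None if no route
        satisfies the constraint.
        """
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            compound, path = queue.popleft()
            if compound == target:
                return path
            if len(path) >= max_steps:
                continue          # constraint: pathway too long, prune
            for enzyme, product in REACTIONS.get(compound, []):
                if product not in seen:
                    seen.add(product)
                    queue.append((product, path + [(enzyme, product)]))
        return None
    ```

    Tightening `max_steps` models the "set of constraints and requirements" the abstract mentions: the same query can succeed or fail depending on how long a pathway the user is willing to accept.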

  14. Rough-fuzzy pattern recognition applications in bioinformatics and medical imaging

    CERN Document Server

    Maji, Pradipta

    2012-01-01

    Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems dev…

  15. Soft computing application in bioinformatics

    Institute of Scientific and Technical Information of China (English)

    MITRA Sushmita

    2008-01-01

    This article provides an outline on a recent application of soft computing for the mining of microarray gene expressions. We describe investigations with an evolutionary-rough feature selection algorithm for feature selection and classification on cancer data. Rough set theory is employed to generate reducts, which represent the minimal sets of non-redundant features capable of discerning between all objects, in a multi-objective framework. The experimental results demonstrate the effectiveness of the methodology on three cancer datasets.
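
    A reduct, as used above, is a minimal feature subset that still discerns every pair of differently-labelled objects. The greedy sketch below illustrates the idea on an invented toy dataset; the paper's evolutionary-rough algorithm is considerably more sophisticated.

    ```python
    def discerns(feature, data, labels):
        """Pairs of differently-labelled rows that this feature tells apart."""
        pairs = set()
        for i in range(len(data)):
            for j in range(i + 1, len(data)):
                if labels[i] != labels[j] and data[i][feature] != data[j][feature]:
                    pairs.add((i, j))
        return pairs

    def greedy_reduct(data, labels):
        """Greedily pick features until all differing-label pairs are discerned."""
        n_features = len(data[0])
        needed = {(i, j)
                  for i in range(len(data)) for j in range(i + 1, len(data))
                  if labels[i] != labels[j]}
        reduct = []
        while needed:
            best = max(range(n_features),
                       key=lambda f: len(discerns(f, data, labels) & needed))
            gained = discerns(best, data, labels) & needed
            if not gained:
                break  # remaining pairs are indiscernible with these features
            reduct.append(best)
            needed -= gained
        return reduct
    ```

    On real microarray data the search space is huge, which is why the authors couple rough sets with an evolutionary, multi-objective search instead of a simple greedy pass.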

  16. Augmenting the Web through Open Hypermedia

    DEFF Research Database (Denmark)

    Bouvin, N.O.

    2003-01-01

    Based on an overview of Web augmentation and detailing the three basic approaches to extend the hypermedia functionality of the Web, the author presents a general open hypermedia framework (the Arakne framework) to augment the Web. The aim is to provide users with the ability to link, annotate, and otherwise structure Web pages, as they see fit. The paper further discusses the possibilities of the concept through the description of various experiments performed with an implementation of the framework, the Arakne Environment.

  17. Bioinformatics for cancer immunotherapy target discovery

    DEFF Research Database (Denmark)

    Olsen, Lars Rønn; Campos, Benito; Barnkob, Mike Stein

    2014-01-01

    ... cancer immunotherapies has yet to be fulfilled. The insufficient efficacy of existing treatments can be attributed to a number of biological and technical issues. In this review, we detail the current limitations of immunotherapy target selection and design, and review computational methods to streamline therapy target discovery in a bioinformatics analysis pipeline. We describe specialized bioinformatics tools and databases for three main bottlenecks in immunotherapy target discovery: the cataloging of potentially antigenic proteins, the identification of potential HLA binders, and the selection of epitopes and co-targets for single-epitope and multi-epitope strategies. We provide examples of application to the well-known tumor antigen HER2 and suggest bioinformatics methods to ameliorate therapy resistance and ensure efficient and lasting control of tumors.

  18. Adapting bioinformatics curricula for big data.

    Science.gov (United States)

    Greene, Anna C; Giffin, Kristine A; Greene, Casey S; Moore, Jason H

    2016-01-01

    Modern technologies are capable of generating enormous amounts of data that measure complex biological systems. Computational biologists and bioinformatics scientists are increasingly being asked to use these data to reveal key systems-level properties. We review the extent to which curricula are changing in the era of big data. We identify key competencies that scientists dealing with big data are expected to possess across fields, and we use this information to propose courses to meet these growing needs. While bioinformatics programs have traditionally trained students in data-intensive science, we identify areas of particular biological, computational and statistical emphasis important for this era that can be incorporated into existing curricula. For each area, we propose a course structured around these topics, which can be adapted in whole or in parts into existing curricula. In summary, specific challenges associated with big data provide an important opportunity to update existing curricula, but we do not foresee a wholesale redesign of bioinformatics training programs.

  19. A Discussion of High-Performance Web Development under the .NET Framework

    Institute of Scientific and Technical Information of China (English)

    张正风; 黄智

    2013-01-01

    This article focuses on Web optimization for Asp.Net web servers (servers that receive user requests, process business logic, and respond with HTML). The discussion covers four levels: application, page, cache, and code. Web optimization techniques go far beyond those discussed in the article; the study therefore advises Web developers, when developing and deploying large-scale or data-intensive Web applications, to combine multiple techniques and strive for excellence in development, configuration, and environment. Finally, the authors note that high performance and maintainability conflict, so a balance point must be found and the relevant project stakeholders must weigh the trade-offs carefully.

  20. Cotton Databases and Web Resources

    Institute of Scientific and Technical Information of China (English)

    Russell J. KOHEL; John Z. YU; Piyush GUPTA; Rajeev AGRAWAL

    2002-01-01

    There are several web sites for which information is available to the cotton research community. Most of these sites relate to resources developed for, or available to, the research community. Few provide bioinformatic tools, which usually relate to the specific data sets and materials presented in the database. Just as the bioinformatics area is evolving, the available resources reflect this evolution.

  1. The Pan-European Reference Grid Developed in the ELECTRA Project for Deriving Innovative Observability Concepts in the Web-of-Cells Framework

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Pertl, Michael; Rezkalla, Michel M.N.

    2016-01-01

    In the ELECTRA EU project, an innovative approach for frequency and voltage control is investigated, with reference to future power system scenarios characterized by massive amounts of distributed energy resources. A control architecture based on dividing the power system into a web of subsystems ... at system-wide scale. The methodology proposed in the task analyzes the system performance by investigating typical phenomena peculiar to each stability type and by developing observables necessary for the novel Web-of-Cells based control methods to operate properly at cell- and inter-cell level. Crucial ...

  2. UBioLab: a web-laboratory for ubiquitous in-silico experiments.

    Science.gov (United States)

    Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo

    2012-07-09

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory imperatively has to tackle, and possibly to handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.

  3. Semantic Web

    Directory of Open Access Journals (Sweden)

    Anna Lamandini

    2011-06-01

    Full Text Available The semantic Web is a technology at the service of knowledge which is aimed at accessibility and the sharing of content, facilitating interoperability between different systems, and as such is one of the nine key technological pillars of TIC (technologies for information and communication) within the third theme, programme specific cooperation, of the seventh programme framework for research and development (7°PQRS, 2007-2013). As a system it seeks to overcome the overload or excess of irrelevant information on the Internet, in order to facilitate specific or pertinent research. It is an extension of the existing Web in which the aim is cooperation between computers and people (the dream of Sir Tim Berners-Lee), where machines can give more support to people when integrating and elaborating data in order to obtain inferences and a global sharing of data. It is a technology that is able to favour the development of a "data web", in other words the creation of a space of interconnected and shared data sets (Linked Data) which allows users to link different types of data coming from different sources. It is a technology that will have great effect on everyday life since it will permit the planning of "intelligent applications" in various sectors such as education and training, research, the business world, public information, tourism, health, and e-government. It is an innovative technology that activates a social transformation (the socio-semantic Web) on a world level since it redefines the cognitive universe of users and enables the sharing not only of information but of significance (collective and connected intelligence).
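
    The "data web" idea, statements as machine-traversable triples, can be pictured with a toy triple store. The subjects, predicates, and objects below are invented for illustration; real Linked Data uses RDF, URIs, and SPARQL.

    ```python
    # A triple store: (subject, predicate, object) statements, as in RDF.
    TRIPLES = [
        ("ex:Bologna", "ex:locatedIn", "ex:Italy"),
        ("ex:Bologna", "ex:hasUniversity", "ex:UniBo"),
        ("ex:UniBo", "ex:foundedIn", "1088"),
    ]

    def query(s=None, p=None, o=None):
        """Pattern-match triples; None acts as a wildcard, like a SPARQL variable."""
        return [t for t in TRIPLES
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]
    ```

    Because every statement shares the same triple shape, data from different sources can be merged into one store and queried uniformly, which is exactly the interoperability the abstract describes.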

  4. Implementing bioinformatic workflows within the bioextract server

    Science.gov (United States)

    Computational workflows in bioinformatics are becoming increasingly important in the achievement of scientific advances. These workflows typically require the integrated use of multiple, distributed data sources and analytic tools. The BioExtract Server (http://bioextract.org) is a distributed servi...

  5. Bioinformatics in Undergraduate Education: Practical Examples

    Science.gov (United States)

    Boyle, John A.

    2004-01-01

    Bioinformatics has emerged as an important research tool in recent years. The ability to mine large databases for relevant information has become increasingly central to many different aspects of biochemistry and molecular biology. It is important that undergraduates be introduced to the available information and methodologies. We present a…

  6. "Extreme Programming" in a Bioinformatics Class

    Science.gov (United States)

    Kelley, Scott; Alger, Christianna; Deutschman, Douglas

    2009-01-01

    The importance of Bioinformatics tools and methodology in modern biological research underscores the need for robust and effective courses at the college level. This paper describes such a course designed on the principles of cooperative learning based on a computer software industry production model called "Extreme Programming" (EP).…

  7. Bioinformatics: A History of Evolution "In Silico"

    Science.gov (United States)

    Ondrej, Vladan; Dvorak, Petr

    2012-01-01

    Bioinformatics, biological databases, and the worldwide use of computers have accelerated biological research in many fields, such as evolutionary biology. Here, we describe a primer of nucleotide sequence management and the construction of a phylogenetic tree with two examples; the two selected are from completely different groups of organisms:…

  8. Privacy Preserving PCA on Distributed Bioinformatics Datasets

    Science.gov (United States)

    Li, Xin

    2011-01-01

    In recent years, new bioinformatics technologies, such as gene expression microarray, genome-wide association study, proteomics, and metabolomics, have been widely used to simultaneously identify a huge number of human genomic/genetic biomarkers, generate a tremendously large amount of data, and dramatically increase the knowledge on human…

  9. Hardware Acceleration of Bioinformatics Sequence Alignment Applications

    NARCIS (Netherlands)

    Hasan, L.

    2011-01-01

    Biological sequence alignment is an important and challenging task in bioinformatics. Alignment may be defined as an arrangement of two or more DNA or protein sequences to highlight the regions of their similarity. Sequence alignment is used to infer the evolutionary relationship between a set of pr…

  10. Mass spectrometry and bioinformatics analysis data

    Directory of Open Access Journals (Sweden)

    Mainak Dutta

    2015-03-01

    Full Text Available 2DE and 2D-DIGE based proteomics analysis of serum from women with endometriosis revealed several proteins to be dysregulated. A complete list of these proteins along with their mass spectrometry data and subsequent bioinformatics analysis are presented here. The data is related to “Investigation of serum proteome alterations in human endometriosis” by Dutta et al. [1].

  14. A bioinformatics approach to marker development

    NARCIS (Netherlands)

    Tang, J.

    2008-01-01

    The thesis focuses on two bioinformatics research topics: the development of tools for an efficient and reliable identification of single nucleotides polymorphisms (SNPs) and polymorphic simple sequence repeats (SSRs) from expressed sequence tags (ESTs) (Chapter 2, 3 and 4), and the subsequent imple…

  15. SPECIES DATABASES AND THE BIOINFORMATICS REVOLUTION.

    Science.gov (United States)

    Biological databases are having a growth spurt. Much of this results from research in genetics and biodiversity, coupled with fast-paced developments in information technology. The revolution in bioinformatics, defined by Sugden and Pennisi (2000) as the "tools and techniques for...

  16. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    OpenAIRE

    J. Sharmila; Subramani, A.

    2016-01-01

    Web mining related research is becoming more essential these days because a large amount of information is managed through the web, and web utilization is expanding in an uncontrolled way. A dedicated framework is required for managing such a large amount of information in the web space. Web mining is classified into three major divisions: web content mining, web usage mining and web structure mining. Tak-Lam Wong has proposed a web content mining methodolog...

  17. Nanoinformatics: an emerging area of information technology at the intersection of bioinformatics, computational chemistry and nanobiotechnology.

    Science.gov (United States)

    González-Nilo, Fernando; Pérez-Acle, Tomás; Guínez-Molinos, Sergio; Geraldo, Daniela A; Sandoval, Claudia; Yévenes, Alejandro; Santos, Leonardo S; Laurie, V Felipe; Mendoza, Hegaly; Cachau, Raúl E

    2011-01-01

    After the progress made during the genomics era, bioinformatics was tasked with supporting the flow of information generated by nanobiotechnology efforts. This challenge requires adapting classical bioinformatic and computational chemistry tools to store, standardize, analyze, and visualize nanobiotechnological information. Thus, old and new bioinformatic and computational chemistry tools have been merged into a new sub-discipline: nanoinformatics. This review takes a second look at the development of this new and exciting area as seen from the perspective of the evolution of nanobiotechnology applied to the life sciences. The knowledge obtained at the nano-scale level implies answers to new questions and the development of new concepts in different fields. The rapid convergence of technologies around nanobiotechnologies has spun off collaborative networks and web platforms created for sharing and discussing the knowledge generated in nanobiotechnology. The implementation of new database schemes suitable for storage, processing and integrating physical, chemical, and biological properties of nanoparticles will be a key element in achieving the promises in this convergent field. In this work, we will review some applications of nanobiotechnology to life sciences in generating new requirements for diverse scientific fields, such as bioinformatics and computational chemistry.

  18. Bioinfogrid:. Bioinformatics Simulation and Modeling Based on Grid

    Science.gov (United States)

    Milanesi, Luciano

    2007-12-01

    Genomics sequencing projects and new technologies applied to molecular genetics analysis are producing huge amounts of raw data. In the future, biomedical scientific research will increasingly rely on computing Grids for data-crunching applications, on data Grids for distributed storage of large amounts of accessible data, and on the provision of tools to all users. Biomedical research laboratories are moving towards an environment, created through the sharing of resources, in which heterogeneous and dispersed health data are integrated, such as molecular data (e.g. genomics, proteomics), cellular data (e.g. pathways), tissue data, population data (e.g. genotyping, SNP, epidemiology), as well as the data generated by large-scale analyses (e.g. simulation data, modelling). In this paper some applications developed in the framework of the European Project "Bioinformatics Grid Application for life science - BioinfoGRID" will be described in order to show the potential of the GRID to carry out large-scale analysis and research worldwide.

  19. The European Bioinformatics Institute in 2016: Data growth and integration.

    Science.gov (United States)

    Cook, Charles E; Bergman, Mary Todd; Finn, Robert D; Cochrane, Guy; Birney, Ewan; Apweiler, Rolf

    2016-01-04

    New technologies are revolutionising biological research and its applications by making it easier and cheaper to generate ever-greater volumes and types of data. In response, the services and infrastructure of the European Bioinformatics Institute (EMBL-EBI, www.ebi.ac.uk) are continually expanding: total disk capacity increases significantly every year to keep pace with demand (75 petabytes as of December 2015), and interoperability between resources remains a strategic priority. Since 2014 we have launched two new resources: the European Variation Archive for genetic variation data and EMPIAR for two-dimensional electron microscopy data, as well as a Resource Description Framework platform. We also launched the Embassy Cloud service, which allows users to run large analyses in a virtual environment next to EMBL-EBI's vast public data resources.

  20. Using a Theoretical Framework to Investigate Whether the HIV/AIDS Information Needs of the AfroAIDSinfo Web Portal Members Are Met: A South African eHealth Study

    Directory of Open Access Journals (Sweden)

    Hendra Van Zyl

    2014-03-01

    Full Text Available eHealth has been identified as a useful approach to disseminate HIV/AIDS information. Together with Consumer Health Informatics (CHI), the Web-to-Public Knowledge Transfer Model (WPKTM) has been applied as a theoretical framework to identify consumer needs for AfroAIDSinfo, a South African Web portal. As part of the CHI practice, regular eSurveys are conducted to determine whether these needs are changing and are continually being met. eSurveys show high rates of satisfaction with the content as well as with the modes of delivery. The information is regarded as reliable enough to reuse, both for education and for referencing. Using CHI and the WPKTM as a theoretical framework ensures that the needs of consumers are being met and that they find the tailored methods of presenting the information agreeable. Combining ICTs and theories in eHealth interventions, this approach can be expanded to deliver information in other sectors of public health.

  1. Hidden Page WebCrawler Model for Secure Web Pages

    Directory of Open Access Journals (Sweden)

    K. F. Bharati

    2013-03-01

    Full Text Available The traditional search engines available over the internet are dynamic in searching relevant content over the web. A search engine has constraints, such as gathering the requested data from varied sources, where data relevancy is exceptional. Conventional web crawlers are designed to move only along specific paths of the web and are restricted from moving towards other paths, as those are secured or at times restricted due to the apprehension of threats. It is possible to design a web crawler that is capable of penetrating paths of the web not reachable by traditional web crawlers, in order to get a better solution in terms of data, time and relevancy for a given search query. The paper makes use of a newer parser and indexer to come up with a novel web crawler and a framework to support it. The proposed web crawler is designed to attend to Hyper Text Transfer Protocol Secure (HTTPS) based websites and web pages that need authentication to view and index. The user fills in a search form, and his/her credentials are used by the web crawler to authenticate with the secure web server. Once a page is indexed, the secure web server is inside the web crawler's accessible zone.
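
    The login-then-crawl flow described above might look like the following standard-library sketch. The login URL, form-field names, and credentials are placeholders, not details from the paper, which does not specify its implementation.

    ```python
    import urllib.parse
    import urllib.request
    from http.cookiejar import CookieJar
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect href targets from anchor tags to build the crawl frontier."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def extract_links(html):
        parser = LinkExtractor()
        parser.feed(html)
        return parser.links

    def make_authenticated_opener(login_url, username, password):
        """POST credentials once; the cookie jar keeps the session for later GETs."""
        opener = urllib.request.build_opener(
            urllib.request.HTTPCookieProcessor(CookieJar()))
        form = urllib.parse.urlencode(
            {"username": username, "password": password}).encode()
        opener.open(login_url, data=form)  # server sets a session cookie
        return opener
    ```

    After authentication, each `opener.open(page_url)` carries the session cookie, so pages behind the login form can be fetched, passed through `extract_links`, and indexed like any public page.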

  2. Agile parallel bioinformatics workflow management using Pwrake

    Directory of Open Access Journals (Sweden)

    Tanaka Masahiro

    2011-09-01

    Full Text Available Abstract Background In bioinformatics projects, scientific workflow systems are widely used to manage computational procedures. Full-featured workflow systems have been proposed to fulfil the demand for workflow management. However, such systems tend to be too heavyweight for actual bioinformatics practice. We realize that quick deployment of cutting-edge software implementing advanced algorithms and data formats, and continuous adaptation to changes in computational resources and the environment, are often prioritized in scientific workflow management. These features have a greater affinity with the agile software development method, with iterative development phases following trial and error. Here, we show the application of the scientific workflow system Pwrake to bioinformatics workflows. Pwrake is a parallel workflow extension of Ruby's standard build tool Rake, the flexibility of which has been demonstrated in the astronomy domain. We therefore hypothesize that Pwrake also has advantages in actual bioinformatics workflows. Findings We implemented Pwrake workflows to process next-generation sequencing data using the Genome Analysis Toolkit (GATK) and Dindel. The GATK and Dindel workflows are typical examples of sequential and parallel workflows, respectively. We found that, in practice, scientific workflow development iterates over two phases: the workflow definition phase and the parameter adjustment phase. We introduced separate workflow definitions to help focus on each of the two developmental phases, as well as helper methods to simplify the descriptions. This approach increased iterative development efficiency. Moreover, we implemented combined workflows to demonstrate the modularity of the GATK and Dindel workflows. Conclusions Pwrake enables agile management of scientific workflows in the bioinformatics domain. The internal domain-specific language built on Ruby gives rakefiles the flexibility needed for writing scientific workflows.
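The dependency-driven style that Rake-family tools provide can be sketched in a few lines. Pwrake itself is a Ruby tool; the following is a hypothetical Python analogue (the `task` decorator and task names are invented for illustration, not Pwrake's API) showing how a workflow definition separates task registration from execution order, which is resolved from declared prerequisites.

```python
# Registry mapping task name -> (dependencies, callable)
tasks = {}

def task(name, deps=()):
    """Register a named task with its prerequisites (a make/Rake-style rule)."""
    def register(fn):
        tasks[name] = (tuple(deps), fn)
        return fn
    return register

def run(name, done=None):
    """Execute a task after recursively running its prerequisites, each at most once."""
    done = set() if done is None else done
    if name in done:
        return
    deps, fn = tasks[name]
    for dep in deps:          # build prerequisites first, as Rake does
        run(dep, done)
    fn()
    done.add(name)

log = []  # records execution order for demonstration

@task("align")
def align():
    log.append("align reads")

@task("call_variants", deps=("align",))
def call_variants():
    log.append("call variants")

run("call_variants")
print(log)  # → ['align reads', 'call variants']
```

Pwrake extends this model with parallel execution of independent tasks across pooled compute nodes, which is what makes it attractive for sequencing pipelines.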

  3. Bioinformatic tools and guideline for PCR primer design | Abd ...

    African Journals Online (AJOL)

    Bioinformatic tools and guideline for PCR primer design. ... AFRICAN JOURNALS ONLINE (AJOL) · Journals · Advanced Search · USING AJOL · RESOURCES ... Bioinformatics has become an essential tool not only for basic research but also ...

  4. Building the LAMP Framework and Running Web Site Examples

    Institute of Scientific and Technical Information of China (English)

    余立强

    2011-01-01

    The paper describes the process of building the LAMP (Linux, Apache, MySQL, PHP) framework and the parameter choices made when compiling and installing the software under Linux. The correctness of the LAMP build is tested by writing a PHP program, and the practical application of the framework is demonstrated with an open-source group-buying web site.

  5. A bioinformatics pipeline to build a knowledge database for in silico antibody engineering.

    Science.gov (United States)

    Zhao, Shanrong; Lu, Jin

    2011-04-01

    The resulting knowledge database supports a variety of applications, including de novo library design in the selection of favorable germline V gene scaffolds and CDR lengths. In addition, we have developed a web application framework to present the knowledge database, and the web interface helps users easily retrieve a variety of information from it.

  6. MALINA: a web service for visual analytics of human gut microbiota whole-genome metagenomic reads.

    Science.gov (United States)

    Tyakht, Alexander V; Popenko, Anna S; Belenikin, Maxim S; Altukhov, Ilya A; Pavlenko, Alexander V; Kostryukova, Elena S; Selezneva, Oksana V; Larin, Andrei K; Karpova, Irina Y; Alexeev, Dmitry G

    2012-12-07

    MALINA is a web service for bioinformatic analysis of whole-genome metagenomic data obtained from human gut microbiota sequencing. As input data, it accepts metagenomic reads from various sequencing technologies, including long reads (such as Sanger and 454 sequencing) and next-generation short reads (including SOLiD and Illumina). To the authors' knowledge, it is the first metagenomic web service capable of processing SOLiD color-space reads. The web service allows phylogenetic and functional profiling of metagenomic samples using the coverage depth resulting from alignment of the reads to a catalogue of reference sequences which is built into the pipeline and contains prevalent microbial genomes and genes of the human gut microbiota. The resulting metagenomic composition vectors are processed by the statistical analysis and visualization module, which contains methods for clustering, dimension reduction and group comparison. Additionally, the MALINA database includes vectors of bacterial and functional composition for human gut microbiota samples from a large number of existing studies, allowing their comparative analysis together with user samples, namely datasets from the Russian Metagenome project, MetaHIT and the Human Microbiome Project (downloaded from http://hmpdacc.org). MALINA is made freely available on the web at http://malina.metagenome.ru. The website is implemented in JavaScript (using Ext JS), Microsoft .NET Framework, MS SQL and Python, with all major browsers supported.

  7. Component-Based Approach for Educating Students in Bioinformatics

    Science.gov (United States)

    Poe, D.; Venkatraman, N.; Hansen, C.; Singh, G.

    2009-01-01

    There is an increasing need for an effective method of teaching bioinformatics. Increased progress and availability of computer-based tools for educating students have led to the implementation of a computer-based system for teaching bioinformatics as described in this paper. Bioinformatics is a recent, hybrid field of study combining elements of…

  9. Discovering diamonds under coal piles: Revealing exclusive business intelligence about online consumers through the use of Web Data Mining techniques embedded in an analytical customer relationship management framework

    Directory of Open Access Journals (Sweden)

    Myriam Ertz

    2016-02-01

    Full Text Available Web Mining (WM) has gained prominence over the last decade. This rise is concomitant with the upsurge of pure players, the multiple challenges of the data deluge, the trend toward automation and integration within organizations, and a desire for hyper-segmentation. Confronted partly or totally with these issues, companies increasingly resort to replicating the data mining toolbox on web data. Although much is known about the technical aspects of WM, little is known about the extent to which WM actually fits within a customer relationship management system designed to attract and retain the maximum number of customers. An exploratory study involving twelve senior professionals and scholars indicated that WM is well suited to achieving most customer relationship management objectives with regard to the profiling of existing web customers. The results of this study suggest that engineering WM processes into analytical customer relationship management systems may yield highly beneficial returns, provided that some guidelines are scrupulously followed.

  10. Swiss EMBnet node web server.

    Science.gov (United States)

    Falquet, Laurent; Bordoli, Lorenza; Ioannidis, Vassilios; Pagni, Marco; Jongeneel, C Victor

    2003-07-01

    EMBnet is a consortium of collaborating bioinformatics groups located mainly within Europe (http://www.embnet.org). Each member country is represented by a 'node', a group responsible for the maintenance of local services for their users (e.g. education, training, software, database distribution, technical support, helpdesk). Among these services a web portal with links and access to locally developed and maintained software is essential and different for each node. Our web portal targets biomedical scientists in Switzerland and elsewhere, offering them access to a collection of important sequence analysis tools mirrored from other sites or developed locally. We describe here the Swiss EMBnet node web site (http://www.ch.embnet.org), which presents a number of original services not available anywhere else.

  11. Bioinformatics and systems biology research update from the 15(th) International Conference on Bioinformatics (InCoB2016).

    Science.gov (United States)

    Schönbach, Christian; Verma, Chandra; Bond, Peter J; Ranganathan, Shoba

    2016-12-22

    The International Conference on Bioinformatics (InCoB) has been publishing peer-reviewed conference papers in BMC Bioinformatics since 2006. Of the 44 articles accepted for publication in supplement issues of BMC Bioinformatics, BMC Genomics, BMC Medical Genomics and BMC Systems Biology, 24 articles with a bioinformatics or systems biology focus are reviewed in this editorial. InCoB2017 is scheduled to be held in Shenzhen, China, September 20-22, 2017.

  12. A lightweight, flow-based toolkit for parallel and distributed bioinformatics pipelines

    Directory of Open Access Journals (Sweden)

    Cieślik Marcin

    2011-02-01

    Full Text Available Abstract Background Bioinformatic analyses typically proceed as chains of data-processing tasks. A pipeline, or 'workflow', is a well-defined protocol, with a specific structure defined by the topology of data-flow interdependencies, and a particular functionality arising from the data transformations applied at each step. In computer science, the dataflow programming (DFP) paradigm defines software systems constructed in this manner, as networks of message-passing components. Thus, bioinformatic workflows can be naturally mapped onto DFP concepts. Results To enable the flexible creation and execution of bioinformatics dataflows, we have written a modular framework for parallel pipelines in Python ('PaPy'). A PaPy workflow is created from re-usable components connected by data-pipes into a directed acyclic graph, which together define nested higher-order map functions. The successive functional transformations of input data are evaluated on flexibly pooled compute resources, either local or remote. Input items are processed in batches of adjustable size, allowing one to tune the trade-off between parallelism and lazy evaluation (memory consumption). An add-on module ('NuBio') facilitates the creation of bioinformatics workflows by providing domain-specific data containers (e.g., for biomolecular sequences, alignments and structures) and functionality (e.g., to parse/write standard file formats). Conclusions PaPy offers a modular framework for the creation and deployment of parallel and distributed data-processing workflows. Pipelines derive their functionality from user-written, data-coupled components, so PaPy can also be viewed as a lightweight toolkit for extensible, flow-based bioinformatics data processing. The simplicity and flexibility of distributed PaPy pipelines may help users bridge the gap between traditional desktop/workstation and grid computing. PaPy is freely distributed as open-source Python code at http://muralab.org/PaPy.
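The dataflow idea described above, with reusable components connected by pipes into successive map transformations, can be sketched with the standard library alone. This is a generic illustration, not PaPy's actual API; the component functions are invented toy examples.

```python
from functools import reduce

def pipeline(*stages):
    """Compose data-processing stages into a function mapped over input items."""
    def run(items):
        # Each item flows left-to-right through every stage in turn
        return [reduce(lambda x, f: f(x), stages, item) for item in items]
    return run

def reverse_complement(seq):
    """Reverse-complement a DNA sequence (a typical bioinformatics component)."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def gc_content(seq):
    """Fraction of G/C bases in the sequence."""
    return sum(base in "GC" for base in seq) / len(seq)

# Wire three reusable components into one workflow
process = pipeline(str.upper, reverse_complement, gc_content)
print(process(["acgt", "ggcc"]))  # → [0.5, 1.0]
```

In a framework like PaPy, the list comprehension above would instead dispatch batches of items to a pool of local or remote workers, but the component-and-pipe structure of the workflow is the same.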

  13. Bioinformatics analyses for signal transduction networks

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Research into signaling networks contributes to a deeper understanding of the living activities of organisms. With the development of experimental methods in the signal transduction field, more and more mechanisms of signaling pathways have been discovered. This paper introduces popular bioinformatics analysis methods for signaling networks, such as the common mechanisms of signaling pathways and database resources on the Internet; summarizes methods for analyzing the structural properties of networks, including structural motif finding and automated pathway generation; and discusses the modeling and simulation of signaling networks in detail, as well as the research situation and tendencies in this area. The investigation of signal transduction is now developing from small-scale experiments to large-scale network analysis, and dynamic simulation of networks is coming closer to the real system. As the investigation goes deeper than ever, the bioinformatics analysis of signal transduction will have immense room for development and application.
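Structural motif finding, one of the methods mentioned above, can be illustrated by counting feed-forward loops (A→B, B→C, A→C), a motif commonly studied in signaling and regulatory networks. The tiny network below is invented for illustration; real motif-finding tools also assess statistical enrichment against randomized networks.

```python
from itertools import permutations

# A toy directed signaling network as a set of (source, target) edges
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}
nodes = {n for edge in edges for n in edge}

# Count ordered node triples forming a feed-forward loop: a->b, b->c, a->c
ffl = sum(
    1
    for a, b, c in permutations(nodes, 3)
    if (a, b) in edges and (b, c) in edges and (a, c) in edges
)
print(ffl)  # → 1  (the A, B, C triple)
```

Brute-force enumeration like this is only feasible for small motifs; scalable motif finders use subgraph-isomorphism algorithms with pruning over much larger pathway graphs.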

  14. Bioinformatics in New Generation Flavivirus Vaccines

    Directory of Open Access Journals (Sweden)

    Penelope Koraka

    2010-01-01

    Full Text Available Flavivirus infections are the most prevalent arthropod-borne infections worldwide, often causing severe disease especially among children, the elderly, and the immunocompromised. In the absence of effective antiviral treatment, prevention through vaccination would greatly reduce morbidity and mortality associated with flavivirus infections. Despite the success of the empirically developed vaccines against yellow fever virus, Japanese encephalitis virus and tick-borne encephalitis virus, there is an increasing need for a more rational design and development of safe and effective vaccines. Several bioinformatic tools are available to support such rational vaccine design. In doing so, several parameters have to be taken into account, such as safety for the target population, overall immunogenicity of the candidate vaccine, and efficacy and longevity of the immune responses triggered. Examples of how bioinformatics is applied to assist in the rational design and improvement of vaccines, particularly flavivirus vaccines, are presented and discussed.

  15. Bioinformatics for saffron (Crocus sativus L.) improvement

    Directory of Open Access Journals (Sweden)

    Ghulam A. Parray

    2009-02-01

    Full Text Available Saffron (Crocus sativus L.) is a sterile triploid plant belonging to the Iridaceae (Liliales, Monocots). Its genome is relatively large and poorly characterized. Bioinformatics can play an enormous technical role in the sequence-level structural characterization of saffron genomic DNA. Bioinformatics tools can also help in appreciating the extent of diversity among various geographic or genetic groups of cultivated saffron, to infer relationships between groups and accessions. The characterization of the transcriptome of saffron stigmas is most vital for throwing light on the molecular basis of flavor, color biogenesis, genomic organization and the biology of the saffron gynoecium. The information derived can be utilized to construct biological pathways involved in the biosynthesis of the principal components of saffron, i.e., crocin, crocetin, safranal and picrocrocin.

  16. Bioinformatics Approaches for Human Gut Microbiome Research

    Directory of Open Access Journals (Sweden)

    Zhijun Zheng

    2016-07-01

    Full Text Available The human microbiome has received much attention because many studies have reported that the human gut microbiome is associated with several diseases. The very large datasets produced by these kinds of studies mean that bioinformatics approaches are crucial for their analysis. Here, we systematically reviewed bioinformatics tools that are commonly used in microbiome research, including a typical pipeline and software for sequence alignment, abundance profiling, enterotype determination, taxonomic diversity, identifying differentially abundant species/genes, gene cataloging, and functional analyses. We also summarized the algorithms and methods used to define metagenomic species and co-abundance gene groups to expand our understanding of unclassified and poorly understood gut microbes that are undocumented in the current genome databases. Additionally, we examined the methods used to identify metagenomic biomarkers based on the gut microbiome, which might help to expand the knowledge and approaches for disease detection and monitoring.
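One of the standard analyses listed above, taxonomic diversity, is often summarized with the Shannon index over a vector of species abundances. The sketch below uses invented toy abundance counts purely for illustration.

```python
import math

def shannon(abundances):
    """Shannon diversity index H = -sum(p_i * ln(p_i)) over relative abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]  # zero counts contribute nothing
    return -sum(p * math.log(p) for p in props)

even = shannon([25, 25, 25, 25])   # maximally even 4-species community: H = ln(4)
skewed = shannon([97, 1, 1, 1])    # one dominant species: much lower diversity
print(round(even, 3), round(skewed, 3))  # → 1.386 0.168
```

Higher values indicate communities that are both richer and more evenly distributed, which is why the index is a common per-sample summary when comparing gut microbiota across disease groups.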

  17. Understanding the cosmic web

    CERN Document Server

    Cautun, Marius; Jones, Bernard J T; Frenk, Carlos S

    2015-01-01

    We investigate the characteristics and the time evolution of the cosmic web from redshift z = 2 to the present time, within the framework of the NEXUS+ algorithm. This necessitates the introduction of new analysis tools optimally suited to describe the very intricate and hierarchical pattern that is the cosmic web. In particular, we characterize filaments (walls) in terms of their linear (surface) mass density, which is very effective at capturing the evolution of these structures. At early times the cosmos is dominated by tenuous filaments and sheets, which, during subsequent evolution, merge together, such that the present-day web is dominated by fewer, but much more massive, structures. We also show that voids are more naturally described in terms of their boundaries and not their centres. We illustrate this for void density profiles, which, when expressed as a function of the distance from the void boundary, show a universal profile in good qualitative agreement with the theoretical shell-crossing framework of expanding underdense regions.

  18. Bioinformatics Training: A Review of Challenges, Actions and Support Requirements

    DEFF Research Database (Denmark)

    Schneider, M.V.; Watson, J.; Attwood, T.;

    2010-01-01

    As bioinformatics becomes increasingly central to research in the molecular life sciences, the need to train non-bioinformaticians to make the most of bioinformatics resources is growing. Here, we review the key challenges and pitfalls to providing effective training for users of bioinformatics...... services, and discuss successful training strategies shared by a diverse set of bioinformatics trainers. We also identify steps that trainers in bioinformatics could take together to advance the state of the art in current training practices. The ideas presented in this article derive from the first...

  19. VLSI Microsystem for Rapid Bioinformatic Pattern Recognition

    Science.gov (United States)

    Fang, Wai-Chi; Lue, Jaw-Chyng

    2009-01-01

    A system comprising very-large-scale integrated (VLSI) circuits is being developed as a means of bioinformatics-oriented analysis and recognition of patterns of fluorescence generated in a microarray in an advanced, highly miniaturized, portable genetic-expression-assay instrument. Such an instrument implements an on-chip combination of polymerase chain reactions and electrochemical transduction for amplification and detection of deoxyribonucleic acid (DNA).

  20. [Applied problems of mathematical biology and bioinformatics].

    Science.gov (United States)

    Lakhno, V D

    2011-01-01

    Mathematical biology and bioinformatics represent a new and rapidly progressing line of investigation which emerged in the course of work on the Human Genome Project. The main applied problems of these sciences are drug design, patient-specific medicine and nanobioelectronics. It is shown that progress in the technology of mass sequencing of the human genome has set the stage for starting a national program on patient-specific medicine.